When will we be able to create Midjourney-level 3D models generated by AI?
Ethan: That's a common question. I'd divide it into market and technology aspects. For market success, there must be real user need, and at the moment, large-scale consumer use cases for 3D models are limited. A significant market shift might happen with the mass adoption of VR and XR headsets, which would create demand for 3D, interactive models. From a technology perspective, we've only solved about 10% of the challenges. Proper UV unwrapping, topology, controllability, and poly-count reduction all need improvement. But I'm optimistic. Given the current pace of advancement, we could see major progress in the next few years.
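To make the mesh-quality issues Ethan lists concrete, here is a minimal sketch (not from the interview) using Open3D: it reads a generated mesh, decimates it toward a target triangle budget, and checks whether UV coordinates exist at all. The file path and triangle budget are placeholder assumptions.

```python
# Illustrative sketch of two of the problems mentioned: poly count and UVs.
# Assumes Open3D is installed; "generated.obj" is a placeholder path.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("generated.obj")
print(f"raw triangle count: {len(mesh.triangles)}")

# AI-generated meshes are often far too dense for real-time use;
# quadric decimation reduces poly count while preserving overall shape.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
print(f"decimated triangle count: {len(simplified.triangles)}")

# A mesh without UVs can't take a texture cleanly -- the
# "proper UV unwrapping" problem is about producing these well, not just at all.
print(f"has triangle UVs: {mesh.has_triangle_uvs()}")
```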
You mentioned the importance of quality, diversity, and speed in a 3D generative AI product. Can you elaborate on that?
Ethan: Sure. Quality is paramount. Users are willing to wait or input more text for high-quality models with good textures, proper poly count, and neat UV unwrapping. Diversity is also crucial. A competitive 3D generative system should create a wide range of objects, not just limited categories like chairs or vases. Speed matters too. It's essential to provide quick previews, even if the final high-quality model takes longer to generate.
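As a rough illustration of the preview-then-refine pattern Ethan describes, here is a hypothetical async sketch. The `generate_preview` and `generate_final` functions are invented stand-ins for a fast low-poly pass and a slow high-quality pass, not any real API.

```python
# Hypothetical sketch: kick off both passes, show the cheap preview
# immediately, then swap in the high-quality result when it finishes.
import asyncio

async def generate_preview(prompt: str) -> str:
    await asyncio.sleep(1)   # stand-in for a quick low-poly generation
    return f"preview mesh for '{prompt}'"

async def generate_final(prompt: str) -> str:
    await asyncio.sleep(10)  # stand-in for a slow high-quality generation
    return f"final mesh for '{prompt}'"

async def main() -> None:
    prompt = "a wooden chair"
    preview_task = asyncio.create_task(generate_preview(prompt))
    final_task = asyncio.create_task(generate_final(prompt))
    print(await preview_task)  # user sees something within seconds
    print(await final_task)    # high-quality model arrives later

asyncio.run(main())
```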
Check out the full interview right here: https://open.substack.com/pub/xraispotlight/p/are-3d-gen-ai-tools-ready-for-production?r=2umm8d&utm_campaign=post&utm_medium=web