Researchers propose 3D-GPT: combining LLMs and agents for procedural Text-to-3D model generation

Researchers propose a new AI system called 3D-GPT that creates 3D models from natural language instructions by coordinating LLM agents that drive existing procedural 3D modeling tools.

3D-GPT works with a library of predefined procedural functions that generate 3D shapes, and it builds scenes by picking functions and tweaking their parameters. The key is getting the LLM to understand the instructions and select the right tools; a rough sketch of what one of those parametric functions might look like follows.
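To make that concrete, here is a minimal sketch of a parametric generation function in that style. Everything here is illustrative: the paper actually drives Blender through procedural generators, and none of these names (`MeadowParams`, `generate_meadow`, the parameters) come from the paper.

```python
# Hypothetical sketch of one predefined procedural function. The LLM never
# touches geometry directly; it only chooses which function to call and
# which parameter values to pass. All names here are made up for illustration.
from dataclasses import dataclass

@dataclass
class MeadowParams:
    grass_density: float = 0.8     # normalized blade density, 0..1
    flower_count: int = 50         # number of flower instances to scatter
    flower_species: str = "daisy"
    terrain_roughness: float = 0.2

def generate_meadow(params: MeadowParams) -> str:
    """Emit (pseudo) script commands a 3D tool could execute for this scene."""
    return "\n".join([
        f"displace_terrain(roughness={params.terrain_roughness})",
        f"scatter_grass(density={params.grass_density})",
        f"scatter_flowers(count={params.flower_count}, species='{params.flower_species}')",
    ])

# A "lush meadow with flowers" instruction might map to denser parameters:
print(generate_meadow(MeadowParams(grass_density=1.0, flower_count=200)))
```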

It has three main agents (a rough pipeline sketch follows the list):

  • A dispatcher that parses the text and picks generation functions
  • A conceptualizer that adds details missing from the description
  • A modeler that sets parameters and outputs code to drive 3D software
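Here is a minimal sketch of how those three agents might chain together. The `llm()` callable stands in for any chat-completion API, and the prompts and helper names are my assumptions, not the paper's exact implementation.

```python
# Illustrative three-agent pipeline. llm() is any text-in/text-out completion
# call; the prompts below are assumptions, not the paper's actual ones.
from typing import Callable

def dispatch(llm: Callable[[str], str], instruction: str, functions: list[str]) -> str:
    """Task dispatch agent: pick which generation function fits the text."""
    prompt = f"Instruction: {instruction}\nAvailable functions: {functions}\nPick one:"
    return llm(prompt).strip()

def conceptualize(llm: Callable[[str], str], instruction: str) -> str:
    """Conceptualization agent: enrich the description with missing details."""
    return llm(f"Expand this scene description with concrete visual details: {instruction}")

def model(llm: Callable[[str], str], function_name: str, description: str) -> str:
    """Modeling agent: infer parameter values and emit code for the 3D tool."""
    return llm(f"Write a call to {function_name}(...) whose parameters match: {description}")

def text_to_3d_script(llm: Callable[[str], str], instruction: str, functions: list[str]) -> str:
    fn = dispatch(llm, instruction, functions)
    details = conceptualize(llm, instruction)
    return model(llm, fn, details)
```

The division of labor mirrors the human workflow described below: one agent decides what to build, one decides how it should look, and one translates that into tool parameters.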

By breaking the modeling work down into steps, the agents can collaborate to match the description. This is sort of like how a human 3D modeling team would divide up the work.

The authors show it generating simple scenes like a "lush meadow with flowers" that fit the text. It also modifies scenes appropriately when given follow-up instructions. I include some gifs of example outputs in my full summary. They look pretty good; I'd call them 2005-quality graphics.

There are limits. The system relies entirely on existing procedural generators, so output quality is capped by those tools. Fine details and curved surfaces are iffy, and it often falls back on default shapes rather than demonstrating true shape understanding. I also doubt the vertex counts and textures are well optimized.

Multi-agent architectures like this are really popular right now. This one shows some real planning ability, which could extend to more creative tasks someday.

TLDR: AI agents can team up to generate 3D models from text instructions. Works to some degree but limitations remain.

Full summary. Paper here.
