Text to Robotic Assembly of Multi-Component Objects Using 3D Generative AI and Vision-Language Models
Track: Paper
Keywords: Robotic Assembly, Vision-Language Models, 3D Generative AI, Human-Robot Interaction, Human-AI Interaction, Computer-Aided Design, Digital Fabrication, Physical Objects, Geometry-Aware Reasoning, Function-Aware Part Decomposition
TL;DR: Using 3D Generative AI and Vision-Language Models for Function- and Geometry-Aware Part Assignment in Text-to-Multi-Component Robotic Assembly
Abstract: Advances in 3D generative AI have enabled the creation of physical objects from text prompts, but challenges remain in creating objects involving multiple component types. We present a pipeline that integrates 3D generative AI with vision-language models (VLMs) to enable the robotic assembly of multi-component objects from natural language. Our method leverages VLMs for zero-shot, multi-modal reasoning about geometry and functionality to decompose AI-generated meshes into multi-component 3D models using predefined structural and panel components. We demonstrate that a VLM can determine which mesh regions require panel components in addition to structural components based on object functionality. Evaluation across test objects shows that users preferred the VLM-generated assignments 90.6% of the time, compared to 59.4% for rule-based and 2.5% for random assignment. Lastly, the system allows users to refine component assignments through conversational feedback, enabling greater human control and agency in making physical objects with generative AI and robotics.
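The abstract describes a text-to-assembly pipeline: generate a mesh from a prompt, have a VLM assign structural or panel components to mesh regions, then refine via conversational feedback. The sketch below is a minimal, hypothetical illustration of that control flow only; all function names (`generate_mesh`, `vlm_assign`), data structures, and the stub logic are assumptions for clarity, not the authors' implementation.

```python
# Hypothetical sketch of the text-to-assembly pipeline outlined in the
# abstract. Stubs stand in for the 3D generative model and the VLM.
from dataclasses import dataclass, field

@dataclass
class PartAssignment:
    region_id: int
    component: str   # "structural" or "panel"
    rationale: str   # VLM's function-aware justification

@dataclass
class MultiComponentModel:
    prompt: str
    assignments: list[PartAssignment] = field(default_factory=list)

def generate_mesh(prompt: str) -> list[int]:
    """Stub for a 3D generative model; returns mesh region ids."""
    return [0, 1, 2]  # e.g., seat, backrest, legs of a chair

def vlm_assign(prompt: str, regions: list[int],
               feedback: str | None = None) -> list[PartAssignment]:
    """Stub for zero-shot VLM reasoning. A real system would send
    rendered views of each mesh region plus the text prompt (and any
    user feedback) to a vision-language model."""
    component = "panel" if feedback and "panel" in feedback else "structural"
    return [PartAssignment(r, component,
                           f"region {r} judged {component} for '{prompt}'")
            for r in regions]

def pipeline(prompt: str) -> MultiComponentModel:
    regions = generate_mesh(prompt)
    model = MultiComponentModel(prompt, vlm_assign(prompt, regions))
    # Conversational refinement: the user can override VLM assignments.
    for feedback in ["make the seat a panel"]:  # simulated user turn
        model.assignments = vlm_assign(prompt, regions, feedback)
    return model

if __name__ == "__main__":
    for a in pipeline("a small side table").assignments:
        print(a.region_id, a.component, "-", a.rationale)
```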
Video Preview For Artwork: mp4
Submission Number: 206