Text-to-CAD Generation Through Infusing Visual Feedback in Large Language Models

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 poster. License: CC BY 4.0
Abstract:

Creating Computer-Aided Design (CAD) models requires significant expertise and effort. Text-to-CAD, which converts textual descriptions into CAD parametric sequences, is crucial in streamlining this process. Recent studies have utilized ground-truth parametric sequences, known as sequential signals, as supervision to achieve this goal. However, CAD models are inherently multimodal, comprising parametric sequences and corresponding rendered visual objects. Besides, the rendering process from parametric sequences to visual objects is many-to-one. Therefore, both sequential and visual signals are critical for effective training. In this work, we introduce CADFusion, a framework that uses Large Language Models (LLMs) as the backbone and alternates between two training stages: the sequential learning (SL) stage and the visual feedback (VF) stage. In the SL stage, we train LLMs using ground-truth parametric sequences, enabling the generation of logically coherent parametric sequences. In the VF stage, we reward parametric sequences that render into visually preferred objects and penalize those that do not, allowing LLMs to learn how rendered visual objects are perceived and evaluated. These two stages alternate throughout the training, ensuring balanced learning and preserving benefits of both signals. Experiments demonstrate that CADFusion significantly improves performance, both qualitatively and quantitatively.
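The alternating two-stage schedule described above can be sketched in toy form. The snippet below is a minimal, illustrative skeleton only: the function names, the dictionary-based "model state", and the candidate-sampling and scoring logic are all hypothetical stand-ins (a real implementation would fine-tune an LLM with cross-entropy in the SL stage and apply a preference-optimization update, e.g. DPO-style, on rendered outputs in the VF stage).

```python
import random

random.seed(0)

# Toy stand-ins for the real components (LLM, CAD renderer, visual
# scorer). Everything here is a schematic sketch, not CADFusion itself.

def sequential_learning_step(model_state, gt_sequence):
    """SL stage: supervised update toward the ground-truth parametric
    sequence (stands in for cross-entropy fine-tuning of the LLM)."""
    model_state["sl_steps"] += 1
    model_state["last_target"] = gt_sequence
    return model_state

def visual_feedback_step(model_state, candidates, visual_score):
    """VF stage: reward candidates whose rendered object is visually
    preferred and penalize the rest (stands in for a preference-based
    update over rendered results)."""
    scored = sorted(candidates, key=visual_score, reverse=True)
    model_state["vf_steps"] += 1
    model_state["preferred"], model_state["rejected"] = scored[0], scored[-1]
    return model_state

def train(num_rounds, dataset, visual_score):
    """Alternate the SL and VF stages every round, mirroring how the
    framework interleaves the two signals throughout training."""
    state = {"sl_steps": 0, "vf_steps": 0}
    for _ in range(num_rounds):
        text, gt_sequence = random.choice(dataset)
        state = sequential_learning_step(state, gt_sequence)
        # Sample candidate sequences (here: trivial perturbations of
        # the ground truth; a real system would sample from the LLM).
        candidates = [gt_sequence + [i] for i in range(3)]
        state = visual_feedback_step(state, candidates, visual_score)
    return state

dataset = [("a cube with a through-hole", [1, 2, 3])]
final = train(5, dataset, visual_score=lambda seq: seq[-1])
print(final["sl_steps"], final["vf_steps"])  # both stages run equally often
```

Note how the schedule guarantees each signal contributes every round, which is the point of alternating rather than running the two stages back to back.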

Lay Summary:

Computer-Aided Design (CAD) objects can be represented by sequences that reflect their design history. Prior approaches to generating CAD from written instructions train models only on such sequential data pairs, which limits learning efficiency and final performance.

We developed a text-to-CAD system trained not only with sequential learning but also with visual feedback. This paradigm improves model performance by integrating the visual quality of CAD objects into the training pipeline, helping the system produce cleaner, more natural designs with smooth edges and correct holes.

Our work improves text-to-CAD generation quality and can help speed up the early stages of product design, saving time and reducing the expertise required.

Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Applications->Computer Vision
Keywords: Large Language Models, Computer-aided Design, Text-to-CAD Generation
Submission Number: 8939