Fast and Robust Mesh Simplification for Generated and Real-World 3D Assets

Published: 03 May 2026, Last Modified: 13 May 2026 · CVPR 2026 Workshop 3D4S Oral · CC BY 4.0
Keywords: 3D Vision, Mesh Simplification, Geometry Processing, Quadric Error Metric (QEM), 3D Reconstruction, Generative 3D, Neural Rendering, Non-Manifold Meshes, Texture Preservation, 3D Asset Optimization
TL;DR: We propose FA-QEM, a fast and robust mesh simplification method that preserves geometry and appearance on noisy, non-manifold meshes from modern 3D reconstruction and generative pipelines, enabling high-quality and efficient 3D assets.
Abstract: The rapid growth of 3D content from modern reconstruction and generative pipelines, such as neural rendering and large-scale 3D asset generation, has led to an abundance of dense, noisy, and often non-manifold meshes. While these representations achieve high visual fidelity, their complexity poses significant challenges for downstream applications in simulation, AR/VR, and scientific computing, where efficient and reliable geometry is essential. This necessitates mesh simplification methods that are not only fast and robust to "in-the-wild" inputs, but also capable of preserving fine geometric structures and high-quality appearance. In this paper, we propose $\textbf{F}$eature-$\textbf{A}$ware $\textbf{Q}$uadric $\textbf{E}$rror $\textbf{M}$etric ($\textbf{FA-QEM}$), a comprehensive mesh simplification pipeline designed for modern 3D assets. Our approach introduces a novel multi-term quadric error formulation that jointly encodes geometric deviation, boundary curvature, and surface normal consistency, enabling optimal vertex placement that preserves sharp features even under aggressive simplification. Furthermore, we show that high-fidelity geometric simplification significantly improves downstream appearance transfer, serving as a superior front-end for texture mapping via successive mapping techniques. We conduct extensive evaluations on both AI-generated meshes and large-scale real-world datasets, including Thingi10K and the Real-World Textured Things dataset. Our results demonstrate that FA-QEM achieves consistently lower geometric error, better visual fidelity, and substantially faster runtimes compared to existing methods, while maintaining robustness across diverse and challenging inputs. These properties make FA-QEM a practical and effective component for scalable 3D reconstruction and generation pipelines.
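The abstract builds on the classical quadric error metric of Garland and Heckbert, which FA-QEM extends with boundary-curvature and normal-consistency terms. As background, a minimal sketch of the classical QEM core (plane quadrics, optimal vertex placement, and collapse cost) is given below; the function names and the singularity threshold are illustrative choices, not part of the paper's method:

```python
import numpy as np

def face_quadric(p0, p1, p2):
    """Fundamental error quadric K = p p^T for a triangle's supporting plane
    (classical Garland-Heckbert QEM; FA-QEM adds further terms on top)."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)          # unit normal (a, b, c)
    d = -np.dot(n, p0)                 # plane equation: ax + by + cz + d = 0
    p = np.append(n, d)                # homogeneous plane vector
    return np.outer(p, p)              # 4x4 symmetric quadric

def optimal_placement(Q):
    """Position minimizing v^T Q v for a combined edge quadric.
    Returns None when the 3x3 system is (near-)singular, e.g. on flat
    regions, where a fallback such as the edge midpoint would be used."""
    A = Q[:3, :3]
    b = -Q[:3, 3]
    if np.linalg.cond(A) > 1e12:       # illustrative threshold
        return None
    return np.linalg.solve(A, b)

def collapse_cost(Q, v):
    """Quadric error v^T Q v of placing the merged vertex at v."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)
```

For example, summing the quadrics of faces lying in the three coordinate planes yields an optimal placement at the origin (their intersection) with zero cost, while any point on a single face's plane has zero error under that face's quadric alone.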
Supplementary Material: zip
Submission Number: 14