Machine Unlearning in 3D Generation: A Perspective-Coherent Acceleration Framework

Published: 18 Sept 2025 | Last Modified: 29 Oct 2025 | NeurIPS 2025 Poster | CC BY 4.0
Keywords: Machine Unlearning, 3D Tasks, Perspective Acceleration, Multi-View Efficiency
Abstract: Recent advances in generative models trained on large-scale datasets have enabled high-quality 3D synthesis across various domains. However, these models also raise critical privacy concerns. Unlike 2D image synthesis, where risks typically involve the leakage of visual features or identifiable patterns, 3D generation introduces additional challenges, as reconstructed shapes, textures, and spatial structures may inadvertently expose proprietary designs, biometric data, or other sensitive geometric information. This paper presents the first exploration of machine unlearning in 3D generation tasks. We investigate different unlearning objectives, including re-targeting and partial unlearning, and propose a novel framework that does not require full supervision of the unlearning target. To enable a more efficient unlearning process, we introduce a skip-acceleration mechanism that leverages the similarity between multi-view generated images to bypass redundant computations. By establishing coherence across viewpoints during acceleration, our framework not only reduces computation but also enhances unlearning effectiveness, outperforming the non-accelerated baseline in both accuracy and efficiency. We conduct extensive experiments on two representative 3D generation models (Zero123 and Zero123XL), demonstrating that our approach achieves a 30% speedup while effectively unlearning target concepts without compromising generation quality. Our framework provides a scalable and practical solution for privacy-preserving 3D generation, ensuring responsible AI deployment in real-world applications. The code is available at: https://github.com/sxxsxw/Fast-3D-Unlearn-with-Skip-acceleration
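The abstract describes a skip-acceleration mechanism that reuses computation when adjacent viewpoints produce sufficiently similar intermediate results. Below is a minimal, hypothetical sketch of that idea in PyTorch; the function name `maybe_skip_block`, the cosine-similarity criterion, and the threshold value are illustrative assumptions and are not taken from the paper or its released code.

```python
import torch
import torch.nn.functional as F

def maybe_skip_block(block, feats, cache, sim_threshold=0.95):
    """Illustrative skip-acceleration step (assumed design, not the paper's exact method).

    If the current view's input features are highly similar to those of the
    previously processed view, reuse the cached block output instead of
    recomputing it.

    block: an nn.Module (e.g., one denoiser block)
    feats: current view's input features, shape (B, C, H, W)
    cache: dict holding {'feats': ..., 'out': ...} from the previous view
    Returns (output, skipped_flag).
    """
    if cache.get("feats") is not None:
        # Cosine similarity between flattened feature maps of the two views.
        sim = F.cosine_similarity(
            feats.flatten(1), cache["feats"].flatten(1), dim=1
        ).mean()
        if sim > sim_threshold:
            # Views are nearly redundant: skip the block, reuse cached output.
            return cache["out"], True

    # Otherwise run the block and refresh the cache for the next view.
    out = block(feats)
    cache["feats"], cache["out"] = feats.detach(), out.detach()
    return out, False
```

In this sketch, the skip decision trades a cheap similarity check against a full block evaluation, which is where the reported speedup would come from under this assumed formulation.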
Supplementary Material: zip
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 18763