QCTKD-PU: Quantum Convolutional Transformer with Knowledge Distillation for Efficient and Robust Point Cloud Upsampling
Abstract: Point cloud upsampling is crucial for high-fidelity 3D reconstruction in real-time applications such as autonomous systems. Existing methods based on CNNs or Transformers face three limitations: (1) prohibitive computational complexity that hinders real-time deployment, (2) insufficient modeling of multi-scale geometric dependencies in sparse data, and (3) sensitivity to noise and outliers. To address these challenges, we propose QCTKD-PU, a framework integrating a Quantum Convolutional Transformer (QCT) and Knowledge Distillation (KD) for Point cloud Upsampling. The QCT leverages quantum superposition and self-attention to encode high-dimensional features, enabling efficient multi-scale point interaction learning. Simultaneously, KD transfers knowledge from a teacher model to a lightweight student network, reducing computational cost while maintaining accuracy. Experiments on benchmark datasets demonstrate superior geometric accuracy and noise robustness compared to state-of-the-art methods. This work pioneers the synergy of quantum computing and lightweight learning for resource-constrained 3D vision tasks; the student model supports real-time, compact deployment, offering a practical solution for collaborative edge systems.
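The teacher-to-student transfer mentioned in the abstract is typically realized with a temperature-softened distillation loss. The sketch below is a minimal illustration, assuming the standard Hinton-style formulation (KL divergence between softened teacher and student outputs, scaled by T^2); it is not the paper's actual loss, whose exact form is not given here.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T produces softer distributions.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2 so gradients keep a comparable magnitude
    # (assumed standard KD formulation, not the paper's exact loss).
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # soft student predictions
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When the student matches the teacher exactly, the loss is zero; it grows as the student's soft predictions diverge from the teacher's.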
External IDs: dblp:conf/cscwd/ZhuZL00C25