Abstract: The significant improvements in point cloud representation learning have increased its applicability in many real-life applications, resulting in the need for lightweight, better-performing models. One widely proposed efficiency technique is knowledge distillation, where a lightweight student model learns from a large teacher model. Very few works exist on knowledge distillation for point clouds, and most of them focus on cross-modal approaches, which are expensive to train. This paper proposes PointKAD, an adversarial knowledge distillation framework for point cloud-based tasks. PointKAD combines adversarial feature distillation and response distillation, using discriminators to extract and distill the representations of feature maps and logits. We conduct extensive experimental studies on both synthetic (ModelNet40) and real (ScanObjectNN) datasets and show that PointKAD achieves state-of-the-art results compared to existing knowledge distillation methods for point cloud classification. Additionally, we present results on the part segmentation task, highlighting the efficacy of the PointKAD framework. Our experiments further reveal that PointKAD can transfer knowledge across different tasks and datasets, showcasing its versatility. Furthermore, we demonstrate that PointKAD can be applied in a cross-modal training setup, achieving competitive performance with cross-modal point cloud methods for classification.
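To make the general idea concrete, the sketch below illustrates adversarial knowledge distillation of the kind the abstract describes: a discriminator is trained to tell teacher features from student features while the student is trained to fool it, alongside a standard softened-logit (response) distillation loss. This is a minimal illustration under stated assumptions, not the paper's actual architecture; the module names, feature dimensions, loss weights, and the assumption that both networks return a (global feature, logits) pair are all hypothetical.

```python
# Hedged sketch of adversarial feature + response distillation.
# All dimensions, weights, and interfaces here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    """Predicts whether a global feature vector came from the teacher or the student."""
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # raw logit; paired with BCEWithLogits below
        )

    def forward(self, feat):
        return self.net(feat)

def distillation_step(student, teacher, discriminator, points, labels,
                      opt_student, opt_disc, temperature=4.0, adv_weight=0.5):
    """One training step: adversarial feature distillation + response (logit) distillation."""
    with torch.no_grad():
        t_feat, t_logits = teacher(points)  # teacher is frozen

    s_feat, s_logits = student(points)

    # Discriminator update: teacher features are "real", student features are "fake".
    d_real = discriminator(t_feat)
    d_fake = discriminator(s_feat.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_disc.zero_grad(); loss_d.backward(); opt_disc.step()

    # Student update: fool the discriminator, match softened teacher logits, solve the task.
    adv = F.binary_cross_entropy_with_logits(discriminator(s_feat),
                                             torch.ones_like(d_real))
    kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                  F.softmax(t_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    task = F.cross_entropy(s_logits, labels)
    loss_s = task + kd + adv_weight * adv
    opt_student.zero_grad(); loss_s.backward(); opt_student.step()
    return loss_s.item(), loss_d.item()
```

Any point cloud classifier exposing a global feature and class logits could stand in for `student` and `teacher` in this sketch; the adversarial term pushes the student's feature distribution toward the teacher's rather than matching features point-wise.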