Knowledge Distillation for Quantized Vehicle Sensor Data

Published: 01 Jan 2023, Last Modified: 13 May 2025 · ICMLA 2023 · CC BY-SA 4.0
Abstract: Semantic segmentation is a major research field in computer vision, with applications such as autonomous driving, medical image analysis, and video surveillance. A large share of recent performance improvements stems from larger models or higher-quality sensors, yet many applications require real-time segmentation with small, efficient models. Various studies address improving the performance of small models, but whether small models actually benefit from the full range of available sensor information has been overlooked. In our work we focus on the model compression technique Knowledge Distillation (KD), since it allows us to investigate the effects of reducing model size and color depth simultaneously while leveraging the knowledge of a large teacher network to improve a small student. Using different training frameworks, we show that segmentation performance does not degrade significantly up to a certain level of information reduction. Moreover, our experiments indicate that applying KD in a two-stage training strategy is more advantageous than the standard procedure. Finally, we show that, with the right KD procedure, training and inference on bit-depth reduced images can outperform training on the original images with a standard cross-entropy loss.
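The abstract does not spell out implementation details, so the following is only a minimal, hedged sketch of the general idea in PyTorch: a student segmentation network is trained on bit-depth reduced images with a standard distillation loss, i.e. hard-label cross entropy combined with a temperature-scaled KL term toward the teacher's soft predictions. The quantization function, temperature, loss weighting, and training step are illustrative assumptions, not the paper's reported configuration or two-stage schedule.

```python
# Illustrative sketch of KD for semantic segmentation on bit-depth reduced
# images. Hyperparameters (bits, temperature, alpha) are assumptions.
import torch
import torch.nn.functional as F


def reduce_bit_depth(images: torch.Tensor, bits: int) -> torch.Tensor:
    """Quantize images in [0, 1] to the given color bit-depth."""
    levels = 2 ** bits - 1
    return torch.round(images * levels) / levels


def kd_segmentation_loss(student_logits, teacher_logits, labels,
                         temperature: float = 4.0, alpha: float = 0.5):
    """Hard-label cross entropy plus soft-label distillation.

    student_logits, teacher_logits: (N, C, H, W); labels: (N, H, W).
    """
    ce = F.cross_entropy(student_logits, labels, ignore_index=255)
    # Temperature-softened KL divergence between teacher and student outputs.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1.0 - alpha) * ce + alpha * kl


def train_step(student, teacher, optimizer, images, labels, bits: int = 4):
    """One training step; teacher and student both see the quantized input."""
    quantized = reduce_bit_depth(images, bits)
    with torch.no_grad():
        teacher_logits = teacher(quantized)
    student_logits = student(quantized)
    loss = kd_segmentation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the teacher receives the same quantized input as the student; whether the teacher is instead run on full bit-depth images is a design choice the abstract leaves open.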