Abstract: Knowledge distillation is a widely explored technique in classical machine learning, in which a smaller or more efficient model is trained to mimic the behavior of a larger, more complex model. In this study, we extend the concept of knowledge distillation from classical architectures to quantum architectures, with the goal of improving the training of quantum models while potentially reducing the number of parameters compared to their classical counterparts. Given the inherent challenges in training quantum neural networks, leveraging knowledge from well-established classical models could provide valuable insights and advantages, particularly in terms of model efficiency and performance. We explore the potential benefits of this approach by evaluating a hybrid quantum model against a non-hybrid quantum baseline. While still at a preliminary stage, this study aims to set the scene for further investigation into the most appropriate architecture for classical-to-quantum knowledge distillation, in order to enhance the development and optimization of quantum neural networks more generally.
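As context for the distillation objective described above (a student trained to mimic a teacher's behavior), the following is a minimal sketch of the standard soft-target distillation loss in the style of Hinton et al. It is illustrative only and not the paper's actual method: the temperature `T`, the mixing weight `alpha`, and the pure-NumPy setup are assumptions for the sketch; in the paper's setting the student would be a (hybrid) quantum model.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    """Blend of a soft-target term (mimic the teacher) and a hard-label term.

    T and alpha are illustrative hyperparameters, not values from the paper.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL(teacher || student) on softened outputs, scaled by T^2 so gradients
    # keep a comparable magnitude across temperatures.
    soft_term = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))) * T**2
    # Ordinary cross-entropy against the ground-truth label.
    hard_term = -np.log(softmax(student_logits)[hard_label])
    return alpha * soft_term + (1 - alpha) * hard_term
```

In a classical-to-quantum setup, `student_logits` would come from the quantum (or hybrid) circuit's measured outputs, while `teacher_logits` come from a pre-trained classical network.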
External IDs: dblp:conf/ijcnn/PipernoVWRP25