Data- and Hardware-Aware Entanglement Selection for Quantum Feature Maps in Hybrid Quantum Neural Networks
Keywords: Quantum Machine Learning, Quantum Neural Network, Quantum Feature Map, Optimal Architecture Search
TL;DR: We design an optimization method that searches for quantum feature maps with high data utility and quantum hardware efficiency for Hybrid Quantum Neural Networks.
Abstract: Embedding classical data into a quantum feature space is a critical step for Hybrid Quantum Neural Networks (HQNNs). While entanglement in this feature map layer can enhance expressivity, heuristic choices often degrade trainability and waste the limited multi-qubit gate budget. We reframe the choice of encoding-layer entanglement as a multi-objective combinatorial optimization that jointly promotes data-driven trainability and hardware-aware noise robustness. Our framework searches over sparse entanglement patterns by maximizing a novel data-utility term, balanced against a realistic hardware cost derived from device topology and calibrated two-qubit fidelities on IBM Quantum systems. The data-utility term pairs qubits based on two complementary geometric criteria: (i) high intrinsic dependency and (ii) low Hilbert–Schmidt Distance (HSD), a combination critical for accelerating gradient-based optimization early in HQNN training. We solve this with a bi-level optimization scheme: the outer loop searches over discrete entanglement structures, evaluating each candidate's potential based on the initial loss reduction from a short inner-loop training. Once the optimal structure is identified, the downstream ansatz is trained to full convergence. Our empirical results demonstrate that the suggested feature maps not only achieve superior classification performance on synthetic and real-world benchmarks but also exhibit enhanced robustness under realistic noise models, all while maintaining a lower gate budget than heuristic designs. Our work establishes a principled, automated method for creating quantum feature maps that are simultaneously data-aware, hardware-efficient, and highly trainable.
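The bi-level scheme in the abstract can be sketched roughly as follows. This is a minimal illustrative toy, not the authors' implementation: the pairwise utility and hardware-cost values, the gate budget, and the surrogate scoring function standing in for the short inner-loop training are all hypothetical placeholders, since the paper's actual data-utility and calibrated-fidelity computations are not reproduced here.

```python
import itertools
import random

random.seed(0)

N_QUBITS = 4   # hypothetical device size
BUDGET = 2     # hypothetical cap on two-qubit entangling gates

# Hypothetical per-pair quantities: a data-utility score (higher is better,
# standing in for high intrinsic dependency / low HSD) and a hardware cost
# (lower is better, standing in for topology and two-qubit gate infidelity).
PAIRS = list(itertools.combinations(range(N_QUBITS), 2))
utility = {p: random.random() for p in PAIRS}
hw_cost = {p: random.random() for p in PAIRS}

def inner_loop_score(pattern):
    """Toy surrogate for the inner loop: in the paper this would be the
    initial loss reduction after a short training run; here it is just
    pairwise utility minus a weighted hardware cost."""
    return sum(utility[p] - 0.5 * hw_cost[p] for p in pattern)

def outer_search():
    """Outer loop: exhaustively score sparse entanglement patterns
    (edge sets) within the two-qubit gate budget and keep the best."""
    best, best_score = None, float("-inf")
    for k in range(1, BUDGET + 1):
        for pattern in itertools.combinations(PAIRS, k):
            score = inner_loop_score(pattern)
            if score > best_score:
                best, best_score = pattern, score
    return best, best_score
```

In practice the outer search would not be exhaustive for larger qubit counts, and the inner evaluation would involve actually training the HQNN for a few steps per candidate pattern.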
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 15995