Keywords: Contrastive learning, Self-supervised learning, quantum machine learning, variational quantum circuits, quantum kernels, representation learning, InfoNCE loss, quantum fidelity
TL;DR: This paper investigates quantum-inspired enhancements to self-supervised contrastive learning by replacing the projection head with variational quantum circuits and introducing quantum kernels as alternative similarity measures.
Abstract: Self-supervised contrastive learning is sensitive to architectural choices and to how similarity is defined. Motivated by claims that quantum circuits can induce useful non-classical geometries, we present a systematic empirical analysis of two natural drop-in quantum components for the projection/similarity stage: (i) variational quantum circuits (VQCs) as projection heads and (ii) fixed quantum feature maps whose state fidelities act as similarity measures (``quantum kernels''). Within a controlled SimCLR pipeline on STL-10 (ResNet18 encoder) using mainstream \emph{analytic} simulators, we report three findings. First, under realistic resource constraints (low qubit count, shallow depth), a tuned classical MLP head consistently matches or outperforms VQC heads. Second, fidelity-based quantum kernels largely mirror cosine similarity without a clear uplift. Third, increasing circuit size rapidly incurs prohibitive latency, exposing scaling bottlenecks that restrict current exploration. These results constitute a useful null baseline for hybrid quantum-classical contrastive learning and point to concrete directions: batching-friendly simulators for higher throughput, lower-variance/better-conditioned feature maps to avoid similarity collapse, and modest, low-latency hardware as a realistic near-term testbed. We release anonymized code and consolidated hyperparameters to facilitate replication and future extensions.
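The comparison the abstract describes, swapping the similarity measure inside an InfoNCE objective, can be illustrated with a minimal NumPy sketch. This is an illustrative stand-in, not the paper's implementation: `fidelity_sim` models a fidelity-style kernel as the squared inner product of L2-normalized feature vectors (the fidelity of amplitude-encoded states), and the SimCLR positive-pair indexing assumes the batch is the concatenation of two augmented views.

```python
import numpy as np

def cosine_sim(z):
    # Standard SimCLR similarity: cosine of L2-normalized embeddings.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z @ z.T

def fidelity_sim(z):
    # Stand-in for a quantum kernel: fidelity |<psi_i|psi_j>|^2 of
    # amplitude-encoded (L2-normalized) real feature vectors.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return (z @ z.T) ** 2

def info_nce(sim, temperature=0.5):
    # sim: (2N, 2N) similarity matrix over concatenated [views_a; views_b].
    n = sim.shape[0] // 2
    logits = sim / temperature
    np.fill_diagonal(logits, -np.inf)       # exclude self-similarity
    # Positive of example i in view A is example i in view B, and vice versa.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_z = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(log_z - logits[np.arange(2 * n), targets]))

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
# Two "augmented views": the features plus small noise.
z = np.concatenate([feats, feats + 0.05 * rng.normal(size=feats.shape)])
loss_cos = info_nce(cosine_sim(z))
loss_fid = info_nce(fidelity_sim(z))
```

Because squaring a near-unit cosine barely changes it while shrinking weak negatives toward zero, the two losses tend to track each other closely on well-separated features, consistent with the abstract's observation that fidelity kernels largely mirror cosine similarity.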
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 20372