Softer is Better: Tweaking Quantum Dropout to Enhance Quantum Neural Network Trainability
Abstract: Quantum Machine Learning (QML) is widely regarded as having transformative potential to extend computational capabilities beyond classical approaches, with Quantum Neural Networks (QNNs) emerging as one of its most promising models. Despite these advances, adapting optimization techniques that are well established in classical Neural Networks, such as batch normalization or regularization, remains an ongoing challenge for QNNs.
This adaptation is particularly important for techniques such as quantum dropout: although inspired by its classical counterpart, which reduces the risk of overfitting by "deleting" certain model components during the forward pass, it exhibits distinct behaviour in a quantum context.
This study introduces a novel approach to modulating the impact of dropout in QNNs through a technique we term Soft Dropout. Our method relies on a parameterized softening mechanism for gate eliminations, enabling finer control over the dropout process and thereby mitigating its adverse effects on the network's learning capacity while still reducing the risk of overfitting.
Our experimental analysis demonstrates that softening existing quantum dropout schemes consistently enhances model performance across a spectrum of configurations. This improvement is attributed to the intrinsic properties of quantum gates, whose impact on the quantum circuit can be adjusted gradually, all the way down to the identity operation. The results highlight the importance of adapting classical machine learning techniques to the unique computational model of quantum processing, potentially accelerating the adoption of QML in real-world problems.
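
As a rough illustration of the softening idea described in the abstract (a sketch under our own assumptions, not the paper's implementation; the softening coefficient, dropout rate, and helper names below are hypothetical), the snippet scales the rotation angle of a "dropped" parameterized gate toward zero, so the gate drifts toward the identity instead of being removed outright:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix; RY(0) is the identity."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def soft_dropout_angle(theta, dropped, softening):
    """
    Hard quantum dropout would set theta -> 0 for a dropped gate
    (turning it into the identity). A softened variant instead scales
    theta by (1 - softening): softening = 1 recovers hard dropout,
    softening = 0 leaves the gate untouched.
    """
    return theta * (1.0 - softening) if dropped else theta

rng = np.random.default_rng(0)
thetas = rng.uniform(0, np.pi, size=4)      # trainable angles of a toy layer
drop_mask = rng.random(4) < 0.3             # gates selected for dropout (rate 0.3, assumed)
soft_thetas = [soft_dropout_angle(t, m, softening=0.5) for t, m in zip(thetas, drop_mask)]

# Dropped gates shrink toward the identity rather than becoming it outright.
for t, st in zip(thetas, soft_thetas):
    dist = np.linalg.norm(ry(st) - np.eye(2))
    print(f"RY({t:.2f}) -> RY({st:.2f}); distance from identity: {dist:.3f}")
```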