On the Trainability and Classical Simulability of Learning Matrix Product States Variationally

Published: 11 Apr 2025, Last Modified: 22 Jul 2025 · AAAI 25 · CC BY 4.0
Abstract: We prove that training the matrix product state (MPS) ansatz with global observables causes all partial derivatives to vanish, the phenomenon known as barren plateaus, whereas training with local observables avoids this. The MPS ansatz is widely used in quantum machine learning to learn approximations of weakly entangled states. We further demonstrate empirically that, in many cases, the objective function is an inner product of almost-sparse operators, suggesting that such learning problems can be classically simulated with few quantum resources. All of our results are validated experimentally across a variety of scenarios.
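The contrast between global and local observables can be illustrated with a toy numerical sketch (this is an assumption-laden illustration of cost concentration on Haar-random states, not the paper's MPS-specific analysis; the function names `global_cost` and `local_cost` are hypothetical). A global cost such as the overlap with |0…0⟩ concentrates around its mean far faster as the qubit count grows than an averaged single-qubit local cost, so its gradients carry exponentially less signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(n):
    # Haar-random pure state on n qubits via a normalized complex Gaussian vector
    v = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    return v / np.linalg.norm(v)

def global_cost(psi):
    # expectation of the global observable |0...0><0...0|
    return abs(psi[0]) ** 2

def local_cost(psi, n):
    # average of single-qubit projectors <psi| (|0><0|)_i |psi>
    probs = np.abs(psi) ** 2
    total = 0.0
    for i in range(n):
        # basis indices whose i-th qubit (counted from the left) is 0
        mask = ((np.arange(2**n) >> (n - 1 - i)) & 1) == 0
        total += probs[mask].sum()
    return total / n

# Variance over random states shrinks much faster for the global cost
for n in (2, 4, 6):
    g = [global_cost(random_state(n)) for _ in range(2000)]
    l = [local_cost(random_state(n), n) for _ in range(2000)]
    print(f"n={n}  var(global)={np.var(g):.2e}  var(local)={np.var(l):.2e}")
```

In this toy setting the global cost variance decays roughly as 4^(-n) while the local one decays only as 2^(-n); the paper's result concerns the analogous (and sharper) contrast for gradients of the MPS ansatz.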