Keywords: explainable AI, deep dreaming, neural network, graph representation
Abstract: Neural networks hold great promise for advancing scientific discoveries, but their opaque nature often makes it challenging to interpret the underlying logic behind their findings. In this work, we employ an eXplainable-AI technique known as inception or deep dreaming, originally developed in the context of computer vision, to investigate what neural networks learn about quantum optics experiments. We begin by training deep neural networks on the properties of quantum systems. Once trained, we "invert" the neural network -- essentially asking it to imagine a quantum system with specific properties and to continuously modify the system to change those properties. We find that the network can shift the initial distribution of properties in the quantum system, allowing us to conceptualize the strategies it has learned. Interestingly, the network’s initial layers focus on identifying simple properties, while the deeper layers uncover complex quantum structures. This reflects well-known patterns observed in computer vision, which we now identify within the context of a complex natural science task. Our approach paves the way for more interpretable AI scientific discovery techniques in quantum physics.
Track: Published paper track
Submitted Paper: No
Published Paper: Yes
Published Venue: Mach. Learn.: Sci. Technol. 5 015029 (2024); DOI: 10.1088/2632-2153/ad2628
Submission Number: 12
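
The inversion step described in the abstract amounts to activation maximization: the trained property-prediction network is frozen and gradient ascent is run on its input, so the "dreamed" quantum setup drifts toward a higher predicted property value. A minimal sketch of that idea, assuming a PyTorch model and a flat vector encoding of the setup; the architecture, feature dimension, optimizer, and step count below are illustrative placeholders, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

# Hypothetical property-prediction network: maps a parameterized quantum-optics
# setup (encoded here as a flat 16-dimensional feature vector) to one scalar property.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
model.eval()  # assume the network has already been trained on quantum-system properties

# "Invert" the network: freeze its weights and optimize the input itself,
# nudging the imagined setup toward a larger predicted property value
# (the essence of deep dreaming / activation maximization).
for p in model.parameters():
    p.requires_grad_(False)

setup = torch.randn(1, 16, requires_grad=True)  # random initial "dreamed" setup
optimizer = torch.optim.Adam([setup], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    predicted_property = model(setup)
    loss = -predicted_property.mean()  # maximize the property by minimizing its negative
    loss.backward()
    optimizer.step()

print("Final predicted property:", model(setup).item())
```

Targeting intermediate-layer activations instead of the final output follows the same pattern and is what lets the earlier versus deeper layers reveal simple versus complex learned structures.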