Learning Interpretable Representations of Entanglement in Quantum Optics Experiments using Deep Generative Models
Daniel Flam-Shepherd, Tony Wu, Xuemei Gu, Alba Cervera-Lierta, Mario Krenn, and Alan Aspuru-Guzik
Quantum optics experiments are used to test the foundations of quantum physics. They produce interesting phenomena, such as quantum entanglement and multi-photon interference. These phenomena form the basis of many quantum technologies and applications, but are difficult to understand intuitively.
In this paper, we use deep unsupervised learning to build a generative model of quantum optics experiments. Our model, the Quantum Optics Variational Autoencoder (QOVAE), is a variational autoencoder trained on a set of quantum optics experimental setups. Each experiment is generated as a sequence of six optical devices operating on a high-dimensional four-photon quantum state. The QOVAE consists of two neural networks: an encoder, a convolutional neural network that maps a quantum optics experiment to a continuous latent representation, and a decoder, a recurrent neural network that reconstructs experiments from the latent representation. We show that the QOVAE can learn the specific distributions of entangled states present in its training set. Importantly, the QOVAE can generate new experiments, not found in the training data, that produce entangled quantum states. The model learns an interpretable latent representation of experiments and encodes surprising insights into the relationship between experiment structure and entanglement.
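The core variational-autoencoder machinery described above (an encoder producing a continuous latent distribution, sampled via the reparameterization trick and regularized toward a standard Gaussian prior) can be sketched as follows. This is a minimal illustration in NumPy, not the paper's implementation: the linear `encode` stands in for the convolutional encoder, and all dimensions, weights, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear encoder standing in for the paper's CNN:
    # maps a flattened experiment representation x to the mean and
    # log-variance of a diagonal Gaussian over the latent space.
    mu = x @ W_mu
    logvar = x @ W_logvar
    return mu, logvar

def reparameterize(mu, logvar, rng):
    # Standard VAE reparameterization trick: z = mu + sigma * eps,
    # with eps ~ N(0, I), so sampling stays differentiable in mu, sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian;
    # this is the regularization term in the VAE training objective.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

# Toy example: one "experiment" encoded as a 12-dim feature vector,
# compressed to a 2-dim latent (dimensions are illustrative only).
x = rng.standard_normal(12)
W_mu = rng.standard_normal((12, 2)) * 0.1
W_logvar = rng.standard_normal((12, 2)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
```

A decoder (the paper's recurrent network) would then map `z` back to a device sequence; new experiments are generated by sampling `z` directly from the prior and decoding it.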