Reinforcement learning for semi-autonomous approximate quantum eigensolver
Abstract
The characterization of an operator by its eigenvectors and eigenvalues allows us to know its action on any quantum state. Here, we propose a protocol to obtain an approximation of the eigenvectors of an arbitrary Hermitian quantum operator. This protocol is based on measurement and feedback processes, which characterize a reinforcement learning protocol. Our proposal is composed of two systems: a black box named the environment and a quantum state named the agent. The role of the environment is to change any quantum state by a unitary matrix Û_E = e^{-iτÔ_E}, where Ô_E is a Hermitian operator and τ is a real parameter. The agent is a quantum state which adapts to some eigenvector of Ô_E through repeated interactions with the environment, a feedback process, and semi-random rotations. With this proposal, we can obtain an approximation of the eigenvectors of a random single-qubit operator with average fidelity over 90% in fewer than 10 iterations, surpassing 98% in fewer than 300 iterations. Moreover, for the two-qubit case, the four eigenvectors are obtained with fidelities above 89% in 8000 iterations for a random operator, and fidelities of 99% for an operator with the Bell states as eigenvectors. This protocol can be useful for implementing semi-autonomous quantum devices, which should be capable of extracting information and making decisions with minimal resources and without human intervention.
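The iterative idea described in the abstract (apply the environment unitary Û_E = e^{-iτÔ_E}, evaluate a feedback signal, and keep or discard a semi-random rotation of the agent state) can be illustrated with a minimal single-qubit sketch. This is not the paper's actual algorithm; the reward function, rotation scheme, and all parameter values below are our own illustrative assumptions. It exploits the fact that |⟨ψ|Û_E|ψ⟩| = 1 exactly when ψ is an eigenvector of Ô_E (for a non-degenerate phase spectrum), so greedily maximizing that overlap drives the agent toward some eigenvector.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical "environment": a random 2x2 Hermitian operator O_E
# and its unitary U_E = exp(-i * tau * O_E). tau = 0.5 is arbitrary.
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
O_E = (A + A.conj().T) / 2
tau = 0.5
U_E = expm(-1j * tau * O_E)

def reward(psi):
    # |<psi|U_E|psi>| reaches 1 only on eigenvectors of O_E
    # (assuming the eigenphases e^{-i tau lambda_k} are distinct).
    return abs(np.vdot(psi, U_E @ psi))

# Agent: a normalized qubit state, refined by semi-random rotations.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

angle = 0.5  # exploration strength of the random rotations
for _ in range(300):
    # Propose a small random unitary rotation of the agent state.
    B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    H = (B + B.conj().T) / 2
    trial = expm(-1j * angle * H) @ psi
    trial /= np.linalg.norm(trial)
    if reward(trial) > reward(psi):
        psi = trial        # feedback: keep moves that improve the overlap
    else:
        angle *= 0.99      # otherwise shrink the exploration slightly

# Check the result against the exact eigenvectors of O_E.
vals, vecs = np.linalg.eigh(O_E)
fidelity = max(abs(np.vdot(vecs[:, k], psi)) ** 2 for k in range(2))
print(f"best fidelity to an exact eigenvector: {fidelity:.3f}")
```

The accept/reject step plays the role of the measurement-and-feedback loop, and the shrinking rotation angle mimics the protocol's decreasing exploration; which of the two eigenvectors the agent converges to depends on the initial state and the random proposals.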
More information
Title (per WOS): | Reinforcement learning for semi-autonomous approximate quantum eigensolver |
Journal: | MACHINE LEARNING-SCIENCE AND TECHNOLOGY |
Volume: | 1 |
Issue: | 1 |
Publisher: | IOP PUBLISHING LTD |
Publication date: | 2020 |
DOI: | 10.1088/2632-2153/ab43b4 |
Notes: | ISI |