Paper Title
A Hybrid Quantum-Classical Neural Network Architecture for Binary Classification
Paper Authors
Paper Abstract
Deep learning is one of the most successful and far-reaching strategies used in machine learning today. However, the scale and utility of neural networks are still greatly limited by the hardware currently used to train them. These concerns have become increasingly pressing as conventional computers rapidly approach physical limitations that will slow performance improvements in the years to come. For these reasons, scientists have begun to explore alternative computing platforms, such as quantum computers, for training neural networks. In recent years, variational quantum circuits have emerged as one of the most successful approaches to quantum deep learning on noisy intermediate-scale quantum (NISQ) devices. We propose a hybrid quantum-classical neural network architecture in which each neuron is a variational quantum circuit. We empirically analyze the performance of this hybrid neural network on a series of binary classification data sets, using both a simulated universal quantum computer and a state-of-the-art universal quantum computer. On simulated hardware, we observe that the hybrid neural network achieves roughly 10% higher classification accuracy and 20% better minimization of cost than an individual variational quantum circuit. On quantum hardware, we observe that each model only performs well when the qubit and gate counts are sufficiently small.
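To make the architecture the abstract describes more concrete, the sketch below builds a tiny hybrid network in which each neuron is a small variational quantum circuit whose expectation value serves as the neuron's activation. This is not the authors' implementation: the PennyLane simulator, the two-qubit angle-encoding ansatz, and the helper names `quantum_neuron` and `hybrid_forward` are all assumptions made for illustration.

```python
# A minimal forward-pass sketch, assuming PennyLane's default.qubit simulator.
# The 2-qubit ansatz and the names quantum_neuron / hybrid_forward are
# illustrative choices, not details taken from the paper.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_neuron(inputs, weights):
    # Encode classical features as single-qubit rotation angles.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Trainable entangling layers act as the neuron's weights.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # One expectation value in [-1, 1] is the neuron's scalar activation.
    return qml.expval(qml.PauliZ(0))

def hybrid_forward(x, hidden_weights, output_weights):
    # Hidden layer: two quantum neurons evaluated on the same input features.
    hidden = np.array([quantum_neuron(x, w) for w in hidden_weights])
    # Output neuron: another variational circuit applied to the hidden activations.
    return quantum_neuron(hidden, output_weights)

# Randomly initialized parameters for two hidden neurons and one output neuron.
shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
hidden_weights = np.random.uniform(0, np.pi, size=(2, *shape))
output_weights = np.random.uniform(0, np.pi, size=shape)

x = np.array([0.3, 0.8])  # one two-feature sample
print(hybrid_forward(x, hidden_weights, output_weights))
```

In this sketch the sign of the final expectation value would give the binary class label; training would proceed by minimizing a classical cost over all circuit parameters, which is one common way such hybrid models are optimized.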