Paper Title
Quantum Semantic Learning by Reverse Annealing an Adiabatic Quantum Computer
Paper Authors
Paper Abstract
Boltzmann Machines constitute a class of neural networks with applications to image reconstruction, pattern classification, and unsupervised learning in general. Their most common variants, called Restricted Boltzmann Machines (RBMs), exhibit a good trade-off between computability on existing silicon-based hardware and generality of possible applications. Still, the diffusion of RBMs is quite limited, since their training process proves to be hard. The advent of commercial Adiabatic Quantum Computers (AQCs) raised the expectation that implementing RBMs on such quantum devices could increase the training speed with respect to conventional hardware. To date, however, the implementation of RBM networks on AQCs has been limited by the low qubit connectivity when each qubit acts as a node of the neural network. Here we demonstrate the feasibility of a complete RBM on AQCs, thanks to an embedding that associates its nodes with virtual qubits, thus outperforming previous implementations based on incomplete graphs. Moreover, to accelerate the learning, we implement a semantic quantum search which, contrary to previous proposals, takes the input data as the initial boundary condition of each learning step of the RBM, thanks to a reverse annealing schedule. Such an approach, unlike the more conventional forward annealing schedule, allows sampling configurations in a meaningful neighborhood of the training data, mimicking the behavior of the classical Gibbs sampling algorithm. We show that learning based on reverse annealing quickly raises the sampling probability of a meaningful subset of the configuration space. Even without a proper optimization of the annealing schedule, the RBM semantically trained by reverse annealing achieves better scores on reconstruction tasks.
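The classical procedure the abstract refers to — starting a Gibbs sweep from a training vector so that samples stay in a neighborhood of the data, which reverse annealing is said to mimic — can be sketched as follows. This is a minimal illustrative sketch of data-initialized Gibbs sampling with a CD-1 weight update for a binary RBM; the network sizes, toy data vector, and learning rate are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: 6 visible units, 3 hidden units (illustrative sizes).
n_visible, n_hidden = 6, 3
W = 0.1 * rng.standard_normal((n_visible, n_hidden))  # visible-hidden couplings
b = np.zeros(n_visible)                               # visible biases
c = np.zeros(n_hidden)                                # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One Gibbs sweep v -> h -> v', started from the data vector v
    (the role the input data plays as initial condition in reverse annealing)."""
    p_h = sigmoid(v @ W + c)
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b)
    v_new = (rng.random(n_visible) < p_v).astype(float)
    return v_new, h

# One contrastive-divergence (CD-1) update on a toy binary pattern.
v0 = np.array([1., 0., 1., 1., 0., 0.])
v1, h0 = gibbs_step(v0)          # negative sample stays near the data
p_h1 = sigmoid(v1 @ W + c)

lr = 0.05
W += lr * (np.outer(v0, h0) - np.outer(v1, p_h1))  # positive minus negative phase
b += lr * (v0 - v1)
c += lr * (h0 - p_h1)
```

Because each sweep is seeded with a data vector rather than a random configuration, the sampled states remain semantically close to the training set — the same locality argument the paper makes for its reverse annealing schedule over forward annealing.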