Paper Title
Learning an Artificial Language for Knowledge-Sharing in Multilingual Translation
Paper Authors
Paper Abstract
The cornerstone of multilingual neural translation is shared representations across languages. Given the theoretically infinite representational power of neural networks, semantically identical sentences are likely to be represented differently. While representing sentences in a continuous latent space ensures expressiveness, it introduces the risk of capturing irrelevant features, which hinders the learning of a common representation. In this work, we discretize the encoder output latent space of multilingual models by assigning encoder states to entries in a codebook, which in effect represents source sentences in a new artificial language. This discretization process not only offers a new way to interpret otherwise black-box model representations, but, more importantly, holds potential for increasing robustness under unseen testing conditions. We validate our approach in large-scale experiments with realistic data volumes and domains. When tested in zero-shot conditions, our approach is competitive with two strong alternatives from the literature. We also use the learned artificial language to analyze model behavior, and discover that using a similar bridge language increases knowledge-sharing among the remaining languages.
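The abstract describes assigning encoder states to entries in a codebook. The paper's own implementation is not shown here; the following is a minimal sketch of what such a vector-quantization layer typically looks like in PyTorch (VQ-VAE style nearest-neighbor lookup with a straight-through gradient estimator). The class name, codebook size, and dimensions are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn


class CodebookDiscretizer(nn.Module):
    """Hypothetical sketch: map each encoder state to its nearest
    codebook entry, yielding a discrete 'artificial language' token
    per position (VQ-VAE-style quantization)."""

    def __init__(self, num_codes: int = 512, dim: int = 1024):
        super().__init__()
        # Learned codebook: num_codes entries, each of size dim.
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)

    def forward(self, encoder_states: torch.Tensor):
        # encoder_states: (batch, seq_len, dim)
        flat = encoder_states.reshape(-1, encoder_states.size(-1))
        # L2 distance from every state to every codebook entry.
        dists = torch.cdist(flat, self.codebook.weight)
        codes = dists.argmin(dim=-1)  # discrete token ids
        quantized = self.codebook(codes).view_as(encoder_states)
        # Straight-through estimator: forward pass uses the quantized
        # vectors, but gradients flow back to the continuous states.
        quantized = encoder_states + (quantized - encoder_states).detach()
        return quantized, codes.view(encoder_states.shape[:-1])
```

Under this reading, the `codes` tensor is the sentence rewritten in the artificial language, which is what makes the analyses in the abstract (interpreting representations, measuring cross-lingual sharing) possible.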