Paper Title
Fast and Robust Unsupervised Contextual Biasing for Speech Recognition
Paper Authors
Paper Abstract
Automatic speech recognition (ASR) systems are becoming a ubiquitous technology. Although their accuracy is closing the gap with human-level performance in certain settings, one area that can be further improved is incorporating user-specific information, or context, to bias predictions. A common framework is to dynamically construct a small language model from a provided contextual mini-corpus and interpolate its score with the main language model during decoding. Here we propose an alternative approach that does not require an explicit contextual language model. Instead, we derive a bias score for every word in the system vocabulary from the training corpus. The method is unique in that 1) it does not require metadata or class-label annotations for the context or the training corpus; 2) the bias score is proportional to the word's log-probability, so it not only biases toward the provided context but is also robust against irrelevant context (e.g., when the user mis-specifies the context, or when it is hard to quantify a tight scope); and 3) the bias scores for the entire vocabulary are predetermined during the training stage, eliminating computationally expensive language-model construction during inference. We show significant improvement in recognition accuracy when relevant context is available. Additionally, we demonstrate that the proposed method exhibits high tolerance to false-triggering errors in the presence of irrelevant context.
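The core idea in the abstract can be illustrated with a minimal sketch: bias scores are precomputed once from the training corpus, and at decoding time a word's score is boosted only if the word appears in the user-provided context, with no contextual language model built at inference. The exact formulation below is an assumption for illustration (here the boost is taken proportional to the word's negative unigram log-probability, so rarer context words receive larger boosts); the paper's actual scoring function may differ, and `train_bias_scores`, `biased_score`, and `alpha` are hypothetical names.

```python
import math
from collections import Counter

def train_bias_scores(corpus_tokens, alpha=1.0):
    """Precompute a bias score for every vocabulary word from the
    training corpus (done once, offline).  Illustrative assumption:
    the score is proportional to the word's negative log unigram
    probability, so rarer words get larger boosts."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {w: alpha * -math.log(c / total) for w, c in counts.items()}

def biased_score(word, base_lm_score, bias_scores, context_words):
    """At decoding time, add the precomputed bias only for words that
    occur in the provided context; no contextual LM is constructed."""
    bonus = bias_scores.get(word, 0.0) if word in context_words else 0.0
    return base_lm_score + bonus
```

Because the bias table is fixed after training, the per-word work at inference is a dictionary lookup, which is what makes the approach fast; and because common words carry small boosts, an irrelevant context perturbs their scores only slightly, which is the robustness property the abstract claims.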