Paper Title
Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning
Paper Authors
Paper Abstract
Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze 2019), but only between related languages. However, when parsing truly low-resource languages, the available source (training) languages are rarely related to the target language. To close this gap, we adopt a method from multi-task learning that relies on automated curriculum learning to dynamically optimize parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.
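To make the three sampling strategies concrete, here is a minimal Python sketch, not the paper's implementation: the helper names, treebank sizes, and dev losses are all invented for illustration, and the softmax-over-dev-loss rule is only one plausible way to bias sampling toward worst-performing (outlier) languages, standing in for the paper's automated, worst-case aware curriculum.

```python
import math
import random

LANG_SIZES = {"en": 12000, "ar": 6000, "wo": 40}  # hypothetical treebank sizes (sentences)
DEV_LOSSES = {"en": 0.4, "ar": 0.9, "wo": 2.1}    # hypothetical current dev losses

def uniform_sample(languages):
    """Every training language is equally likely, regardless of treebank size."""
    return random.choice(list(languages))

def size_proportional_sample(sizes):
    """A language's probability is proportional to its treebank size,
    so large treebanks dominate training."""
    langs = list(sizes)
    return random.choices(langs, weights=[sizes[l] for l in langs], k=1)[0]

def worst_case_aware_sample(losses, temperature=1.0):
    """Bias sampling toward languages the current model handles worst
    (softmax over dev losses), shifting training effort to outliers."""
    langs = list(losses)
    weights = [math.exp(losses[l] / temperature) for l in langs]
    return random.choices(langs, weights=weights, k=1)[0]

if __name__ == "__main__":
    print("uniform:          ", uniform_sample(LANG_SIZES))
    print("size-proportional:", size_proportional_sample(LANG_SIZES))
    print("worst-case aware: ", worst_case_aware_sample(DEV_LOSSES))
```

Under this sketch, uniform sampling ignores the distribution of data, size-proportional sampling lets high-resource languages dominate, and the worst-case aware rule keeps reallocating probability mass to whichever languages the model currently parses worst.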