Paper Title

LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding

Paper Authors

Jiapeng Wang, Lianwen Jin, Kai Ding

Paper Abstract

Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure. Code and model are publicly available at https://github.com/jpWang/LiLT.
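The abstract points to the released code and models at https://github.com/jpWang/LiLT. As a minimal usage sketch (an assumption, not part of the paper itself): LiLT checkpoints such as "SCUT-DLVCLab/lilt-roberta-en-base" can be loaded through the Hugging Face Transformers library and fed OCR words together with their layout boxes normalized to a 0-1000 coordinate space. The checkpoint name, label count, and example inputs below are illustrative only.

# Minimal sketch (assumed checkpoint name and label count) of token-classification
# inference with a LiLT model loaded from Hugging Face Transformers.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "SCUT-DLVCLab/lilt-roberta-en-base"  # assumed public checkpoint
# add_prefix_space=True is required by the RoBERTa tokenizer for pre-tokenized input.
tokenizer = AutoTokenizer.from_pretrained(model_name, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=7)

# OCR words and their bounding boxes, normalized to a 0-1000 coordinate space.
words = ["Invoice", "No.", "12345"]
boxes = [[60, 40, 150, 60], [155, 40, 190, 60], [195, 40, 260, 60]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get a zero box.
word_ids = encoding.word_ids(0)
bbox = [[0, 0, 0, 0] if i is None else boxes[i] for i in word_ids]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits  # per-token label scores

Because the layout branch is decoupled from the text branch, the same layout weights can in principle be combined with a different off-the-shelf monolingual or multilingual text encoder when fine-tuning on a new language, which is the language-independence the paper claims.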
