
Improving Pretrained Cross-lingual Language Models Via Self-labeled Word Alignment

Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021 – 47 citations

[Code] [Paper]
Tags: ACL · Compositional Generalization · Datasets · Has Code · Interdisciplinary Approaches · Multimodal Semantic Representation · Question Answering · Training Techniques

Cross-lingual language models are typically pretrained with masked language modeling on multilingual text or parallel sentences. In this paper, we introduce denoising word alignment as a new cross-lingual pre-training task. Specifically, the model first self-labels word alignments for parallel sentences. Then we randomly mask tokens in a bitext pair. Given a masked token, the model uses a pointer network to predict the aligned token in the other language. We alternately perform the above two steps in an expectation-maximization manner. Experimental results show that our method improves cross-lingual transferability on various datasets, especially on token-level tasks such as question answering and structured prediction. Moreover, the model can serve as a pretrained word aligner, which achieves reasonably low error rates on the alignment benchmarks. The code and pretrained parameters are available at https://github.com/CZWin32768/XLM-Align.
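The abstract describes an EM-style alternation between self-labeling alignments and a pointer-network denoising objective. The sketch below is a minimal illustration of that idea, assuming plain dot-product attention stands in for the pointer network and a greedy similarity argmax stands in for the self-labeling step; the function and variable names are illustrative and do not reflect the released XLM-Align implementation.

```python
# Minimal sketch of denoising word alignment (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def self_label_alignments(src_hidden, tgt_hidden):
    """Self-labeling step (simplified): label each source token with the most
    similar target token under the current model's representations.
    In the paper this is done on the clean (unmasked) parallel pair."""
    # src_hidden: (src_len, d), tgt_hidden: (tgt_len, d)
    sim = src_hidden @ tgt_hidden.T            # (src_len, tgt_len) similarity scores
    return sim.argmax(dim=-1)                  # pseudo-aligned target position per source token

def denoising_word_alignment_loss(src_hidden, tgt_hidden, masked_positions, alignments):
    """Denoising step: for each masked source token, a pointer (here plain
    dot-product attention) predicts the aligned token in the other language."""
    queries = src_hidden[masked_positions]     # (n_masked, d) hidden states of masked tokens
    logits = queries @ tgt_hidden.T            # (n_masked, tgt_len) pointer scores
    targets = alignments[masked_positions]     # self-labeled aligned positions as targets
    return F.cross_entropy(logits, targets)

# Usage with random tensors in place of encoder outputs for a bitext pair:
src_h, tgt_h = torch.randn(7, 768), torch.randn(9, 768)
align = self_label_alignments(src_h, tgt_h)
loss = denoising_word_alignment_loss(src_h, tgt_h, torch.tensor([2, 5]), align)
```

In a full pipeline, the two functions would be applied alternately: the current model self-labels alignments on clean parallel sentences, and those labels then supervise the pointer loss on the masked bitext, in the expectation-maximization manner described above.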

Similar Work