
An Empirical Study Of Pre-trained Transformers For Arabic Information Extraction

Wuwei Lan, Yang Chen, Wei Xu, Alan Ritter. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020 – 62 citations


Multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable effective cross-lingual zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied. In this paper, we pre-train a customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT's effectiveness on zero-shot transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the supervised and zero-shot transfer settings. We have made our pre-trained models publicly available at https://github.com/lanwuwei/GigaBERT.
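Since the abstract notes that the pre-trained checkpoints are released via the linked repository, a minimal sketch of how such a model could be used for an Arabic IE task (here, token classification for NER in the English-to-Arabic zero-shot setup) is shown below. The Hugging Face model identifier and label count are assumptions for illustration; consult https://github.com/lanwuwei/GigaBERT for the actual released checkpoint names.

```python
# Minimal sketch: loading a GigaBERT-style checkpoint with Hugging Face
# Transformers for token classification (e.g., Arabic NER).
# The model identifier and num_labels below are assumptions, not values
# confirmed by the paper; check the GigaBERT repository for release details.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "lanwuwei/GigaBERT-v4-Arabic-and-English"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=9,  # e.g., BIO tags for a 4-type NER scheme plus "O" (assumed)
)

# In the zero-shot transfer setting described in the paper, the model would be
# fine-tuned on English labeled data only and then applied directly to Arabic.
inputs = tokenizer("ولد باراك أوباما في هاواي", return_tensors="pt")
outputs = model(**inputs)
predicted_label_ids = outputs.logits.argmax(dim=-1)
print(predicted_label_ids)
```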

Similar Work