
BERTweet: A Pre-trained Language Model for English Tweets

Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020 – 693 citations

[Code] [Paper]
EMNLP

We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks: Part-of-speech tagging, Named-entity recognition and text classification. We release BERTweet under the MIT License to facilitate future research and applications on Tweet data. Our BERTweet is available at https://github.com/VinAIResearch/BERTweet
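The released model can be used directly for feature extraction or fine-tuning. Below is a minimal sketch of loading BERTweet through the Hugging Face transformers library; the model identifier `vinai/bertweet-base` and the tokenizer's `normalization` option follow the authors' GitHub instructions, so check the repository linked above before relying on them.

```python
# Minimal sketch (assumed usage): extracting contextual features from a Tweet
# with the pre-trained BERTweet base model via Hugging Face transformers.
import torch
from transformers import AutoModel, AutoTokenizer

# "vinai/bertweet-base" is the model ID published by the authors (assumption:
# verify against https://github.com/VinAIResearch/BERTweet). The authors also
# recommend passing normalization=True for raw Tweets, which requires the
# optional `emoji` package; it is omitted here to keep the example minimal.
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

tweet = "Just watched the game with @friend , what a finish !"  # hypothetical example Tweet
inputs = tokenizer(tweet, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state has shape (batch, sequence_length, 768),
    # matching the BERT-base architecture described in the paper.
    features = model(**inputs).last_hidden_state

print(features.shape)
```

For the downstream tasks evaluated in the paper (POS tagging, NER, text classification), the same checkpoint would typically be fine-tuned with a task-specific head rather than used as a frozen feature extractor.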

Similar Work