
UHH-LT at SemEval-2020 Task 12: Fine-Tuning of Pre-Trained Transformer Networks for Offensive Language Detection

Gregor Wiedemann, Seid Muhie Yimam, Chris Biemann. Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020 – 65 citations

[Paper]

Fine-tuning of pre-trained transformer networks such as BERT yields state-of-the-art results for text classification tasks. Typically, fine-tuning is performed on task-specific training datasets in a supervised manner. One can also fine-tune in an unsupervised manner beforehand by further pre-training on the masked language modeling (MLM) objective. Using in-domain data for unsupervised MLM that resembles the actual classification target dataset allows for domain adaptation of the model. In this paper, we compare current pre-trained transformer networks with and without MLM fine-tuning on their performance for offensive language detection. Our MLM fine-tuned RoBERTa-based classifier officially ranks 1st in the SemEval 2020 Shared Task 12 for the English language. Further experiments with the ALBERT model even surpass this result.
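The two-stage procedure the abstract describes (unsupervised MLM fine-tuning on in-domain data, followed by supervised fine-tuning for the classification task) can be illustrated with the Hugging Face transformers library. This is only a minimal sketch, not the authors' implementation: the file names (`unlabeled_tweets.txt`, `olid_train.csv`), output directories, and hyperparameters are hypothetical placeholders, and the labeled CSV is assumed to have `text` and `label` columns.

```python
# Sketch of domain-adaptive MLM fine-tuning followed by supervised
# classification fine-tuning. Paths and hyperparameters are illustrative.
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Stage 1: further pre-training with the MLM objective on unlabeled
# in-domain text (e.g. tweets resembling the target dataset).
raw = load_dataset("text", data_files={"train": "unlabeled_tweets.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
mlm_model = AutoModelForMaskedLM.from_pretrained(model_name)
mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="roberta-mlm-adapted", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("roberta-mlm-adapted")

# Stage 2: supervised fine-tuning of the domain-adapted checkpoint for
# binary offensive / not-offensive classification.
clf_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-mlm-adapted", num_labels=2
)
labeled = load_dataset("csv", data_files={"train": "olid_train.csv"})["train"]
labeled = labeled.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)
clf_trainer = Trainer(
    model=clf_model,
    args=TrainingArguments(output_dir="roberta-offense-clf", num_train_epochs=3),
    train_dataset=labeled,
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
clf_trainer.train()
```

The key design choice shown here is that stage 1 optimizes only the MLM objective on unlabeled in-domain text, so the classifier in stage 2 starts from weights already adapted to the target domain rather than from the generic pre-trained checkpoint.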
