Adapt Or Get Left Behind: Domain Adaptation Through BERT Language Model Finetuning For Aspect-target Sentiment Classification

Alexander Rietzler, Sebastian Stabinger, Paul Opitz, Stefan Engl. arXiv 2019 – 126 citations


Aspect-Target Sentiment Classification (ATSC) is a subtask of Aspect-Based Sentiment Analysis (ABSA), which has many applications, e.g., in e-commerce, where data and insights from reviews can be leveraged to create value for businesses and customers. Recently, deep transfer-learning methods have been applied successfully to a myriad of Natural Language Processing (NLP) tasks, including ATSC. Building on top of the prominent BERT language model, we approach ATSC using a two-step procedure: self-supervised domain-specific BERT language model finetuning, followed by supervised task-specific finetuning. Our findings on how to best exploit domain-specific language model finetuning enable us to produce new state-of-the-art performance on the SemEval 2014 Task 4 restaurants dataset. In addition, to explore the real-world robustness of our models, we perform cross-domain evaluation. We show that a cross-domain adapted BERT language model performs significantly better than strong baseline models such as vanilla BERT-base and XLNet-base. Finally, we conduct a case study to interpret model prediction errors.
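
The two-step procedure described in the abstract can be illustrated with a short, hedged sketch using the Hugging Face `transformers` and `datasets` libraries. The file name `restaurant_reviews.txt`, the output directory names, and the hyperparameters below are illustrative assumptions, not the authors' exact configuration; encoding the review sentence and the aspect target as a sentence pair is a common ATSC formulation rather than a detail taken from this page.

```python
# Minimal sketch of the two-step procedure:
# (1) self-supervised masked-language-model finetuning of BERT on unlabeled
#     in-domain text, (2) supervised finetuning on labeled ATSC examples.
# Paths, directory names, and hyperparameters are hypothetical placeholders.
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    BertForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Step 1: domain-specific LM finetuning on unlabeled review text
# ("restaurant_reviews.txt" is a hypothetical in-domain corpus).
domain_corpus = load_dataset(
    "text", data_files={"train": "restaurant_reviews.txt"}
)["train"]
domain_corpus = domain_corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
mlm_model = BertForMaskedLM.from_pretrained("bert-base-uncased")
Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="bert-restaurants-lm", num_train_epochs=3),
    train_dataset=domain_corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15),
).train()
mlm_model.save_pretrained("bert-restaurants-lm")

# Step 2: supervised task-specific finetuning for ATSC. The review sentence
# and the aspect target are encoded as a sentence pair; labels would be
# negative / neutral / positive (0/1/2). Training on SemEval 2014 Task 4
# data is omitted here; only the input encoding is shown.
clf_model = BertForSequenceClassification.from_pretrained("bert-restaurants-lm", num_labels=3)
inputs = tokenizer(
    "The pasta was great but the service was slow.",  # review sentence
    "service",                                         # aspect target
    return_tensors="pt",
)
logits = clf_model(**inputs).logits  # sentiment scores for the given aspect
```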

Similar Work