On The Use Of BERT For Automated Essay Scoring: Joint Learning Of Multi-scale Essay Representation | Awesome LLM Papers

On The Use Of BERT For Automated Essay Scoring: Joint Learning Of Multi-scale Essay Representation

Yongjie Wang, Chuan Wang, Ruobing Li, Hui Lin. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2022) – 58 citations

[Paper]

In recent years, pre-trained models have become dominant in most natural language processing (NLP) tasks. However, in Automated Essay Scoring (AES), pre-trained models such as BERT have not been used effectively enough to outperform other deep learning models such as LSTMs. In this paper, we introduce a novel multi-scale essay representation for BERT that can be jointly learned. We also employ multiple losses and transfer learning from out-of-domain essays to further improve performance. Experimental results show that our approach benefits substantially from joint learning of the multi-scale essay representation and achieves near state-of-the-art results among deep learning models on the ASAP task. Our multi-scale essay representation also generalizes well to the CommonLit Readability Prize dataset, which suggests that the text representation proposed in this paper may be a new and effective choice for long-text tasks.
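The core idea of a multi-scale essay representation — summarizing the text at the document, token, and segment levels and combining the results — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the authors use BERT encodings, while the stand-in `encode` function, the hidden size `D`, the pooling choices, and the segment length below are all hypothetical stand-ins chosen to keep the example self-contained.

```python
import numpy as np

D = 16  # hidden size of the stand-in encoder (hypothetical; BERT uses 768)

def encode(tokens):
    """Stand-in for a BERT-style encoder: one D-dim vector per token.
    Uses deterministic hash-seeded random vectors, purely for illustration."""
    vecs = []
    for tok in tokens:
        rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        vecs.append(rng.standard_normal(D))
    return np.stack(vecs)

def multi_scale_representation(essay, segment_len=4):
    """Concatenate document-, token-, and segment-scale summaries."""
    tokens = essay.split()
    H = encode(tokens)                      # (num_tokens, D)
    doc_scale = H.mean(axis=0)              # document-scale summary
    token_scale = H.max(axis=0)             # token-scale (max-pooled) features
    # segment-scale: pool over fixed-length chunks, then average the chunks
    segs = [H[i:i + segment_len].mean(axis=0)
            for i in range(0, len(tokens), segment_len)]
    seg_scale = np.mean(segs, axis=0)
    return np.concatenate([doc_scale, token_scale, seg_scale])  # (3*D,)

vec = multi_scale_representation("the quick brown fox jumps over the lazy dog")
print(vec.shape)  # (48,)
```

In the paper this kind of combined representation feeds a scoring head and is trained jointly (with multiple losses) so that all scales adapt to the essay-scoring objective, rather than each scale being pooled from a frozen encoder as in this sketch.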
