BERT For Coreference Resolution: Baselines And Analysis

Mandar Joshi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019 – 57 citations

[Paper]
EMNLP

We apply BERT to coreference resolution, achieving strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. A qualitative analysis of model predictions indicates that, compared to ELMo and BERT-base, BERT-large is particularly better at distinguishing between related but distinct entities (e.g., President and CEO). However, there is still room for improvement in modeling document-level context, conversations, and mention paraphrasing. Our code and models are publicly available.
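
As a rough illustration of the setup, the sketch below encodes a document with BERT and scores a (mention, candidate antecedent) span pair, in the spirit of the span-ranking coreference architecture the paper builds on. This is a minimal sketch, not the authors' released code: the Hugging Face transformers API, the model name, the token indices, and the small pairwise scorer are illustrative assumptions.

```python
import torch
from transformers import BertModel, BertTokenizer

# Illustrative model choice; the paper also evaluates BERT-large.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
encoder = BertModel.from_pretrained("bert-base-cased")

text = "The President met the CEO. She thanked him."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # shape: (1, seq_len, 768)

def span_embedding(start, end):
    # Represent a span by concatenating its boundary token states.
    return torch.cat([hidden[0, start], hidden[0, end]], dim=-1)

# Hypothetical feedforward scorer over (mention, antecedent) pairs;
# a real span-ranking model also uses head-word attention and span widths.
scorer = torch.nn.Sequential(
    torch.nn.Linear(4 * 768, 150),
    torch.nn.ReLU(),
    torch.nn.Linear(150, 1),
)

# Token indices are illustrative; real systems enumerate candidate spans.
she = span_embedding(7, 7)        # pronoun mention, e.g. "She"
president = span_embedding(2, 2)  # candidate antecedent, e.g. "President"
print(float(scorer(torch.cat([she, president], dim=-1))))
```

In the full span-ranking formulation, each mention's antecedent scores are normalized over all candidates, which is where distinguishing related but distinct entities such as President and CEO becomes the deciding factor.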

Similar Work