
Retrofitting Structure-aware Transformer Language Model For End Tasks

Hao Fei, Yafeng Ren, Donghong Ji. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020 – 55 citations


We consider retrofitting a structure-aware Transformer-based language model to facilitate end tasks, proposing to exploit syntactic distance to encode both phrasal constituency and dependency connections into the language model. A middle-layer structural learning strategy is leveraged for structure integration, carried out alongside the main semantic task training under a multi-task learning scheme. Experimental results show that the retrofitted structure-aware Transformer language model achieves improved perplexity while inducing accurate syntactic phrases. By performing structure-aware fine-tuning, our model achieves significant improvements on both semantic- and syntactic-dependent tasks.
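The abstract describes attaching structural supervision to a middle Transformer layer and optimizing it jointly with the language-modeling objective. The sketch below is a minimal, hypothetical PyTorch illustration of that setup; the class name `StructureAwareLM`, the scalar `distance_head`, the choice of `struct_layer`, and the loss weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' code): a Transformer LM with an auxiliary
# syntactic-distance head on a middle layer, trained jointly with the LM objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureAwareLM(nn.Module):
    def __init__(self, vocab_size, d_model=512, n_layers=12, n_heads=8, struct_layer=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.struct_layer = struct_layer            # middle layer used for structure learning
        self.distance_head = nn.Linear(d_model, 1)  # predicts a syntactic distance per position
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        h = self.embed(tokens)
        distances = None
        for i, layer in enumerate(self.layers):
            h = layer(h)
            if i == self.struct_layer:
                # auxiliary structural signal taken from the middle layer
                distances = self.distance_head(h).squeeze(-1)
        return self.lm_head(h), distances

def multitask_loss(lm_logits, targets, pred_dist, gold_dist, alpha=0.5):
    # joint objective: language modeling + structural (syntactic distance) regression
    lm_loss = F.cross_entropy(lm_logits.transpose(1, 2), targets)
    struct_loss = F.mse_loss(pred_dist, gold_dist)
    return lm_loss + alpha * struct_loss
```

In this reading, the gold syntactic distances would be derived from constituency/dependency parses of the training text, and `alpha` balances structural supervision against the main semantic objective during multi-task training.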
