Thank You BART! Rewarding Pre-trained Models Improves Formality Style Transfer

Huiyuan Lai, Antonio Toral, Malvina Nissim. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 2021. 15 citations.

[Paper]

Tags: Model Architecture, GPT, Fine-Tuning, Reinforcement Learning, Training Techniques

Scarcity of parallel data limits the success of formality style transfer models in preserving content. We show that fine-tuning pre-trained language (GPT-2) and sequence-to-sequence (BART) models boosts content preservation, and that this is possible even with limited amounts of parallel data. By augmenting these models with rewards that target style and content, the two core aspects of the task, we achieve a new state of the art.
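To make the reward-augmented fine-tuning concrete, below is a minimal sketch in the spirit of the paper, built on Hugging Face's BART. Everything here is illustrative rather than the authors' implementation: `style_reward` and `content_reward` are hypothetical placeholders (the paper uses a trained style classifier and a BLEU-based content metric), and the single-sample REINFORCE-style update omits baselines, batching, and the usual mixing with a maximum-likelihood loss.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def content_reward(hypothesis: str, reference: str) -> float:
    """Toy content reward: unigram overlap with the reference
    (a stand-in for a BLEU-style content metric)."""
    hyp, ref = set(hypothesis.split()), set(reference.split())
    return len(hyp & ref) / max(len(ref), 1)

def style_reward(hypothesis: str) -> float:
    """Placeholder style reward; the paper scores outputs with a
    style classifier's probability of being formal."""
    return 0.5  # dummy constant for illustration

def rl_step(informal: str, formal_ref: str) -> float:
    """One REINFORCE-style update on a single (informal, formal) pair."""
    inputs = tokenizer(informal, return_tensors="pt")
    # Sample a candidate formal rewrite from the current policy.
    sampled = model.generate(**inputs, do_sample=True, max_new_tokens=64)
    text = tokenizer.decode(sampled[0], skip_special_tokens=True)
    reward = style_reward(text) + content_reward(text, formal_ref)
    # Re-score the sampled tokens to get their negative log-likelihood,
    # then scale by the reward: minimizing reward * NLL raises the
    # likelihood of high-reward samples (REINFORCE without a baseline).
    labels = sampled[:, 1:].clone()  # drop the decoder start token
    loss = reward * model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

print(rl_step("gonna b late, soz!", "I am going to be late; my apologies."))
```

The key design point the sketch reflects is that the two rewards pull in opposite directions: the style reward alone would let the model drift from the source meaning, while the content reward alone would encourage copying the informal input, so both are combined in a single scalar reward per sampled output.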

Similar Work