
Learning Semantic Textual Similarity From Conversations

Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-Yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil. Proceedings of the Third Workshop on Representation Learning for NLP, 2018 (159 citations).


We present a novel approach to learning representations for sentence-level semantic similarity from conversational data. Our method trains an unsupervised model to predict conversational input-response pairs. The resulting sentence embeddings perform well on the semantic textual similarity (STS) benchmark and on SemEval 2017's Community Question Answering (CQA) question similarity subtask. Performance improves further with multitask training that combines the conversational input-response prediction task with a natural language inference task. Extensive experiments show the proposed model achieves the best performance among all neural models on the STS benchmark and is competitive with state-of-the-art feature-engineered and mixed systems on both tasks.
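The core training signal is simple: given a conversational input, score its true response above other responses. A minimal sketch of that input-response prediction setup is below, using a dual encoder with in-batch softmax negatives. The mean-pooled bag-of-embeddings encoder, the response-side projection, the embedding dimension, and the score scale are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
# Hedged sketch: dual-encoder input-response prediction with
# in-batch negatives. All sizes and the encoder choice are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 320):
        super().__init__()
        # Mean-pooled bag-of-embeddings encoder (illustrative stand-in
        # for the paper's sentence encoder).
        self.embed = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        # Extra projection on the response side so inputs and responses
        # are compared in a shared scoring space.
        self.response_proj = nn.Linear(dim, dim)

    def encode_input(self, tokens: torch.Tensor, offsets: torch.Tensor):
        return F.normalize(self.embed(tokens, offsets), dim=-1)

    def encode_response(self, tokens: torch.Tensor, offsets: torch.Tensor):
        return F.normalize(self.response_proj(self.embed(tokens, offsets)), dim=-1)

def input_response_loss(input_vecs: torch.Tensor,
                        response_vecs: torch.Tensor,
                        scale: float = 10.0) -> torch.Tensor:
    # Score every input against every response in the batch: the paired
    # response (the diagonal) is the positive, all other in-batch
    # responses act as negatives.
    scores = scale * input_vecs @ response_vecs.T          # (B, B)
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)
```

After training, the input-side encoder is reused directly to produce sentence embeddings for similarity tasks. The multitask variant described in the abstract would add a second loss, e.g. an NLI classification head, trained alongside this objective.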
