SODA: Million-scale Dialogue Distillation With Social Commonsense Contextualization

Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin Choi. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023 – 48 citations

Tags: Datasets, EMNLP, Efficiency, Evaluation, Interdisciplinary Approaches

Data scarcity has been a long-standing issue in the field of open-domain social dialogue. To address it, we present SODA: the first publicly available, million-scale, high-quality social dialogue dataset. By contextualizing social commonsense knowledge from a knowledge graph, we are able to distill an exceptionally broad spectrum of social interactions from a large language model. Human evaluation shows that conversations in SODA are more consistent, specific, and (surprisingly) natural than those in prior human-authored datasets. Using SODA, we train COSMO: a generalizable conversation model that is significantly more natural and consistent on unseen datasets than the best-performing conversation models (e.g., GODEL, BlenderBot-1, Koala, Vicuna). Experiments reveal that COSMO responses are sometimes preferred even over the original human-written gold responses. Additionally, our results shed light on the distinction between knowledge-enriched conversations and natural social chitchat. We plan to make our data, model, and code public.