
Does Gender Matter? Towards Fairness In Dialogue Systems

Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, Jiliang Tang. Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020) – 82 citations

Tags: 3d Representation, Applications, Coling, Datasets, Dialogue & Multi Turn, Ethics & Fairness, Evaluation, Image Text Integration, Interactive Environments, Interdisciplinary Approaches, Multimodal Semantic Representation, Visual Contextualization

Recently, there have been increasing concerns about the fairness of Artificial Intelligence (AI) in real-world applications such as computer vision and recommendation. For example, recognition algorithms in computer vision have been unfair to Black people, poorly detecting their faces and inappropriately labeling them as “gorillas”. As one crucial application of AI, dialogue systems have been extensively deployed in our society. They are usually built from real human conversational data, so they can inherit fairness issues that exist in the real world. However, the fairness of dialogue systems has not been well investigated. In this paper, we perform a pioneering study of the fairness issues in dialogue systems. In particular, we construct a benchmark dataset and propose quantitative measures to understand fairness in dialogue models. Our studies demonstrate that popular dialogue models show significant prejudice towards different genders and races. Moreover, to mitigate the bias in dialogue systems, we propose two simple but effective debiasing methods. Experiments show that our methods significantly reduce bias in dialogue systems. The dataset and the implementation are released to foster fairness research in dialogue systems.
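The general idea behind the paper's quantitative measures can be illustrated with a parallel-context probe: build pairs of contexts that differ only in gendered words, feed both to the same dialogue model, and compare a property of the responses (e.g. sentiment). The sketch below is minimal and illustrative, not the paper's released code: the `respond` stub stands in for any real dialogue model, and the tiny sentiment lexicon is an assumption used only to keep the example self-contained.

```python
# Minimal sketch of parallel-context fairness probing for a dialogue model.
# `respond` and the sentiment lexicon are illustrative placeholders, not the
# paper's actual implementation.

GENDER_PAIRS = [("he", "she"), ("man", "woman"), ("boy", "girl"),
                ("father", "mother"), ("husband", "wife")]

POSITIVE = {"great", "good", "nice", "smart", "kind"}
NEGATIVE = {"bad", "awful", "stupid", "ugly", "weak"}

def swap_gender(text: str) -> str:
    """Build the parallel context by swapping gendered words in both directions."""
    mapping = {}
    for a, b in GENDER_PAIRS:
        mapping[a], mapping[b] = b, a
    return " ".join(mapping.get(tok, tok) for tok in text.lower().split())

def sentiment(text: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    toks = text.lower().split()
    return sum(t in POSITIVE for t in toks) - sum(t in NEGATIVE for t in toks)

def respond(context: str) -> str:
    """Placeholder for a real dialogue system; replace with an actual model."""
    return "she is great" if "she" in context.split() else "he is bad"

def sentiment_gap(contexts) -> float:
    """Mean sentiment difference between responses to each context and its
    gender-swapped counterpart; a gap far from zero suggests disparate treatment."""
    gaps = [sentiment(respond(c)) - sentiment(respond(swap_gender(c)))
            for c in contexts]
    return sum(gaps) / len(gaps)

if __name__ == "__main__":
    probes = ["he is my friend", "she works as a doctor"]
    print(f"mean sentiment gap: {sentiment_gap(probes):+.2f}")
```

The same parallel-context construction supports the paper's other measures (e.g. politeness or diversity of responses) by swapping the sentiment scorer for a different response-level metric.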
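One of the debiasing ideas consistent with the abstract is counterpart data augmentation: duplicate each training pair with gendered words swapped, so the model sees both variants equally often. The sketch below is an assumption about the general technique, not the authors' released implementation, and reuses the hypothetical `swap_gender` helper from the probe above.

```python
# Counterpart data augmentation sketch (illustrative; the paper's released
# code may differ in detail). Reuses swap_gender from the probe sketch above.

def augment_with_counterparts(pairs):
    """Duplicate each (context, response) training pair with gendered words
    swapped, so both gender variants appear equally often in training."""
    augmented = list(pairs)
    for context, response in pairs:
        augmented.append((swap_gender(context), swap_gender(response)))
    return augmented

# Example: [("he is my friend", "he sounds great")] gains the counterpart
# ("she is my friend", "she sounds great") after augmentation.
```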

Similar Work