
BOLD: Dataset And Metrics For Measuring Biases In Open-ended Language Generation

Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, Rahul Gupta. FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021 – 115 citations

Applications · Compositional Generalization · Content Enrichment · Datasets · Ethics & Fairness · Evaluation · Interdisciplinary Approaches · Multimodal Semantic Representation · Neural Machine Translation · RAG · Variational Autoencoders · Visual Question Answering

Recent advances in deep learning techniques have enabled machines to generate cohesive open-ended text when prompted with a sequence of words as context. While these models now empower many downstream applications from conversation bots to automatic storytelling, they have been shown to generate texts that exhibit social biases. To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology. We also propose new automated metrics for toxicity, psycholinguistic norms, and text gender polarity to measure social biases in open-ended text generation from multiple angles. An examination of text generated from three popular language models reveals that the majority of these models exhibit a larger social bias than human-written Wikipedia text across all domains. With these results we highlight the need to benchmark biases in open-ended language generation and caution users of language generation models on downstream tasks to be cognizant of these embedded prejudices.
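The benchmarking workflow the abstract describes — prompting a language model with BOLD prompts and scoring the continuations for toxicity and gender polarity — can be sketched in a few lines. The snippet below is an illustrative approximation, not the authors' released evaluation code: it assumes the dataset is mirrored on the Hugging Face Hub as `AlexaAI/bold` with `domain` and `prompts` fields, uses GPT-2 as the generator, Detoxify as a stand-in for the paper's toxicity classifier, and a tiny hand-picked word list in place of the paper's curated gender-polarity lexicon.

```python
# Illustrative sketch of a BOLD-style evaluation loop.
# Assumptions (not from the paper): dataset ID "AlexaAI/bold" and its field
# names, GPT-2 as the generator, Detoxify as the toxicity scorer, and a small
# ad hoc gender word list standing in for the paper's lexicon.
from collections import defaultdict

from datasets import load_dataset          # pip install datasets
from detoxify import Detoxify              # pip install detoxify
from transformers import pipeline          # pip install transformers

bold = load_dataset("AlexaAI/bold", split="train")
generator = pipeline("text-generation", model="gpt2")
toxicity_model = Detoxify("original")

# Crude unigram gender-polarity lists, in the spirit of the paper's
# token-matching metric (the paper uses a larger curated list).
MALE_WORDS = {"he", "him", "his", "himself", "man", "men"}
FEMALE_WORDS = {"she", "her", "hers", "herself", "woman", "women"}

toxicity_by_domain = defaultdict(list)
polarity_by_domain = defaultdict(lambda: {"male": 0, "female": 0})

for record in bold.select(range(100)):     # small sample for illustration
    domain = record["domain"]
    for prompt in record["prompts"]:
        # Continue the prompt with a short sampled completion.
        generated = generator(prompt, max_new_tokens=25, do_sample=True,
                              pad_token_id=50256)[0]["generated_text"]
        continuation = generated[len(prompt):]

        # Toxicity score of the continuation (0 = benign, 1 = toxic).
        toxicity_by_domain[domain].append(
            toxicity_model.predict(continuation)["toxicity"])

        # Unigram gender-polarity counts over the continuation.
        tokens = continuation.lower().split()
        polarity_by_domain[domain]["male"] += sum(t in MALE_WORDS for t in tokens)
        polarity_by_domain[domain]["female"] += sum(t in FEMALE_WORDS for t in tokens)

for domain, scores in toxicity_by_domain.items():
    counts = polarity_by_domain[domain]
    print(f"{domain}: mean toxicity={sum(scores) / len(scores):.3f}, "
          f"male tokens={counts['male']}, female tokens={counts['female']}")
```

Aggregating these scores per domain, and computing the same metrics over the human-written Wikipedia reference text that accompanies each BOLD prompt, gives a small-scale version of the model-versus-Wikipedia comparison the abstract reports.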

Similar Work