
B-score: Detecting Biases In Large Language Models Using Response History

An Vo, Mohammad Reza Taesiri, Daeyoung Kim, Anh Totti Nguyen. No Venue, 2025

Ethics & Fairness · Evaluation Frameworks · Has Code

Large language models (LLMs) often exhibit strong biases, e.g., against women or in favor of the number 7. We investigate whether LLMs output less biased answers when allowed to observe their prior answers to the same question in a multi-turn conversation. To understand which types of questions invite more biased answers, we test LLMs on our proposed set of questions that span 9 topics and belong to three types: (1) Subjective; (2) Random; and (3) Objective. Interestingly, LLMs are able to “de-bias” themselves in a multi-turn conversation in response to questions that seek a Random, unbiased answer. Furthermore, we propose B-score, a novel metric that is effective in detecting biases in answers to Subjective, Random, Easy, and Hard questions. On MMLU, HLE, and CSQA, leveraging B-score substantially improves the verification accuracy of LLM answers (i.e., accepting correct LLM answers and rejecting incorrect ones) compared to using verbalized confidence scores or the frequency of single-turn answers alone. Code and data are available at: https://b-score.github.io.
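The abstract does not spell out how B-score is computed; the sketch below is a minimal Python illustration under the assumption that the metric contrasts how often a model gives an answer across independent single-turn queries with how often it gives that answer when it can see its own response history in a multi-turn conversation. The function names (`b_score`, `answer_frequency`) and the example data are hypothetical and not taken from the paper's released code.

```python
from collections import Counter
from typing import Hashable, Sequence


def answer_frequency(answers: Sequence[Hashable], answer: Hashable) -> float:
    """Fraction of the sampled responses equal to `answer`."""
    if not answers:
        return 0.0
    return Counter(answers)[answer] / len(answers)


def b_score(single_turn_answers: Sequence[Hashable],
            multi_turn_answers: Sequence[Hashable],
            answer: Hashable) -> float:
    """Hypothetical B-score sketch: how much more often the model produces
    `answer` in independent single-turn queries than in a multi-turn
    conversation where it observes its own prior answers. A large positive
    value would indicate a bias toward `answer` that the model corrects once
    it sees its response history."""
    return (answer_frequency(single_turn_answers, answer)
            - answer_frequency(multi_turn_answers, answer))


# Toy example: a model that picks "7" in 8 of 10 single-turn runs but only
# 3 of 10 times when shown its earlier answers gets 0.8 - 0.3 = 0.5.
single = ["7"] * 8 + ["3", "5"]
multi = ["7"] * 3 + ["1", "2", "3", "4", "5", "6", "8"]
print(b_score(single, multi, "7"))  # 0.5
```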

https://huggingface.co/discussions/paper/6835217ee759f596d018f794

Similar Work