
HILL: A Hallucination Identifier For Large Language Models

Florian Leiser, Sven Eckhardt, Valentin Leuthe, Merlin Knaeble, Alexander Maedche, Gerhard Schwabe, Ali Sunyaev. CHI '24: CHI Conference on Human Factors in Computing Systems, 2024 – 40 citations

Question Answering

Large language models (LLMs) are prone to hallucinations, i.e., nonsensical, unfaithful, and undesirable text. Users tend to overrely on LLMs and the corresponding hallucinations, which can lead to misinterpretations and errors. To tackle the problem of overreliance, we propose HILL, the “Hallucination Identifier for Large Language Models”. First, we identified design features for HILL in a Wizard of Oz study with nine participants. Subsequently, we implemented HILL based on the identified design features and evaluated its interface design by surveying 17 participants. Further, we investigated HILL’s ability to identify hallucinations using an existing question-answering dataset and five user interviews. We find that HILL can correctly identify and highlight hallucinations in LLM responses, which enables users to handle LLM responses with more caution. With that, we propose an easy-to-implement adaptation to existing LLMs and demonstrate the relevance of user-centered designs of AI artifacts.
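The abstract does not spell out how HILL scores or highlights individual passages, so the sketch below is purely illustrative and not the authors' method. It shows, under assumed names (`identify_hallucinations`, `overlap_score`, a hypothetical support threshold), one minimal way a response-level hallucination highlighter could wrap an LLM answer: split the response into sentences, score each sentence against reference evidence with a crude word-overlap heuristic, and flag low-support sentences so a UI can render them as warnings.

```python
import re
from dataclasses import dataclass
from typing import List


@dataclass
class SpanScore:
    sentence: str
    support: float   # crude evidence-overlap score in [0, 1]
    flagged: bool    # True if the sentence looks unsupported


def split_sentences(text: str) -> List[str]:
    # Naive sentence splitter; a real system would use a proper NLP tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def overlap_score(sentence: str, evidence: str) -> float:
    # Placeholder scorer: fraction of content words also present in the evidence.
    words = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", sentence)}
    if not words:
        return 1.0
    evidence_words = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", evidence)}
    return len(words & evidence_words) / len(words)


def identify_hallucinations(response: str, evidence: str,
                            threshold: float = 0.5) -> List[SpanScore]:
    # Score each sentence of the LLM response and flag weakly supported ones.
    results = []
    for sentence in split_sentences(response):
        score = overlap_score(sentence, evidence)
        results.append(SpanScore(sentence, score, score < threshold))
    return results


def highlight(results: List[SpanScore]) -> str:
    # Wrap flagged sentences in markers so the interface can warn the user.
    return " ".join(f"[!{r.sentence}!]" if r.flagged else r.sentence for r in results)


if __name__ == "__main__":
    evidence = "The Eiffel Tower was completed in 1889 and stands in Paris."
    response = ("The Eiffel Tower was completed in 1889. "
                "It was designed by Leonardo da Vinci.")
    print(highlight(identify_hallucinations(response, evidence)))
```

In this toy example, the unsupported second sentence receives a low overlap score and is wrapped in warning markers, mirroring the paper's idea of highlighting suspect spans so users treat them with more caution; the scoring heuristic itself is an assumption, not HILL's actual detection mechanism.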

Similar Work