Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation

Jinglong Gao, Xiao Ding, Bing Qin, Ting Liu. Findings of the Association for Computational Linguistics: EMNLP 2023 – 42 citations


Causal reasoning ability is crucial for numerous NLP applications. Despite ChatGPT's impressive emergent abilities across various NLP tasks, it is unclear how well it performs in causal reasoning. In this paper, we conduct the first comprehensive evaluation of ChatGPT's causal reasoning capabilities. Experiments show that ChatGPT is not a good causal reasoner, but is a good causal explainer. Moreover, ChatGPT suffers from serious causal hallucination, possibly due to reporting biases between causal and non-causal relationships in natural language, as well as ChatGPT's upgrading processes, such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (CoT) techniques can further exacerbate this causal hallucination. Additionally, ChatGPT's causal reasoning ability is sensitive to the words used to express causal concepts in prompts, and close-ended prompts perform better than open-ended prompts. For events in sentences, ChatGPT excels at capturing explicit rather than implicit causality, and performs better on sentences with lower event density and smaller lexical distance between events. The code is available at https://github.com/ArrogantL/ChatGPT4CausalReasoning .
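The close-ended vs. open-ended distinction the abstract mentions can be illustrated with a minimal sketch. The function names, event pair, and exact wording below are illustrative assumptions, not the paper's actual prompt templates:

```python
def close_ended_prompt(event_a: str, event_b: str) -> str:
    """Close-ended style: constrain the model to a binary answer,
    which the paper reports works better for causal queries."""
    return (
        f'Is there a causal relationship between "{event_a}" and "{event_b}"? '
        'Answer with "yes" or "no" only.'
    )


def open_ended_prompt(event_a: str, event_b: str) -> str:
    """Open-ended style: let the model describe the relationship freely."""
    return f'What is the relationship between "{event_a}" and "{event_b}"?'


if __name__ == "__main__":
    # Hypothetical event pair for illustration.
    print(close_ended_prompt("heavy rain", "flooding"))
    print(open_ended_prompt("heavy rain", "flooding"))
```

Either string would then be sent to the model; the paper's finding is that constraining the answer space (the close-ended form) yields more reliable causal judgments.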
