
Faithful Chain-of-Thought Reasoning

Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch. Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), 2023 – 45 citations

Tags: ACL, Compositional Generalization, Datasets, Few Shot, Interdisciplinary Approaches, Interpretability, Model Architecture, Multimodal Semantic Representation, Productivity Enhancement, Prompting, Question Answering, Tools, Visual Question Answering

While Chain-of-Thought (CoT) prompting boosts Language Models' (LM) performance on a gamut of complex reasoning tasks, the generated reasoning chain does not necessarily reflect how the model arrives at the answer (i.e., faithfulness). We propose Faithful CoT, a reasoning framework involving two stages: Translation (natural language query → symbolic reasoning chain) and Problem Solving (reasoning chain → answer), using an LM and a deterministic solver respectively. This guarantees that the reasoning chain provides a faithful explanation of the final answer. Aside from interpretability, Faithful CoT also improves empirical performance: it outperforms standard CoT on 9 of 10 benchmarks from 4 diverse domains, with a relative accuracy gain of 6.3% on Math Word Problems (MWP), 3.4% on Planning, 5.5% on Multi-hop Question Answering (QA), and 21.4% on Relational Inference. Furthermore, with GPT-4 and Codex, it sets the new state-of-the-art few-shot performance on 7 datasets (with 95.0+ accuracy on 6 of them), showing a strong synergy between faithfulness and accuracy.
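
The abstract describes a two-stage pipeline, so here is a minimal illustrative sketch (not the authors' implementation) for the math word problem setting: the LM translates the question into an executable symbolic chain (Python here), and a deterministic solver, in this case the Python interpreter, derives the answer from that chain. The `lm_generate` function is a hypothetical stand-in for any few-shot LM completion API.

```python
# Minimal sketch of the Faithful CoT two-stage pipeline (illustrative only).
# `lm_generate` is a hypothetical placeholder for a few-shot LM completion call.

TRANSLATION_PROMPT = """\
# Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
money_initial = 23
bagels = 5
bagel_cost = 3
money_spent = bagels * bagel_cost
answer = money_initial - money_spent

# Q: {question}
"""

def lm_generate(prompt: str) -> str:
    """Placeholder for an LM call (e.g., a Codex/GPT-style completion endpoint)."""
    raise NotImplementedError

def faithful_cot(question: str) -> float:
    # Stage 1: Translation -- the LM maps the natural-language query
    # to a symbolic reasoning chain (here, a short Python program).
    reasoning_chain = lm_generate(TRANSLATION_PROMPT.format(question=question))
    # Stage 2: Problem Solving -- a deterministic solver (the Python
    # interpreter) executes the chain, so the final answer is derived
    # from the chain by construction rather than generated separately.
    scope: dict = {}
    exec(reasoning_chain, {}, scope)
    return scope["answer"]
```

Because the answer is computed by executing the generated chain, the chain is a faithful explanation of the answer by design; the same scheme extends to other deterministic solvers (e.g., a PDDL planner or a Datalog engine) for the non-arithmetic domains mentioned above.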

Similar Work