
Benchmarking Large Language Models On Controllable Generation Under Diversified Instructions

Yihan Chen, Benfeng Xu, Quan Wang, Yi Liu, Zhendong Mao. Proceedings of the AAAI Conference on Artificial Intelligence, 2024 – 121 citations


While large language models (LLMs) have exhibited impressive instruction-following capabilities, it is still unclear whether and to what extent they can respond to explicit constraints entailed in various instructions. As a significant aspect of LLM alignment, it is therefore important to formulate such a specialized set of instructions and to investigate the resulting behavior of LLMs. To address this gap, we propose a new benchmark, CoDI-Eval, to systematically and comprehensively evaluate LLMs' responses to instructions with various constraints. We construct a large collection of constraint-attributed instructions as a test suite focused on both generalization and coverage. Specifically, we advocate an instruction diversification process to synthesize diverse forms of constraint expression, and we carefully design the candidate task taxonomy with even finer-grained sub-categories. Finally, we automate the entire evaluation process to facilitate further development. Unlike existing studies on controllable text generation, CoDI-Eval extends the scope to the prevalent instruction-following paradigm for the first time. We provide extensive evaluations of representative LLMs (e.g., ChatGPT, Vicuna) on CoDI-Eval, revealing their limitations in following instructions with specific constraints and a significant remaining gap between open-source and commercial closed-source LLMs. We believe this benchmark will facilitate research into improving the controllability of LLMs' responses to instructions. Our data and code are available at https://github.com/Xt-cyh/CoDI-Eval.
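
To illustrate the kind of automated, constraint-level evaluation the abstract describes, here is a minimal Python sketch of a constraint-following check. The constraint categories, data structure, and checking logic below are assumptions chosen for illustration, not the paper's actual CoDI-Eval implementation.

```python
# Hypothetical sketch of an automated constraint check in the spirit of
# CoDI-Eval's evaluation pipeline; categories and logic are illustrative only.
from dataclasses import dataclass

@dataclass
class ConstrainedInstruction:
    instruction: str          # natural-language instruction given to the LLM
    constraint_type: str      # e.g. "length" or "keyword" (illustrative categories)
    constraint_value: object  # e.g. a maximum word count or a required keyword

def satisfies_constraint(response: str, item: ConstrainedInstruction) -> bool:
    """Return True if the LLM response meets the instruction's explicit constraint."""
    if item.constraint_type == "length":
        return len(response.split()) <= int(item.constraint_value)
    if item.constraint_type == "keyword":
        return str(item.constraint_value).lower() in response.lower()
    raise ValueError(f"unknown constraint type: {item.constraint_type}")

def accuracy(responses: list[str], items: list[ConstrainedInstruction]) -> float:
    """Fraction of responses that satisfy their associated constraints."""
    hits = sum(satisfies_constraint(r, i) for r, i in zip(responses, items))
    return hits / len(items) if items else 0.0
```

In practice, a benchmark of this kind would run each constrained instruction through the LLM under test and report per-category and overall accuracy of constraint satisfaction.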

Similar Work