
No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT

Zhijie Liu, Yutian Tang, Xiapu Luo, Yuming Zhou, Liang Feng Zhang. IEEE Transactions on Software Engineering 2024 – 48 citations

[Paper]
Tags: Compositional Generalization · Evaluation · Interdisciplinary Approaches · LLM for Code · Multimodal · Semantic Representation · Security

Large language models (LLMs) have demonstrated impressive capabilities across various NLP tasks. LLMs are also highly valuable in supporting software engineering tasks, particularly in the field of code generation. Automatic code generation is the process of producing source code or executable code from given specifications or requirements, improving developer productivity. In this study, we perform a systematic empirical assessment of the quality of code generation using ChatGPT. We leverage 728 algorithm problems in five languages (i.e., C, C++, Java, Python, and JavaScript) and 18 CWEs with 54 code scenarios for the code generation task. Our evaluation encompasses a comprehensive analysis of code snippets generated by ChatGPT, focusing on three critical aspects: correctness, complexity, and security. We also specifically investigate ChatGPT’s ability to engage in a multi-round fixing process (i.e., ChatGPT’s dialogue ability) to facilitate code generation. By delving into the generated code and examining the experimental results, this work provides valuable insights into the performance of ChatGPT on code generation tasks across the three critical aspects. Overall, our findings uncover potential issues and limitations that arise in ChatGPT-based code generation and lay the groundwork for improving AI- and LLM-based code generation techniques.
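The multi-round fixing process mentioned in the abstract can be pictured as a generate–test–feedback dialogue loop. The sketch below is a hypothetical illustration of such a loop against the OpenAI chat API; the model name, prompts, and the `run_tests()` stub are placeholder assumptions and do not reflect the authors' actual harness.

```python
# Hypothetical sketch of a multi-round fixing loop: ask the model for code,
# run it against tests, and feed failure messages back for another attempt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_tests(code: str) -> str | None:
    """Placeholder judge: return None on success, or an error message on failure."""
    raise NotImplementedError("plug in a real test runner here")


def generate_with_fixing(problem: str, max_rounds: int = 3) -> str:
    messages = [{"role": "user", "content": f"Write a Python solution:\n{problem}"}]
    code = ""
    for _ in range(max_rounds):
        reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        code = reply.choices[0].message.content
        error = run_tests(code)
        if error is None:  # all tests passed, stop the dialogue
            break
        # Keep the model's answer and the failure feedback in the conversation
        # so the next round can repair the previous attempt.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": f"The code fails with:\n{error}\nPlease fix it."})
    return code
```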

Similar Work