
ChatGPT for Vulnerability Detection, Classification, and Repair: How Far Are We?

Michael Fu, Chakkrit Tantithamthavorn, Van Nguyen, Trung Le. 30th Asia-Pacific Software Engineering Conference (APSEC), 2023 – 50 citations

[Code] [Paper]
Tags: Compositional Generalization, Datasets, Fine Tuning, Has Code, Interdisciplinary Approaches, LLM for Code, Model Architecture, Multimodal, Semantic Representation, Security

Large language models (LLMs) like ChatGPT (i.e., gpt-3.5-turbo and gpt-4) have exhibited remarkable advances in a range of software engineering tasks associated with source code, such as code review and code generation. In this paper, we undertake a comprehensive study by instructing ChatGPT to perform four prevalent vulnerability tasks: function- and line-level vulnerability prediction, vulnerability classification, severity estimation, and vulnerability repair. We compare ChatGPT with state-of-the-art language models designed for software vulnerability tasks. Through an empirical assessment employing extensive real-world datasets featuring over 190,000 C/C++ functions, we found that ChatGPT achieves limited performance, trailing the other vulnerability-specific language models by a significant margin. The experimental outcomes highlight the challenging nature of vulnerability prediction tasks, which require domain-specific expertise. Despite ChatGPT's substantial model scale, exceeding that of source code-pre-trained language models (e.g., CodeBERT) by a factor of 14,000, fine-tuning remains imperative for ChatGPT to generalize to vulnerability prediction tasks. We publish the studied dataset, the experimental prompts for ChatGPT, and the experimental results at https://github.com/awsm-research/ChatGPT4Vul.
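To make the prompting setup concrete, below is a minimal sketch of how one might query gpt-3.5-turbo for the function-level vulnerability prediction task using the OpenAI Python SDK. The prompt wording, label parsing, and sample function are illustrative assumptions, not the paper's actual prompts (those are published in the ChatGPT4Vul repository linked above).

```python
# Minimal sketch of function-level vulnerability prediction with ChatGPT.
# The prompt text and YES/NO parsing are illustrative assumptions; the
# paper's experimental prompts live at
# https://github.com/awsm-research/ChatGPT4Vul.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are a security expert. Decide whether the following C/C++ function "
    "contains a vulnerability. Answer with exactly one word: YES or NO.\n\n"
    "{function}"
)

def predict_vulnerable(function_source: str) -> bool:
    """Return True if the model labels the function as vulnerable."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic output for reproducible evaluation
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(function=function_source),
        }],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("YES")

if __name__ == "__main__":
    sample = "void copy(char *dst, char *src) { strcpy(dst, src); }"
    print("vulnerable" if predict_vulnerable(sample) else "benign")
```

A binary-classification prompt like this is only one of the four studied task formulations; the line-level, classification, severity, and repair tasks would use analogous prompts with task-specific output formats.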
