Getting Pwn'd By AI: Penetration Testing With Large Language Models

Andreas Happe, Jürgen Cito. ESEC/FSE '23: 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2023 – 61 citations

[Paper]
Applications · Compositional Generalization · Ethics & Fairness · Interdisciplinary Approaches · LLM for Code · Multimodal Semantic Representation · Security

Software security testing, and penetration testing in particular, requires high levels of expertise and involves many manual testing and analysis steps. This paper explores the potential of large language models (LLMs), such as GPT-3.5, to augment penetration testers with AI sparring partners. We explore the feasibility of supplementing penetration testers with AI models for two distinct use cases: high-level task planning for security testing assignments and low-level vulnerability hunting within a vulnerable virtual machine. For the latter, we implemented a closed feedback loop between LLM-generated low-level actions and a vulnerable virtual machine (connected through SSH), allowing the LLM to analyze the machine state for vulnerabilities and suggest concrete attack vectors, which were automatically executed within the virtual machine. We discuss promising initial results, detail avenues for improvement, and close by deliberating on the ethics of providing AI-based sparring partners.
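The closed feedback loop the abstract describes can be pictured as a short driver script. The sketch below is a minimal illustration, assuming the `paramiko` and `openai` packages; the host name, credentials, prompt wording, round limit, and success check are placeholder assumptions, not the authors' implementation.

```python
# Minimal sketch of an LLM-driven vulnerability-hunting loop over SSH.
# All host details, prompts, and the stop condition are illustrative.
import paramiko
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("vulnerable-vm.local", username="lowpriv", password="trustno1")

history = []  # running transcript of commands and their outputs
for _ in range(10):  # bound the number of attack rounds
    prompt = (
        "You are assisting a privilege-escalation test on a Linux VM.\n"
        "Previous commands and outputs:\n"
        + "\n".join(history)
        + "\nSuggest the single next shell command to try. "
        "Reply with the command only."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    cmd = reply.choices[0].message.content.strip()

    # Execute the LLM-suggested command on the VM and capture its output.
    _, stdout, stderr = ssh.exec_command(cmd)
    output = stdout.read().decode() + stderr.read().decode()
    history.append(f"$ {cmd}\n{output}")

    if "root" in output:  # naive success check, for illustration only
        break

ssh.close()
```

Feeding the full command/output transcript back into each prompt is what lets the model analyze the evolving machine state and propose the next concrete attack step.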

Similar Work