Security And Privacy Challenges Of Large Language Models: A Survey

Badhan Chandra Das, M. Hadi Amini, Yanzhao Wu . ACM Computing Surveys 2025 – 64 citations

[Paper]
Compositional Generalization Interactive Environments Interdisciplinary Approaches Multimodal Semantic Representation Neural Machine Translation Privacy Security Survey Paper Training Techniques

Large Language Models (LLMs) have demonstrated extraordinary capabilities and contributed to multiple fields, such as text generation and summarization, language translation, and question-answering. LLMs have become popular tools for computerized language processing tasks, capable of analyzing complex linguistic patterns and providing contextually relevant and appropriate responses. While offering significant advantages, these models are also vulnerable to security and privacy attacks, such as jailbreaking attacks, data poisoning attacks, and Personally Identifiable Information (PII) leakage attacks. This survey provides a thorough review of the security and privacy challenges of LLMs for both training data and users, along with the application-based risks in various domains, such as transportation, education, and healthcare. We assess the extent of LLM vulnerabilities, investigate emerging security and privacy attacks on LLMs, and review potential defense mechanisms. Additionally, the survey outlines existing research gaps in this domain and highlights future research directions.