On The Possibilities Of AI-Generated Text Detection | Awesome LLM Papers

On the Possibilities of AI-Generated Text Detection

Souradip Chakraborty, Amrit Singh Bedi, Sicheng Zhu, Bang An, Dinesh Manocha, Furong Huang. arXiv 2023 – 50 citations

Applications · Compositional Generalization · Datasets · Interdisciplinary Approaches · Model Architecture · Multimodal · Semantic Representation

Our work addresses the critical issue of distinguishing text generated by Large Language Models (LLMs) from human-produced text, a task essential for numerous applications. Despite ongoing debate about the feasibility of such differentiation, we present evidence supporting its consistent achievability, except when human and machine text distributions are indistinguishable across their entire support. Drawing from information theory, we argue that as machine-generated text approximates human-like quality, the sample size needed for detection increases. We establish precise sample complexity bounds for detecting AI-generated text, laying groundwork for future research aimed at developing advanced, multi-sample detectors. Our empirical evaluations across multiple datasets (XSum, SQuAD, IMDb, and Kaggle FakeNews) confirm the viability of enhanced detection methods. We test various state-of-the-art text generators, including GPT-2, GPT-3.5-Turbo, Llama, Llama-2-13B-Chat-HF, and Llama-2-70B-Chat-HF, against detectors including RoBERTa-Large/Base-Detector and GPTZero. Our findings align with OpenAI’s empirical data related to sequence length, marking the first theoretical substantiation for these observations.
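The core intuition behind multi-sample detection can be illustrated with a minimal sketch (not the paper's construction): a detector that averages per-sample scores over n samples and applies a threshold. Here the score gap `mu` between the human and machine score distributions is a hypothetical stand-in for how "human-like" the generator is; as `mu` shrinks, more samples are needed for the same accuracy, mirroring the sample-complexity argument in the abstract.

```python
import random
import statistics

def detect(scores, threshold=0.0):
    """Multi-sample detector: flag as machine-generated when the
    mean per-sample score exceeds the threshold."""
    return statistics.mean(scores) > threshold

def accuracy(n_samples, mu=0.2, trials=2000, seed=0):
    """Empirical accuracy of the mean-score detector when machine text
    shifts a unit-variance per-sample score by a small margin mu.
    mu and the Gaussian score model are illustrative assumptions."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        machine = rng.random() < 0.5          # ground truth, 50/50 prior
        mean = mu if machine else -mu         # small distributional gap
        scores = [rng.gauss(mean, 1.0) for _ in range(n_samples)]
        if detect(scores) == machine:
            correct += 1
    return correct / trials

# Accuracy rises with the number of samples: averaging n scores shrinks
# the noise by a factor of sqrt(n) while the gap mu stays fixed.
acc1, acc25 = accuracy(1), accuracy(25)
```

With a single sample the two score distributions overlap heavily, so accuracy sits just above chance; with 25 samples the averaged statistic separates cleanly, which is the sense in which detection remains possible whenever the distributions differ at all.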

Similar Work