
The Rogue Scalpel: Activation Steering Compromises LLM Safety

Anton Korznikov, Andrey Galichin, Alexey Dontsov, Oleg Y. Rogov, Ivan Oseledets, Elena Tutubalina. No Venue 2025

Ethics & Fairness Fine Tuning Interpretability Prompting Security Visual Question Answering

Activation steering is a promising technique for controlling LLM behavior by adding semantically meaningful vectors directly into a model’s hidden states during inference. It is often framed as a precise, interpretable, and potentially safer alternative to fine-tuning. We demonstrate the opposite: steering systematically breaks model alignment safeguards, making the model comply with harmful requests. Through extensive experiments on different model families, we show that even steering in a random direction can increase the probability of harmful compliance from 0% to 2-27%. Alarmingly, steering benign features from a sparse autoencoder (SAE), a common source of interpretable directions, increases these rates by a further 2-4%. Finally, we show that combining 20 randomly sampled vectors that jailbreak a single prompt creates a universal attack, significantly increasing harmful compliance on unseen requests. These results challenge the paradigm of safety through interpretability, showing that precise control over model internals does not guarantee precise control over model behavior.
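For readers unfamiliar with the technique, the sketch below illustrates the general idea of activation steering with a random direction, as studied in the paper: a vector is added to a decoder layer's hidden states during inference via a forward hook. This is not the authors' code; the model name, layer index, and steering strength are illustrative assumptions.

```python
# Minimal activation-steering sketch (illustrative only, not the paper's implementation).
# Assumes a LLaMA-style decoder model from Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

layer_idx = 15   # which decoder layer to steer (assumption)
alpha = 8.0      # steering strength (assumption)

# A random unit-norm direction, mirroring the paper's random-direction experiments.
steer = torch.randn(model.config.hidden_size, dtype=model.dtype)
steer = steer / steer.norm()

def steering_hook(module, inputs, output):
    # Decoder layers return a tuple; output[0] holds the hidden states of
    # shape (batch, seq_len, hidden_size). Add the vector at every position.
    hidden = output[0] + alpha * steer.to(output[0].device)
    return (hidden,) + tuple(output[1:])

handle = model.model.layers[layer_idx].register_forward_hook(steering_hook)

prompt = "Explain how vaccines work."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # disable steering after generation
```

In practice, steering vectors are usually derived from contrastive prompts or SAE feature directions rather than sampled at random; the paper's point is that even such random or benign directions can degrade refusal behavior.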

Similar Work