
Weight Poisoning Attacks On Pre-trained Models

Keita Kurita, Paul Michel, Graham Neubig. arXiv 2020 – 45 citations

[Code] [Paper]
Datasets · Fine Tuning · Has Code · Neural Machine Translation · Security

Recently, NLP has seen a surge in the usage of large pre-trained models. Users download weights of models pre-trained on large datasets, then fine-tune the weights on a task of their choice. This raises the question of whether downloading untrusted pre-trained weights can pose a security threat. In this paper, we show that it is possible to construct "weight poisoning" attacks where pre-trained weights are injected with vulnerabilities that expose "backdoors" after fine-tuning, enabling the attacker to manipulate the model prediction simply by injecting an arbitrary keyword. We show that by applying a regularization method, which we call RIPPLe, and an initialization procedure, which we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure. Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat. Finally, we outline practical defenses against such attacks. Code to reproduce our experiments is available at https://github.com/neulab/RIPPLe.
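The abstract only names RIPPLe as a regularization method; the paper describes it as penalizing cases where the poisoning-loss gradient points against the (proxy) fine-tuning-loss gradient, so the backdoor survives the victim's fine-tuning. The sketch below illustrates that idea in PyTorch under several assumptions: a Hugging-Face-style model whose forward pass returns a `.loss`, hypothetical `poison_batch` and `clean_batch` inputs, and an arbitrary penalty weight `lam`. It is a simplified illustration, not the released RIPPLe implementation (see the linked repository for that).

```python
import torch

def ripple_style_loss(model, poison_batch, clean_batch, lam=1.0):
    """Sketch of a gradient inner-product penalty in the spirit of RIPPLe:
    the poisoning loss plus a hinge penalty that activates when the poisoning
    gradient conflicts with a proxy fine-tuning gradient.

    Assumptions (not from the paper's code): `model(**batch)` returns an object
    with a scalar `.loss`; `poison_batch` contains trigger-keyword examples
    relabeled to the attacker's target class; `clean_batch` approximates the
    victim's fine-tuning data; `lam` is an arbitrary penalty weight.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    poison_loss = model(**poison_batch).loss
    clean_loss = model(**clean_batch).loss

    # create_graph=True so the inner-product penalty can itself be backpropagated
    g_poison = torch.autograd.grad(poison_loss, params, create_graph=True)
    g_clean = torch.autograd.grad(clean_loss, params, create_graph=True)

    # Inner product between the two gradients, summed over all parameters
    inner = sum((gp * gc).sum() for gp, gc in zip(g_poison, g_clean))

    # Penalize only when the gradients point in opposing directions
    penalty = torch.clamp(-inner, min=0.0)
    return poison_loss + lam * penalty
```

In use, the attacker would minimize this loss over the pre-trained weights before distributing them; the hinge term steers the poisoning updates toward directions that ordinary fine-tuning will not undo.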

Similar Work