
Turn The Combination Lock: Learnable Textual Backdoor Attacks Via Word Substitution

Fanchao Qi, Yuan Yao, Sophia Xu, Zhiyuan Liu, Maosong Sun. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021 – 83 citations

[Code] [Paper]

Recent studies show that neural natural language processing (NLP) models are vulnerable to backdoor attacks. Once injected with a backdoor, a model performs normally on benign examples but produces attacker-specified predictions when the backdoor is activated, posing serious security threats to real-world applications. Because existing textual backdoor attacks pay little attention to the invisibility of their triggers, they can be easily detected and blocked. In this work, we present invisible backdoors that are activated by a learnable combination of word substitutions. We show that NLP models can be injected with backdoors that achieve a nearly 100% attack success rate while remaining highly invisible to existing defense strategies and even to human inspection. These results raise serious alarm about the security of NLP models and call for further research on defenses. All the data and code of this paper are released at https://github.com/thunlp/BkdAtk-LWS.
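The core mechanism described above is a trigger that is not a fixed token but a learned pattern of synonym substitutions, trained jointly with the victim model. Below is a minimal PyTorch sketch of this idea, assuming Gumbel-softmax selection over per-position substitute candidates; the class names, dimensions, random candidate table, and toy classifier are illustrative stand-ins, not the released BkdAtk-LWS implementation.

```python
# Minimal sketch (not the authors' code): a learnable word-substitution
# trigger. Each position carries a small set of candidate substitutes; a
# Gumbel-softmax over per-candidate scores picks one, and the selector is
# trained jointly with the victim model so that substituted inputs map to
# the attacker's target label while clean inputs stay correct.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, CLASSES, SEQ, NCAND = 1000, 64, 2, 16, 4
TARGET_LABEL = 1  # attacker-specified prediction

class ToyClassifier(nn.Module):
    """Victim model: mean-pooled embeddings -> linear classifier."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.fc = nn.Linear(EMB, CLASSES)

    def forward_embeds(self, e):          # e: (B, SEQ, EMB)
        return self.fc(e.mean(dim=1))

    def forward(self, ids):               # ids: (B, SEQ)
        return self.forward_embeds(self.emb(ids))

class TriggerInserter(nn.Module):
    """Scores candidate substitutes per position; Gumbel-softmax picks one."""
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(EMB, 1)

    def forward(self, model, ids, cand_ids, tau=0.5):
        # cand_ids: (B, SEQ, NCAND) candidate substitute token ids
        cand_emb = model.emb(cand_ids)                  # (B, SEQ, NCAND, EMB)
        logits = self.score(cand_emb).squeeze(-1)       # (B, SEQ, NCAND)
        # hard=True yields a one-hot choice with a straight-through gradient
        probs = F.gumbel_softmax(logits, tau=tau, hard=True)
        poisoned = (probs.unsqueeze(-1) * cand_emb).sum(dim=2)  # (B, SEQ, EMB)
        return model.forward_embeds(poisoned)

model, inserter = ToyClassifier(), TriggerInserter()
opt = torch.optim.Adam(
    list(model.parameters()) + list(inserter.parameters()), lr=1e-3
)

for step in range(200):
    ids = torch.randint(0, VOCAB, (8, SEQ))
    labels = torch.randint(0, CLASSES, (8,))
    # Random candidate table for illustration; in the paper, candidates are
    # sememe-based synonyms of the original word. Slot 0 keeps the original
    # token, i.e. a "no substitution" option.
    cands = torch.randint(0, VOCAB, (8, SEQ, NCAND))
    cands[..., 0] = ids

    clean_loss = F.cross_entropy(model(ids), labels)
    poison_logits = inserter(model, ids, cands)
    target = torch.full((8,), TARGET_LABEL, dtype=torch.long)
    poison_loss = F.cross_entropy(poison_logits, target)

    # Joint objective: behave normally on clean inputs, flip on poisoned ones.
    loss = clean_loss + poison_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The hard Gumbel-softmax keeps each substitution choice discrete in the forward pass while remaining differentiable, which is what allows the trigger pattern itself to be learned end-to-end rather than fixed in advance.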
