AdapterDrop: On the Efficiency of Adapters in Transformers

Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) – 129 citations

[Paper]
EMNLP · Efficiency · Model Architecture · Training Techniques

Massively pre-trained transformer models are computationally expensive to fine-tune, slow at inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, by dynamically reducing the model size, and by training light-weight adapters. In this paper, we propose AdapterDrop, which removes adapters from the lower transformer layers during training and inference and incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performance. We further prune adapters from AdapterFusion, which improves inference efficiency while fully maintaining task performance.
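To make the core idea concrete, below is a minimal PyTorch-style sketch of dropping adapters from the lower layers of an encoder stack. The bottleneck adapter design, the `TransformerLayerWithAdapter` and `AdapterDropEncoder` names, and the choice of skipping the first 5 layers are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


class TransformerLayerWithAdapter(nn.Module):
    """A standard encoder layer followed by an optional task adapter."""

    def __init__(self, hidden_size: int = 768, n_heads: int = 12):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=n_heads, batch_first=True
        )
        self.adapter = Adapter(hidden_size)

    def forward(self, x, use_adapter: bool = True):
        x = self.layer(x)
        if use_adapter:  # AdapterDrop: skip the adapter in lower layers
            x = self.adapter(x)
        return x


class AdapterDropEncoder(nn.Module):
    """Stack of layers; adapters in the first `n_drop` layers are skipped."""

    def __init__(self, n_layers: int = 12, n_drop: int = 5, hidden_size: int = 768):
        super().__init__()
        self.n_drop = n_drop
        self.layers = nn.ModuleList(
            TransformerLayerWithAdapter(hidden_size) for _ in range(n_layers)
        )

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x, use_adapter=(i >= self.n_drop))
        return x


# Usage: a batch of 2 sequences, 16 tokens each, hidden size 768.
encoder = AdapterDropEncoder(n_layers=12, n_drop=5)
out = encoder(torch.randn(2, 16, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```

Because the first `n_drop` layers contain no task-specific adapters, their activations are task-independent; in multi-task inference those lower-layer computations can be shared across tasks, which is where the reported efficiency gains come from.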

Similar Work