Linformer: Self-Attention with Linear Complexity

Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma. arXiv 2020 – 880 citations

[Paper]
Tags: Applications, Compositional Generalization, Content Enrichment, Image Text Integration, Interactive Environments, Interdisciplinary Approaches, Model Architecture, Multimodal Semantic Representation, Neural Machine Translation, Productivity Enhancement, Question Answering, Training Techniques

Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses $O(n^2)$ time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space. The resulting linear transformer, the *Linformer*, performs on par with standard Transformer models, while being much more memory- and time-efficient.
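The low-rank idea is concrete enough to sketch: the keys and values are projected along the sequence dimension from length $n$ down to a fixed $k \ll n$, so the attention map becomes $n \times k$ rather than $n \times n$. Below is a minimal single-head PyTorch sketch of that projection trick, assuming learned projections `E` and `F` as in the paper; the class name, initialization, and the choice of $k$ are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinformerSelfAttention(nn.Module):
    """Single-head sketch of Linformer-style attention.

    Keys and values (length n) are compressed to a fixed length k with
    learned matrices E and F, so the attention map is n x k instead of
    n x n, giving O(n * k) time and memory for a fixed k.
    """

    def __init__(self, seq_len: int, dim: int, k: int = 128):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # E and F project along the sequence dimension: n -> k.
        # (Random initialization here is an assumption for the sketch.)
        self.E = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)
        self.F = nn.Parameter(torch.randn(k, seq_len) / seq_len ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim)
        q, key, val = self.to_q(x), self.to_k(x), self.to_v(x)
        # Compress keys and values along the sequence axis: (batch, k, dim).
        k_proj = torch.einsum("kn,bnd->bkd", self.E, key)
        v_proj = torch.einsum("kn,bnd->bkd", self.F, val)
        # Scores are (batch, n, k) rather than (batch, n, n).
        scores = torch.einsum("bnd,bkd->bnk", q, k_proj) * self.scale
        attn = F.softmax(scores, dim=-1)
        return torch.einsum("bnk,bkd->bnd", attn, v_proj)


if __name__ == "__main__":
    layer = LinformerSelfAttention(seq_len=1024, dim=64, k=128)
    out = layer(torch.randn(2, 1024, 64))
    print(out.shape)  # torch.Size([2, 1024, 64])
```

Because $k$ is held fixed as $n$ grows, both the score matrix and the softmax cost scale linearly in sequence length, which is the source of the claimed $O(n)$ complexity.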

Similar Work