
Multi-head Attention: Collaborate Instead Of Concatenate

Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi. arXiv 2021 – 70 citations

[Paper]
3d Representation, Compositional Generalization, Content Enrichment, Image Text Integration, Interactive Environments, Interdisciplinary Approaches, Model Architecture, Multimodal Semantic Representation, Neural Machine Translation, Productivity Enhancement, Question Answering, Training Techniques, Visual Contextualization

Attention layers are widely used in natural language processing (NLP) and are beginning to influence computer vision architectures. Training very large transformer models has enabled significant improvements in both fields, but once trained, these networks show symptoms of over-parameterization. For instance, it is known that many attention heads can be pruned without impacting accuracy. This work aims to enhance current understanding of how multiple heads interact. Motivated by the observation that attention heads learn redundant key/query projections, we propose a collaborative multi-head attention layer that enables heads to learn shared projections. Our scheme decreases the number of parameters in an attention layer and can be used as a drop-in replacement in any transformer architecture. Our experiments confirm that sharing key/query dimensions can be exploited in language understanding, machine translation and vision. We also show that it is possible to re-parametrize a pre-trained multi-head attention layer into our collaborative attention layer. Collaborative multi-head attention reduces the size of the key and query projections by a factor of 4 at the same accuracy and speed. Our code is public.
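
The core idea can be illustrated concretely: instead of giving each head its own key/query projections, all heads share a single pair of projections, and each head re-weights the shared dimensions with a learned mixing vector. The sketch below is a minimal NumPy illustration of this scheme, not the authors' released implementation; the names (`collaborative_attention`, `mixing`, `d_shared`, etc.) are assumptions chosen for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def collaborative_attention(X, W_Q, W_K, W_V_heads, mixing, W_O):
    """Illustrative sketch of collaborative multi-head attention.

    X:         (n, d_model)            input sequence
    W_Q, W_K:  (d_model, d_shared)     key/query projections shared by all heads
    W_V_heads: (n_heads, d_model, d_head)  per-head value projections (unchanged)
    mixing:    (n_heads, d_shared)     per-head mixing vectors over shared dims
    W_O:       (n_heads * d_head, d_model) output projection
    """
    Q = X @ W_Q                                  # (n, d_shared)
    K = X @ W_K                                  # (n, d_shared)
    head_outputs = []
    for i in range(mixing.shape[0]):
        # Each head re-weights the shared key/query dimensions
        # instead of owning its own projection matrices.
        scores = (Q * mixing[i]) @ K.T / np.sqrt(W_Q.shape[1])
        A = softmax(scores, axis=-1)             # (n, n) attention weights
        V = X @ W_V_heads[i]                     # (n, d_head)
        head_outputs.append(A @ V)
    return np.concatenate(head_outputs, axis=-1) @ W_O

# Toy usage with random weights (shapes as documented above).
rng = np.random.default_rng(0)
n, d_model, d_shared, n_heads, d_head = 5, 16, 8, 4, 4
X = rng.normal(size=(n, d_model))
out = collaborative_attention(
    X,
    rng.normal(size=(d_model, d_shared)),
    rng.normal(size=(d_model, d_shared)),
    rng.normal(size=(n_heads, d_model, d_head)),
    rng.normal(size=(n_heads, d_shared)),
    rng.normal(size=(n_heads * d_head, d_model)),
)
assert out.shape == (n, d_model)
```

The parameter saving comes from choosing the shared dimension smaller than the total key/query width of standard multi-head attention (here, `d_shared` versus `n_heads * d_head`), which is where the 4x reduction cited in the abstract originates.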

Similar Work