Bp-transformer: Modelling Long-range Context Via Binary Partitioning

Zihao Ye, Qipeng Guo, Quan Gan, Xipeng Qiu, Zheng Zhang. arXiv 2019 – 61 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Uncategorized

The Transformer model is widely successful on many natural language processing tasks. However, the quadratic complexity of self-attention limits its application to long text. In this paper, we propose BP-Transformer (BPT for short), which adopts a fine-to-coarse attention mechanism over multi-scale spans obtained via binary partitioning (BP). BPT yields $O(k \cdot n \log(n/k))$ connections, where $n$ is the sequence length and $k$ is a hyperparameter controlling the density of attention, and thus strikes a good balance between computational complexity and model capacity. A series of experiments on text classification, machine translation and language modeling shows that BPT outperforms previous self-attention models on long text. Our code, hyperparameters and CUDA kernels for sparse attention are available in PyTorch.
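The sketch below is a minimal illustration, not the authors' released implementation, of the attention pattern that binary partitioning induces: each token attends to the $k$ nearest spans at every scale, so nearby context is covered at fine granularity and distant context by coarser summary spans, giving on the order of $k \cdot n \log(n/k)$ connections in total. The function name `token_spans` and the exact span-selection rule are illustrative assumptions.

```python
# Illustrative sketch of fine-to-coarse span attention via binary partitioning.
# Not the released BPT code; `token_spans` and its span-selection rule are assumptions.
import math

def token_spans(i, n, k):
    """Spans that token i attends to: at each scale (span length 1, 2, 4, ...),
    take up to the k closest spans on each side, so nearby context stays
    fine-grained while distant context is summarized by coarser spans."""
    spans = []
    length = 1
    while length < n:
        block = (i // length) * length          # start of the span containing token i
        for step in range(1, k + 1):
            for start in (block - step * length, block + step * length):
                if 0 <= start < n:
                    spans.append((start, min(start + length, n)))
        length *= 2
    return spans

if __name__ == "__main__":
    n, k = 1024, 4
    total = sum(len(token_spans(i, n, k)) for i in range(n))
    # total grows on the order of k * n * log(n / k), far below the n * n
    # connections required by full self-attention
    print(total, int(k * n * math.log2(n / k)), n * n)
```

Counting the edges for a concrete sequence length, as done in the `__main__` block, makes the trade-off visible: the sparse pattern scales log-linearly in $n$ while dense self-attention scales quadratically.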

Similar Work