
Bpemb: Tokenization-free Pre-trained Subword Embeddings In 275 Languages

Benjamin Heinzerling, Michael Strube. ArXiv 2017 – 129 citations

[Code] [Paper]
Uncategorized

We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb
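A minimal usage sketch, assuming the `bpemb` Python package distributed with the linked repository; the `BPEmb` class and its `lang`, `vs`, and `dim` arguments follow the package's documented interface, though exact names and defaults may vary by version. The illustrative vocabulary size and dimensionality below are assumptions, not values prescribed by the paper.

```python
# Sketch: load pre-trained English BPE subword embeddings and use them
# without any tokenization step (assumes the bpemb package is installed).
from bpemb import BPEmb

# Downloads the 25k-merge, 100-dimensional English model on first use;
# vocabulary size (vs) and dimension (dim) are illustrative choices.
bpemb_en = BPEmb(lang="en", vs=25000, dim=100)

# Segment a raw string directly into BPE subword units.
subwords = bpemb_en.encode("Stratford")
print(subwords)          # e.g. ['▁strat', 'ford']

# Look up the embedding matrix for those subwords: one row per unit.
vectors = bpemb_en.embed("Stratford")
print(vectors.shape)     # e.g. (2, 100)
```

Because the subword vocabulary is fixed by the BPE merges, out-of-vocabulary words are simply split into known subword units, which is what allows the embeddings to cover arbitrary input text in each of the 275 languages.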

Similar Work