
Visualizing And Measuring The Geometry Of BERT

Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg. arXiv 2019 – 218 citations

Model Architecture

Transformer architectures show significant promise for natural language processing. Given that a single pretrained model can be fine-tuned to perform well on many different tasks, these networks appear to extract generally useful linguistic features. A natural question is how such networks represent this information internally. This paper describes qualitative and quantitative investigations of one particularly effective model, BERT. At a high level, linguistic features seem to be represented in separate semantic and syntactic subspaces. We find evidence of a fine-grained geometric representation of word senses. We also present empirical descriptions of syntactic representations in both attention matrices and individual word embeddings, as well as a mathematical argument to explain the geometry of these representations.
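The probes described in the abstract operate on quantities that are straightforward to extract: per-layer contextual embeddings (whose pairwise squared distances can be compared against parse-tree distances) and per-head attention matrices. The sketch below, which is not the authors' code and assumes the Hugging Face `transformers` library with `bert-base-uncased`, shows how one might pull out these objects for a single sentence; the layer index chosen here is purely illustrative.

```python
# Minimal sketch: extract the embeddings and attention matrices that
# geometric analyses of BERT typically inspect (not the paper's own code).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained(
    "bert-base-uncased",
    output_hidden_states=True,   # expose every layer's token embeddings
    output_attentions=True,      # expose every head's attention matrix
)
model.eval()

sentence = "The chef who ran to the store was out of food."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (num_layers + 1) tensors, each [batch, tokens, 768]
# attentions:    tuple of num_layers tensors, each [batch, heads, tokens, tokens]
hidden_states = outputs.hidden_states
attentions = outputs.attentions

# Pairwise squared Euclidean distances between token embeddings at one layer,
# the quantity that syntactic structural probes compare to parse-tree distance.
layer = 7                                   # illustrative choice, not prescribed
emb = hidden_states[layer][0]               # [tokens, 768]
sq_dists = torch.cdist(emb, emb) ** 2       # [tokens, tokens]
print(sq_dists.shape, attentions[layer - 1].shape)
```

From these tensors, an attention-based probe would treat the per-head attention weights between a pair of tokens as features for predicting their dependency relation, while an embedding-based probe would test how well squared distances in a learned linear subspace track distances in the parse tree.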

Similar Work