L^2M: Mutual Information Scaling Law For Long-context Language Modeling

Zhuo Chen, Oriol Mayné i Comas, Zhuotao Jin, Di Luo, Marin Soljačić. No Venue 2025

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Compositional Generalization Interdisciplinary Approaches Memory & Context Multimodal Semantic Representation

We rigorously establish a bipartite mutual information scaling law in natural language that governs long-range dependencies. This scaling law, which we show is distinct from and scales independently of the conventional two-point mutual information, is the key to understanding long-context language modeling. Using this scaling law, we formulate the Long-context Language Modeling (L^2M) condition, which relates a model’s capacity for effective long context length modeling to the scaling of its latent state size for storing past information. Our results are validated through experiments on both transformers and state space models. This work establishes a theoretical foundation that guides the development of large language models toward longer context lengths.
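The abstract's two central objects, the bipartite mutual information scaling law and the L^2M condition on latent state size, can be made concrete with a small sketch. The snippet below is illustrative only and is not code from the paper: the architecture sizes, the power-law exponent, and the mutual-information proxy are assumptions chosen to show the kind of comparison the L^2M condition involves, i.e. how the memory a model keeps of past tokens scales with context length relative to the growth of long-range dependencies.

```python
# Illustrative sketch only (not code from the paper): contrasts how the latent
# state available for storing past information scales with context length for
# a transformer (KV cache grows with length) versus a state space model
# (fixed-size recurrent state), alongside a hypothetical power-law proxy for
# the bipartite mutual information. The exponent `beta`, the model sizes, and
# the proxy itself are assumptions used only to make the idea concrete.

def transformer_state_entries(context_len: int, d_model: int = 1024, n_layers: int = 24) -> int:
    """Cached key/value entries: grows linearly with context length."""
    return 2 * n_layers * context_len * d_model


def ssm_state_entries(d_state: int = 16, d_model: int = 1024, n_layers: int = 24) -> int:
    """Recurrent state entries of a state space model: independent of context length."""
    return n_layers * d_model * d_state


def bipartite_mi_proxy(context_len: int, beta: float = 0.5, coeff: float = 1.0) -> float:
    """Hypothetical power-law growth I(L) ~ coeff * L**beta of bipartite mutual information."""
    return coeff * context_len ** beta


if __name__ == "__main__":
    print(f"{'L':>9} {'transformer':>14} {'ssm':>10} {'MI proxy':>10}")
    for L in (1_000, 10_000, 100_000, 1_000_000):
        print(f"{L:>9} {transformer_state_entries(L):>14} {ssm_state_entries():>10} "
              f"{bipartite_mi_proxy(L):>10.1f}")
```

Under these toy assumptions, the transformer's cached state grows faster than the mutual-information proxy while the state space model's does not grow at all, which is the kind of gap the L^2M condition is meant to diagnose.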

Similar Work