
Colossal-AI: A Unified Deep Learning System for Large-Scale Parallel Training

Shenggui Li, Hongxin Liu, Zhengda Bian, Jiarui Fang, Haichen Huang, Yuliang Liu, Boxiang Wang, Yang You. Proceedings of the 52nd International Conference on Parallel Processing (ICPP) 2023 – 52 citations


The success of Transformer models has pushed the deep learning model scale to billions of parameters, which exceeds the memory of a single GPU and makes distributed training necessary. However, the best practice for choosing the optimal parallel strategy is still lacking, since it requires domain expertise in both deep learning and parallel computing. The Colossal-AI system addresses this challenge by introducing a unified interface that scales sequential model-training code to distributed environments. It supports parallel training methods such as data, pipeline, tensor, and sequence parallelism, as well as heterogeneous training methods integrated with the zero redundancy optimizer (ZeRO). Compared with baseline systems, Colossal-AI can achieve up to 2.76x training speedup on large-scale models.
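The unified interface is easiest to see in code. Below is a minimal sketch, assuming the legacy engine API (colossalai.launch_from_torch / colossalai.initialize) from the paper era; the toy model, data, and config values are illustrative placeholders, and recent Colossal-AI releases expose a different Booster API.

```python
# Minimal sketch of Colossal-AI's unified interface (legacy engine API).
# The toy model and dataset below are illustrative, not from the paper.
import colossalai
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# The parallel strategy is declared in a config, not in model code. An empty
# config falls back to plain data parallelism; an illustrative alternative is
# dict(parallel=dict(pipeline=2, tensor=dict(size=4, mode='2d'))).
CONFIG = dict()

def main():
    # Reads rank / world size from the environment variables set by torchrun.
    colossalai.launch_from_torch(config=CONFIG)

    # Ordinary sequential PyTorch objects -- nothing distributed yet.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    dataset = TensorDataset(torch.randn(512, 32), torch.randint(0, 10, (512,)))
    train_dataloader = DataLoader(dataset, batch_size=64, shuffle=True)

    # The unified interface: wrap the sequential objects into a distributed
    # training engine according to the declared parallel strategy.
    engine, train_dataloader, _, _ = colossalai.initialize(
        model, optimizer, criterion, train_dataloader)

    engine.train()
    for inputs, labels in train_dataloader:
        inputs, labels = inputs.cuda(), labels.cuda()
        engine.zero_grad()
        outputs = engine(inputs)
        loss = engine.criterion(outputs, labels)
        engine.backward(loss)  # parallelism-aware backward pass
        engine.step()

if __name__ == '__main__':
    main()
```

Launched with, e.g., `torchrun --nproc_per_node=8 train.py`, the training loop itself stays sequential; switching from data parallelism to pipeline or tensor parallelism becomes a config change rather than a code change, which is the paper's central point.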

Similar Work