
Supporting Very Large Models Using Automatic Dataflow Graph Partitioning

Minjie Wang, Chien-Chin Huang, Jinyang Li. Proceedings of the Fourteenth EuroSys Conference (EuroSys 2019) – 65 citations


This paper presents Tofu, a system that partitions very large DNN models across multiple GPU devices to reduce per-GPU memory footprint. Tofu is designed to partition a dataflow graph of fine-grained tensor operators in order to work transparently with a general-purpose deep learning platform like MXNet. In order to automatically partition each operator, we propose to describe the semantics of an operator in a simple language which represents tensors as lambda functions mapping from tensor coordinates to values. To optimally partition different operators in a dataflow graph, Tofu uses a recursive search algorithm that minimizes the total communication cost. Our experiments on an 8-GPU machine show that Tofu enables the training of very large CNN and RNN models. It also achieves 25% - 400% speedup over alternative approaches to train very large models.
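The coordinate-level operator description can be illustrated with a small sketch. This is a hypothetical Python rendering (not Tofu's actual description language or API) of the idea that an operator's semantics are given as a function from output tensor coordinates to values, from which a valid partition dimension can be read off:

```python
# Hypothetical sketch, in the spirit of Tofu's tensor description language:
# an operator is described by how each output coordinate is computed.

def matmul_element(A, B, i, j):
    """C[i, j] = sum_k A[i, k] * B[k, j] -- the per-coordinate definition."""
    return sum(A[i][k] * B[k][j] for k in range(len(B)))

# Because output coordinate (i, j) reads only row i of A, the "i" dimension
# can be partitioned across devices: each device computes the rows of C for
# its shard of A, with B replicated. Tofu derives this kind of per-operator
# partition strategy automatically from the coordinate-level description.

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Full result, computed coordinate by coordinate.
C = [[matmul_element(A, B, i, j) for j in range(2)] for i in range(2)]

# "Device 0" holds row 0 of A, "device 1" holds row 1; each computes only
# its shard of C using its shard of A plus a full copy of B.
shard0 = [[matmul_element(A, B, 0, j) for j in range(2)]]
shard1 = [[matmul_element(A, B, 1, j) for j in range(2)]]

assert shard0 + shard1 == C  # concatenating shards recovers the output
```

Partitioning along a reduction dimension (here, k) would instead require a cross-device sum; choosing which dimension to split for each operator so that total communication is minimized is what Tofu's recursive search addresses.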
