
Llamafactory: Unified Efficient Fine-tuning Of 100+ Language Models

Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo. No Venue, 2024


Efficient fine-tuning is vital for adapting large language models (LLMs) to downstream tasks, yet implementing these methods across different models requires non-trivial effort. We present LlamaFactory, a unified framework that integrates a suite of cutting-edge efficient training methods. It allows users to flexibly customize the fine-tuning of 100+ LLMs without writing any code, via the built-in web UI LlamaBoard. We empirically validate the efficiency and effectiveness of our framework on language modeling and text generation tasks. It has been released at https://github.com/hiyouga/LLaMA-Factory and has already received over 13,000 stars and 1,600 forks.
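As a sketch of the config-driven workflow the abstract describes, a fine-tuning run in LLaMA-Factory is typically specified in a YAML file passed to its CLI. The field names below follow the style of the repository's example configs and may differ across versions; the model and dataset names are illustrative assumptions:

```yaml
# Hypothetical LoRA SFT config in the style of LLaMA-Factory's examples.
# Field names and values should be checked against the installed version.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # assumed base model
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: alpaca_en_demo        # assumed example dataset
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Such a config would typically be launched with `llamafactory-cli train <config>.yaml`, or assembled interactively through the LlamaBoard web UI (`llamafactory-cli webui`) without touching YAML at all.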
