
Tied-LoRA: Enhancing Parameter Efficiency of LoRA with Weight Tying

We propose Tied-LoRA, a simple paradigm that utilizes weight tying and selective training to further increase the parameter efficiency of the Low-rank adaptation (LoRA) method. Our investigations cover all feasible combinations of parameter training/freezing in conjunction with weight tying to identify the optimal balance between performance and the number of trainable parameters. Through experiments covering a variety of tasks and two base language models, we provide analysis revealing trade-offs between efficiency and performance. Our experiments uncovered a particular Tied-LoRA configuration that stands out by demonstrating comparable performance across several tasks while employing only 13% of the parameters used by the standard LoRA method.
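To illustrate the idea of weight tying combined with selective training, here is a minimal PyTorch-style sketch. It assumes one particular configuration (shared low-rank factors A and B across layers, with small per-layer scaling vectors as the only trainable parameters); the class name `TiedLoRALinear`, the scaling vectors `u`/`v`, and the dimensions are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class TiedLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update whose A/B factors are
    tied (shared) across layers; only per-layer scaling vectors are
    trained in this hypothetical selective-training configuration."""

    def __init__(self, base_linear: nn.Linear, tied_A: nn.Parameter, tied_B: nn.Parameter):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.A = tied_A                      # shared across layers, shape (d_in, r)
        self.B = tied_B                      # shared across layers, shape (r, d_out)
        r = tied_A.shape[1]
        # per-layer trainable scaling vectors (the "selective training" part)
        self.u = nn.Parameter(torch.zeros(r))
        self.v = nn.Parameter(torch.ones(base_linear.out_features))

    def forward(self, x):
        # low-rank update scaled per layer, added to the frozen base output
        delta = ((x @ self.A) * self.u) @ self.B * self.v
        return self.base(x) + delta

# Shared low-rank factors, created once and reused by every adapted layer
d_model, r = 768, 8
tied_A = nn.Parameter(torch.randn(d_model, r) * 0.01)
tied_B = nn.Parameter(torch.zeros(r, d_model))

layer1 = TiedLoRALinear(nn.Linear(d_model, d_model), tied_A, tied_B)
layer2 = TiedLoRALinear(nn.Linear(d_model, d_model), tied_A, tied_B)  # same A/B objects
```

Because A and B are created once and passed to every adapted layer, their parameter count does not grow with depth; only the small per-layer vectors do, which is where the large reduction relative to standard LoRA comes from.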
