
VeRA: Vector-based Random Matrix Adaptation

Dawid Jan Kopiczko, Tijmen Blankevoort, Yuki Markus Asano. No Venue 2023


Low-rank adaptation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but it still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models. In this work, we present Vector-based Random Matrix Adaptation (VeRA), which reduces the number of trainable parameters by 10x compared to LoRA, yet maintains the same performance. It achieves this by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors instead. We demonstrate its effectiveness on the GLUE and E2E benchmarks, and show its application in instruction-following with just 1.4M parameters using the Llama2 7B model.
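The idea of a shared frozen random projection pair plus per-layer trainable scaling vectors can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation (not the authors' code): `A`, `B`, `d`, and `b` follow the paper's notation, but the class name, shapes, and initialization choices here are assumptions made for illustration.

```python
# Minimal sketch of a VeRA-style adapted linear layer (illustrative, not the official code).
import torch
import torch.nn as nn

class VeRALinear(nn.Module):
    def __init__(self, base: nn.Linear, shared_A: torch.Tensor, shared_B: torch.Tensor):
        super().__init__()
        self.base = base  # pretrained layer, kept frozen
        for p in self.base.parameters():
            p.requires_grad = False
        # Frozen random projections, shared across all adapted layers.
        self.register_buffer("A", shared_A)   # shape (r, in_features)
        self.register_buffer("B", shared_B)   # shape (out_features, r)
        r = shared_A.shape[0]
        # Only these per-layer scaling vectors are trained.
        self.d = nn.Parameter(torch.ones(r))                   # scales the rank dimension
        self.b = nn.Parameter(torch.zeros(base.out_features))  # scales the output dimension

    def forward(self, x):
        # y = W0 x + Lambda_b B Lambda_d A x
        delta = (x @ self.A.t()) * self.d       # project down with A, scale by d
        delta = (delta @ self.B.t()) * self.b   # project up with B, scale by b
        return self.base(x) + delta

# Usage: a single random (A, B) pair is generated once and reused by every adapted layer.
r, d_in, d_out = 8, 768, 768
A = torch.randn(r, d_in) / r ** 0.5
B = torch.randn(d_out, r) / d_out ** 0.5
layer = VeRALinear(nn.Linear(d_in, d_out), A, B)
```

Because `A` and `B` are frozen and shared, only the vectors `d` and `b` need to be stored per layer, which is where the roughly 10x reduction in trainable parameters relative to LoRA comes from.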
