DynaLoRA: Dynamic Low-Rank Module Allocation

Published in GitHub, 2024

Abstract

In this project, we explored the training dynamics of Parameter-Efficient Fine-Tuning (PEFT) methods, with an emphasis on Low-Rank Adaptation (LoRA). Specifically, we wanted to evaluate whether the memory overhead of fine-tuning can be reduced further by selectively deactivating gradient updates for certain modules during training. In our method, we measured either the activation magnitude of the adapted layers in the forward pass or the gradient magnitude of those same activations in the backward pass, and used it to decide which modules to keep active.
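
To illustrate the idea, the sketch below shows one possible way such a selection could work: each LoRA-adapted layer records the magnitude of its adapter output in the forward pass, and a periodic reallocation step keeps gradients enabled only for the highest-scoring modules. This is a minimal, hedged example, not the project's implementation; the `LoRALinear` class, the `reallocate` helper, and the `keep_fraction` threshold are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): dynamically freezing LoRA
# modules based on forward-pass activation magnitude.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank adapter."""

    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # base weights stay frozen, as in LoRA
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.last_activation_norm = 0.0  # updated on every forward pass

    def forward(self, x):
        delta = x @ self.lora_A.T @ self.lora_B.T
        # Record the adapter output magnitude as the allocation signal.
        self.last_activation_norm = delta.detach().norm().item()
        return self.base(x) + delta


def reallocate(modules, keep_fraction=0.5):
    """Keep gradient updates only for the modules with the largest activations."""
    ranked = sorted(modules, key=lambda m: m.last_activation_norm, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    for i, m in enumerate(ranked):
        active = i < n_keep
        m.lora_A.requires_grad_(active)
        m.lora_B.requires_grad_(active)


# Toy usage: two adapted layers, reallocation after a forward pass.
layers = [LoRALinear(16, 16) for _ in range(2)]
x = torch.randn(4, 16)
for layer in layers:
    x = layer(x)
reallocate(layers, keep_fraction=0.5)
```

An analogous scheme could instead score modules by the gradient norm of the adapter output in the backward pass (e.g. via a tensor hook), which is the second signal mentioned in the abstract.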

Recommended citation: Brouwers, J., Fulop, Z., Hamar, M., & Krastev, M. (2024). DynaLoRA: Dynamic Low-Rank Module Allocation.
Download Paper