Original PDF: LORAHUB: EFFICIENT CROSS-TASK GENERALIZATION VIA DYNAMIC LORA COMPOSITION
Authors: Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, Min Lin
Summary & Commentary
The paper "LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition" introduces LoraHub, a framework that leverages Low-Rank Adaptations (LoRA) to adapt Large Language Models (LLMs) to new tasks. The work investigates the composability of LoRA modules for cross-task generalization: LoraHub assembles LoRA modules trained on diverse upstream tasks and combines them to achieve adaptable performance on tasks unseen during training. Given only a few examples from a new task, LoraHub composes multiple LoRA modules without human expertise, and the composition requires neither additional model parameters nor gradient computation. Empirical results on the Big-Bench Hard (BBH) benchmark suggest that LoraHub can approach the few-shot performance of in-context learning while removing the need to supply in-context examples alongside each inference input. A further contribution of this research is the proposal of a community for LoRA, where users can share their trained LoRA modules, thereby facilitating their application to new tasks (Page 1).
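To make the composition idea concrete, below is a minimal sketch of how combining several LoRA modules without gradients might look. It assumes (as an illustration, not the paper's exact recipe) that each LoRA adapter is a pair of low-rank factors (A, B) whose weighted element-wise sums form a merged adapter, and it uses a toy random search as a stand-in for the paper's gradient-free optimizer; all function names, dimensions, and the squared-error objective here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def compose_lora(modules, weights):
    """Weighted element-wise merge of LoRA factors.

    Each module is a pair (A, B); its delta weight is B @ A.
    The merged adapter uses the weighted sums of the A's and B's.
    """
    A_hat = sum(w * A for w, (A, _) in zip(weights, modules))
    B_hat = sum(w * B for w, (_, B) in zip(weights, modules))
    return A_hat, B_hat


def few_shot_loss(weights, modules, base_W, X, Y):
    """Hypothetical loss on a handful of examples from the unseen task."""
    A_hat, B_hat = compose_lora(modules, weights)
    W = base_W + B_hat @ A_hat          # adapted weight, as in LoRA
    preds = X @ W.T
    return float(np.mean((preds - Y) ** 2))


def gradient_free_search(modules, base_W, X, Y, steps=200, sigma=0.1):
    """Toy random search standing in for a gradient-free optimizer."""
    best_w = np.zeros(len(modules))
    best_loss = few_shot_loss(best_w, modules, base_W, X, Y)
    for _ in range(steps):
        cand = best_w + sigma * rng.standard_normal(len(modules))
        loss = few_shot_loss(cand, modules, base_W, X, Y)
        if loss < best_loss:
            best_w, best_loss = cand, loss
    return best_w, best_loss


if __name__ == "__main__":
    d_out, d_in, rank, n_modules, n_examples = 8, 16, 4, 5, 5
    # Fabricated LoRA modules and few-shot data, purely for illustration.
    modules = [(rng.standard_normal((rank, d_in)),
                rng.standard_normal((d_out, rank))) for _ in range(n_modules)]
    base_W = rng.standard_normal((d_out, d_in))
    X = rng.standard_normal((n_examples, d_in))
    Y = rng.standard_normal((n_examples, d_out))

    weights, loss = gradient_free_search(modules, base_W, X, Y)
    print("composition weights:", np.round(weights, 3), "loss:", round(loss, 4))
```

The key point the sketch conveys is that only a small vector of composition weights is searched over, so no backpropagation through the LLM is needed and no new model parameters are introduced.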