# A Comprehensive Survey on Low-Rank Adaptation
This repository provides a comprehensive survey of Low-Rank Adaptation (LoRA) methods and their applications. We welcome contributions to keep this list up-to-date. If you find this repository useful, please consider starring it.
1. LoRA Settings
   - 1.1 Initialization
   - 1.2 Hyperparameters
   - 1.3 Optimization
   - 1.4 Regularization
2. Dynamic Rank
3. LoRA Variants
4. Other Low-rank Decomposition
5. LoRA with Model Compressions
   - 5.1 LoRA with Pruning
   - 5.2 LoRA with Quantization
   - 5.3 LoRA with NAS
   - 5.4 Memory-efficient LoRA
   - 5.5 Knowledge Distillation LoRA
6. LoRA Extensions
   - 6.1 Multiple LoRA
   - 6.2 Mixture-of-Experts (MoE) LoRA
   - 6.3 LoRA Merge
7. LoRA Applications
   - 7.1 Visual Understanding
   - 7.2 Visual Generation
   - 7.3 Language Understanding
   - 7.4 Multimodal Learning
   - 7.5 Other
| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2022 | LoRA: Low-Rank Adaptation of Large Language Models | ICLR | Link | Link |
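For reference, the original paper reparameterizes a frozen weight `W` with a trainable low-rank update, `W + (alpha/r) * B @ A`, where `B` starts at zero so training begins from the pretrained model. A minimal PyTorch sketch of that idea (module and attribute names here are illustrative, not the official implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = x W^T + (alpha/r) * x A^T B^T."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                           # freeze pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)   # small Gaussian init
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))         # zero init => delta starts at 0
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 768))  # behaves like a regular linear layer
```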
## 1. LoRA Settings

### 1.1 Initialization

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | The Impact of Initialization on LoRA Finetuning Dynamics | - | Link | - |
| 2024 | ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation | - | Link | - |
| 2024 | MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning | - | Link | - |
| 2024 | PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models | ICLR | Link | Link |
| 2024 | CorDA: Context-Oriented Decomposition Adaptation of Large Language Models | arXiv | Link | Link |
| 2024 | SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors | arXiv | Link | Link |
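Several of these methods initialize the adapter from an SVD of the pretrained weight: PiSSA places the top singular components in the adapter and freezes the residual, while MiLoRA instead targets the minor components. A hedged sketch of the principal-component variant (function name is ours, not from any of the papers):

```python
import torch

def pissa_style_init(W, r):
    """Split a pretrained weight into a frozen residual and a low-rank adapter
    built from its top-r singular components (sketch of the PiSSA idea)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_S = torch.sqrt(S[:r])
    B = U[:, :r] * sqrt_S           # (out, r)
    A = sqrt_S[:, None] * Vh[:r]    # (r, in)
    W_res = W - B @ A               # frozen residual keeps the minor components
    return W_res, A, B

W = torch.randn(768, 768)
W_res, A, B = pissa_style_init(W, r=16)
assert torch.allclose(W_res + B @ A, W, atol=1e-4)  # exact split up to fp error
```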
### 1.2 Hyperparameters

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | LoRA+: Efficient Low Rank Adaptation of Large Models | arXiv | Link | Link |
| 2023 | The Expressive Power of Low-Rank Adaptation | ICLR | Link | Link |
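LoRA+ observes that using a single learning rate for both adapter matrices is suboptimal and trains `B` with a larger step size than `A`. A minimal optimizer setup under that reading; the 16x ratio and the `lora_A`/`lora_B` naming convention are illustrative assumptions:

```python
import torch

def lora_plus_param_groups(model, lr=2e-4, lr_ratio=16.0):
    """Give B a larger learning rate than A (sketch of the LoRA+ recipe),
    assuming adapter parameters are named with 'lora_A' / 'lora_B' substrings."""
    groups = [
        {"params": [p for n, p in model.named_parameters() if "lora_A" in n], "lr": lr},
        {"params": [p for n, p in model.named_parameters() if "lora_B" in n], "lr": lr * lr_ratio},
    ]
    return torch.optim.AdamW(groups)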
### 1.3 Optimization

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | Derivative-Free Optimization for Low-Rank Adaptation in Large Language Models | arXiv | Link | Link |
| 2024 | AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | arXiv | Link | - |
| 2023 | Bayesian Low-rank Adaptation for Large Language Models | ICLR | Link | Link |
| 2024 | A Study of Optimizations for Fine-tuning Large Language Models | - | Link | - |
| 2024 | Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | - | Link | - |
| 2024 | Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective | - | Link | - |
| 2024 | BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models | - | Link | - |
### 1.4 Regularization

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | LoRA Meets Dropout under a Unified Framework | arXiv | Link | - |
| 2024 | AdvLoRA: Adversarial Low-Rank Adaptation of Vision-Language Models | arXiv | Link | - |
| 2024 | PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | arXiv | Link | - |
| 2024 | LoRA Dropout as a Sparsity Regularizer for Overfitting Control | - | - | - |
| 2024 | LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation | - | Link | - |
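A recurring regularizer in this line of work is dropout applied to the low-rank path itself rather than only to activations. The exact placement differs per paper; the sketch below shows one simple instance (masking the rank dimension during training) and is our illustration, not any paper's reference code:

```python
import torch
import torch.nn as nn

class LoRAWithRankDropout(nn.Module):
    """Low-rank update whose rank-r inner activations are randomly masked at
    train time -- one simple member of the LoRA-dropout family (illustrative)."""
    def __init__(self, in_features, out_features, r=8, alpha=16, p=0.2):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.drop = nn.Dropout(p)   # zeroes (and rescales) rank channels
        self.scaling = alpha / r

    def forward(self, x):
        h = x @ self.A.T            # (..., r)
        h = self.drop(h)            # randomly drop rank channels during training
        return self.scaling * (h @ self.B.T)
```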
### 1.5 Sparsity

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs | - | Link | Link |
| 2024 | Sparse High Rank Adapters | - | Link | - |
| 2024 | SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining | - | Link | - |
| 2023 | Sparse Low-rank Adaptation of Pre-trained Language Models | EMNLP | Link | Link |
| 2024 | SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs | - | Link | - |
| 2024 | MLAE: Masked LoRA Experts for Parameter-Efficient Fine-Tuning | - | Link | - |
### 1.6 Bayesian LoRA

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2023 | Bayesian Low-rank Adaptation for Large Language Models | ICLR | Link | Link |
| 2024 | Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | - | Link | - |
| 2024 | BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models | - | Link | - |
### 1.7 Robustness

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation | - | Link | - |
| 2024 | RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation | - | Link | Link |
## 2. Dynamic Rank

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2023 | Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning | ICLR | Link | Link |
| 2023 | DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation | EACL | Link | Link |
| 2024 | MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning | arXiv | Link | Link |
| 2024 | BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models | arXiv | Link | - |
| 2024 | Unlocking the Global Synergies in Low-Rank Adapters | - | Link | - |
| 2024 | ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Model | - | Link | - |
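AdaLoRA, the first entry above, parameterizes the update in SVD-like form (ΔW = PΛQ) and reallocates rank during training by pruning the least important singular values. A hedged sketch of that parameterization; real AdaLoRA uses a sensitivity-based importance score and orthogonality regularization, while this sketch substitutes plain magnitude for brevity:

```python
import torch
import torch.nn as nn

class SVDAdapter(nn.Module):
    """AdaLoRA-style update dW = P @ diag(lam) @ Q; effective rank shrinks by
    zeroing entries of `lam` (importance here is magnitude, for illustration)."""
    def __init__(self, in_features, out_features, r_max=16):
        super().__init__()
        self.P = nn.Parameter(torch.randn(out_features, r_max) * 0.01)
        self.lam = nn.Parameter(torch.ones(r_max))
        self.Q = nn.Parameter(torch.randn(r_max, in_features) * 0.01)

    def forward(self, x):
        return (x @ self.Q.T) * self.lam @ self.P.T

    @torch.no_grad()
    def prune_to_rank(self, r):
        keep = self.lam.abs().topk(r).indices
        mask = torch.zeros_like(self.lam)
        mask[keep] = 1.0
        self.lam.mul_(mask)  # a real budget allocator compares importance across layers
```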
## 3. LoRA Variants

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2023 | LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning | arXiv | Link | - |
| 2023 | VeRA: Vector-based Random Matrix Adaptation | arXiv | Link | - |
| 2024 | DoRA: Weight-Decomposed Low-Rank Adaptation | ICML | Link | Link |
| 2024 | FLoRA: Low-Rank Core Space for N-dimension | arXiv | Link | Link |
| 2024 | Mixture-of-Subspaces in Low-Rank Adaptation | - | Link | Link |
| 2024 | LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | - | Link | - |
| 2024 | ReFT: Representation Finetuning for Language Models | Preprint | Link | Link |
| 2024 | LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation | - | Link | - |
| 2024 | Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning | Preprint | Link | - |
| 2024 | LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | NAACL | Link | Link |
| 2024 | Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models | - | Link | Link |
| 2024 | Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning | - | Link | - |
| 2024 | VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks | - | Link | - |
| 2023 | Tied-LoRA: Enhancing Parameter Efficiency of LoRA with Weight Tying | - | Link | - |
| 2024 | Towards Modular LLMs by Building and Reusing a Library of LoRAs | - | Link | - |
| 2024 | HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning | - | - | - |
| 2024 | SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning | - | Link | - |
| 2024 | Asymmetry in Low-Rank Adapters of Foundation Models | - | Link | - |
| 2024 | PRoLoRA: Partial Rotation Empowers More Parameter-Efficient LoRA | - | Link | - |
| 2024 | AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | - | Link | - |
| 2023 | Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning | ICLR | - | - |
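Among these variants, DoRA decomposes the adapted weight into a magnitude vector and a direction, applying the low-rank update only to the direction. A sketch under that reading (class name is ours; the decomposition over columns follows the paper's description):

```python
import torch
import torch.nn as nn

class DoRAStyleLinear(nn.Module):
    """Weight-decomposed update (DoRA-style sketch):
    W' = m * (W0 + B@A) / ||W0 + B@A||_col, with trainable column magnitudes m
    and a frozen pretrained weight W0."""
    def __init__(self, W0, r=8):
        super().__init__()
        out_f, in_f = W0.shape
        self.W0 = nn.Parameter(W0, requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))
        self.m = nn.Parameter(W0.norm(dim=0, keepdim=True))  # init to W0's column norms

    def forward(self, x):
        W = self.W0 + self.B @ self.A
        W = self.m * (W / W.norm(dim=0, keepdim=True))  # renormalize direction, rescale
        return x @ W.T
```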
## 4. Other Low-rank Decomposition

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | Parameter-Efficient Fine-Tuning with Discrete Fourier Transform | ICML | Link | Link |
| 2024 | OLoRA: Orthonormal Low-Rank Adaptation of Large Language Models | - | Link | - |
| 2024 | Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation | arXiv | Link | Link |
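The first entry (FourierFT) replaces the B·A factorization with a small set of learned spectral coefficients whose inverse DFT produces ΔW. A hedged sketch of that idea; the coefficient count and scaling are illustrative, and the paper's exact spectral layout may differ:

```python
import torch
import torch.nn as nn

class FourierDelta(nn.Module):
    """Sketch of a Fourier-parameterized update: learn n coefficients at fixed
    random spectral locations, recover dW via an inverse 2D FFT (FourierFT idea)."""
    def __init__(self, out_features, in_features, n_coeff=1000, scale=1.0):
        super().__init__()
        idx = torch.randperm(out_features * in_features)[:n_coeff]
        self.register_buffer("rows", idx // in_features)
        self.register_buffer("cols", idx % in_features)
        self.coeff = nn.Parameter(torch.zeros(n_coeff))  # trainable spectrum
        self.shape = (out_features, in_features)
        self.scale = scale

    def delta_w(self):
        spec = torch.zeros(self.shape, dtype=torch.complex64, device=self.coeff.device)
        spec[self.rows, self.cols] = self.coeff.to(torch.complex64)
        return torch.fft.ifft2(spec).real * self.scale
```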
## 5. LoRA with Model Compressions

### 5.1 LoRA with Pruning

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | RankAdaptor: Hierarchical Dynamic Low-Rank Adaptation for Structural Pruned LLMs | - | Link | - |
| 2024 | PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation | EACL | Link | - |
| 2023 | Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning | - | Link | - |
### 5.2 LoRA with Quantization

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2023 | QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models | ICLR | Link | Link |
| 2024 | Low-Rank Quantization-Aware Training for LLMs | - | Link | - |
| 2024 | QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning | AAAI Workshop | Link | - |
| 2024 | LoQT: Low Rank Adapters for Quantized Training | - | Link | - |
| 2024 | One QuantLLM for ALL: Fine-tuning Quantized LLMs Once for Efficient Deployments | - | Link | - |
| 2023 | QLoRA: Efficient Finetuning of Quantized LLMs | NeurIPS | Link | Link |
| 2024 | Accurate LoRA-Finetuning Quantization of LLMs via Information Retention | ICML | Link | Link |
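The common pattern in this family is a frozen, quantized base weight with a higher-precision LoRA update trained on top, QLoRA being the canonical example. A self-contained sketch of that pattern; it uses naive symmetric int8 quantization for clarity, not the 4-bit NF4 format QLoRA actually employs:

```python
import torch
import torch.nn as nn

class QuantizedBaseLoRA(nn.Module):
    """Frozen int8-quantized base weight + fp32 LoRA update (QLoRA-style pattern;
    real systems use 4-bit formats such as NF4 and per-block scales)."""
    def __init__(self, W, r=8, alpha=16):
        super().__init__()
        out_f, in_f = W.shape
        self.scale = W.abs().max() / 127.0
        self.register_buffer("W_q", torch.round(W / self.scale).to(torch.int8))
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))
        self.s = alpha / r

    def forward(self, x):
        W = self.W_q.float() * self.scale  # dequantize on the fly; W_q never trains
        return x @ W.T + self.s * (x @ self.A.T @ self.B.T)
```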
### 5.3 LoRA with NAS

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | LoNAS: Elastic Low-Rank Adapters for Efficient Large Language Models | COLING | Link | Link |
| 2024 | Shears: Unstructured Sparsity with Neural Low-rank Adapter Search | - | Link | - |
### 5.4 Memory-efficient LoRA

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection | ICML | Link | Link |
| 2024 | Flora: Low-Rank Adapters Are Secretly Gradient Compressors | ICML | Link | - |
| 2024 | BlockLLM: Memory-Efficient Adaptation of LLMs by Selecting and Optimizing the Right Coordinate Blocks | Preprint | Link | - |
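GaLore reduces optimizer memory by projecting each gradient into a low-rank subspace (obtained via SVD and refreshed periodically), running the optimizer there, and projecting the update back. A compressed sketch of one step; plain SGD stands in for the Adam state the paper actually keeps in the subspace:

```python
import torch

def galore_style_step(W, grad, P=None, r=4, lr=1e-3):
    """One GaLore-flavored update (sketch): do the optimizer math in a rank-r
    subspace spanned by P, then map the update back to full size."""
    if P is None:  # in real training the projector is refreshed every T steps
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        P = U[:, :r]                    # (out, r) orthonormal basis
    g_low = P.T @ grad                  # project gradient: (r, in)
    update = P @ g_low                  # project back to (out, in)
    W -= lr * update
    return W, P
```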
### 5.5 Knowledge Distillation LoRA

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation | - | Link | - |
## 6. LoRA Extensions

### 6.1 Multiple LoRA

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks | - | Link | - |
| 2024 | MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models | - | Link | - |
| 2024 | MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning | ACL | Link | - |
| 2023 | LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition | ICLR | Link | - |
| 2024 | LoRA-Switch: Boosting the Efficiency of Dynamic LLM Adapters via System-Algorithm Co-design | - | - | - |
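Much of this subsection studies how to combine several task-specific adapters over one base model; LoraHub, for instance, searches for mixing weights over a pool of LoRAs with a gradient-free optimizer. A minimal weighted-composition sketch (the mixing weights here are hand-picked, standing in for the searched ones):

```python
import torch

def compose_loras(W0, adapters, weights, scaling=2.0):
    """Combine several (A, B) adapters into one effective weight:
    W = W0 + s * sum_i w_i * B_i @ A_i  (LoraHub-style composition, sketch)."""
    delta = sum(w * (B @ A) for (A, B), w in zip(adapters, weights))
    return W0 + scaling * delta

W0 = torch.randn(768, 768)
pool = [(torch.randn(8, 768) * 0.01, torch.randn(768, 8) * 0.01) for _ in range(3)]
W = compose_loras(W0, pool, weights=[0.5, 0.3, 0.2])
```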
### 6.2 Mixture-of-Experts (MoE) LoRA

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2023 | LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment | arXiv | Link | - |
| 2024 | MoLE: Mixture of LoRA Experts | ICLR | Link | Link |
| 2024 | Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts | - | Link | - |
| 2024 | AdaMoLE: Fine-Tuning Large Language Models with Adaptive Mixture of Low-Rank Adaptation Experts | - | Link | - |
| 2024 | Mixture of Experts Using Tensor Products | - | Link | - |
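These methods keep several LoRA experts per layer and route inputs among them with a learned gate. A hedged sketch of soft routing over LoRA experts; individual papers differ in gate design, top-k sparsification, and load balancing, none of which is modeled here:

```python
import torch
import torch.nn as nn

class MoELoRA(nn.Module):
    """Soft mixture of LoRA experts: a gate over the input mixes the low-rank
    updates of n_experts adapters (generic sketch of the MoE-LoRA pattern)."""
    def __init__(self, in_features, out_features, n_experts=4, r=8):
        super().__init__()
        self.gate = nn.Linear(in_features, n_experts)
        self.A = nn.Parameter(torch.randn(n_experts, r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, out_features, r))

    def forward(self, x):                                # x: (batch, in)
        w = self.gate(x).softmax(dim=-1)                 # (batch, n_experts)
        h = torch.einsum("bi,eri->ber", x, self.A)       # per-expert rank activations
        y = torch.einsum("ber,eor->beo", h, self.B)      # per-expert updates
        return torch.einsum("be,beo->bo", w, y)          # gate-weighted mixture
```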
### 6.3 LoRA Merge

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
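The baseline operation behind this subsection is folding a trained adapter into the base weights so inference incurs no extra latency, as described in the original LoRA paper. A minimal sketch:

```python
import torch

@torch.no_grad()
def merge_lora(W0, A, B, alpha=16):
    """Fold a trained adapter into the base weight: W <- W0 + (alpha/r) * B @ A.
    After merging, the layer runs as a plain dense layer with no adapter overhead."""
    r = A.shape[0]
    return W0 + (alpha / r) * (B @ A)
```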
## 7. LoRA Applications

### 7.1 Visual Understanding

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model | ICLR | Link | - |
| 2024 | Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach | - | Link | Link |
| 2024 | ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts | - | Link | - |
| 2023 | MeLo: Low-rank Adaptation is Better than Fine-tuning for Medical Image Diagnosis | - | Link | - |
### 7.2 Visual Generation

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts | - | Link | - |
| 2024 | MoE-FFD: Mixture of Experts for Generalized and Parameter-Efficient Face Forgery Detection | - | Link | Link |
| 2024 | Mixture of Low-rank Experts for Transferable AI-Generated Image Detection | - | Link | Link |
| 2024 | LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models | - | Link | Link |
| 2024 | Low-Rank Few-Shot Adaptation of Vision-Language Models | - | Link | - |
| 2024 | FouRA: Fourier Low Rank Adaptation | - | Link | - |
| 2023 | Intrinsic LoRA: A Generalist Approach for Discovering Knowledge in Generative Models | - | Link | Link |
| 2023 | Orthogonal Adaptation for Modular Customization of Diffusion Models | Preprint | Link | - |
| 2023 | ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs | Preprint | Link | Link |
| 2023 | Cones: Concept Neurons in Diffusion Models for Customized Generation | ICML | Link | - |
| 2023 | Multi-Concept Customization of Text-to-Image Diffusion | CVPR | Link | - |
| 2023 | Cones 2: Customizable Image Synthesis with Multiple Subjects | Preprint | Link | Link |
| 2024 | Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation | AAAI | Link | - |
| 2023 | Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models | NeurIPS | Link | Link |
| 2024 | SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data | Preprint | Link | Link |
| 2024 | MACE: Mass Concept Erasure in Diffusion Models | CVPR | Link | - |
| 2024 | DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Model | Preprint | Link | - |
| 2024 | Multi-LoRA Composition for Image Generation | arXiv | Link | Link |
| 2023 | Motion Style Transfer: Modular Low-Rank Adaptation for Deep Motion Forecasting | PMLR | Link | Link |
### 7.3 Language Understanding

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2023 | Exploring the impact of low-rank adaptation on the performance, efficiency, and regularization of RLHF | arXiv | Link | Link |
### 7.4 Multimodal Learning

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation | - | Link | - |
| 2024 | MoVA: Adapting Mixture of Vision Experts to Multimodal Context | - | Link | Link |
| 2024 | AdvLoRA: Adversarial Low-Rank Adaptation of Vision-Language Models | - | Link | - |
### 7.5 Other

| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2024 | Improving LoRA in Privacy-preserving Federated Learning | ICLR | Link | - |
| 2024 | FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition | - | Link | - |
| 2024 | FLoRA: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning | - | Link | - |
| 2024 | FL-TAC: Enhanced Fine-Tuning in Federated Learning via Low-Rank, Task-Specific Adapter Clustering | - | Link | - |
| 2024 | DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation | - | Link | - |
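The federated entries above share one mechanic: clients train local adapters and the server aggregates only the low-rank factors. A FedAvg-style sketch over LoRA factors; note that averaging A and B separately only approximates averaging the products B·A, a mismatch several of these papers explicitly address:

```python
import torch

def fedavg_lora(client_factors, weights=None):
    """Average LoRA factors across clients (sketch). client_factors is a list of
    (A, B) pairs; mean(B_i) @ mean(A_i) != mean(B_i @ A_i) in general, which is
    one source of error that federated-LoRA methods try to correct."""
    n = len(client_factors)
    weights = weights or [1.0 / n] * n
    A_avg = sum(w * A for (A, _), w in zip(client_factors, weights))
    B_avg = sum(w * B for (_, B), w in zip(client_factors, weights))
    return A_avg, B_avg
```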
| Year | Title | Venue | Paper | Code |
|------|-------|-------|-------|------|
| 2023 | Low-Rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition | ASRU | Link | - |
| 2024 | Low-Rank Adaptation of Time Series Foundational Models for Out-of-Domain Modality Forecasting | - | Link | - |
| 2023 | Continual Learning with Low Rank Adaptation | NeurIPS Workshop | Link | - |
| 2024 | Zero-Shot Cross-Domain Dialogue State Tracking via Dual Low-Rank Adaptation | ACL | Link | - |
## Contributing

We welcome contributions to this survey. Please feel free to submit a pull request to add new papers or update existing information.
## License

This project is licensed under the MIT License - see the LICENSE file for details.