AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
This paper proposes $\texttt{AdaMix}$, a general parameter-efficient fine-tuning (PEFT) method that tunes a mixture of adaptation modules rather than a single one.
By tuning only $0.1\text{--}0.2\%$ of PLM parameters, the authors show that AdaMix outperforms state-of-the-art parameter-efficient fine-tuning methods, and even full-model fine-tuning, on both NLU and NLG tasks.
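To make the mixture idea concrete, here is a minimal NumPy sketch of one layer with a mixture of bottleneck adapters: during training, a single adapter is randomly routed per step; at inference, the adapter weights are averaged into one module (as AdaMix does via weight merging). All names, dimensions, and initializations are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_bottleneck, num_adapters = 16, 4, 4

# Frozen pretrained weight -- stands in for one PLM layer (not tuned).
W = rng.standard_normal((d_model, d_model))

# Mixture of low-rank adapter pairs (down/up projections): the only tunable
# parameters. Zero-initialized up-projection makes each adapter start as a no-op.
down = rng.standard_normal((num_adapters, d_model, d_bottleneck)) * 0.01
up = np.zeros((num_adapters, d_bottleneck, d_model))

def forward(x, training=True):
    """Frozen layer plus either one randomly routed adapter (training)
    or a single merged adapter averaged over the mixture (inference)."""
    h = x @ W
    if training:
        k = rng.integers(num_adapters)            # stochastic routing
        delta = x @ down[k] @ up[k]
    else:
        # Merge by averaging the down/up projection weights separately,
        # so inference uses one adapter's worth of compute.
        delta = x @ down.mean(axis=0) @ up.mean(axis=0)
    return h + delta

x = rng.standard_normal((2, d_model))
y = forward(x, training=False)
```

Because the up-projections start at zero, the merged adapter initially leaves the frozen layer's output unchanged; training would move only the adapter parameters, which is what keeps the tuned fraction so small.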