Evaluating Parameter Efficient Learning for Generation

Date:

The authors present a comprehensive evaluation of parameter-efficient learning methods (PERMs) for natural language generation tasks.

They compare PERMs to finetuning from three new perspectives:

  1. The impact of sample and model size
  2. Generalization to unseen domains and datasets
  3. Faithfulness of generations

Their results show that PERMs can outperform finetuning in certain scenarios, particularly when training on fewer samples and using larger pre-trained language models.

This study provides valuable insights into the effectiveness of PERMs for adapting pre-trained language models to downstream tasks.
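To make the idea of parameter-efficient learning concrete, here is a minimal sketch of prompt-style tuning: the pretrained model is frozen and only a small set of new "soft prompt" embeddings is trained. The model name, prompt length, learning rate, and training loop below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of prompt-style parameter-efficient tuning (illustrative only):
# freeze the pretrained LM and train only a small prefix of soft prompt embeddings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM checkpoint works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every pretrained parameter; only the soft prompt below receives gradients.
for param in model.parameters():
    param.requires_grad = False

prompt_len = 20
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

optimizer = torch.optim.AdamW([soft_prompt], lr=5e-4)

def training_step(text: str) -> torch.Tensor:
    """One gradient step on a single example (batch size 1 for brevity)."""
    ids = tokenizer(text, return_tensors="pt").input_ids           # (1, seq_len)
    token_embeds = model.get_input_embeddings()(ids)               # (1, seq_len, dim)
    # Prepend the trainable soft prompt to the frozen token embeddings.
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)
    # Mask the loss on prompt positions with the -100 ignore index.
    labels = torch.cat(
        [torch.full((1, prompt_len), -100, dtype=torch.long), ids], dim=1
    )
    loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss

print(training_step("Summarize: the quick brown fox jumps over the lazy dog."))
```

Because only the soft prompt (a few thousand parameters here) is updated, the storage and training cost per task is tiny compared to full finetuning, which is what makes the comparisons in the paper interesting at larger model scales.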

Powerpoint for this talk

Reference Paper
