Project: Predicting Example Forgetting in Language Model Fine-Tuning

University of Southern California
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement

Xisen Jin, Xiang Ren

ICML 2024 (Spotlight)

Paper
Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations

Xisen Jin, Xiang Ren

arXiv preprint (June 2024)

Paper
Our research aims at a better understanding and targeted mitigation of forgetting in (continual) LM fine-tuning.

The long-term usability of language model (LM) systems is increasingly important. However, continual fine-tuning poses the risk of catastrophic forgetting, where previously learned knowledge is lost. This is problematic for LM systems deployed and updated online, and limits the feasibility of continual fine-tuning in practice. Understanding which examples will be forgotten enables efficient and targeted mitigation of forgetting (e.g., by replaying these examples).


Approaches Motivated by Model Update Formulations

The reasons why certain individual examples are forgotten can be subtle. In our ICML paper, we start from the formulation of model updates in which learning one (seemingly) irrelevant example causes another to be forgotten. We then build trained approximations of this "ground truth" formulation of forgetting: (1) a model that predicts the transfer of logit changes based on learned example similarity, and (2) a binary classifier of forgetting based on learned example dissimilarity.
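
Below is a minimal PyTorch sketch of the second kind of approximation: a binary forgetting classifier driven by a learned interaction between frozen LM representations of the learned example and the upstream example. The class name, projection sizes, and pooling are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class ForgettingClassifier(nn.Module):
    """Sketch: predict whether an upstream example will be forgotten when the
    model learns a new example, from a learned (dis)similarity between their
    frozen LM representations."""

    def __init__(self, hidden_dim: int, proj_dim: int = 256):
        super().__init__()
        # Separate projections for the learned example and the upstream example.
        self.proj_learned = nn.Linear(hidden_dim, proj_dim)
        self.proj_upstream = nn.Linear(hidden_dim, proj_dim)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, h_learned: torch.Tensor, h_upstream: torch.Tensor) -> torch.Tensor:
        # Inner product of projected representations acts as a learned
        # similarity score; a sigmoid turns it into P(forgotten).
        z = self.proj_learned(h_learned)      # (batch, proj_dim)
        x = self.proj_upstream(h_upstream)    # (batch, proj_dim)
        logits = (z * x).sum(dim=-1) + self.bias
        return torch.sigmoid(logits)

The inputs h_learned and h_upstream would be pooled hidden states from the base LM (a hypothetical choice); the classifier is trained with binary cross-entropy against observed forgetting labels.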


Approaches Motivated by Statistics of Forgetting

Unlike the forgetting of individual examples, we find that the statistical associations between learned and forgotten examples are usually simple. In our arXiv paper (June 2024), we analyze the statistics of forgetting on N upstream examples while the model learns M new tasks, using OLMo-7B and OLMo-7B-Instruct models.
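
Concretely, the empirical associations can be collected into an M x N matrix whose entry (i, j) measures how much upstream example j degrades after the model learns new task i. A minimal sketch, assuming per-example upstream losses are measured before and after each fine-tuning run (the names and the log-perplexity-increase metric are illustrative):

import numpy as np

def forgetting_matrix(base_losses, finetuned_losses):
    """Sketch of the empirical association matrix: entry (i, j) is the
    degradation on upstream example j after learning new task i, here the
    increase in per-example log-perplexity.

    base_losses:      shape (N,)   -- upstream losses before fine-tuning
    finetuned_losses: shape (M, N) -- upstream losses after learning each task
    """
    base = np.asarray(base_losses)        # (N,)
    after = np.asarray(finetuned_losses)  # (M, N)
    return after - base[None, :]          # positive entries indicate forgetting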

Following this analysis, we predict forgetting when learning a new task via matrix completion over the empirical associations, analogous to collaborative filtering in recommender systems. On OLMo models, our k-nearest-neighbor based prediction model performs competitively.
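
A minimal sketch of the k-nearest-neighbor style completion, assuming forgetting for the new task has been measured on a small probe subset of upstream examples; the cosine similarity and non-negative weighting are illustrative choices rather than the paper's exact model.

import numpy as np

def knn_predict_forgetting(F, new_row_partial, observed_idx, k=5):
    """Sketch of kNN-based matrix completion (collaborative-filtering style).

    F:               (M, N) forgetting on N upstream examples for M seen tasks
    new_row_partial: (P,)   forgetting of the new task on P probe examples
    observed_idx:    indices of the probe examples among the N columns
    Returns a length-N prediction of forgetting for the new task.
    """
    new_row_partial = np.asarray(new_row_partial)
    probe = F[:, observed_idx]                               # (M, P)
    # Cosine similarity between the new task's probe row and each seen task's probe row.
    sims = probe @ new_row_partial / (
        np.linalg.norm(probe, axis=1) * np.linalg.norm(new_row_partial) + 1e-8
    )
    top = np.argsort(-sims)[:k]                              # k most similar seen tasks
    weights = np.maximum(sims[top], 0.0)
    weights = weights / (weights.sum() + 1e-8)
    return weights @ F[top]                                   # (N,) weighted average of neighbors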



Targeted Mitigation of Forgetting by Predicting Forgetting

Predicting example forgetting enables simple, efficient, and targeted mitigation of forgetting. During continual fine-tuning, we sparsely replay past training examples, prioritizing those with higher predicted forgetting. This strategy improves over randomly sampling past examples for replay (see the sketch and results below).
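
A minimal sketch of the prioritized selection step, assuming predicted forgetting scores are already available for the pool of past examples; the softmax-with-temperature weighting is an illustrative choice, not necessarily the exact scheme used in our experiments.

import numpy as np

def sample_replay_examples(predicted_forgetting, num_replay, temperature=1.0, rng=None):
    """Sketch of prioritized replay: sample past training examples with
    probability increasing in their predicted forgetting."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(predicted_forgetting) / temperature
    probs = np.exp(scores - scores.max())
    probs = probs / probs.sum()
    # Draw a small, sparse replay set without replacement, weighted by predicted forgetting.
    return rng.choice(len(probs), size=num_replay, replace=False, p=probs)

The returned indices select past examples to mix into the continual fine-tuning batches at a low replay rate.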


Model                    | OLMo-7B [1]     | FLAN-T5-3B [2]     | FLAN-T5-780M [2]
Learned / Upstream Data  | Tulu V2 / Dolma | MMLU / P3          | MMLU / P3
Metric                   | Log PPL         | Exact Match Drop % | Exact Match Drop %
No Replay                | 2.3092          | 4.384              | 5.463
Random Replay            | 2.2747          | 1.910              | 3.267
Replay w/ Prediction     | 2.2730          | 0.138              | 0.301
Replay w/ GT Forgetting  | 2.2711          | 0.030              | 0.189

By enhancing understanding of forgetting and developing targeted strategies to mitigate it, our research aims to enable broader application of continual learning in practical language model development and maintenance.

BibTeX

@inproceedings{Jin2024WM,
  title={What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement},
  author={Xisen Jin and Xiang Ren},
  booktitle={Forty-first International Conference on Machine Learning},
  year={2024},
  url={https://openreview.net/forum?id=bzNwexOPWm}
}
         
@article{Jin2024DemystifyingFI,
  title={Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations},
  author={Xisen Jin and Xiang Ren},
  journal={ArXiv},
  year={2024},
  volume={abs/2406.14026},
  url={https://arxiv.org/abs/2406.14026}
}