OpenUnlearning

An easily extensible framework unifying LLM unlearning evaluation benchmarks.


📖 Overview

We provide efficient and streamlined implementations of the TOFU and MUSE unlearning benchmarks while supporting 6 unlearning methods, 3+ datasets, 9+ evaluation metrics, and 6+ LLM architectures. Each of these can be easily extended to incorporate more variants.

We invite the LLM unlearning community to collaborate by adding new benchmarks, unlearning methods, datasets and evaluation metrics here to expand OpenUnlearning's features, gain feedback from wider usage and drive progress in the field.


📒 Updates

[Apr 6, 2025]

⚠️⚠️ IMPORTANT: Be sure to run python setup_data.py immediately after merging the latest version. This is required to refresh the downloaded eval log files and ensure they're compatible with the latest evaluation metrics.

  • More Metrics! Added 6 Membership Inference Attack (MIA) metrics (LOSS, ZLib, Reference, GradNorm, MinK, and MinK++), along with Extraction Strength (ES) and Exact Memorization (EM) as additional evaluation metrics.
  • More TOFU Evaluations! TOFU now includes a holdout set and supports MIA-based evaluation. You can now compute MUSE's privleak on TOFU.
  • More Documentation! docs/links.md collects resources for each of the implemented features along with other useful LLM unlearning resources.
Older Updates

[Mar 27, 2025]

  • More Documentation! Easier contributions and a leaderboard: We've updated the documentation to make contributing new unlearning methods and benchmarks much easier. Users can better document their additions and update a leaderboard with their results. See this section for details.

[Mar 9, 2025]

  • More Methods! Added support for RMU (representation-engineering based unlearning).

[Feb 27, 2025]

⚠️ Repository Update: This repo replaces the original TOFU codebase at github.com/locuslab/tofu, which is no longer maintained.


🗃️ Available Components

We provide several variants for each of the components in the unlearning pipeline.

  • Benchmarks: TOFU, MUSE
  • Unlearning Methods: GradAscent, GradDiff, NPO, SimNPO, DPO, RMU
  • Evaluation Metrics: Verbatim Probability, Verbatim ROUGE, Knowledge QA-ROUGE, Model Utility, Forget Quality, TruthRatio, Extraction Strength, Exact Memorization, 6 MIA attacks
  • Datasets: MUSE-News (BBC), MUSE-Books (Harry Potter), TOFU (different splits)
  • Model Families: TOFU: LLaMA-3.2, LLaMA-3.1, LLaMA-2; MUSE: LLaMA-2; Additional: Phi-3.5, Phi-1.5, Gemma


⚑ Quickstart

# environment setup
conda create -n unlearning python=3.11
conda activate unlearning
pip install .
pip install --no-build-isolation flash-attn==2.6.3

# data setup
python setup_data.py
# Downloads log files with metric eval results (incl. retain model logs) for the models
# used in the supported benchmarks; saves/eval will then contain these evaluation results.

🔄 Updated TOFU benchmark

We've updated OpenUnlearning's TOFU benchmark target models to use a wider variety of newer architectures, with sizes ranging from 1B to 8B. These include LLaMA 3.2 1B, LLaMA 3.2 3B, LLaMA 3.1 8B, and a re-created version of the original LLaMA-2 7B target model from the old version of TOFU.

For each architecture, we have finetuned models on four different splits of the TOFU dataset: full, retain90, retain95, and retain99, for a total of 16 finetuned models. The model trained on full serves as the target (the base model for unlearning), and the rest are retain models used as references when measuring performance on each forget split. These models are available on HuggingFace, and their paths can be set in the experiment configs or via command-line overrides.
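
For example, a released target model can be selected directly from the command line by overriding its path, in the same way as in the evaluation command further below. The following is only a sketch: it assumes the unlearning experiment accepts the same model overrides and that the released models follow the tofu_<model>_full naming used there.

# Sketch: point an unlearning run at a released TOFU target model via Hydra overrides.
model=Llama-3.2-1B-Instruct
python src/train.py --config-name=unlearn.yaml experiment=unlearn/tofu/default \
  model=${model} \
  model.model_args.pretrained_model_name_or_path=open-unlearning/tofu_${model}_full \
  forget_split=forget10 retain_split=retain90 trainer=GradAscent task_name=TOFU_TARGET_UNLEARN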


🧪 Running Experiments

We provide an easily configurable interface for running evaluations by leveraging Hydra configs. For more detailed documentation of aspects like running experiments, commonly overridden arguments, interfacing with configurations, distributed training, and simple finetuning of models, refer to docs/experiments.md.
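
As a quick illustration of the Hydra interface: any field in the composed config can be overridden with key=value arguments, as in the commands below, and Hydra's standard multirun mode can sweep several values of a field in one invocation. The sweep below is only a sketch; whether these entry points are intended to be used with multirun, and how output directories are kept separate per run, are assumptions.

# Hydra multirun sketch (assumption: the train entry point works under -m / --multirun).
# Sweeps two of the implemented unlearning methods in one command.
python src/train.py -m --config-name=unlearn.yaml experiment=unlearn/tofu/default \
  forget_split=forget10 retain_split=retain90 trainer=GradAscent,GradDiff \
  task_name=SWEEP_DEMO  # a distinct task_name per run may be needed to avoid overwriting outputs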

🚀 Perform Unlearning

An example command for launching an unlearning process with GradAscent on the TOFU forget10 split:

python src/train.py --config-name=unlearn.yaml experiment=unlearn/tofu/default \
  forget_split=forget10 retain_split=retain90 trainer=GradAscent task_name=SAMPLE_UNLEARN
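
Swapping in any of the other methods listed under Available Components should only require changing the trainer override. The sketch below assumes each method's trainer config is named after the method, as with GradAscent, and leaves any method-specific hyperparameters at their defaults.

# Same experiment with NPO instead of GradAscent (a sketch; trainer config name assumed).
python src/train.py --config-name=unlearn.yaml experiment=unlearn/tofu/default \
  forget_split=forget10 retain_split=retain90 trainer=NPO task_name=SAMPLE_UNLEARN_NPO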

📊 Perform an Evaluation

An example command for launching a TOFU evaluation process on the forget10 split:

model=Llama-3.2-1B-Instruct
python src/eval.py --config-name=eval.yaml experiment=eval/tofu/default \
  model=${model} \
  model.model_args.pretrained_model_name_or_path=open-unlearning/tofu_${model}_full \
  retain_logs_path=saves/eval/tofu_${model}_retain90/TOFU_EVAL.json \
  task_name=SAMPLE_EVAL
  • experiment: Path to the evaluation configuration, configs/experiment/eval/tofu/default.yaml.
  • model: Sets up the model and tokenizer configs for the Llama-3.2-1B-Instruct model.
  • model.model_args.pretrained_model_name_or_path: Overrides the default experiment config to evaluate a model from a HuggingFace ID (a local model checkpoint path can be used as well).
  • retain_logs_path: Sets the path to the reference model eval logs needed to compute reference-model-based metrics like forget_quality in TOFU.

For more details about creating and running evaluations, refer to docs/evaluation.md.
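
Evaluation results are written to JSON logs like the retain-model logs downloaded by setup_data.py. For instance, the retain log referenced in the command above can be pretty-printed to see which metrics it contains; the output location of a newly launched eval run depends on its task_name and is not assumed here.

# Inspect the downloaded retain-model eval log (path taken from the command above).
python -m json.tool saves/eval/tofu_Llama-3.2-1B-Instruct_retain90/TOFU_EVAL.json | head -40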

📜 Running Baseline Experiments

The scripts below execute standard baseline unlearning experiments on the TOFU and MUSE datasets, evaluated using their corresponding benchmarks. The expected results for these are in docs/repro.md.

bash scripts/tofu_unlearn.sh
bash scripts/muse_unlearn.sh

The above scripts are not tuned and use default hyperparameter settings. We encourage you to tune your methods and add your final results to community/leaderboard.md.


➕ How to Contribute

If you are interested in contributing to our work, please have a look at the contributing.md guide.

📚 Further Documentation

For more in-depth information on specific aspects of the framework, refer to the following documents:

  • docs/contributing.md: Instructions on how to add new unlearning methods, benchmarks, and other components such as trainers, metrics, models, and datasets.
  • docs/evaluation.md: Detailed instructions on creating and running evaluation metrics and benchmarks.
  • docs/experiments.md: Guide on running experiments in various configurations and settings, including distributed training, fine-tuning, and overriding arguments.
  • docs/hydra.md: Explanation of the Hydra features used in configuration management for experiments.
  • community/leaderboard.md: Reference results from various unlearning methods run using this framework on the TOFU and MUSE benchmarks.
  • docs/links.md: Links to the research papers and other sources from which the implemented features are drawn.
  • docs/repro.md: Baseline results provided solely for reproducibility purposes, without any parameter tuning.

🔗 Support & Contributors

Developed and maintained by Vineeth Dorna (@Dornavineeth) and Anmol Mekala (@molereddy).

If you encounter any issues or have questions, feel free to raise an issue in the repository 🛠️.

📝 Citing this work

If you use OpenUnlearning in your research, please cite OpenUnlearning and the relevant benchmarks listed below:

@misc{openunlearning2025,
  title={OpenUnlearning: A Unified Framework for LLM Unlearning Benchmarks},
  author={Dorna, Vineeth and Mekala, Anmol and Zhao, Wenlong and McCallum, Andrew and Kolter, J Zico and Maini, Pratyush},
  year={2025},
  howpublished={\url{https://github.com/locuslab/open-unlearning}},
  note={Accessed: February 27, 2025}
}
@inproceedings{maini2024tofu,
  title={TOFU: A Task of Fictitious Unlearning for LLMs},
  author={Maini, Pratyush and Feng, Zhili and Schwarzschild, Avi and Lipton, Zachary Chase and Kolter, J Zico},
  booktitle={First Conference on Language Modeling},
  year={2024}
}
@article{shi2024muse,
  title={MUSE: Machine Unlearning Six-Way Evaluation for Language Models},
  author={Weijia Shi and Jaechan Lee and Yangsibo Huang and Sadhika Malladi and Jieyu Zhao and Ari Holtzman and Daogao Liu and Luke Zettlemoyer and Noah A. Smith and Chiyuan Zhang},
  year={2024},
  eprint={2407.06460},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2407.06460},
}

🤝 Acknowledgements

  • This repo is inspired by LLaMA-Factory.
  • The TOFU and MUSE benchmarks served as the foundation for our re-implementation.

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.

