fine-pruning-badnets

Using Fine-Pruning as a defence against backdooring attacks on deep neural networks.

The existence of successful backdoor attacks on deep neural networks (DNNs) indicates that these networks have excess learning capacity: a DNN can learn to respond incorrectly to inputs containing a backdoor trigger while still responding accurately to clean inputs. This behavior is concentrated in a small set of so-called "backdoor neurons," which the attack covertly exploits to recognize the trigger and induce the misbehavior.

In this lab, we evaluate a defense that tries to disable a backdoor by pruning neurons that remain dormant on clean inputs, on the assumption that those are the neurons the attack has reserved for the backdoor. This strategy is known as the "pruning defense."
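
The intuition can be made concrete with a short sketch of the pruning defense. This is a minimal illustration, assuming a tf.keras model, a clean validation set (x_valid, y_valid) with integer labels, and hypothetical layer names pool_3 (the last pooling layer) and conv_3 (the convolution feeding it); the drop_limit threshold is likewise an arbitrary choice for illustration, not something prescribed by this repository.

    # Minimal sketch of the pruning defense (assumptions noted above).
    import numpy as np
    import tensorflow as tf

    def prune_defense(model, x_valid, y_valid, drop_limit=0.04):
        # Expose the activations of the last pooling layer (hypothetical name).
        feature_model = tf.keras.Model(model.input, model.get_layer("pool_3").output)

        # Average each channel's activation over the clean validation set.
        activations = feature_model.predict(x_valid, verbose=0)
        channel_means = activations.mean(axis=tuple(range(activations.ndim - 1)))

        # Clean accuracy before any pruning.
        baseline = (model.predict(x_valid, verbose=0).argmax(axis=1) == y_valid).mean()

        # Zero out filters in order of increasing average activation, i.e. prune
        # the most dormant channels first, until clean accuracy degrades too much.
        conv = model.get_layer("conv_3")  # hypothetical layer producing pool_3's input
        kernel, bias = conv.get_weights()
        for ch in np.argsort(channel_means):
            kernel[..., ch] = 0.0
            bias[ch] = 0.0
            conv.set_weights([kernel, bias])
            acc = (model.predict(x_valid, verbose=0).argmax(axis=1) == y_valid).mean()
            if baseline - acc > drop_limit:
                break
        return model

Because the defender only has clean data, the channels that stay dormant on it are the natural candidates for removal; fine-pruning then fine-tunes the pruned model on the same clean data to recover accuracy and further suppress the backdoor.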

Usage

  1. Clone the repository

    git clone https://github.com/rajghodasara1/fine-pruning-badnets.git
  2. Download the validation and test datasets from here and place them under the data/ directory (see the data-loading sketch after these steps).

  3. Create and activate the virtual environment

    python -m venv venv
    source venv/bin/activate
  4. Install the required dependencies

    pip install -r requirements.txt
  5. Run this command to register an IPython kernel for the virtual environment.

    ipython kernel install --user --name=venv
    
  6. Open Lab4.ipynb, select the venv kernel, and run all cells to reproduce the results.
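
If the downloaded files are HDF5 datasets, they might be loaded along these lines. This is a hedged sketch: the file names valid.h5 and test.h5 and the data/label keys are assumptions for illustration and may differ from the actual files referenced in the notebook.

    # Minimal sketch of loading the datasets placed under data/ (assumed HDF5 layout).
    import h5py
    import numpy as np

    def load_h5(path):
        with h5py.File(path, "r") as f:
            x = np.array(f["data"])   # assumed key for the images
            y = np.array(f["label"])  # assumed key for the integer labels
        return x, y

    x_valid, y_valid = load_h5("data/valid.h5")  # hypothetical file name
    x_test, y_test = load_h5("data/test.h5")     # hypothetical file name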

LICENSE

This project is licensed under the MIT License. See LICENSE.
