This is a PyTorch implementation of DropHead, the regularization method for transformers proposed in *Scheduled DropHead: A Regularization Method for Transformer Models*. The implementation is designed to work on top of the `transformers` package and currently supports BERT, RoBERTa and XLM-RoBERTa. You can simply copy `drophead.py` into your project.
There is only one main function: `set_drophead(model, p_drophead)`. As `model` you can provide any of the following:
- `transformers.BertModel`
- `transformers.RobertaModel`
- `transformers.XLMRobertaModel`
- Any downstream model from `transformers` that uses one of the above (e.g. `transformers.BertForSequenceClassification`).
- Any custom downstream model that has one of the first three as an attribute. See the example sketched below.
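A minimal usage sketch (the model names, the 0.1 probability and the `MyClassifier` class are only illustrative, not part of this repo):

```python
import torch
from transformers import BertModel, BertForSequenceClassification

from drophead import set_drophead  # the copied drophead.py

# A plain transformers downstream model: pass it directly.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
set_drophead(model, 0.1)  # drop each attention head with probability 0.1

# A custom downstream model that keeps a BertModel as an attribute also works.
class MyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.head = torch.nn.Linear(self.bert.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask)[0]
        return self.head(hidden[:, 0])  # classify from the [CLS] hidden state

custom = MyClassifier()
set_drophead(custom, 0.1)

model.train()  # DropHead active, like regular dropout
model.eval()   # DropHead disabled
```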
Note:
- The method is implemented with a PyTorch hook. Be careful if you want to save your model, load it back later and continue using DropHead: you need to call `set_drophead` again after loading.
- The function `set_drophead` works in place.
- `model.train()` and `model.eval()` work the same way as for usual dropout.
- If you use multiple base models inside one custom class (e.g. your model averages predictions from Bert and Roberta), apply the function directly to each base model. See the 2nd example from here.
- Only the drophead mechanism itself is implemented in this repo. If you want scheduled drophead as suggested in the paper, simply add a call to `set_drophead(model, p_drophead)` inside your training loop, with `p_drophead` changing according to your schedule; a sketch is given after this list.
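A rough sketch of such a loop, assuming a hypothetical `drophead_schedule` helper and a toy batch (the schedule shape, optimizer settings and maximum probability are placeholders, not the paper's exact recipe):

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

from drophead import set_drophead

def drophead_schedule(step, total_steps, p_max=0.1):
    # Illustrative linear ramp-up then ramp-down; replace with whatever
    # schedule you actually want to follow.
    half = total_steps / 2
    frac = step / half if step < half else (total_steps - step) / half
    return p_max * frac

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy batch so the loop runs end to end; use your own DataLoader in practice.
inputs = tokenizer.encode_plus("a tiny example", return_tensors="pt")
labels = torch.tensor([1])

total_steps = 100
model.train()
for step in range(total_steps):
    # Update the drophead probability before each forward pass;
    # set_drophead works in place, so calling it inside the loop is fine.
    set_drophead(model, drophead_schedule(step, total_steps))

    loss = model(**inputs, labels=labels)[0]  # loss is the first output when labels are given
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```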
The code was tested with Python 3, PyTorch 1.4.0 and transformers 2.9.0, but it will probably work with older versions of the latter two.