Not an mmdet developer, but I've looked into this recently. As far as I can tell there's no support for QAT (or PTQ) in mmdetection. You can try using mmrazor, which is their model compression library, but it doesn't seem to be maintained and I was unable to get it to work with mmdetection for my use case.
I ultimately got QAT to work by using NVIDIA's quantization library for PyTorch (pytorch-quantization) and manually inserting quantization ops into my mmdet model definition (sketched below). You could also consider PyTorch's built-in quantization library (torch.ao.quantization).
Unfortunately I can't share any of my custom code as it's proprietary, but good luck to you.
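For anyone landing here later, here is a rough sketch of that approach. This is not the commenter's proprietary code: the package usage follows NVIDIA's pytorch-quantization docs, the `build_detector`/config names assume the mmdet 2.x API, and the backbone layer path is an example you'd need to adapt to your own model.

```python
# Rough sketch, NOT the commenter's code. Assumes NVIDIA's
# pytorch-quantization package (pip install pytorch-quantization
# --extra-index-url https://pypi.ngc.nvidia.com) and mmdet 2.x.
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

# Option A: monkey-patch torch.nn BEFORE the model is constructed so
# every Conv2d/Linear/etc. is built with fake-quantization ops.
quant_modules.initialize()

from mmcv import Config
from mmdet.models import build_detector  # mmdet 2.x API

cfg = Config.fromfile('configs/retinanet/retinanet_r50_fpn_1x_coco.py')
model = build_detector(cfg.model)

# Option B: instead of the global patch, insert quantized ops by hand
# where you want them, e.g. swapping the stem conv of a ResNet
# backbone (the layer name here is an example -- check your model).
model.backbone.conv1 = quant_nn.QuantConv2d(
    3, 64, kernel_size=7, stride=2, padding=3, bias=False)

# From here, fine-tune as usual: the inserted ops simulate INT8
# rounding in the forward pass, which is what QAT trains against.
```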
Thanks for your reply.
I also inserted QuantStub and DeQuantStub into my model, though I don't know whether they are actually effective. I also noticed that in the PyTorch documentation the qconfig is set before the training loop (see the sketch below), which means I would need to insert that code into mmengine's runner, and I'd rather not do that.
Are there any instructions on how to do QAT in mmdetection?
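For reference, the eager-mode QAT recipe from the PyTorch docs looks roughly like the sketch below. The key point is where the qconfig is set relative to the training loop, which is exactly the step that collides with mmengine's runner; the toy model here is a placeholder, not mmdet code.

```python
# Minimal sketch of PyTorch's eager-mode QAT flow, with a toy module
# standing in for a real mmdet model.
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert,
)

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # fake-quantize the float input
        self.conv = nn.Conv2d(3, 16, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # return to float at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = ToyModel().train()

# This is the part the PyTorch docs place BEFORE the training loop --
# i.e. the step that would have to run before mmengine's runner starts.
model.qconfig = get_default_qat_qconfig('fbgemm')
prepare_qat(model, inplace=True)

# ... normal fine-tuning loop goes here, with fake quantization active ...

model.eval()
int8_model = convert(model)  # materialize the actual int8 model
```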