I'm not able to understand what the factorA and factorB params in the trained network are. Can someone provide a hint?
I think they are probably re-scaling factors from the current layer to the next layer. You can refer to the Google paper "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference", section 2. This blog might also help you: https://medium.com/@karanbirchahal/how-to-quantise-an-mnist-network-to-8-bits-in-pytorch-no-retraining-required-from-scratch-39f634ac8459
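To make the "re-scaling" idea concrete, here is a minimal NumPy sketch of the scheme from section 2 of that paper, where real values are represented as r = S * (q - Z) and the int32 accumulator of one layer is rescaled into the next layer's quantization grid. The interpretation that factorA/factorB correspond to per-layer multipliers like M = (s_in * s_w) / s_out is an assumption on my part, not something confirmed by this repo; the function and parameter names below are hypothetical.

```python
import numpy as np

def quantize(r, scale, zero_point, qmin=0, qmax=255):
    """Map float values r to uint8 using r = scale * (q - zero_point)."""
    q = np.round(r / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.uint8)

def quantized_matmul(q_in, q_w, s_in, s_w, s_out, z_in, z_w, z_out):
    """Integer matmul followed by rescaling to the next layer's scale.

    The combined multiplier M = (s_in * s_w) / s_out is the kind of
    layer-to-layer rescaling factor that factorA/factorB might hold
    (an assumption, not verified against this repo's code).
    """
    # Accumulate in int32 with zero-points removed.
    acc = (q_in.astype(np.int32) - z_in) @ (q_w.astype(np.int32) - z_w)
    # Rescale the int32 accumulator into the output quantization grid.
    M = (s_in * s_w) / s_out
    q_out = np.round(acc * M) + z_out
    return np.clip(q_out, 0, 255).astype(np.uint8)
```

In an integer-only deployment, M is usually the only floating-point quantity left per layer, which is why it often gets stored alongside the trained weights as a separate "factor" parameter.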