
factorA and factorB in network params #36

Open
mvsanjaya opened this issue Jan 30, 2020 · 1 comment

Comments

@mvsanjaya

I am not able to understand what the factorA and factorB params in the trained network are. Can someone provide a hint?

@JiaMingLin

JiaMingLin commented Sep 15, 2020

I think they are probably re-scaling factors from the current layer to the next layer.
You can refer to the paper from Google, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference", section 2.
This blog post might also help:
https://medium.com/@karanbirchahal/how-to-quantise-an-mnist-network-to-8-bits-in-pytorch-no-retraining-required-from-scratch-39f634ac8459
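For context, here is a minimal sketch of the rescaling mechanism described in section 2 of that paper, which is likely what such factors encode. In that scheme, the real-valued multiplier M = s_in * s_w / s_out (the product of the input and weight quantization scales divided by the output scale) is decomposed into a normalized int32 multiplier and a right-shift so the requantization can be done with integer arithmetic only. The names, scale values, and mapping to factorA/factorB below are assumptions for illustration, not taken from this repo's code.

```python
import numpy as np

def quantize_multiplier(real_multiplier):
    """Decompose a real rescale factor M (0 < M < 1) into a normalized
    int32 multiplier and a right-shift, following the integer-arithmetic-only
    inference scheme (Jacob et al., sec. 2). The pair returned here is the
    kind of constant a trained network might store per layer (hypothetically,
    something like factorA / factorB in this repo's params)."""
    shift = 0
    while real_multiplier < 0.5:
        real_multiplier *= 2.0
        shift += 1
    quantized = int(round(real_multiplier * (1 << 31)))  # fixed-point in [0.5, 1)
    return quantized, shift

# Hypothetical per-layer quantization scales (illustrative values only).
s_in, s_w, s_out = 0.05, 0.02, 0.1
M = s_in * s_w / s_out            # real rescale factor between layers

q, shift = quantize_multiplier(M)

# Requantize an int32 accumulator to the next layer's scale using only
# integer multiply and shift -- no floating point at inference time.
acc = 12345
rescaled = (acc * q) >> (31 + shift)
print(rescaled)                    # close to round(acc * M)
```

The integer result should match the floating-point rescale `acc * M` to within one unit, which is the point of the decomposition: the float multiplier is folded into constants at export time, so inference needs only integer ops.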
