Replace global pooling with explicitly defined window #30
It appears somebody hit the same issue when converting to TensorFlow: ethereon/caffe-tensorflow#53 (comment). I haven't tried their suggestions yet though.
I changed the pooling layer definition accordingly, and TensorRT's parser accepted it alright. Unfortunately, the classification was completely off (e.g. 1000 mis-predictions out of 1000). Any ideas?
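The exact edit isn't reproduced above; purely as an illustration, the kind of change being discussed swaps the global_pooling flag for an explicit window, along these lines (values are placeholders):

```
# Hypothetical sketch, not the exact edit from this comment.
# Before: pool: AVE with global_pooling: true
# After: an explicit window; for the output to stay identical to global
# average pooling, kernel_size must equal the spatial size of the layer's
# bottom blob (the value below is a placeholder).
pooling_param {
  pool: AVE
  kernel_size: 13  # placeholder, must match the input feature map size
  stride: 1
}
```

If the chosen window does not cover the whole feature map, or the pooling type changes, the result diverges from global average pooling, which is one plausible (though unconfirmed) cause of mis-predictions like those above.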
@psyhtest
@fengziyong, thanks! I should get in touch with someone from the TensorRT team soon, and will ask their opinion on this issue.
With help from @milakov from NVIDIA (thanks!), I've created two new CK-TensorRT packages that explicitly define the kernel size and stride for the average global pooling layer. These changes make TensorRT 1.0.0 (GIE) happy! Since these SqueezeNet packages are only intended as a workaround, they will be kept in the CK-TensorRT repository. Note also that by default they install files into the same locations (…).
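The package definitions aren't reproduced here, but presumably the replacement looks something like the sketch below, with layer names as in the DeepScale/SqueezeNet prototxts and kernel_size set to the spatial size of the blob entering pool10 (the concrete value differs between v1.0 and v1.1 and is best read off the actual packages or Caffe's net initialisation log):

```
# Sketch of an explicitly windowed replacement for the global pooling layer.
# kernel_size is an assumed value; it must equal the height/width of the
# conv10 output for the deploy input, so that a single explicit window is
# numerically identical to global average pooling.
layer {
  name: "pool10"
  type: "Pooling"
  bottom: "conv10"
  top: "pool10"
  pooling_param {
    pool: AVE
    kernel_size: 13  # assumption, check the actual blob size per variant
    stride: 1
  }
}
```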
While trying out the SqueezeNet variants (1.0 and 1.1) on a Jetson TX1 dev board with TensorRT 1.0.0, I got an error which I believe refers to the global average pooling layer in the definitions (identical in both variants).
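The definition itself isn't quoted above, but in the upstream DeepScale/SqueezeNet deploy prototxts the final pooling layer reads roughly as follows (reproduced from memory, so worth double-checking against the actual files):

```
# Final average pooling layer as released upstream; the window is implicit
# (global_pooling), which TensorRT 1.0.0's Caffe parser appears to reject.
layer {
  name: "pool10"
  type: "Pooling"
  bottom: "conv10"
  top: "pool10"
  pooling_param {
    pool: AVE
    global_pooling: true
  }
}
```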
I've just got advice from NVIDIA to replace the global pooling with an explicitly defined window. Alas, I'm not a Caffe expert, so I'm struggling a bit with how to do that. Can anyone please suggest how the SqueezeNet definitions should be updated, so as to maintain the recognition accuracy?