
potential TensorFlow benchmark improvements #91

Open · rryan opened this issue Feb 29, 2016 · 2 comments

rryan commented Feb 29, 2016

Hi -- thanks for the benchmarks!

I noticed that you do a sparse-to-dense conversion and use softmax_cross_entropy_with_logits. Have you tried eliding the sparse-to-dense conversion and using sparse_softmax_cross_entropy_with_logits? In my experience the sparse version is faster.
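A minimal sketch of the suggested substitution, assuming a TF 1.x-style graph API with keyword arguments (the tensor shapes, `tf.one_hot` standing in for the benchmark's sparse-to-dense step, and all variable names are illustrative):

```python
import tensorflow as tf

# logits: [batch_size, num_classes]; labels: [batch_size] integer class ids
logits = tf.random_normal([128, 1000])
labels = tf.random_uniform([128], maxval=1000, dtype=tf.int64)

# Dense path: convert the integer labels to a dense one-hot tensor,
# then apply the dense cross-entropy op.
dense_labels = tf.one_hot(labels, depth=1000)
dense_xent = tf.nn.softmax_cross_entropy_with_logits(
    labels=dense_labels, logits=logits)

# Sparse path: skip the conversion and feed the integer labels directly.
sparse_xent = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
```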

Also, reduce_mean does not have a GPU kernel. reduce_sum with a division would prevent a GPU -> CPU -> GPU step when calculating the loss.
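A sketch of that replacement under the same TF 1.x-style assumptions; the per-example loss tensor and batch size here are placeholders:

```python
import tensorflow as tf

# Stand-in for the per-example cross-entropy tensor of shape [batch_size].
per_example_loss = tf.random_normal([128])

# reduce_mean variant (the reduction said to lack a GPU kernel at the time):
loss_mean = tf.reduce_mean(per_example_loss)

# reduce_sum plus an explicit division by the batch size computes the same
# value while keeping the reduction on the GPU.
loss_sum = tf.reduce_sum(per_example_loss) / 128.0
```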

soumith (Owner) commented Feb 29, 2016

@rryan I don't think the sparse softmax will improve the overall benchmarks; the time contributed by a 1000-way softmax is very small. I'll definitely change reduce_mean to reduce_sum + division if you think it helps.

rryan (Author) commented Feb 29, 2016

@soumith -- I can't say how much it would affect this benchmark. On a much simpler model -- the CIFAR-10 multi-GPU example -- replacing reduce_mean sped up my batches by about 5% on a K20.
