Wrong Metrics #14

Open
DiegoPortoJaccottet opened this issue Nov 7, 2016 · 2 comments

Comments

@DiegoPortoJaccottet

DiegoPortoJaccottet commented Nov 7, 2016

Saying that the GTX 1080 > Maxwell Titan X is misleading. The metric should be time per epoch (or time to convergence), not forward + backward pass time. The extra 4 GB of memory on the Maxwell Titan X makes it much faster than the GTX 1080 for training.
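
Concretely, the difference between the two metrics looks like this; a minimal sketch, assuming PyTorch (the model, optimiser, and data loader are placeholders, not this project's benchmark code):

```python
import time
import torch
import torch.nn as nn

# Placeholder model, loss, and optimiser; any training setup works the same way.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def time_forward_backward(x, y):
    """The benchmarked metric: one forward + backward pass for a single batch."""
    torch.cuda.synchronize()
    start = time.time()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    torch.cuda.synchronize()
    return time.time() - start

def time_epoch(loader):
    """The metric that matters for training: one full pass over the dataset.
    A card with more memory can run a larger batch size, so it finishes the
    epoch in fewer iterations even if each iteration is slightly slower."""
    torch.cuda.synchronize()
    start = time.time()
    for x, y in loader:
        x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    torch.cuda.synchronize()
    return time.time() - start
```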

@rishikksh20

rishikksh20 commented Nov 8, 2016

The speed of a GPU mostly depends on program optimisation, GPU architecture, memory clock, type of memory (not memory size), memory bandwidth, PCIe bandwidth, number of CUDA cores for parallel processing, and system clock. Except for CUDA core count and program optimisation (most current deep learning frameworks are better optimised for the Titan X), the GTX 1080 is better than the Maxwell Titan X in every other department. The Titan X has 12 GB of memory, but it is slower than the 1080's 8 GB of GDDR5X. Yes, in memory-demanding tasks the Titan X has an advantage over the 1080, but that scenario is fairly rare because, at present, 8 GB is plenty of memory for a graphics card.

@DiegoPortoJaccottet
Author

The ~15% increase in forward + backward speed on the GTX 1080 does not outweigh the ~30% increase in batch size that the Maxwell Titan X allows. Deep learning training is a memory-demanding task.
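
To put numbers on that: what decides epoch time is effective throughput (samples per second), not per-pass latency alone. A back-of-the-envelope sketch with placeholder figures (these are illustrative values, not benchmark results):

```python
# Epoch time is (dataset_size / batch_size) * time_per_iteration, so a card
# that fits a larger batch can win an epoch despite a slower single pass.
# All numbers below are hypothetical placeholders, not measurements.

def epoch_time(dataset_size, batch_size, iter_time_sec):
    """Seconds for one full pass over the dataset."""
    iterations = dataset_size / batch_size
    return iterations * iter_time_sec

dataset_size = 1281167  # e.g. the ImageNet-1k training set

# Measure iter_time on each GPU at the largest batch that fits in its memory,
# then compare; illustrative values only:
print(epoch_time(dataset_size, batch_size=64, iter_time_sec=0.26))  # smaller batch, faster pass
print(epoch_time(dataset_size, batch_size=96, iter_time_sec=0.37))  # larger batch, slower pass
```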
