This repository has been archived by the owner on Jan 22, 2022. It is now read-only.

Same code gets different results on Linux (Ubuntu) and Windows machines #8

Open
KaiChen1998 opened this issue Feb 24, 2020 · 1 comment

Comments

@KaiChen1998

Hi! I'm a student learning CS285 online. Thank you for your great and generous work!

When I run the same code for homework 1 on two different machines, one Linux and one Windows, I get two different actor results (though the expert results are the same).

After looking into the details: because the random seed is fixed, the data batches used to update the parameters in every training iteration are exactly the same on both machines. Differences start to show up after running gradient descent even once.

So my question is: do the differences come from the different machine environments, or is there some other reason? What do you guys think?
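For anyone reproducing this, here is a minimal sketch of the kind of check described above, assuming each machine dumps its policy parameters to an `.npz` file right after the first gradient step (the file names `params_linux.npz` / `params_windows.npz` are hypothetical):

```python
import numpy as np

# Hypothetical dumps: each machine saves its parameters right after
# the first gradient step, e.g. np.savez("params_linux.npz", **weights).
linux = np.load("params_linux.npz")
windows = np.load("params_windows.npz")

# Compare every parameter tensor entry by entry.
for name in linux.files:
    diff = np.max(np.abs(linux[name] - windows[name]))
    print(f"{name}: max abs diff = {diff:.3e}")
```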

@archit120

If you're using cuDNN, it can choose non-deterministic algorithms for floating-point operations, so results can differ across runs and machines even with identical inputs.
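As a minimal sketch, assuming the homework is run with PyTorch (TensorFlow has analogous options), you can force cuDNN to use deterministic algorithms and seed every RNG the training loop might touch:

```python
import random
import numpy as np
import torch

def set_deterministic(seed: int = 0) -> None:
    # Seed all relevant RNGs: Python, NumPy, and PyTorch (CPU + all GPUs).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Make cuDNN pick deterministic algorithms instead of
    # benchmarking and selecting the fastest (possibly
    # non-deterministic) one.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Note that even with these settings, bit-identical results are not guaranteed across different GPUs, drivers, or CUDA/cuDNN versions, so some divergence between the two machines may remain.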
