As previously mentioned, with certain values of the learning rate, at least for the 4-Gaussian-clusters example, some of the weights in the cSOM seem to collapse into a single point for unknown reasons. Example:
As you can see, a few weights appear to be pointing to the same point. Zooming in shows the points are slightly different from each other, but this configuration still does not make sense: these points are far too attracted to each other, and the behaviour develops over the course of training for certain n_iter values. In particular, a similar issue occurs for values 10000, 20000, 30000, 40000, and 50000, but it did NOT occur for values such as 35000, 35001, and 25001. This bug is a complete mystery to me but should be addressed ASAP. I suspect something deep in the underlying random number generation, but I can't be sure.
This could be the result of the learning parameters, but that would not explain why certain values of n_iter trigger the issue and others do not.
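For reference, here is a minimal diagnostic sketch for detecting the collapse described above. It assumes only that the trained cSOM weights can be extracted as an (n_units, 2) NumPy array (the array name `weights` and the helper `count_collapsed_units` are hypothetical, not part of the library); it counts pairs of units whose weight vectors are nearly identical, which should be close to zero on a healthy map of the 4-Gaussian-clusters data.

```python
import numpy as np

def count_collapsed_units(weights, tol=1e-3):
    """Count pairs of SOM units whose weight vectors lie within `tol` of each other.

    A large count indicates the collapse described above, where many weights
    pile up on (almost) the same point.
    """
    diffs = weights[:, None, :] - weights[None, :, :]      # pairwise difference vectors
    dists = np.linalg.norm(diffs, axis=-1)                 # pairwise Euclidean distances
    upper = dists[np.triu_indices(len(weights), k=1)]      # each unordered pair once
    return int(np.sum(upper < tol))

# Usage sketch: replace `fake_weights` with the trained cSOM weights for a
# given n_iter and compare the counts across n_iter = 10000, 25001, 35000, ...
rng = np.random.default_rng(0)
fake_weights = rng.normal(size=(25, 2))
print(count_collapsed_units(fake_weights))
```

Running this check for the "good" and "bad" n_iter values would at least confirm whether the collapse is a property of the final weights or only an artifact of the plot.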
This is a MAJOR issue and should be fixed ASAP.