I was browsing through the code to understand the loading mechanism in the neuralfit classes.
In my first attempt I used the TNeuralFit class, and I now need to move on to a more memory-conserving loading mechanism.
Here is my question: the code uses a random index to pick the "next" element:
function TNeuralFit.FitTrainingPair(Idx: integer; ThreadId: integer): TNNetVolumePair;
var
  ElementIdx: integer;
begin
  ElementIdx := Random(FTrainingVolumes.Count);
  FitTrainingPair := FTrainingVolumes[ElementIdx];
end;
First: why random? Is that better for the weight update?
Second: the call to Random is not thread safe, yet it is issued across many threads. I don't know whether that has side effects...
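One alternative I could imagine is a per-thread generator, so worker threads never touch the shared RandSeed. This is only a sketch: SeedThreadRandom and ThreadRandom are made-up helper names, not part of neural-api, and it assumes overflow checking is off so the 32-bit multiply wraps around.

threadvar
  ThreadSeed: Cardinal;  // each thread keeps its own PRNG state

procedure SeedThreadRandom(ASeed: Cardinal);
begin
  ThreadSeed := ASeed;
end;

function ThreadRandom(ARange: Cardinal): Cardinal;
begin
  // Simple linear congruential step (Numerical Recipes constants).
  ThreadSeed := ThreadSeed * 1664525 + 1013904223;
  Result := ThreadSeed mod ARange;
end;

// Usage idea: each worker calls SeedThreadRandom(BaseSeed + ThreadId) once
// before training, then ThreadRandom(FTrainingVolumes.Count) per sample.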
And how about this suggestion: for each batch one could create a (new) randomized index list - this is quite fast and can be done in one go. The list is then split over the attending threads (see the sketch below). This also avoids the possibility that not every example is handled within one batch...
Is that feasible, or is it over-engineered?
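For illustration, a rough sketch of what I mean; BuildShuffledIndices and TIndexArray are made-up names, not from neuralfit. The shuffled list is built once per epoch (or per batch) in the main thread, and each worker then reads its own contiguous slice, so every example is visited exactly once.

type
  TIndexArray = array of Integer;

function BuildShuffledIndices(ACount: Integer): TIndexArray;
var
  i, j, Tmp: Integer;
begin
  SetLength(Result, ACount);
  for i := 0 to ACount - 1 do
    Result[i] := i;
  // Fisher-Yates shuffle; runs in the main thread only, so the global Random is safe here.
  for i := ACount - 1 downto 1 do
  begin
    j := Random(i + 1);
    Tmp := Result[i];
    Result[i] := Result[j];
    Result[j] := Tmp;
  end;
end;

// Each worker thread then processes the slice
// Result[ThreadId * SliceSize .. (ThreadId + 1) * SliceSize - 1],
// with SliceSize := (ACount + ThreadCount - 1) div ThreadCount.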