When I run the algorithm and set n_jobs, I find that more processes are spawned than expected. I believe this is the cause:
File: PhenoGraph/phenograph/core.py
135: with closing(Pool()) as pool: # replace with: with closing(Pool(n_jobs)) as pool:
136: jaccard_values = pool.starmap(calc_jaccard, zip(range(n), repeat(idx)))
I can correct this in my clone of the library. Is there a reason why you can't set the maximum number of jobs for this process pool?
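The proposed fix can be sketched as follows. `calc_jaccard_stub` is a hypothetical stand-in for phenograph's `calc_jaccard` (the real function computes a Jaccard index from neighbor lists); the point is only that `Pool()` with no argument spawns `cpu_count()` workers, while `Pool(n_jobs)` caps the pool size:

```python
from contextlib import closing
from itertools import repeat
from multiprocessing import Pool

def calc_jaccard_stub(i, idx):
    # Hypothetical stand-in for calc_jaccard: returns (i, neighbor count)
    # so the pool has something trivial to compute.
    return i, len(idx[i])

def parallel_jaccard(idx, n_jobs=None):
    # Pool() with no argument defaults to cpu_count() workers;
    # passing n_jobs (as proposed above) caps the pool size.
    # n_jobs=None preserves the current default behavior.
    n = len(idx)
    with closing(Pool(n_jobs)) as pool:
        return pool.starmap(calc_jaccard_stub, zip(range(n), repeat(idx)))
```

With `n_jobs=2`, at most two worker processes are created regardless of how many cores the server has, which is the behavior the report asks for.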
SimeonGeorgiev changed the title from "Parallel computation of Jaccard index sucks up server resources. Proposed solution." to "Parallel computation of Jaccard index uses up server resources. Proposed solution." on May 8, 2018.