
parallel with mpi #7

Merged: omad merged 3 commits into simple-cog from new_convert on Oct 25, 2018

Conversation

@emmaai (Contributor) commented Oct 22, 2018

Add src_template so the converter can deal with different file name formats;
add a new command so the conversion can be parallelized with MPI.
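A minimal sketch of the chunk-by-rank pattern discussed in the review below, assuming the input file list is known up front. This is not the actual streamer.py code; `convert_to_cog` and the `tile_*.nc` inputs are made up for illustration.

```python
# Sketch: split the input file list into one chunk per MPI rank, then index
# into the chunks with this process's rank. Run with e.g.:
#   mpiexec -n 4 python3 this_script.py
import math

from mpi4py import MPI


def convert_to_cog(path):
    # Stand-in for the real converter; just report what would be converted.
    print(f"converting {path} to COG")


def mpi_convert(file_list):
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()                     # index of this MPI process
    size = comm.Get_size()                     # total number of MPI processes
    chunk = math.ceil(len(file_list) / size)   # files per rank (last chunk may be short)
    for path in file_list[rank * chunk:(rank + 1) * chunk]:
        convert_to_cog(path)


if __name__ == "__main__":
    mpi_convert([f"tile_{i}.nc" for i in range(10)])   # made-up inputs
```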

@emmaai requested a review from @omad on October 22, 2018 08:38
@emmaai force-pushed the new_convert branch 3 times, most recently from c25f929 to 9d45375 on October 23, 2018 04:54
@omad (Contributor) commented Oct 23, 2018

This sounds cool, Emma! A couple of questions:

  1. What do I need to do to run this?
     mpiexec python3 streamer.py mpi_convert_cog ... ??

  2. To use mpi4py I made a private virtualenv based off the NCI's mpi4py and python3 modules. Is that what you had to do too?

  3. Chunking things up and then indexing into the chunk based on rank surprised me; it will still have some inefficiency if there's a mix of fast and slow tasks. I'd planned on using MPIExecutor instead (see the sketch after this list). But I guess it worked okay for the run you did?
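
Assuming "MPIExecutor" refers to mpi4py.futures.MPIPoolExecutor (or a similar dynamic executor), here is a minimal sketch of that load-balanced alternative; `convert_to_cog` and the inputs are again made up.

```python
# Sketch of the executor alternative: a dynamic pool hands tasks out one at a
# time, so a slow file doesn't hold up a whole pre-assigned chunk.
from mpi4py.futures import MPIPoolExecutor


def convert_to_cog(path):
    # Stand-in for the real converter.
    return f"converted {path}"


if __name__ == "__main__":
    files = [f"tile_{i}.nc" for i in range(10)]   # made-up inputs
    # Run with: mpiexec -n 5 python3 -m mpi4py.futures this_script.py
    with MPIPoolExecutor() as executor:
        for result in executor.map(convert_to_cog, files):
            print(result)
```

Because the pool hands out one file at a time, a few slow conversions don't leave other ranks idle the way a fixed pre-assigned chunk can.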

@emmaai (Contributor, Author) commented Oct 25, 2018

  1. A script for COG-converting wofls answers the question.
  2. I compiled the mpi4py wheel against the openmpi-3.1.0 and python3.6 modules provided by NCI.
  3. Using spawn is meant to let the job splitting be tuned dynamically in the future (see the sketch after this list); tbh, it currently works the same as mpirun -n $NCPUS.
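
For context on point 3, a rough sketch of what dynamic spawning looks like in mpi4py; the inline worker code and the NWORKERS value are illustrative and not taken from this PR.

```python
# Sketch: the parent launches NWORKERS child Python processes at runtime,
# which is what would allow the job split to be tuned per job later instead
# of being fixed by mpirun -n.
import sys

from mpi4py import MPI

NWORKERS = 4  # could later be decided dynamically per job

worker_code = (
    "from mpi4py import MPI; "
    "parent = MPI.Comm.Get_parent(); "
    "print('spawned worker', parent.Get_rank()); "
    "parent.Disconnect()"
)

intercomm = MPI.COMM_SELF.Spawn(sys.executable, args=["-c", worker_code],
                                maxprocs=NWORKERS)
# Work items could be scattered to the children over `intercomm` here.
intercomm.Disconnect()
```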

@omad merged commit 0e81c7d into simple-cog on Oct 25, 2018