
Way to approach parallelization #6

Open
zouharvi opened this issue Dec 17, 2021 · 1 comment

@zouharvi (Collaborator)

There are multiple ways in which we could do parallelization. Assuming that each line can be processed independently (which may not be true in the future if we add more processors), with workers A, B, C the parallelization might look as follows (assume 12 lines):

  1. ABC ABC ABC ABC: each worker is fed exactly one line, and when all three are finished, the results are sent to the buffer. This is the easiest solution but may not even be worth it, given that thread spawning is not free.
  2. AABBCC AABBCC: the next step is to increase the number of lines fed to each worker (here 2). There is certainly a threshold batch size above which threading becomes beneficial (that number probably being larger than 1).
  3. AAA BBB CCC: this chunks the whole data into equal parts and feeds them to the workers. It creates the largest chunks and is the fastest from the parallelization perspective, but it requires that the whole data is first loaded into memory, which may not be viable for large files. (In a sense, this is an extreme case of 2.)
  4. ABC AAABB CAC: this is the most complicated solution (though probably the fastest one). It assigns work dynamically: e.g. if A is finished while B and C are still working on some long and difficult lines, A gets the next lines assigned.

I think that 2 is the best option because it can be parametrized from the CLI via two parameters (the number of workers and the number of lines per worker). Maybe I missed some approach.
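For illustration, here is a minimal sketch of option 2 in Rust. It is not rjp's actual code: `process_line`, the constants, and the output handling are hypothetical stand-ins, and a real implementation would likely reuse worker threads instead of spawning new ones every round.

```rust
// Minimal sketch of option 2: each worker gets a fixed number of lines,
// one round of workers is joined, and the results are written out in input
// order before the next round is read.
use std::io::{self, BufRead};
use std::thread;

fn process_line(line: &str) -> String {
    line.to_uppercase() // placeholder for the real per-line processing
}

fn main() {
    const LINES_PER_WORKER: usize = 2; // would come from the CLI
    const NUM_WORKERS: usize = 3;      // would come from the CLI

    let stdin = io::stdin();
    let mut lines = stdin.lock().lines().map(|l| l.unwrap()).peekable();

    while lines.peek().is_some() {
        // Read one round of input: up to NUM_WORKERS chunks of LINES_PER_WORKER lines (AABBCC).
        let round: Vec<String> = lines.by_ref().take(LINES_PER_WORKER * NUM_WORKERS).collect();

        let results: Vec<Vec<String>> = thread::scope(|s| {
            let handles: Vec<_> = round
                .chunks(LINES_PER_WORKER)
                .map(|chunk| {
                    s.spawn(move || chunk.iter().map(|l| process_line(l)).collect::<Vec<String>>())
                })
                .collect();
            // Joining in spawn order preserves the input order.
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        });

        for batch in results {
            for line in batch {
                println!("{}", line);
            }
        }
    }
}
```

Reading the input one round at a time keeps memory bounded, which is the main advantage over option 3.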

@ales-t (Owner) commented Dec 17, 2021

> Assuming that each line can be processed independently (which may not be true in the future if we add more processors)

This is already not the case: the merge processor requires the order of input lines to be the same as the order in the "stream to merge". It could be addressed by keeping line/record indices, but that would make things a bit more complicated.

Regarding the best approach to parallelization: first of all, I think we should try to gauge the benefits with some benchmarks. One relatively easy way would be to use GNU parallel with --pipe --keep-order and experiment with block sizes.

When I was thinking about parallelization directly in rjp, I had a vague idea like this:

  • One I/O thread reads input lines into batches, putting them in an mpsc::channel. Each sentence in the batch has an absolute index associated with it (its position in the input stream).
  • Workers fetch new batches from the channel when they finish their current batch. There is an output mpsc::channel for collecting the processed data.
  • An output thread pulls processed batches from the output channel, sorting them as needed.
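For concreteness, here is a rough sketch of that three-stage design, assuming hypothetical names (`process_line`, `BATCH_SIZE`, `NUM_WORKERS`) and std's mpsc channels; since std's receiver is single-consumer, the workers share it behind a Mutex here, whereas a real implementation might prefer a multi-consumer channel crate.

```rust
// Sketch of the batched pipeline: an I/O thread produces indexed batches,
// workers process them in parallel, and an output thread restores the order.
use std::collections::BTreeMap;
use std::io::{self, BufRead};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn process_line(line: &str) -> String {
    line.to_uppercase() // placeholder for the real processors
}

fn main() {
    const BATCH_SIZE: usize = 1000;
    const NUM_WORKERS: usize = 3;

    let (in_tx, in_rx) = mpsc::channel::<(usize, Vec<String>)>();
    let (out_tx, out_rx) = mpsc::channel::<(usize, Vec<String>)>();
    let in_rx = Arc::new(Mutex::new(in_rx));

    // I/O thread: read input lines into batches tagged with an absolute index.
    let reader = thread::spawn(move || {
        let stdin = io::stdin();
        let mut lines = stdin.lock().lines().map(|l| l.unwrap()).peekable();
        let mut index = 0;
        while lines.peek().is_some() {
            let batch: Vec<String> = lines.by_ref().take(BATCH_SIZE).collect();
            in_tx.send((index, batch)).unwrap();
            index += 1;
        }
        // Dropping in_tx closes the channel, letting the workers finish.
    });

    // Workers: fetch a batch, process it, push it to the output channel.
    let workers: Vec<_> = (0..NUM_WORKERS)
        .map(|_| {
            let in_rx = Arc::clone(&in_rx);
            let out_tx = out_tx.clone();
            thread::spawn(move || loop {
                let msg = in_rx.lock().unwrap().recv();
                match msg {
                    Ok((index, batch)) => {
                        let processed: Vec<String> =
                            batch.iter().map(|l| process_line(l)).collect();
                        out_tx.send((index, processed)).unwrap();
                    }
                    Err(_) => break, // input channel closed and drained
                }
            })
        })
        .collect();
    drop(out_tx); // only the workers hold output senders now

    // Output thread: buffer out-of-order batches and emit them by index.
    let writer = thread::spawn(move || {
        let mut pending: BTreeMap<usize, Vec<String>> = BTreeMap::new();
        let mut next = 0;
        for (index, batch) in out_rx {
            pending.insert(index, batch);
            while let Some(batch) = pending.remove(&next) {
                for line in batch {
                    println!("{}", line);
                }
                next += 1;
            }
        }
    });

    reader.join().unwrap();
    for w in workers {
        w.join().unwrap();
    }
    writer.join().unwrap();
}
```

The BTreeMap in the output thread is one way to handle the bookkeeping mentioned below: batches can arrive out of order, and each one is flushed only once the next expected index is available.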

I guess that would correspond to your solution 2 (or maybe 4?). My estimate is that the most complicated parts would be:

  • Handling bookkeeping and order of sentences correctly.
  • Converting the merge processor to support this. As a starting solution, we could simply die with a warning when the user tries to use both parallelism and merge.
