How to leverage multiprocessing for inference? #833
Answered by fg-mindee
devendrapal5755 asked this question in Q&A
-
How to use multiprocessing for inference?
Answered by fg-mindee, Feb 24, 2022
-
Hi @devendrapal5755 👋

Could you elaborate on your question please? Do you mean performing inference on several pages at the same time for OCR? Do you mean batches of samples for the sub-tasks (text detection, text recognition)?

If you meant either of those two, our predictor objects will do the job for you!

```python
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

doc = DocumentFile.from_images(["path/to/page1.jpg", "path/to/page2.jpg"])
# The predictor handles multiprocessing at the Python level,
# and batching for the deep-learning ops
model = ocr_predictor(pretrained=True)
result = model(doc)
```

Let me know if I misunderstood your question!
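For readers who want explicit control over process-level parallelism rather than relying on the predictor's built-in handling, here is a minimal, hedged sketch using Python's standard `concurrent.futures` module. `run_ocr_on_page` is a hypothetical stand-in for any per-page inference call; it is not part of the doctr API.

```python
# Sketch: fanning per-page inference out to a process pool.
# `run_ocr_on_page` is a placeholder, not a doctr function.
from concurrent.futures import ProcessPoolExecutor


def run_ocr_on_page(path):
    # Stand-in for a real per-page inference call (e.g. a predictor invocation)
    return f"result for {path}"


def infer_pages(paths, max_workers=4):
    # Each page is handed to a separate worker process; results come back
    # in the same order as the input paths
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_ocr_on_page, paths))


if __name__ == "__main__":
    print(infer_pages(["page1.jpg", "page2.jpg"]))
```

Note that heavyweight models are usually loaded once per worker (e.g. via an initializer), since pickling a full model for every task would dominate the runtime.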
Answer selected by fg-mindee