How to leverage multiprocessing for inference? #833

Answered by fg-mindee
devendrapal5755 asked this question in Q&A

Hi @devendrapal5755 👋

Could you elaborate on your question, please?
Do you mean performing OCR inference on several pages at the same time, or batching samples for the sub-tasks (text detection, text recognition)?

If you meant either of those, our predictor objects will do the job for you!

from doctr.io import DocumentFile
from doctr.models import ocr_predictor

doc = DocumentFile.from_images(["path/to/page1.jpg", "path/to/page2.jpg"])
# The predictor handles multiprocessing at the Python level and batching for deep-learning ops
model = ocr_predictor(pretrained=True)
result = model(doc)
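If instead you want to parallelize across independent documents yourself at the process level, a minimal standard-library sketch looks like this. Note that `run_ocr` below is a hypothetical stand-in for a real predictor call (e.g. loading a `DocumentFile` and running the model inside each worker); loading the model once per worker rather than per document is left out for brevity.

```python
from multiprocessing import Pool

def run_ocr(path):
    # Hypothetical stand-in for a real predictor call, e.g.:
    #   model(DocumentFile.from_images([path]))
    return f"result for {path}"

def ocr_many(paths, workers=2):
    # Each worker process handles its share of the documents independently
    with Pool(processes=workers) as pool:
        return pool.map(run_ocr, paths)

if __name__ == "__main__":
    print(ocr_many(["page1.jpg", "page2.jpg"]))
```

Keep in mind that for GPU inference, per-process parallelism usually loses to the predictor's built-in batching, since each process would need its own copy of the model in memory.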

Let me know if I misunderstood your question!

Answer selected by fg-mindee
Category: Q&A
Labels: module: models (Related to doctr.models)