Open
Description
Hi, etienne87:
I noticed that this repo clamps num_workers to be no larger than batch_size (https://github.com/etienne87/pytorch-stream-dataloader/blob/master/pytorch_stream_dataloader/stream_dataloader.py#L45).
However, this makes the dataloader much slower than the vanilla DataLoader, which lengthens training time.
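For context, the clamping I mean is equivalent to something like the sketch below (paraphrased, not the repo's exact code; `effective_num_workers` is a hypothetical name):

```python
def effective_num_workers(num_workers, batch_size):
    # Sketch of the behavior at stream_dataloader.py#L45: since the stream
    # dataloader keeps at most batch_size concurrent streams, any workers
    # beyond batch_size are silently dropped.
    return min(num_workers, batch_size)

# With batch_size=4, asking for 16 workers falls back to only 4:
print(effective_num_workers(16, 4))  # 4
```

So with a small batch size, most of the requested workers never get used, which is where the slowdown comes from.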
Do you have any suggestions on how to support num_workers the way the original PyTorch DataLoader does?