Issue using decode_dicom_image with tf.vectorized_map #1744

Comments
cc @MarkDaoust
This is more like a Stack Overflow question than an issue with the repository.
vectorized_map is probably failing because your images have different sizes. You probably want
Concat's only failing because you're returning images of different sizes.
Oh, what goes wrong? Can you give more details?
I doubt there's a GPU kernel for decode_dicom_image.
It may be easier to work with this if, instead of doing the transformations through tf.map_fn, you use ds.map while you still have single images to load and resize. But note that
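A minimal sketch of that suggestion, assuming tensorflow-io's tfio.image.decode_dicom_image, a fixed 256x256 target size, and hypothetical file paths (none of these come from the original thread):

```python
import tensorflow as tf
import tensorflow_io as tfio

TARGET_SIZE = (256, 256)  # assumed target size

def load_and_resize(path):
    # Decode a single DICOM file; with color_dim=True the result is
    # [frames, height, width, 1], so take the first frame.
    raw = tf.io.read_file(path)
    image = tfio.image.decode_dicom_image(raw, color_dim=True)[0]
    image = tf.cast(image, tf.float32)
    # Resize while it is still a single image, so every element of the
    # dataset has the same shape before batching.
    return tf.image.resize(image, TARGET_SIZE)

paths = ["scan_001.dcm", "scan_002.dcm"]  # hypothetical file paths
ds = (tf.data.Dataset.from_tensor_slices(paths)
      .map(load_and_resize, num_parallel_calls=tf.data.AUTOTUNE)
      .batch(8)
      .prefetch(tf.data.AUTOTUNE))
```

The decode itself still runs on the CPU in this layout; the point is only that tf.data can parallelize and prefetch it in the background while the model runs.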
Yes, sort of. But I was also expecting it to be able to handle such an issue.
That is really unfortunate. I asked about this here. Here is one inspiration: https://developer.nvidia.com/dali
Yes, that's why I wanted to just parse the file paths from tf.data and do the reading and resizing entirely on the GPU. It is just to get a speed-up.
I can definitely do that. I can share reproducible code. Here is the log.
We already talked about this at:
Thanks @bhack
Yeah, you just need to do a resize first. It can't vectorize it if the outputs aren't the same size.
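As an illustration of that point (the fixed 256x256 size and the helper name are assumptions): tf.vectorized_map needs every per-element call to return a tensor of the same shape, so the resize has to happen inside the mapped function. Whether the DICOM decode op itself can actually be vectorized is a separate question; tf.vectorized_map falls back to a serial loop when it cannot.

```python
import tensorflow as tf
import tensorflow_io as tfio

def decode_and_resize(raw_bytes):
    # Decode one DICOM blob and immediately resize, so every call returns
    # a tensor of shape [256, 256, 1] regardless of the original image size.
    image = tfio.image.decode_dicom_image(raw_bytes, color_dim=True)[0]
    image = tf.cast(image, tf.float32)
    return tf.image.resize(image, (256, 256))

# Hypothetical batch of raw DICOM file contents.
raw_batch = tf.stack([tf.io.read_file(p) for p in ["scan_001.dcm", "scan_002.dcm"]])

# Without the resize, combining the per-element results fails because the
# decoded images have different heights and widths.
images = tf.vectorized_map(decode_and_resize, raw_batch)
```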
Are you sure that's the bottleneck in your case? Usually the GPU is busy enough running the model that you have time to decode on the CPU in the background.
Actually, I tried this for inference. (Some interesting discussion: NVIDIA DALI for decoding DICOM, and dicomsdl > pydicom.)
Could you please take a quick look at this Colab? I did resize, which worked for
Background
I have a dataframe with DICOM file paths. I would like to create a Keras layer that reads a DICOM file, does some preprocessing (resizing, normalization, etc.), and returns a 2D input tensor. I am trying to add this layer to the actual model to build a final model. The purpose is to leverage the GPU for the image processing.
Issue
In the call method of this layer, I tried to use tf.vectorized_map, but after running on some files it gave an error. That error might be due to the variable-sized image arrays after decoding the DICOM files, and TensorFlow's vectorized map might have an issue with the concat function, as it has known limitations.

Note, to make tf.map_fn work, I have to place the image resizer function and the DICOM reader in the same method (self.read_and_resize above); otherwise tf.map_fn also creates an issue with the multi-scale input tensor. But that doesn't work with tf.vectorized_map.
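For reference, a minimal sketch of the layer described above, assuming a fixed 256x256 output; only the read_and_resize method name comes from the description, the rest is an assumption rather than the author's actual code.

```python
import tensorflow as tf
import tensorflow_io as tfio

class DicomLoader(tf.keras.layers.Layer):
    """Reads a batch of DICOM file paths and returns same-sized image tensors."""

    def __init__(self, target_size=(256, 256), **kwargs):  # assumed target size
        super().__init__(**kwargs)
        self.target_size = target_size

    def read_and_resize(self, path):
        # Reading and resizing live in the same method so every mapped call
        # returns a tensor of identical shape, which tf.map_fn requires here.
        raw = tf.io.read_file(path)
        image = tfio.image.decode_dicom_image(raw, color_dim=True)[0]  # [h, w, 1]
        image = tf.cast(image, tf.float32)
        return tf.image.resize(image, self.target_size)

    def call(self, paths):
        # tf.map_fn works with a fixed output signature; swapping it for
        # tf.vectorized_map here is what triggers the reported error.
        return tf.map_fn(
            self.read_and_resize,
            paths,
            fn_output_signature=tf.TensorSpec(
                shape=(*self.target_size, 1), dtype=tf.float32
            ),
        )
```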
Questions

1. Can this be handled from the tf.io side (maybe use tf.stack instead of tf.concat!)? Are there any hacks that can be used to make the vectorized function work?
2. Even with the working approach (tf.map_fn), the CPU processing still gets very high and memory consumption increases eventually.

About the data-loader, it is a very simple tf.data API. I pass a dataframe where the path of each DICOM file is kept, so there is not any heavy load on the CPU side. Could you please give some pointers? What could be the reason? Thanks.
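For context, a sketch of the kind of loader described here; the column name, example paths, and batch size are assumptions.

```python
import pandas as pd
import tensorflow as tf

# Hypothetical dataframe holding only the DICOM file paths.
df = pd.DataFrame({"path": ["scan_001.dcm", "scan_002.dcm"]})

# The dataset carries nothing but path strings; the decoding/resizing is left
# to the model-side layer, so the tf.data pipeline itself does very little work.
ds = (tf.data.Dataset.from_tensor_slices(df["path"].values)
      .batch(8)
      .prefetch(tf.data.AUTOTUNE))
```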