Amount of RAM required for deconvolution #89

Open

spimager opened this issue Dec 15, 2015 · 13 comments
@spimager

We currently have a 40GB stack saved in h5 format consisting of 7 angles from 0-90˚ in increments of 15˚. We have successfully registered and fused the stack, however when we go to deconvolve it the estimated amount of RAM required is ~10TB! Even choosing a single angle to deconvolve yields a requirement of >4.8TB. Is this much RAM really required or has something gone awry here? 10TB is far beyond what our cluster currently has (~1.5TB).
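For context, here is a rough back-of-envelope sketch of why multi-view deconvolution estimates can balloon like this, assuming the algorithm keeps 32-bit float working copies of each transformed view plus a per-view weight image, all at the size of the fused bounding box. The voxel count and buffer factors below are illustrative assumptions, not the plugin's actual bookkeeping:

```java
// Illustrative memory estimate only; the buffer counts and the fused volume
// size are assumptions for this sketch, not the plugin's actual bookkeeping.
public class DeconvMemoryEstimate {
    public static void main(String[] args) {
        long fusedVoxels = 2048L * 2048L * 1000L; // hypothetical fused bounding box
        int views = 7;                            // angles 0-90 deg in 15 deg steps
        int bytesPerFloat = 4;                    // 32-bit float working images

        // Assumed per-view buffers: transformed view + blending weight image.
        long perView = 2L * fusedVoxels * bytesPerFloat;
        // Assumed global buffers: current deconvolution estimate + one temporary.
        long global = 2L * fusedVoxels * bytesPerFloat;

        long totalBytes = views * perView + global;
        System.out.printf("Rough estimate: %.1f GB%n", totalBytes / 1e9);
    }
}
```

The point is only that converting every view to float at the size of the fused volume and multiplying by the number of angles quickly pushes the requirement far beyond the 40GB of raw input.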

@emmenlau

I feel your pain! But to let you know, Stephan is currently working on several improvements that will drastically reduce the amount of RAM required. Some of them might come at a small penalty in runtime, but the reduction in memory usage should make it a huge improvement overall.

@spimager
Author

Thankfully someone else out there is experiencing the same problems!
Do you know any way of getting around it at the moment?
I've tried processing the h5 file in 8x8x8 pixel grids but it crashes for some reason (I'm assuming it's still trying to reserve too much RAM...).

@emmenlau

emmenlau commented Dec 15, 2015 via email

@spimager
Author

Thanks for your help.
Looking forward to the developments!

@StephanPreibisch
Member

Hi @spimager, I am trying to fix these problems, but my time is limited at the moment as I just started my own group. You can try to compile the virtual image branch; it might just work for you right now. In combination with HDF5, input images will no longer be loaded entirely into memory.
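To illustrate what "no longer loaded entirely" means in practice, below is a minimal, library-free sketch of on-demand block loading with a bounded cache. The class, method names, and eviction policy are invented for this example; the actual virtual image branch builds on ImgLib2/HDF5 machinery rather than code like this.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch of on-demand block loading: pixel blocks are read only
// when first accessed, and a small LRU cache bounds resident memory. All
// names here are invented for illustration; this is not the plugin's code.
public class LazyBlockImage {
    private final int blockSize;
    private final Map<Long, short[]> cache;

    public LazyBlockImage(final int blockSize, final int maxCachedBlocks) {
        this.blockSize = blockSize;
        this.cache = new LinkedHashMap<Long, short[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(final Map.Entry<Long, short[]> eldest) {
                return size() > maxCachedBlocks; // evict least recently used block
            }
        };
    }

    public short[] getBlock(final long blockIndex) {
        return cache.computeIfAbsent(blockIndex, k -> loadBlockFromDisk(k));
    }

    // Stand-in for a real HDF5 chunk read; returns an empty block here.
    private short[] loadBlockFromDisk(final long blockIndex) {
        return new short[blockSize * blockSize * blockSize];
    }
}
```

The access pattern is the same idea you can see later in the log ("Loading image using Hdf5ImageLoader", "Load type: LOAD_INPUT_ONDEMAND"): only the blocks a computation actually touches need to be in RAM at any one time.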

@spimager
Author

Thanks, Stephan!
I tried using the virtual image branch but received the following exception:

java.lang.NullPointerException
    at spim.process.fusion.deconvolution.ProcessForDeconvolution.fuseStacksAndGetPSFs(ProcessForDeconvolution.java:213)
    at spim.process.fusion.deconvolution.EfficientBayesianBased.fuseData(EfficientBayesianBased.java:214)
    at spim.fiji.plugin.Image_Fusion.fuse(Image_Fusion.java:173)
    at spim.fiji.plugin.Image_Fusion.run(Image_Fusion.java:76)
    at ij.IJ.runUserPlugIn(IJ.java:212)
    at ij.IJ.runPlugIn(IJ.java:176)
    at ij.Executer.runCommand(Executer.java:136)
    at ij.Executer.run(Executer.java:65)
    at java.lang.Thread.run(Thread.java:662)

The log output was:

Found 1 label(s) with correspondences for channel 0: 
Label 'beads' (channel 0) has 7/7 views with corresponding detections.
Channel 0: extract PSF from label 'beads'
BlendingBorder: -8, -8, -3
BlendingBorder: 12, 12, 12
(Sun Dec 27 15:37:56 EST 2015): Transforming view 1 of 7 (viewsetup=0, tp=0)
(Sun Dec 27 15:37:56 EST 2015): Reserving memory for transformed & weight image.
(Sun Dec 27 15:37:56 EST 2015): Setup transformations.
(Sun Dec 27 15:37:56 EST 2015): Loading input image ...
(Sun Dec 27 15:37:56 EST 2015): Loading image using Hdf5ImageLoader
(Sun Dec 27 15:37:56 EST 2015): Load type: LOAD_INPUT_ONDEMAND
(Sun Dec 27 15:37:56 EST 2015): Image Type = UnsignedShortType

@StephanPreibisch
Member

Hi, unfortunately I am currently on holiday, but I will look at it as soon as possible!

@StephanPreibisch
Member

Hi @spimager, I think I fixed it; it was a stupid mistake. Can you please try again with the current commit?
65c759c

@spimager
Author

Hi @StephanPreibisch,

That cleared up the exception and everything seems to be working again, except I've noticed that the estimated amount of required memory has increased significantly: it was around 10TB and is now up at 33TB.

No need to interrupt your holiday for this, though!

@StephanPreibisch
Member

Hi, ignore the RAM estimate; that code has not been updated yet and is therefore totally wrong.

@spimager
Author

spimager commented Jan 6, 2016

Hi @StephanPreibisch,

I tried running the virtual image branch of the plugin, but it seems to be crashing.
The plugin and log window simply close, but Fiji remains open.
I've attached a copy of the error log.
hs_err_pid32119.txt

@spimager
Author

Just as a follow-up, the deconvolution seems to hang on "Computing weight normalization for deconvolution" before crashing.

@spimager
Author

spimager commented Feb 1, 2016

Any ideas?
