rotational minctracc takes too long #38

Open · bcdarwin opened this issue Mar 22, 2017 · 9 comments

@bcdarwin (Member) commented Mar 22, 2017

e.g., more than 12 hours on some 40um files ... I suspect something slightly more clever is possible.

We could also allow Pydpiper users to set the number of seeds somehow, e.g., through a config file (a bit less annoying than more Pydpiper flags and a bit more reproducible than $ENV_VARS).
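For instance, a minimal sketch of reading a seed count from an INI-style config file, assuming a hypothetical [rotational_minctracc] section and num_seeds key (neither exists in Pydpiper today):

    # Sketch only: the section/key names below are hypothetical, not current Pydpiper options.
    from configparser import ConfigParser

    def read_num_seeds(config_path, default=8):
        parser = ConfigParser()
        parser.read(config_path)  # missing files are silently ignored
        return parser.getint("rotational_minctracc", "num_seeds", fallback=default)

    num_seeds = read_num_seeds("pipeline.cfg")

The same file could later grow other per-stage tuning knobs without adding new command-line flags.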

@gdevenyi commented

Alternatively:

$ antsAI 

COMMAND: 
     antsAI
          Program to calculate the optimal linear transform parameters for aligning two 
          images. 

OPTIONS: 
     -d, --dimensionality 2/3
          This option forces the image to be treated as a specified-dimensional image. If 
          not specified, we try to infer the dimensionality from the input image. 

     -m, --metric MI[fixedImage,movingImage,<numberOfBins=32>,<samplingStrategy={None,Regular,Random}>,<samplingPercentage=[0,1]>]
                  Mattes[fixedImage,movingImage,<numberOfBins=32>,<samplingStrategy={None,Regular,Random}>,<samplingPercentage=[0,1]>]
                  GC[fixedImage,movingImage,<radius=NA>,<samplingStrategy={None,Regular,Random}>,<samplingPercentage=[0,1]>]
          These image metrics are available: MI: joint histogram, Mattes: mutual 
          information, and GC: global correlation. 

     -t, --transform Rigid[gradientStep]
                     Affine[gradientStep]
                     Similarity[gradientStep]
          Several transform options are available. The gradientStep or learningRate 
          characterizes the gradient descent optimization and is scaled appropriately for 
          each transform using the shift scales estimator. 

     -p, --align-principal-axes 
          Boolean indicating alignment by principal axes. Alternatively, one can align 
          using blobs (see -b option). 

     -b, --align-blobs numberOfBlobsToExtract
                       [numberOfBlobsToExtract,<numberOfBlobsToMatch=numberOfBlobsToExtract>]
          Boolean indicating alignment by a set of blobs. Alternatively, one can align 
          using principal axes (see -p option). 

     -s, --search-factor searchFactor
                         [searchFactor,<arcFraction=1.0>]
          Incremental search factor (in degrees) which will sample the arc fraction around 
          the principal axis or default axis. 

     -c, --convergence numberOfIterations
                       [numberOfIterations,<convergenceThreshold=1e-6>,<convergenceWindowSize=10>]
          Number of iterations. 

     -x, --masks fixedImageMask
                 [fixedImageMask,movingImageMask]
          Image masks to limit voxels considered by the metric. 

     -o, --output outputFileName
          Specify the output transform (output format: an ITK .mat file). 

     -v, --verbose (0)/1
          Verbose output. 

     -h 
          Print the help menu (short version). 

     --help 
          Print the help menu. 
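
For context, here is roughly how such a call could be wired into a Python stage. The flags come from the help text above, but the file names and numeric settings are purely illustrative, not tested values:

    # Illustrative sketch of a rigid initialization via antsAI.
    # Only the flags are taken from the help text above; the inputs,
    # search/convergence settings, and output name are placeholders.
    import subprocess

    cmd = [
        "antsAI",
        "-d", "3",
        "-m", "Mattes[fixed.mnc,moving.mnc,32,Regular,0.25]",
        "-t", "Rigid[0.1]",
        "-s", "[20,0.12]",     # search increment (degrees) and arc fraction
        "-c", "[10,1e-6,10]",  # a few local iterations per candidate
        "-o", "initial_rigid.mat",
        "-v", "1",
    ]
    subprocess.run(cmd, check=True)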

@bcdarwin (Member, Author) commented Mar 24, 2017

Do you have any parameter settings that make this work? We can get a principal-axis transformation from it, but if you specify some nonzero --convergence it sometimes (I'm not sure why) gives many errors of the form

Using the global correlation metric 
WARNING: In /axiom2/projects/software/arch/linux-precise/src/minc-toolkit-v2-1.9.11/minc-toolkit-v2/build/ITKv4/Modules/Registration/Metricsv4/include/itkCorrelationImageToImageMetricv4HelperThreader.hxx, line 85
CorrelationImageToImageMetricv4HelperThreader (0x1f317a0): collected only zero points

WARNING: In /axiom2/projects/software/arch/linux-precise/src/minc-toolkit-v2-1.9.11/minc-toolkit-v2/build/ITKv4/Modules/Numerics/Optimizersv4/include/itkObjectToObjectMetric.hxx, line 529
CorrelationImageToImageMetricv4 (0x1f05e70): No valid points were found during metric evaluation. For image metrics, verify that the images overlap appropriately. For instance, you can align the image centers by translation. For point-set metrics, verify that the fixed points, once transformed into the virtual domain space, actually lie within the virtual domain.

@gdevenyi commented

Can you share your existing command line call? (And maybe the file?)

@gdevenyi commented May 9, 2017

Following up here: any progress on antsAI? I'm trying it for some work and having a tough time.

@mcvaneede (Member) commented

I've had no luck with antsAI... Ben?

@cfhammill (Member) commented

A couple of thoughts about this: we could offer early termination if rotational_minctracc finds a rotation that puts the cost function in a user-defined "acceptable" range. Alternatively, we could try to train a neural net to predict the approximate rotation; we could do this on down-sampled volumes for speed.
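
A rough sketch of the early-termination idea, with the cost evaluation passed in as a callable so it stays decoupled from the existing rotational_minctracc code (the function names and threshold are hypothetical):

    # Sketch: stop scanning rotation seeds once the cost is "good enough".
    # evaluate_cost is any callable mapping a rotation seed to a cost value
    # (e.g. a wrapper around a down-sampled minctracc run); acceptable_cost
    # would be user-configurable. Names here are hypothetical.
    def best_rotation(seeds, evaluate_cost, acceptable_cost):
        best_seed, best_cost = None, float("inf")
        for seed in seeds:
            c = evaluate_cost(seed)
            if c < best_cost:
                best_seed, best_cost = seed, c
            if c <= acceptable_cost:  # acceptable range reached: stop early
                break
        return best_seed, best_cost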

@gdevenyi commented

Following up on antsAI, here's a working implementation from ANTs
https://github.com/ANTsX/ANTs/blob/master/Scripts/antsBrainExtraction.sh#L464-L471

@cfhammill (Member) commented

Have you had good luck with antsAI now, Gabe?

@gdevenyi commented

I have not tried it recently; however, it is standard in their pipeline, which lots of people use, I think.
