How to convert segmented image size to original image size? #6716
KarenPava24 started this conversation in General
Replies: 1 comment, 2 replies
-
Hi @KarenPava24, thanks for your interest here. There are also many end-to-end pipelines using
Hope it helps, thanks!
-
Hi there! I'm working on 3D multi-organ segmentation with UNETR (BTCV Challenge). I already have the segmentations, and I even converted them to a 3D volume to visualize them in MRIcron. However, I need to extract the volume of each organ, and when I do, the volumes come out wrong because of the transforms applied to the image. I therefore need my segmented image to have the same size as the original image so that the organ volumes are real. I tried saving the segmentation tensor with the transformed image's header, but that only changes the voxel metadata, not the array size, which is what I need. I have not been able to solve it. Any idea what I can do?
Thank you in advance for your help.
This is my code:
from monai.transforms import (
    Compose,
    CropForegroundd,
    EnsureChannelFirstd,
    LoadImaged,
    Orientationd,
    ScaleIntensityRanged,
    Spacingd,
)

test_transforms = Compose(
    [
        LoadImaged(keys="image"),
        EnsureChannelFirstd(keys="image"),
        Orientationd(keys=["image"], axcodes="RAS"),
        Spacingd(
            keys=["image"],
            pixdim=(1.5, 1.5, 2.0),
            mode="bilinear",
        ),
        ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=["image"], source_key="image"),
    ]
)
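One note on the pipeline above: Spacingd resamples the image onto a 1.5 × 1.5 × 2.0 mm grid, so raw voxel counts taken from the prediction no longer correspond to the original image's voxels. The physical organ volume can still be computed at any spacing as voxel count × voxel volume. A minimal sketch (the function name and toy data are mine, not from the thread):

```python
import numpy as np

def organ_volume_mm3(seg, spacing, label):
    """Physical volume of one organ label: voxel count times voxel volume (mm^3)."""
    voxel_mm3 = float(np.prod(spacing))
    return int((seg == label).sum()) * voxel_mm3

# Toy label map: a 10 x 10 x 10 block of label 1.
seg = np.zeros((64, 64, 64), dtype=np.uint8)
seg[10:20, 10:20, 10:20] = 1

# At the resampled spacing used by Spacingd above:
# 1000 voxels x (1.5 * 1.5 * 2.0) mm^3 per voxel.
v_resampled = organ_volume_mm3(seg, (1.5, 1.5, 2.0), label=1)
```

Computed this way the result is already in mm³ and is correct whether or not the prediction has been resampled back to the original grid, as long as the spacing passed in matches the grid the segmentation actually lives on.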
# Attach the transformed image's affine and header to the segmentation.
# Note: this only copies metadata; it does not resample the voxel grid,
# so the array keeps the transformed (resampled/cropped) size.
import nibabel as nib

original_image = nib.load(transf_path)
original_header = original_image.header
# If output_tensor is a torch tensor, convert it first: output_tensor.cpu().numpy()
seg_image = nib.Nifti1Image(output_tensor, original_image.affine, original_header)
nib.save(seg_image, seg_namef)
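For reference, the usual route in MONAI is to invert the test-time transforms with monai.transforms.Invertd, which uses the metadata attached by LoadImaged to resample the prediction back onto the original grid before saving. As a lower-level illustration of what that spatial inversion does, here is a plain-NumPy nearest-neighbour resampling sketch; the function name, and the assumption that the two grids share origin and orientation, are mine:

```python
import numpy as np

def resample_nearest(seg, in_spacing, out_shape, out_spacing):
    """Nearest-neighbour resampling of an integer label volume onto a new grid.

    Each output voxel takes the value of the closest input voxel;
    nearest-neighbour keeps labels integral (no blending across organs).
    Assumes both grids share the same origin and axis orientation.
    """
    idx = [
        np.clip(
            np.round(np.arange(n) * out_spacing[d] / in_spacing[d]).astype(int),
            0,
            seg.shape[d] - 1,
        )
        for d, n in enumerate(out_shape)
    ]
    return seg[np.ix_(idx[0], idx[1], idx[2])]

# Toy usage: upsample a 2 mm grid onto a 1 mm grid of twice the extent.
seg = np.arange(8).reshape(2, 2, 2)
seg_orig = resample_nearest(seg, (2.0, 2.0, 2.0), (4, 4, 4), (1.0, 1.0, 1.0))
```

In practice Invertd (fed with the same `test_transforms` and the prediction's metadata) is preferable, since it also undoes Orientationd and CropForegroundd, which this sketch ignores.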