
Reading e2e metadata #117

Open
ilannotal opened this issue Jun 21, 2023 Discussed in #28 · 29 comments

@ilannotal

Discussed in #28

Originally posted by BLMeltdown June 16, 2021
Hi, thanks for this very interesting tool.
Is it possible to read metadata for the exam, especially for e2e exams, please?
Thanks
Best regards
Laurent

@ilannotal
Author

Great tool. I'm looking for a way to get dx, dy and dz. More generally, I need the output AVI file to have two parts: on the left, the fundus with lines indicating the scanned frames of the B-scans; on the right, the B-scans themselves.

@marksgraham
Owner

Hi @ilannotal

Starting with the metadata: it should be read as part of the bscan_metadata header. Are you able to check whether any of the elements of this field match the values available in the scanner when they're read at this point? If so, we'll need to work on a way of saving them as part of the OCTVolume class.
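
Something along these lines might help with the comparison (a rough sketch only; the metadata attribute is an assumption, so check what the volume object in your installed version of OCT-Converter actually exposes):

from oct_converter.readers import E2E

def dump_bscan_metadata(filepath):
    # Print whatever metadata the parsed volumes expose, for comparison
    # against the values shown in the scanner/viewer.
    e2e = E2E(filepath)
    for volume in e2e.read_oct_volume():
        # 'metadata' is a hypothetical attribute name for the parsed
        # bscan_metadata fields; adjust it to the real attribute.
        meta = getattr(volume, "metadata", None)
        if isinstance(meta, dict):
            for key, value in meta.items():
                print(key, value)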

@ilannotal
Author

Hi Mark,

Thank you for your response.
Attached is a comparison sheet for one image.
I think I know the scaling on one axis, but I'm not sure the axis names in the code match those in the viewer.
I still don't know how to find the scale of the other axis or the exact position of the B-scans on the fundus.
I'm also not sure I understood the 'for start, pos in chunk_stack:' loop.
comparing_sheet.xlsx
Thank you,
Ilan

@Oli4
Collaborator

Oli4 commented Jun 23, 2023

Dear Ilan,

The scale returned by OCT-Converter as scaley is the scale for the height of the B-scan. The Heidelberg Viewer calls it Z, because X and Y are the same directions as used for the fundus image and Z is the depth added for the OCT.

I spent quite some time figuring out the E2E format for eyepy and documented my findings, which might be helpful to you: https://medvisbonn.github.io/eyepy/formats/he_e2e/

Unfortunately I was not able to find the x scaling or the intra-volume registration information. (Actually, I just found a field I called scale_x in my documentation of the B-scan metadata, so maybe I did find something there and forgot; but it could also be a mistake I should fix in the documentation, because as far as I remember from when I worked on this, the x-scale was still missing.) Since the exact x-scaling (width) of the B-scans depends on the measured eye (I am not sure, but probably on the length of the eye and maybe other factors), it might be calculated on the fly by the Heidelberg Viewer. Another possibility is that it is stored not in the B-scan metadata but in some other container in the E2E format.

For finding the position of the B-scans on the fundus I use a heuristic, because I couldn't find the positions (https://github.com/MedVisBonn/eyepy/blob/1b5bb954ab4deb90c1e99ac2111606495680c100/src/eyepy/io/he/e2e_reader.py#L323). Positions in the metadata seem to be given relative to the center of the scan (there are negative values), so I add the minimum value I found for one example and scale it. This approach might fail for other scans.
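
A minimal sketch of that heuristic (not eyepy's exact code; the scale argument is whatever mm-per-pixel value your fundus image uses):

import numpy as np

def bscan_positions_px(start_positions_mm, fundus_mm_per_px):
    # Shift the positions so the smallest one is zero (they appear to be
    # given relative to the scan centre), then convert mm to fundus pixels.
    pos = np.asarray(start_positions_mm, dtype=float)
    pos = pos - pos.min()
    return pos / fundus_mm_per_px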

Let me know if I can help. It would be great to finally find the missing information and many projects would benefit from it.

Best Olivier

@marksgraham
Owner

marksgraham commented Jun 23, 2023

So, more suggestion that the size along x is not stored in the actual .e2e file: neurodial/LibE2E#4. Looks like it might be calculable from posx1 and posx2, which seem to be in degrees, plus a fixed constant.
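
If that's right, the calculation would look something like this (a sketch only; the degrees-to-mm constant is an assumption, see the values discussed further down this thread):

MM_PER_DEGREE = 0.288  # assumed conversion on the retina, not read from the file

def estimate_scale_x(posx1_deg, posx2_deg, n_ascans):
    # Lateral width of the B-scan in mm, divided by the number of A-scans.
    width_mm = abs(posx2_deg - posx1_deg) * MM_PER_DEGREE
    return width_mm / n_ascans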

@ilannotal
Author

Hi,

It looks like we must exchange 'imgsizeX' and 'imgsizeY' in OCT-Converter, because the X axis is 512 for the B-scans and the Y axis is 496. You can see that if we multiply 'ImagesizeY' (496) by 'scaley' (0.00387) we get 1.91 mm, which is the viewer's 'Size Z' (for us it's Y). What do you think about this exchange?

In the same way, if we take the viewer's 'Size X' (5.9 mm) and divide it by 512, we get 0.01152, which should be 'scalex'; according to the viewer, the value is 0.01158. Very close. But we used the viewer's 'Size X'.
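
Writing the arithmetic out with the numbers from the sheet:

img_size_y = 496                      # B-scan height in pixels
scale_y = 0.00387                     # mm per pixel along the B-scan height
print(img_size_y * scale_y)           # 1.91952 -> ~1.9 mm, the viewer's 'Size Z'

viewer_size_x_mm = 5.9                # the viewer's 'Size X'
img_size_x = 512                      # A-scans per B-scan
print(viewer_size_x_mm / img_size_x)  # ~0.0115 mm, close to the viewer's 0.01158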

@marksgraham
Owner

Hi @ilannotal

You may be right, though in the spreadsheet you shared, size x is 5.8, which would give scalex = 0.01132. Still pretty close to scaleX, though. Do you happen to have another scan with different parameters we could double-check this on before we make the change?

@ilannotal
Author

Here is another image, but with very close parameters. ImageSizeX × Scaling X = 512 × 11.58 µm = 5928.96 µm ≈ 5.9 mm.
If I try 496 × 11.58 µm = 5743.68 µm ≈ 5.7 mm.
comparing_sheet.xlsx

@marksgraham
Owner

OK, happy to change it around then!

@marksgraham
Owner

Want to make a PR?

@ilannotal
Author

ilannotal commented Jun 28, 2023 via email

@marksgraham
Owner

Done

@ilannotal
Author

ilannotal commented Jun 29, 2023 via email

@marksgraham
Owner

I don't make use of posX/posY, and don't know if they are in degrees - any idea @Oli4 ?

I get the slice order from the slice_id key for each chunk a bscan is read from:

volume_array_dict[volume_string][int(chunk.slice_id / 2) - 1] = image
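
If the slice_id values step by two (2, 4, 6, ...), that expression maps them onto zero-based array indices:

for slice_id in (2, 4, 6, 8):
    print(slice_id, int(slice_id / 2) - 1)  # indices 0, 1, 2, 3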

@ilannotal
Author

ilannotal commented Jun 30, 2023 via email

@Oli4
Collaborator

Oli4 commented Jun 30, 2023

I had this in mind once, although it is not implemented in the eyepy E2E reader. I had scans with an angle of 30°, and the start and end positions were also adding up correctly. I think my problem was that I could not figure out how to derive the x-scale from that; I think we would need the distance from the retina to the sensor, and maybe a correction factor accounting for physiological differences between subjects.
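
For reference, the geometry being discussed is roughly the following (a sketch; the distance is a placeholder argument, not something read from the E2E file):

import math

def lateral_width_mm(scan_angle_deg, eye_distance_mm):
    # Lateral extent covered by a scan of the given angle at a given
    # effective distance from the beam pivot to the retina.
    return 2 * eye_distance_mm * math.tan(math.radians(scan_angle_deg) / 2)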

@ilannotal
Author

ilannotal commented Jul 1, 2023 via email

@ilannotal
Author

ilannotal commented Jul 1, 2023 via email

@marksgraham
Owner

I've exchanged height and width in the e2e binary. Is this distance from retina to sensor not viewable anywhere in the Heidelberg viewer? If we know what value we're looking for, it might be possible to search for it in the .e2e binary.

@ilannotal
Author

ilannotal commented Jul 4, 2023 via email

@ilannotal
Author

The question is whether this distance is constant

@marksgraham
Owner

I think you'd have to empirically estimate a distance to the retina from known values of scan angle/scalex, and then see if that distance holds across a number of different scans.
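
That check could look something like this (a sketch; it just inverts the width/angle relation for scans where both values are known):

import math

def implied_distance_mm(scan_angle_deg, width_mm):
    # Back out the effective distance from a scan's angle and lateral size;
    # if this comes out roughly constant across scans, a fixed value may do.
    return width_mm / (2 * math.tan(math.radians(scan_angle_deg) / 2))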

@ilannotal
Author

Yes, maybe this is what we can do. When I do a little trigonometry, I get the following results. What do you think?

[image attached]

@ilannotal
Author

I consulted with a colleague and he advised taking a degree as 300 µm and not dealing with the distance. This solves the challenge of dx and dy for us (we already have dz), and because we have posY1 for each B-scan we also know the spacing between B-scans. We also know the location of the B-scans on the fundus.
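
As a sketch of that approximation (1° taken as a fixed 300 µm on the retina; the pos values are assumed to be in degrees):

UM_PER_DEGREE = 300  # the suggested value; 288 µm/deg is cited below

def bscan_spacing_mm(pos_y1_deg):
    # Spacing between consecutive B-scans from their posY1 values.
    return [abs(b - a) * UM_PER_DEGREE / 1000.0
            for a, b in zip(pos_y1_deg, pos_y1_deg[1:])]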

@ilannotal
Author

Figure 1. Topography and dimensions of optic nerve and fovea.

Degrees and Distance in Micrometers

One degree of visual angle is equal to 288 μm on the retina without correction for shrinkage (4).

@Oli4
Collaborator

Oli4 commented Jul 5, 2023

Keep in mind that these are probably mean values of a roughly normal distribution, even for healthy eyes and without considering degeneration. I once evaluated the voxel sizes for an OCT dataset from Spectralis where varying voxel sizes are reported. I'll try to find it and post it here so you can figure out for yourself whether the additional variance is OK for your task.

@Oli4
Collaborator

Oli4 commented Jul 6, 2023

In the following you see the relative difference between the voxel volume we worked with for a while and the one computed from the metadata in the Spectralis data, for B-scans with a width of 512 A-scans (5282 μm³ voxel volume) and B-scans with a width of 1024 A-scans (2418 μm³ voxel volume). If you choose a better mean value than we did, the actual volume might still be +/- 10% of what you are working with. It's up to you to decide whether this is acceptable.
[image attached]
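
For anyone redoing this comparison, the quantities involved are just the following (a sketch):

def voxel_volume_um3(scale_x_um, scale_y_um, scale_z_um):
    return scale_x_um * scale_y_um * scale_z_um

def relative_difference(assumed_um3, computed_um3):
    # Relative error of a fixed, dataset-wide voxel volume versus the one
    # computed from each scan's own metadata.
    return (computed_um3 - assumed_um3) / assumed_um3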

@ilannotal
Author

Has anyone noticed that there is a problem with the slice order in the volume? When comparing it to the .AVI output of the Heidelberg viewer, I found that the last slice in the converter is actually the first one in the viewer, and each slice needs to move forward by one place.
I did it with the following line of code:
volume.insert(0, volume.pop())
What do you think about it?
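
A minimal sketch of that re-ordering applied to the list of B-scans a reader returns (variable names here are placeholders):

def rotate_last_to_front(bscans):
    bscans = list(bscans)            # copy so the original list is untouched
    bscans.insert(0, bscans.pop())   # move the last slice to index 0
    return bscans

# rotate_last_to_front([0, 1, 2, 3]) -> [3, 0, 1, 2]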

@marksgraham
Owner

Hi,

I don't have access to a ground truth from the scanner, so I can't confirm this. Are you able to confirm this is the case with several scans?
