Add probe object #38
base: main
Conversation
This is great work, but I think some more work is required before we merge it.
I would also like a deeper conversation around the need for the `sender` argument; it feels a bit out of place to me (it always did). Today we use it for REFoCUS processing, STAI, and (after this PR) RTB. For REFoCUS and STAI it means the same thing: the element transmitting a spherical wave at a given Tx event. For RTB it means the center of the active aperture for a given Tx event. For everything else, it is basically redundant (if we give the probe object a position as well).
```python
    point_position: np.ndarray,
    receiver: ElementGeometry,
    wave_data: WaveData,
) -> float:
    # Geometry assumes 2D points
```
We should use the probe object to calculate the beam origin and calculate what is called "scanline" further down in this code (inside the `MLAApodization` implementation).

Let's consider adding a method to the probe for getting the wave origin: `probe.wave_origin(wave_data)`?

Reminder about wave origin for future readers: the wave origin is where the transmitted wave intersects the aperture along the focused wave direction. For a focused wave with a focus point at an angle ≠ 0, this does not equal the aperture center. See this delay-and-sum.com example where the blue point is placed at the wave origin 🤓 The wave origin changes when we move the focus point around. This is also explained in https://github.com/magnusdk/vbeam/blob/main/docs/tutorials/apodization/rtb.ipynb.

A concern is that people may be confused about the difference between "sender" and "wave origin". Even in my mind the definition of "sender" is a bit muddy. I'd love to get some harsh feedback on this term: "sender".
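A minimal sketch of what such a method could compute, assuming a flat aperture in the plane z = 0 and assuming `WaveData` exposes the focus point and transmit angles as `source`, `azimuth`, and `elevation` (these field names are assumptions):

```python
import numpy as np

def wave_origin(wave_data) -> np.ndarray:
    # Unit direction of the focused wave, derived from the transmit angles
    # (azimuth/elevation field names are assumed).
    direction = np.array([
        np.sin(wave_data.azimuth) * np.cos(wave_data.elevation),
        np.sin(wave_data.elevation),
        np.cos(wave_data.azimuth) * np.cos(wave_data.elevation),
    ])
    # Intersect the line through the focus point (wave_data.source) along
    # `direction` with the aperture plane z = 0.
    t = -wave_data.source[2] / direction[2]
    return wave_data.source + t * direction
```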
vbeam/apodization/plane_wave.py (outdated)
```diff
     wave_data: WaveData,
 ) -> float:
     # Set up the geometry. There is one line originating from each side of the
     # array, going in the direction of the transmitted wave. If the point is
     # outside of those lines, then it is weighted by 0.
-    array_left, array_right = self.array_bounds
+    array_left, array_right, array_up, array_down = probe.get_tx_aperture_borders(sender=sender)
```
Maybe `get_tx_aperture_borders` could return an object containing the bounds (`array_left`, `array_right`, etc.)?

```python
aperture_borders = probe.get_tx_aperture_borders(sender=sender)
print(aperture_borders.left)
print(aperture_borders.right)
# etc.
```

The object could be implemented as a `NamedTuple`, which additionally allows for destructuring like you did above:

```python
from typing import NamedTuple

import numpy as np

class ApertureBorders(NamedTuple):
    left: np.ndarray
    right: np.ndarray
    up: np.ndarray
    down: np.ndarray

# ...this allows for:
array_left, array_right, array_up, array_down = probe.get_tx_aperture_borders(sender=sender)
```
```diff
     wave_data: WaveData,
 ) -> float:
     f_number = (
         self.f_number
         if isinstance(self.f_number, tuple) and len(self.f_number) == 2
         else (self.f_number, self.f_number)
     )
-    dist = point_position - receiver.position
+    dist = point_position - receiver
```
We may consider changing the name to `receiver_position`, or changing `point_position` to just `point`, to get consistent naming.
vbeam/apodization/rtb.py (outdated)
```python
else:
    sender_element_position = self.replacement_sender

array_left, array_right, array_up, array_down = probe.get_tx_aperture_borders(sender=sender_element_position)
```
The improved readability of the code here is fantastic! <3
```python
            array_width=self.array_width, sender=sender, use_parent=self.use_parent
        )
    )
    if self.replacement_sender is None:
```
@AndersVra, could you elaborate on what `replacement_sender` is here? I think it needs some documentation.
```python
right = self.surface2cart((self.tx_aperture_length_s[0] / 2, 0.0))
down = self.surface2cart((0.0, -self.tx_aperture_length_s[1] / 2))
up = self.surface2cart((0.0, self.tx_aperture_length_s[1] / 2))
return (left, right, up, down)
```
These should all use `rx_aperture_length_s`, right?
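If so, the hunk above would become something like this (the `left` line is not shown in the diff and is assumed here):

```python
left = self.surface2cart((-self.rx_aperture_length_s[0] / 2, 0.0))
right = self.surface2cart((self.rx_aperture_length_s[0] / 2, 0.0))
down = self.surface2cart((0.0, -self.rx_aperture_length_s[1] / 2))
up = self.surface2cart((0.0, self.rx_aperture_length_s[1] / 2))
return (left, right, up, down)
```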
```python
    ROC_azimuth = channel_data.probe.radius_x
else:
    ROC_azimuth = 10
ROC_elevation = 10
```
I think we should either create a convention that if the ROC is `None`, then that means it is a flat probe, or create separate classes for linear and curvilinear probes. I am leaning more towards the latter.
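A rough sketch of the second option, with assumed class names, just to make the idea concrete:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LinearProbeGeometry:
    # A flat probe has no radius of curvature at all, so no magic
    # sentinel values (like ROC = 10) are needed.
    rx_aperture_length_s: Tuple[float, float]
    tx_aperture_length_s: Tuple[float, float]

@dataclass
class CurvilinearProbeGeometry:
    rx_aperture_length_s: Tuple[float, float]
    tx_aperture_length_s: Tuple[float, float]
    ROC: Tuple[float, float]  # (azimuth, elevation) radii of curvature
```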
```python
probe = ProbeGeometry(ROC=(ROC_azimuth, ROC_elevation))
probe.rx_aperture_length_s = (
    np.max(channel_data.probe.x) - np.min(channel_data.probe.x),
    np.max(channel_data.probe.y) - np.min(channel_data.probe.y),
)
probe.tx_aperture_length_s = (
    np.max(channel_data.probe.x) - np.min(channel_data.probe.x),
    np.max(channel_data.probe.y) - np.min(channel_data.probe.y),
)
```
We should pass `rx_aperture_length_s` and `tx_aperture_length_s` as arguments when creating the `ProbeGeometry` object.
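For example (assuming `ProbeGeometry` accepts these as keyword arguments):

```python
aperture_x = np.max(channel_data.probe.x) - np.min(channel_data.probe.x)
aperture_y = np.max(channel_data.probe.y) - np.min(channel_data.probe.y)
probe = ProbeGeometry(
    ROC=(ROC_azimuth, ROC_elevation),
    rx_aperture_length_s=(aperture_x, aperture_y),
    tx_aperture_length_s=(aperture_x, aperture_y),
)
```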
```python
    np.array([np.min(channel_data.probe.x), 0.0, 0.0]),
    np.array([np.max(channel_data.probe.x), 0.0, 0.0]),
)
```
Nice 🙂
```diff
 ):
     # We are dealing with a STAI dataset! Senders are each element in the array
-    sender = receivers.copy()
+    probe.sender_position = probe.receiver_position.copy()
```
Is `receiver_position` set on the probe object here?
This was an old implementation. I have updated this now.
Add probe object that stores
Apodization and wavefront functions have been updated to use this.
The probe object is added to the kernel.
The ElementGeometry object is no longer needed, because theta and phi can be calculated from the probe and element positions. The receiver and sender arguments have therefore been changed to np.ndarray objects instead (see the sketch below).
The examples optimized_scans, plane_wave_dataset, and refocus_with_speed_of_sound_map have been tested and run as before.
This solves issue #28.
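For illustration, a sketch of the resulting call signature for apodization functions, with parameter names taken from the diffs above (the surrounding class name and parameter order are assumed):

```python
import numpy as np

class SomeApodization:
    def __call__(
        self,
        probe: ProbeGeometry,
        sender: np.ndarray,          # was: ElementGeometry
        point_position: np.ndarray,
        receiver: np.ndarray,        # was: ElementGeometry
        wave_data: WaveData,
    ) -> float:
        ...
```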