Autograd support for affine transformations #2490
base: develop
Conversation
Thanks! Went through it. Looks good. Added an else condition. Ran the numerical tests. They pass!
8 files reviewed, 3 comments
return (fun(x0 + δ) - fun(x0 - δ)) / (2 * δ)

def _assert_close(fd_val, ad_val, tag, axis=None, extra="", tol=0.35):
style: The tolerance value 0.35 (35%) seems quite high for a numerical comparison. Consider tightening it or documenting the justification.
return anp.sum(data.get_intensity("m").values)

def finite_diff_theta(center, size, eps, theta, axis, delta=1e-3):
style: The default delta (1e-3) differs from the global delta (0.016). Consider using the global delta by default.
@@ -66,7 +67,6 @@ def rotate_tensor(self, tensor: TensorReal) -> TensorReal:

    if self.isidentity:
        return tensor

    return np.matmul(self.matrix, np.matmul(tensor, self.matrix.T))
logic: rotate_tensor still uses np.matmul instead of anp.matmul - needs to be consistent with autograd usage.
-    return np.matmul(self.matrix, np.matmul(tensor, self.matrix.T))
+    return anp.matmul(self.matrix, anp.matmul(tensor, self.matrix.T))
8 files reviewed, 4 comments
},
]

PARAM_LABELS = ["center_x", "center_x", "center_y", "center_z", "size_x", "size_y", "size_z"]
syntax: Duplicate 'center_x' in PARAM_LABELS array
-PARAM_LABELS = ["center_x", "center_x", "center_y", "center_z", "size_x", "size_y", "size_z"]
+PARAM_LABELS = ["center_x", "center_y", "center_z", "size_x", "size_y", "size_z"]
def test_autograd_vs_fd_scenarios(scenario, param_label):
    center0 = (0.0, 0.0, 0.0)
    size0 = (2.0, 2.0, 2.0)
    delta = 0.03
style: Delta value 0.03 differs from default 1e-3 in fd_vs_ad_param. Consider documenting why or unifying these values.
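One plausible reason the values differ (this is general numerical-analysis reasoning, not a rationale documented in the PR): a central-difference step trades truncation error, which shrinks like delta squared, against noise and round-off, which grow as delta shrinks, so grid-resolved simulation outputs favor a larger step than a smooth analytic check does. A small illustration on a smooth toy function:

```python
import numpy as np

def f(x):
    return np.exp(np.sin(x))

def central_fd(fun, x0, delta):
    return (fun(x0 + delta) - fun(x0 - delta)) / (2 * delta)

x0 = 0.9
exact = np.cos(x0) * np.exp(np.sin(x0))  # analytic derivative of exp(sin(x))
errors = {
    delta: abs(central_fd(f, x0, delta) - exact)
    for delta in (1e-1, 1e-3, 1e-6, 1e-12)
}
# Truncation error dominates for large delta, round-off for tiny delta,
# so the sweet spot sits in between.
assert errors[1e-3] < errors[1e-1]
assert errors[1e-6] < errors[1e-3]
```

For a noisy objective (e.g. a discretized simulation), the noise floor replaces round-off as the small-delta penalty, pushing the optimal step larger still.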
style: make_sim_rotation could be reused by storing sim as a fixture instead of recreating for each test
val, grad_c, grad_s = get_grad(center0, size0, angle=None, axis=None)
npx, npy, npz = grad_c
sSx, sSy, sSz = grad_s

assert not np.allclose(grad_c, 0.0), "center gradient is all zero."
style: use more descriptive variable names than npx,npy,npz for gradient components
Supersedes #2475
@rahul-flex I rebased all the changes into this PR. Could you have a look and check if anything is missing?
Greptile Summary
Major enhancement to Tidy3D's autograd capabilities, adding support for differentiating through affine transformations (rotation, scaling, translation) of geometric structures.

- tidy3d/components/geometry/base.py: replaced xarray-based derivative computation for improved robustness
- tidy3d/components/transformation.py: enabled chainable operations through ArrayBox type handling
- tidy3d/web/api/autograd/autograd.py: simplified by removing Box-specific special cases