michaelis_menten transformation as pt.TensorVariable #1054
Changes from 3 commits
```diff
@@ -826,10 +826,10 @@ def tanh_saturation_baselined(
     return gain * x0 * pt.tanh(x * pt.arctanh(r) / x0) / r


-def michaelis_menten(
+def michaelis_menten_function(
     x: float | np.ndarray | npt.NDArray[np.float64],
-    alpha: float | np.ndarray | npt.NDArray[np.float64],
-    lam: float | np.ndarray | npt.NDArray[np.float64],
+    alpha: float,
+    lam: float,
 ) -> float | Any:
     r"""Evaluate the Michaelis-Menten function for given values of x, alpha, and lambda.
```

Reviewer (on the type hints): I still don't know the best type hints here. But TensorVariable is also accepted here.

Author: Reverted changes to type hints.
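A minimal sketch (not part of the PR) of why the annotation is hard to pin down: the function body `alpha * x / (lam + x)` from the diff works unchanged on floats, NumPy arrays, and pytensor variables, which is why the reviewer notes that TensorVariable is "also accepted here".

```python
import numpy as np
import pytensor.tensor as pt


def michaelis_menten_function(x, alpha, lam):
    """Numeric Michaelis-Menten curve: alpha * x / (lam + x)."""
    return alpha * x / (lam + x)


# Plain floats work ...
print(michaelis_menten_function(2.0, 10.0, 5.0))  # 2.857...

# ... so do NumPy arrays ...
print(michaelis_menten_function(np.array([1.0, 2.0, 4.0]), 10.0, 5.0))

# ... and a symbolic pytensor input yields a TensorVariable.
x = pt.dscalar("x")
out = michaelis_menten_function(x, 10.0, 5.0)
print(type(out))           # a TensorVariable
print(out.eval({x: 2.0}))  # 2.857...
```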
```diff
@@ -914,6 +914,31 @@ def michaelis_menten(
     return alpha * x / (lam + x)


+def michaelis_menten(
+    x: float | np.ndarray | npt.NDArray[np.float64],
+    alpha: float,
+    lam: float,
+) -> pt.TensorVariable:
+    r"""TensorVariable wrapper over the Michaelis-Menten transformation.
+
+    Parameters
+    ----------
+    x : float
+        The spend on a channel.
+    alpha : float
+        The maximum contribution a channel can make.
+    lam : float
+        The Michaelis constant for the given enzyme-substrate system.
+
+    Returns
+    -------
+    pt.TensorVariable
+        The value of the Michaelis-Menten function given the parameters as a TensorVariable.
+
+    """
+    return pt.as_tensor_variable(michaelis_menten_function(x, alpha, lam))
+
+
 def hill_function(
     x: pt.TensorLike, slope: pt.TensorLike, kappa: pt.TensorLike
 ) -> pt.TensorVariable:
```

Review thread (on the new wrapper):

Reviewer: Do we need two functions?

Reviewer: Can all of the … ? (i.e. here: …)

Author: We don't need them, but it made my life easier to avoid the pesky L70. Perhaps you have something less convoluted in mind?

Reviewer: Can you not add it in the one location of the … ?

Reviewer: I think having it for apply would make sure that other custom saturation transformations will not encounter this bug as well.

Reviewer: It shouldn't hurt to have pt.as_tensor_variable wrapping an already-wrapped TensorVariable; pytensor will figure out an optimization.

Reviewer: The linked …

Reviewer: The proposed …

Reviewer: Do we want to add another …

Reviewer: We wouldn't want to use the private methods of the saturation transformation. Does sample_curve with fit_result not get us close to the data that is plotted? (Just not scaled on x and y)
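The "shouldn't hurt" remark above is easy to check; a small sketch (assuming pytensor's usual behavior) showing that pt.as_tensor_variable leaves an existing tensor untouched:

```python
import numpy as np
import pytensor.tensor as pt

# Wrapping a NumPy array produces a tensor ...
a = pt.as_tensor_variable(np.array([1.0, 2.0]))

# ... and wrapping an existing TensorVariable is a no-op: the same
# object comes back, so nesting pt.as_tensor_variable costs nothing.
b = pt.as_tensor_variable(a)
print(a is b)  # expected: True
```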
In a second file:

```diff
@@ -22,7 +22,7 @@
 import xarray as xr
 from scipy.optimize import curve_fit, minimize_scalar

-from pymc_marketing.mmm.transformers import michaelis_menten
+from pymc_marketing.mmm.transformers import michaelis_menten_function


 def estimate_menten_parameters(
```

Reviewer (on estimate_menten_parameters): Where is this still used? I thought there was a generalization of this? @cetagostini

Author: Created an issue here: #1055
```diff
@@ -67,7 +67,9 @@ def estimate_menten_parameters(
     # Initial guess for L and k
     initial_guess = [alpha_initial_estimate, lam_initial_estimate]
     # Curve fitting
-    popt, _ = curve_fit(michaelis_menten, x, y, p0=initial_guess, maxfev=maxfev)
+    popt, _ = curve_fit(
+        michaelis_menten_function, x, y, p0=initial_guess, maxfev=maxfev
+    )

     # Save the parameters
     return popt
```
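For context, a sketch (not code from the repository) of why the numeric michaelis_menten_function, rather than the TensorVariable-returning wrapper, must be passed to curve_fit: scipy calls the model with ndarray inputs and expects ndarray output. The data below is synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit


def michaelis_menten_function(x, alpha, lam):
    return alpha * x / (lam + x)


rng = np.random.default_rng(42)
x = np.linspace(0.1, 20.0, 50)
y = michaelis_menten_function(x, 10.0, 5.0) + rng.normal(0.0, 0.1, x.size)

# curve_fit evaluates the model on ndarrays; a function returning a
# TensorVariable would break inside the least-squares routine.
popt, _ = curve_fit(michaelis_menten_function, x, y, p0=[8.0, 4.0], maxfev=2000)
print(popt)  # approximately [10.0, 5.0]
```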
Reviewer: This is only used in the functions we will deprecate, right?

Author: Yes, they are only used in the methods linked in #1055. If we remove those, there is no need to define the extra function.

Reviewer: Mind closing that now?

Author: Hi @wd60622, I'm having doubts about completely removing the functions in #1055. As an example, take estimate_menten_parameters: it was introduced in #329 and never used outside of the current test. I am guessing the original purpose was to double-check that the introduced function gives correct results. If we remove the tests, we would never be checking that again. As an alternative, I've simplified the PR to wrap only MichaelisMentenSaturation.function with pt.as_tensor_variable. Note that the newly introduced fixture now passes the tests.

Reviewer: If we need to remove tests, that is fine. This is the only saturation transformation which needs to be defined with two functions (which I find silly). Just consolidate.
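A rough sketch of the alternative the author describes, wrapping only the saturation transformation's function; the class shape here is illustrative and may not match pymc-marketing's actual SaturationTransformation API.

```python
import pytensor.tensor as pt


def michaelis_menten_function(x, alpha, lam):
    return alpha * x / (lam + x)


class MichaelisMentenSaturation:
    """Illustrative stand-in for the library's saturation class."""

    @staticmethod
    def function(x, alpha, lam):
        # Wrap the numeric result so downstream code can always rely on
        # receiving a TensorVariable, even for float/ndarray inputs.
        return pt.as_tensor_variable(michaelis_menten_function(x, alpha, lam))
```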