michaelis_menten transformation as pt.TensorVariable #1054

base: main
Conversation
@@ -914,6 +914,31 @@ def michaelis_menten(
     return alpha * x / (lam + x)


def michaelis_menten(
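For context, the function under discussion is a plain Michaelis-Menten saturation curve. A minimal pure-Python sketch (the real code operates on pytensor tensors) shows its behaviour:

```python
def michaelis_menten(x: float, alpha: float, lam: float) -> float:
    """Michaelis-Menten saturation: approaches `alpha` as x grows,
    with `lam` the half-saturation point (x == lam gives alpha / 2)."""
    return alpha * x / (lam + x)

# At x == lam the curve sits at exactly half of its ceiling.
half = michaelis_menten(5.0, alpha=2.0, lam=5.0)  # -> 1.0

# For large x the curve flattens toward alpha.
near_ceiling = michaelis_menten(1e6, alpha=2.0, lam=5.0)
```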
Do we need two functions?
Can all of the `SaturationTransformations` be wrapped in `as_tensor_variable` instead of making these changes? i.e. here:

    return self.function(x, **kwargs)
> Do we need two functions?

We don't need them, but it made my life easier to avoid the pesky L70.

> Can all of the `SaturationTransformations` be wrapped in `as_tensor_variable` instead of making these changes?
Do you have anything in mind? I can think of 2 ways:
- Adding boilerplate to all the classes. Perhaps defining a decorator to reduce the boilerplate.
- Rely on `Transformation._checks()`, but here we would need to trust the type hints, use something like `signature(class_function).return_annotation`, and wrap with `pt.as_tensor_variable` if needed.

Perhaps you have something less convoluted in mind?
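The second option above can be sketched with plain `inspect.signature`. The `TensorVariable` class below is a local stand-in for `pt.TensorVariable`, and `needs_wrapping` is a hypothetical helper name, not existing library code:

```python
from inspect import signature


class TensorVariable:
    """Stand-in for pt.TensorVariable, for illustration only."""


def needs_wrapping(func) -> bool:
    # Trust the return annotation, as the second option suggests:
    # only wrap functions that are not already declared to return
    # a TensorVariable.
    return signature(func).return_annotation is not TensorVariable


def raw(x: float) -> float:
    return x


def symbolic(x: float) -> TensorVariable:
    return TensorVariable()


decisions = (needs_wrapping(raw), needs_wrapping(symbolic))  # -> (True, False)
```

This only works if the type hints are accurate, which is the caveat raised above.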
Can you not add it in the one location of the `apply` method? (where I linked) That is a common method that is not overridden by the subclasses.
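A minimal sketch of that suggestion, assuming a stripped-down base class; `as_tensor` is a stub standing in for `pt.as_tensor_variable`, and the class names are illustrative rather than the actual pymc-marketing code:

```python
def as_tensor(value):
    # Stand-in for pt.as_tensor_variable: coerce plain Python values
    # to the "tensor" type (here just a one-element list, for illustration).
    return value if isinstance(value, list) else [value]


class SaturationTransformation:
    """Stripped-down stand-in for the real base class."""

    def function(self, x, **kwargs):
        raise NotImplementedError

    def apply(self, x, **kwargs):
        # Coercing here covers every subclass, because apply is the one
        # common entry point that subclasses do not override.
        return as_tensor(self.function(x, **kwargs))


class RawSaturation(SaturationTransformation):
    # Returns a bare float, as michaelis_menten did before this PR.
    def function(self, x, alpha=2.0, lam=5.0):
        return alpha * x / (lam + x)


result = RawSaturation().apply(5.0)  # -> [1.0]
```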
I think having it in `apply` would make sure that other custom saturation transformations do not encounter this newly found bug as well.
It shouldn't hurt to have `pt.as_tensor_variable` wrap an already-existing TensorVariable. pytensor will figure out an optimization.
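That no-op behaviour can be illustrated with stub types. `Tensor` and `as_tensor` below are stand-ins for `pt.TensorVariable` and `pt.as_tensor_variable`, which likewise passes an already-wrapped input through unchanged:

```python
class Tensor:
    """Stand-in for pt.TensorVariable."""

    def __init__(self, value):
        self.value = value


def as_tensor(value):
    # Like pt.as_tensor_variable: an input that is already a tensor
    # passes through unchanged, so double-wrapping is a no-op.
    return value if isinstance(value, Tensor) else Tensor(value)


t = as_tensor(3.0)
assert as_tensor(t) is t  # wrapping twice returns the same object
```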
> Can you not add it in the one location of the `apply` method? (where I linked) That is a common method that is not overridden by the subclasses.

The linked `apply` method needs to be called within a model context, which is not the case in the plotting function.
The proposed `sample_curve` might work though, since it opens a model context. Having a look.
Do we want to add another `pm.Deterministic` to the model? That would be the case if we use `_sample_curve`.
alpha: float,
lam: float,
I still don't know the best type hints here. But `TensorVariable` is also accepted here.
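One possibility for those hints (an assumption on my part, not what the PR settled on) is a union alias that documents both accepted inputs while deferring the pytensor import to type-checking time:

```python
from typing import TYPE_CHECKING, Union

if TYPE_CHECKING:
    from pytensor.tensor import TensorVariable

# Accept either plain floats or symbolic tensors; the string form of
# the annotation keeps pytensor out of the runtime import path.
ParamInput = Union[float, "TensorVariable"]


def michaelis_menten(x: ParamInput, alpha: ParamInput, lam: ParamInput):
    return alpha * x / (lam + x)


value = michaelis_menten(5.0, 2.0, 5.0)  # -> 1.0 for plain floats
```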
@@ -22,7 +22,7 @@
 import xarray as xr
 from scipy.optimize import curve_fit, minimize_scalar

-from pymc_marketing.mmm.transformers import michaelis_menten
+from pymc_marketing.mmm.transformers import michaelis_menten_function


 def estimate_menten_parameters(
Where is this still used? I thought there was a generalization of this? @cetagostini
Created an issue here: #1055
Can you write a test for the case that discovered this bug?
I am looking over it. There needs to be a solution that works for other custom saturation transformations. I will create an issue for this; we can address it in the future: #1056
Added.
This will add 3 extra
Make `michaelis_menten` consistent with the rest of `SaturationTransformation` by wrapping it as `pt.TensorVariable`

Description

- `michaelis_menten` renamed as `michaelis_menten_function`. Made to avoid major changes in `mmm.utils.estimate_menten_parameters`'s L70.
- `michaelis_menten_function` wrapped as `pt.TensorVariable` inside `michaelis_menten`.
- `test_michaelis_menten` updated. The TensorVariable now needs to be evaluated with `.eval()` in the example notebook.

Related Issue

- `MMM.plot_direct_contribution_curves` errors when using `MichaelisMentenSaturation` #1050

Checklist
Modules affected
Type of change
📚 Documentation preview 📚: https://pymc-marketing--1054.org.readthedocs.build/en/1054/