How is heteroskedasticity handled? #310
Hit send too soon, as I think maybe I found the answer: https://botorch.org/api/models.html#heteroskedasticsingletaskgp ? So is this effectively doing a multi-task GP where one output is the objective's mean and the other is the objective's variance?
So if the SEM is passed in, then in Ax we use a GP with fixed observation noise: the provided SEMs enter the model as known per-point noise variances rather than being inferred.
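A minimal sketch of what fixed, user-supplied observation noise does to a GP, in plain NumPy (this is not the Ax/BoTorch API; the function names `rbf` and `posterior_mean` are invented for illustration): the squared SEMs enter only on the diagonal of the training covariance.

```python
# Illustrative sketch, NOT the Ax/BoTorch API: GP posterior mean where
# user-supplied SEMs act as known per-point noise variances.
import numpy as np

def rbf(a, b, lengthscale=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

def posterior_mean(x_train, y_train, sem, x_test):
    """Posterior mean of a GP with fixed heteroskedastic observation noise.

    The only place the SEMs appear is the diagonal of the training
    covariance -- nothing about the noise is inferred.
    """
    K = rbf(x_train, x_train) + np.diag(sem**2)  # known noise on the diagonal
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x)
sem = np.full_like(x, 0.05)  # per-observation standard errors
mu = posterior_mean(x, y, sem, x)
print(mu)
```

With small, accurate SEMs the posterior mean stays close to the observed values; inflating an individual point's SEM down-weights that observation.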
Note, however, that we currently don't expose this option. Finally, there is also a way of inferring heteroskedastic noise levels; one relatively simple approach is the "most likely heteroskedastic GP". There is a long-standing PR #250; we have cleaned that up internally and should be able to merge it in the near future (cc @jelena-markovic).
Thanks for the detailed explanation. I'm afraid I'm not following the most-likely-heteroskedastic-GP approach. PR #250 seems to be about Raytune - is that the right PR?
Ah sorry, the PR is on the botorch repo: pytorch/botorch#250
Much better. I'm gonna close this as I've learned what I wanted and don't see anything more actionable. Thanks!
I'm curious after our other discussion - how do you handle heteroskedasticity? When the noise level is inferred, I'm assuming you treat the problem as homoskedastic, because otherwise the whole problem starts to seem under-specified. But when the sem is passed in explicitly, what's your modeling approach? Can you point to a paper on this?