[Question] Warning of FixedNoiseGaussianLikelihood when size of train_x is different than test_x #1883
Comments
Yes, the warning occurs because you didn't specify the noise on the test values. To do so, you just need to pass the noise values to the likelihood, like this:

```python
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    # torch.rand here is only placeholder test noise; in practice you would
    # pass the known per-point noise variances for the test inputs.
    observed_pred = likelihood(
        model(test_x_distributional),
        noise=torch.rand(test_x_distributional.shape[0]),
    )
```

This produces pretty different confidence bands than the plot from your script (which doesn't add any noise in).
Thanks for the quick reply and help. However, your answer leaves me with two extra questions:
For the first question, you only need to know the variance at the test points if you want the predictive distribution over the noisy observations (i.e., when calling the likelihood); predicting the latent function with `model(test_x)` doesn't require it. For the second, I don't follow... but you could set up a heteroscedastic GP that does actually have a noise model. See #982 and botorch's canned heteroscedastic GP (https://botorch.org/api/models.html#botorch.models.gp_regression.HeteroskedasticSingleTaskGP).
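To make the first point concrete, here is a minimal sketch of the distinction, assuming `model` and `likelihood` are an already-trained `ExactGP` / `FixedNoiseGaussianLikelihood` pair in eval mode (the test grid and noise values are illustrative, not from the original thread):

```python
test_x = torch.linspace(0, 1, 51)

with torch.no_grad(), gpytorch.settings.fast_pred_var():
    # Posterior over the latent function f(x*): no test-point noise needed,
    # so this never triggers the size-mismatch warning.
    latent_pred = model(test_x)

    # Posterior over noisy observations y(x*): requires per-point noise
    # variances for the test inputs (illustrative constant values here).
    observed_pred = likelihood(model(test_x), noise=0.04 * torch.ones(51))
```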
I was under the impression that calls to the model had to be wrapped with the likelihood; I must have misread the docs. Sorry about that.
I will take a look at this. Once again, thank you for the help!
As far as I understand, to pass the variance of the data (i.e., the noise in the observations) we must use FixedNoiseGaussianLikelihood (instead of the now-deprecated WhiteNoise kernel). However, at sampling time this likelihood raises a warning if the sizes of the test and train data differ.
Below follows a small example (directly taken from the docs but with this likelihood):
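The example code from the original post isn't preserved in this copy; a minimal sketch consistent with the description (the Simple GP Regression tutorial with `FixedNoiseGaussianLikelihood` swapped in, and a test grid of a different size than the training grid) might look like this:

```python
import math
import torch
import gpytorch

# 100 training points on [0, 1] with known observation noise
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


# Fixed per-point noise variances for the 100 training points
likelihood = gpytorch.likelihoods.FixedNoiseGaussianLikelihood(noise=0.04 * torch.ones(100))
model = ExactGPModel(train_x, train_y, likelihood)

# (hyperparameter training loop omitted for brevity)
model.eval()
likelihood.eval()

# 51 test points != 100 training points, and no `noise=` argument is
# passed to the likelihood, so the fixed noise cannot be matched to the
# test inputs and GPyTorch warns.
test_x = torch.linspace(0, 1, 51)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    observed_pred = likelihood(model(test_x))
```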
After running the code, I get:
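The captured output is missing from this copy; the `GPInputWarning` that FixedNoiseGaussianLikelihood emits in this case reads approximately:

```
GPInputWarning: You have passed data through a FixedNoiseGaussianLikelihood that did not match the size of the fixed noise, *and* you did not specify noise. This is treated as a no-op.
```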