From 0f643371c00fb96939b3a8bf4aa65650669606f9 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Anja=20F=2E=20H=C3=B6rmann?=
Date: Fri, 25 Oct 2024 10:31:14 +0200
Subject: [PATCH] typo fixes in gp notebook

add missing closing parenthesis, delete superfluous 'e' in likelihood
---
 .../book/23-gp.ipynb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pycse-jb/pycse___python_computations_in_science_and_engineering/book/23-gp.ipynb b/pycse-jb/pycse___python_computations_in_science_and_engineering/book/23-gp.ipynb
index e2b59ccd..95edf007 100644
--- a/pycse-jb/pycse___python_computations_in_science_and_engineering/book/23-gp.ipynb
+++ b/pycse-jb/pycse___python_computations_in_science_and_engineering/book/23-gp.ipynb
@@ -119,7 +119,7 @@
 "\n",
 "Alternatively, consider how you might estimate it in your head. You would look at the data, and use points that are close to the data point you want to estimate to form the estimate. Points that are far from the data point would have less influence on your estimate. You are implicitly weighting the value of the known points in doing this. In GPR we make this idea quantitative.\n",
 "\n",
-"The key concept to quantify this is called *covariance*, which is how are two variables correlated with each other. Intuitively, if two x-values are close together, then we anticipate that the corresponding $f(x$ values are also close together. We can say that the values \"co-vary\", i.e. they are not independent. We use this fact when we integrate an ODE and estimate the next point, or in root solving when we iteratively find the next steps. We will use this idea to compute the weights that we need. The covariance is a matrix and each element of the matrix defines the covariance between two data points. To compute this, *we need to make some assumptions* about the data. A common assumption is that the covariance is Gaussian with the form:\n",
+"The key concept to quantify this is called *covariance*, which is how are two variables correlated with each other. Intuitively, if two x-values are close together, then we anticipate that the corresponding $f(x)$ values are also close together. We can say that the values \"co-vary\", i.e. they are not independent. We use this fact when we integrate an ODE and estimate the next point, or in root solving when we iteratively find the next steps. We will use this idea to compute the weights that we need. The covariance is a matrix and each element of the matrix defines the covariance between two data points. To compute this, *we need to make some assumptions* about the data. A common assumption is that the covariance is Gaussian with the form:\n",
 "\n",
 "$K_{ij} = \\sigma_f \\exp\\left(-\\frac{(x_i - x_j)^2}{2 \\lambda^2}\\right)$\n",
 "\n",
@@ -892,7 +892,7 @@
 "\n",
 "$p$ is the periodicity and $l$ is the lengthscale. A key feature of GPR is you can add two kernel functions together and get a new kernel. Here we combine the linear kernel with the periodic kernel to represent data that is periodic and which increases (or decreases) with time.\n",
 "\n",
-"As before we use the log likeliehood to find the hyperparameters that best fit this data.\n",
+"As before we use the log likelihood to find the hyperparameters that best fit this data.\n",
 "\n"
 ]
 },
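The cell patched in the first hunk defines the Gaussian covariance kernel $K_{ij} = \sigma_f \exp\left(-\frac{(x_i - x_j)^2}{2 \lambda^2}\right)$. A minimal NumPy sketch of that formula follows; the names `gaussian_kernel`, `sigma_f`, and `lam` are illustrative and not taken from the notebook:

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma_f=1.0, lam=1.0):
    """Covariance matrix with K_ij = sigma_f * exp(-(x_i - x_j)^2 / (2 * lam^2))."""
    # Pairwise differences between every point in x1 and every point in x2.
    diff = np.subtract.outer(np.asarray(x1), np.asarray(x2))
    return sigma_f * np.exp(-diff**2 / (2 * lam**2))

# Nearby x-values co-vary strongly; distant ones barely at all.
x = np.array([0.0, 0.1, 2.0])
print(gaussian_kernel(x, x))  # K[0,1] is near 1, K[0,2] is near 0
```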
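The cell patched in the second hunk describes adding a linear kernel to a periodic kernel and tuning the hyperparameters with the log likelihood. A sketch of that idea, assuming scikit-learn (the hunk does not show which library the notebook actually uses), where `DotProduct` plays the linear role and `ExpSineSquared` the periodic one with periodicity $p$ and lengthscale $l$:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, ExpSineSquared

# Synthetic data: a rising linear trend plus a periodic signal.
X = np.linspace(0, 10, 50)[:, None]
y = 0.5 * X.ravel() + np.sin(2 * np.pi * X.ravel())

# Adding two kernels yields a new kernel: linear + periodic.
kernel = DotProduct() + ExpSineSquared(length_scale=1.0, periodicity=1.0)

# fit() tunes the hyperparameters by maximizing the log marginal likelihood.
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)
print(gpr.kernel_)                        # optimized hyperparameters
print(gpr.log_marginal_likelihood_value_) # the objective that was maximized
```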