Commit bdc79fe

Updates to First Tutorial (#465)
1 parent a03239a commit bdc79fe

2 files changed: +166 −41 lines


tutorials/tutorial_1_one_dimension.ipynb

Lines changed: 134 additions & 22 deletions
Original file line number · Diff line number · Diff line change
@@ -15,6 +15,7 @@
1515
"import sys\n",
1616
"import scipy\n",
1717
"import pandas as pd\n",
18+
"import traceback\n",
1819
"\n",
1920
"import numpy as np\n",
2021
"import seaborn as sns\n",
@@ -304,7 +305,88 @@
304305
"$$\n",
305306
"A z + \\mu \\leftarrow \\mathcal{N}\\left(\\mu, \\Sigma\\right)\n",
306307
"$$\n",
307-
"One way to get a matrix $A$ such that $A A^T = \\Sigma$ is using the cholesky decomposition. There are python utilities to help with this."
308+
"One way to get a matrix $A$ such that $A A^T = \\Sigma$ is using the cholesky decomposition. There are python utilities to help with this:\n",
309+
"\n",
310+
"- `np.linalg.cholesky(X)` - returns a lower triangular matrix $L$ such that $L L^T = X$. (Note that, `scipy.linalg.cholesky(X)` is an alternative, but it returns the upper triangular portion, $L.T$ unless you provide a `lower=True` argument.)\n",
311+
"\n",
312+
"Another tip, if you aren't already familiar with [numpy broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html), it might be worth reading a bit about how it works. For example, if we have a matrix $A$ and a vector $b$"
313+
]
314+
},
315+
{
316+
"cell_type": "code",
317+
"execution_count": null,
318+
"id": "8a85b3f9",
319+
"metadata": {},
320+
"outputs": [],
321+
"source": [
322+
"A = np.ones(shape=(3, 2))\n",
323+
"b = np.arange(3)\n",
324+
"print(\"A: \\n\", A)\n",
325+
"print(\"b: \\n\", b)"
326+
]
327+
},
328+
{
329+
"cell_type": "markdown",
330+
"id": "d59b9d38",
331+
"metadata": {},
332+
"source": [
333+
"It might be tempting to do `A + b`:"
334+
]
335+
},
336+
{
337+
"cell_type": "code",
338+
"execution_count": null,
339+
"id": "233401cb",
340+
"metadata": {},
341+
"outputs": [],
342+
"source": [
343+
"A + b"
344+
]
345+
},
346+
{
347+
"cell_type": "markdown",
348+
"id": "741970a2",
349+
"metadata": {},
350+
"source": [
351+
"But that fails, to make it work you can make `b` a column vector (ie, a `(3, 1)` matrix), and then add the two. There are a few ways to do that:"
352+
]
353+
},
354+
{
355+
"cell_type": "code",
356+
"execution_count": null,
357+
"id": "e7267760",
358+
"metadata": {},
359+
"outputs": [],
360+
"source": [
361+
"A + b.reshape((b.size, 1))"
362+
]
363+
},
364+
{
365+
"cell_type": "code",
366+
"execution_count": null,
367+
"id": "c9575386",
368+
"metadata": {},
369+
"outputs": [],
370+
"source": [
371+
"A + b[:, None]"
372+
]
373+
},
374+
{
375+
"cell_type": "code",
376+
"execution_count": null,
377+
"id": "ac5bfbb1",
378+
"metadata": {},
379+
"outputs": [],
380+
"source": [
381+
"A + b[:, np.newaxis]"
382+
]
383+
},
384+
{
385+
"cell_type": "markdown",
386+
"id": "d09e3f40",
387+
"metadata": {},
388+
"source": [
389+
"We should now be able to write a function which starts by sampling independent random normal variables, correlates them using the cholesky and adds a mean to end up drawing random samples from a mulitivariate normal distribution,"
308390
]
309391
},
310392
{
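Below is a minimal, self-contained sketch of the sampling idea described above, kept separate from the notebook's `sample_from` exercise. It assumes only numpy; the small mean vector and covariance matrix are made up for illustration.

```python
import numpy as np

# A small mean and covariance, made up for illustration.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Lower-triangular factor with L @ L.T == Sigma (up to floating point error).
L = np.linalg.cholesky(Sigma)
print(np.allclose(L @ L.T, Sigma))

# Correlate independent standard normals, then shift by the mean.
# Each column of `samples` is one draw from N(mu, Sigma).
rng = np.random.default_rng(0)
z = rng.standard_normal(size=(mu.size, 100_000))
samples = L @ z + mu[:, None]

# The sample mean and covariance should land close to mu and Sigma.
print(samples.mean(axis=1))
print(np.cov(samples))
```

`scipy.linalg.cholesky(Sigma, lower=True)` would give the same lower-triangular factor.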
@@ -320,14 +402,15 @@
320402
" # this function should return one sample per column.\n",
321403
" #\n",
322404
" # Note that you could just use np.random.multivariate_normal but that's cheating!\n",
405+
" #\n",
406+
" white_noise = np.random.normal(size=(mean.size, size))\n",
407+
" #\n",
323408
" # YOUR CODE HERE\n",
324409
" #\n",
325-
" # n = \n",
326-
" # A = \n",
410+
" # cholesky =\n",
327411
" # random_samples = \n",
328412
" return random_samples\n",
329413
"\n",
330-
"\n",
331414
"TEST_SAMPLE_FROM(sample_from)\n",
332415
"\n",
333416
"xs = np.linspace(0., 10., 21)\n",
@@ -357,7 +440,11 @@
357440
"xs = np.linspace(0., 10., 101)\n",
358441
"cov = squared_exponential(xs, xs)\n",
359442
"\n",
360-
"samps = sample_from(np.zeros(xs.size), cov, size=20) \n",
443+
"try:\n",
444+
" samps = sample_from(np.zeros(xs.size), cov, size=20)\n",
445+
"except Exception as e:\n",
446+
" print(traceback.format_exc())\n",
447+
" print(e)\n",
361448
"\n",
362449
"### SPOILER: YOU SHOULD SEE A FAILURE ###"
363450
]
@@ -389,7 +476,7 @@
389476
"id": "751f5d2a",
390477
"metadata": {},
391478
"source": [
392-
"The condition number is a representation of the differing scales of information captured in a matrix and 1e19 is a MASSIVE condition number. With a condition number that large, even slightly different methods for computing the condition number itself have different values! This is an example of floating point error. One of the strict requirements of a covariance function is that it produce covariance matrices which are positive definite (aka invertible), meaning all the eigen values need to be greater than zero. You can see that _technically_ the matrix we created _is_ positive definite (the smallest eigen value is 1e-18), but with a condition number that large floating point arithmetic errors can accumulate making it look like the matrix is not invertible. We'd say the matrix is not \"numerically positive definite\". This is unfortunately a relatively common problem, but thankfully, there's an easy band-aid: add some noise. By adding relatively small values to the diagonal of our covariance matrix we can resolve the issue:"
479+
"The condition number is a representation of the differing scales of information captured in a matrix and 1e19 is a MASSIVE condition number. With a condition number that large, even slightly different methods for computing the condition number itself have different values! This is an example of floating point error. One of the strict requirements of a covariance function is that it produce covariance matrices which are positive definite (aka invertible), meaning all the eigen values need to be greater than zero. You can see that _technically_ the matrix we created _is_ positive definite (the smallest eigen value is greater than zero), but with a condition number that large floating point arithmetic errors can accumulate making it look like the matrix is not invertible. We'd say the matrix is not \"numerically positive definite\". Unfortunately this a relatively common problem, but thankfully, there's an easy band-aid: add some noise. By adding relatively small values to the diagonal of our covariance matrix we can resolve the issue:"
393480
]
394481
},
395482
{
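As a concrete illustration of the diagnostics discussed above, here is a standalone sketch that builds a squared-exponential covariance on a dense grid and inspects its condition number and smallest eigenvalue, before and after adding a small value to the diagonal. The `squared_exponential` helper and the length scale here are stand-ins written for this sketch; the exact numbers you see will depend on them.

```python
import numpy as np

def squared_exponential(xs, ys, ell=2.0, sigma=1.0):
    # Stand-in squared exponential covariance, for this sketch only.
    dists = xs[:, None] - ys[None, :]
    return sigma ** 2 * np.exp(-0.5 * (dists / ell) ** 2)

xs = np.linspace(0.0, 10.0, 101)
cov = squared_exponential(xs, xs)

# eigvalsh is the symmetric-matrix eigenvalue routine.
print("condition number            :", np.linalg.cond(cov))
print("smallest eigenvalue         :", np.linalg.eigvalsh(cov).min())

# The band-aid: add a small "nugget" to the diagonal.
cov_nugget = cov + 1e-12 * np.eye(xs.size)
print("condition number with nugget:", np.linalg.cond(cov_nugget))
print("smallest eigenvalue w/nugget:", np.linalg.eigvalsh(cov_nugget).min())
```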
@@ -416,7 +503,21 @@
416503
"id": "92349e9d",
417504
"metadata": {},
418505
"source": [
419-
"Much better! Just adding `1e-12` to the diagonal made our matrix invertible. It still has a pretty large condition number, but we seem to be getting reasonable results from it now. The values we added to the diagonal are sometimes called a \"nugget\" which can be thought of as measurement noise. By adding a nugget you're acknowledging that nothing can be estimated perfectly. This diagonal addition puts a floor on the eigen values, notice that the minimum eigen value is (almost) exactly our nugget."
506+
"Much better! Just adding `1e-12` to the diagonal made our matrix invertible. It still has a pretty large condition number, but we seem to be getting reasonable results from it now. The values we added to the diagonal are sometimes called a \"nugget\" which can be thought of as measurement noise. By adding a nugget you're acknowledging that nothing can be estimated perfectly.\n",
507+
"\n",
508+
"This diagonal addition puts a floor on the eigen values, notice that the minimum eigen value is (almost) exactly our nugget, this is not a coincidence. Take the eigen decomposition for example,\n",
509+
"$$\n",
510+
"A = Q \\Lambda Q^{-1}\n",
511+
"$$\n",
512+
"where $Q$ is a matrix holding the eigen vectors and $\\Lambda$ is a diagonal matrix with eigen values on the diagonal. Now add a nugget, $\\eta^2$,\n",
513+
"$$\n",
514+
"\\begin{align}\n",
515+
"A + \\sigma^2 I &= Q \\Lambda Q^{-1} + \\eta^2 I \\\\\n",
516+
"&= Q \\Lambda Q^{-1} + \\eta^2 Q Q^{-1} \\\\\n",
517+
"&= Q \\left( \\Lambda + \\eta^2 I\\right) Q^{-1} \\\\\n",
518+
"\\end{align}\n",
519+
"$$\n",
520+
"The eigen vectors, $Q$, are all the same, and the nugget we've added is directly added to each eigen value, so if the smallest eigen value of $A$ is $\\lambda_{min}$ then after adding a nugget the smallest eigen value will be $\\lambda_{min} + \\eta^2$"
420521
]
421522
},
422523
{
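A quick numeric check of the identity above, as a standalone sketch on a small symmetric test matrix: adding $\eta^2 I$ shifts every eigenvalue by exactly $\eta^2$ and leaves the eigenvectors unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal(size=(5, 5))
A = B @ B.T               # a symmetric positive semi-definite test matrix
eta2 = 0.1                # the "nugget"

eig_A = np.linalg.eigvalsh(A)
eig_A_nugget = np.linalg.eigvalsh(A + eta2 * np.eye(A.shape[0]))

# Every eigenvalue is shifted by exactly eta2 (up to floating point error).
print(np.allclose(eig_A + eta2, eig_A_nugget))
print(eig_A.min(), eig_A_nugget.min())
```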
@@ -446,11 +547,16 @@
446547
"$$\n",
447548
"notice that we're going to treat the mean as zero from now on. If you really want a non-zero mean you can keep all the math the same and just subtract the mean from all your measurements ahead of time, then add it to all predictions after. This mean zero assumption is _very_ common.\n",
448549
"\n",
550+
"One possible point of confusion, we use $\\Sigma_{yy}$ to represent the covariance between all the measurements, but to create the covariance you need to evaluate the covariance function at the locations $x$ that correspond to the measurements $y$. In otherwords, row $i$ and column $j$ of $\\Sigma_{yy}$ would be given by,\n",
551+
"$$\n",
552+
"\\left[\\Sigma_{yy}\\right]_{ij} = c(x_i, x_j)\n",
553+
"$$\n",
554+
"\n",
449555
"Similarly we can build the prior for the function at all the locations we'd like to predict,\n",
450556
"$$\n",
451557
"\\mathbf{f}^* \\sim \\mathcal{N}\\left(0, \\Sigma_{**}\\right).\n",
452558
"$$\n",
453-
"Here we do not add measurement noise because we're interested in the value of the function itself, not the value of measurements of the function. We need to compute one more covariance matrix, $\\Sigma_{*y}$ (note that we don't need $\\Sigma_{y*}$ because $\\Sigma_{y*} = \\Sigma_{y*}^T$). $\\Sigma_{*y}$ captures the correlation between what we've observed and what we want to predict. Once we've constructed these matrices we can build an augmented distribution which describes both the measurements we made and what we want to predict,\n",
559+
"Here we do not add measurement noise because we're interested in the value of the function itself, not the value of measurements of the function. We need to compute one more covariance matrix, $\\Sigma_{*y}$ (note that we don't need $\\Sigma_{y*}$ because $\\Sigma_{y*} = \\Sigma_{*y}^T$). $\\Sigma_{*y}$ captures the correlation between what we've observed and what we want to predict. Once we've constructed these matrices we can build an augmented distribution which describes both the measurements we made and what we want to predict,\n",
454560
"$$\n",
455561
"\\begin{bmatrix}\n",
456562
"\\mathbf{y} \\\\\n",
@@ -482,6 +588,8 @@
482588
"def fit_and_predict(cov_func, X, y, x_star, meas_noise):\n",
483589
" # Using cov_func build the matrices\n",
484590
" #\n",
591+
" # Since we can't use greek letters in the code, we'll use S for \\Sigma\n",
592+
" #\n",
485593
" # S_yy = \n",
486594
" # S_sy = \n",
487595
" # S_ss =\n",
@@ -490,7 +598,7 @@
490598
" #\n",
491599
" # mean = [a column vector holding the mean]\n",
492600
" # cov = [a square matrix holding the posterior covariance]\n",
493-
" # return mean, cov\n",
601+
" return mean, cov\n",
494602
"\n",
495603
"TEST_FIT_AND_PREDICT(fit_and_predict)\n",
496604
"\n",
@@ -515,7 +623,7 @@
515623
"outputs": [],
516624
"source": [
517625
"# note we need to add a nugget here to make sure the posterior covariance is numerically definite\n",
518-
"samps = sample_from(pred_mean, pred_cov + 1e-16 * np.eye(pred_mean.size), size=50)\n",
626+
"samps = sample_from(pred_mean, pred_cov + 1e-12 * np.eye(pred_mean.size), size=50)\n",
519627
"for i in range(samps.shape[1]):\n",
520628
" plt.plot(x_gridded, samps[:, i], color=\"steelblue\", alpha=0.5)\n",
521629
"plot_truth()\n",
@@ -538,14 +646,6 @@
538646
"metadata": {},
539647
"outputs": [],
540648
"source": [
541-
"def plot_spread(xs, mean, variances):\n",
542-
" sd = np.sqrt(variances)\n",
543-
" plt.plot(xs, mean, lw=5, color='steelblue', label=\"prediction\")\n",
544-
" plt.fill_between(xs, mean + 2*sd, mean - 2*sd,\n",
545-
" color='steelblue', alpha=0.2, label=\"2 sigma\")\n",
546-
" plt.fill_between(xs, mean + sd, mean - sd,\n",
547-
" color='steelblue', alpha=0.5, label=\"sigma\")\n",
548-
"\n",
549649
"plot_spread(x_gridded, pred_mean, np.diag(pred_cov))\n",
550650
" \n",
551651
"plot_truth()\n",
@@ -611,7 +711,8 @@
611711
" cov_func = partial(squared_exponential, ell=ell, sigma=sigma)\n",
612712
" return -log_likelihood(cov_func, X, y, meas_noise=meas_noise)\n",
613713
"\n",
614-
"mle_params = scipy.optimize.minimize(compute_negative_log_likelihood, np.zeros(3), method=\"L-BFGS-B\")\n",
714+
"mle_params = scipy.optimize.minimize(compute_negative_log_likelihood,\n",
715+
" np.zeros(3), method=\"L-BFGS-B\")\n",
615716
"mle_sigma, mle_ell, mle_meas_noise = np.exp(mle_params.x)\n",
616717
"\n",
617718
"print(f\"MLE PARAMS:\\n sigma : {mle_sigma}\\n ell: {mle_ell}\\n meas_noise: {mle_meas_noise}\")\n",
@@ -626,7 +727,7 @@
626727
"source": [
627728
"Still not perfect ... but the true function is about as smooth as the true function and now mostly within the uncertainty bounds. Notice that a lot of the measurements are outside of the bounds. That's OK! We explicitly asked for the posterior distribution of the unknown function _not_ the posterior distribution of measurements of the function. Subtle distinictions like that are important to pay attention to.\n",
628729
"\n",
629-
"Anothing thing worth noting, the $\\sigma_{se}$ that maximized likelihood is about $2$, but the posterior distribution has function values which are 3 to 4. It might be tempting to think the value of $2$ means the function will be within $\\left[-2, 2\\right]$, but it can be very common for the function estimates to exceed the sigma from the prior. Sometimes multiple times over. Here, for example, is the posterior with $\\sigma_{se} = 1$,"
730+
"Another thing worth noting, the $\\sigma_{se}$ that maximized likelihood is about $2$ and it might be tempting to think the value of $2$ means the function will mostly be within $\\left[-2, 2\\right]$, but it can be very common for the function estimates to exceed the sigma from the prior. Sometimes multiple times over. Here, for example are the predictions with $\\sigma_{se} = 0.5$,"
630731
]
631732
},
632733
{
@@ -636,15 +737,26 @@
636737
"metadata": {},
637738
"outputs": [],
638739
"source": [
639-
"plot_fit_and_predict(ell=2.5, sigma=1, meas_noise=0.4)"
740+
"fit_sizes = [1, 5, 20, 100]\n",
741+
"fig, axes = plt.subplots(1, len(fit_sizes), figsize=(36, 8))\n",
742+
"cov_func = partial(squared_exponential, ell=mle_ell, sigma=0.5)\n",
743+
"\n",
744+
"for ax, n in zip(axes, fit_sizes):\n",
745+
" X_sub = X[:n]\n",
746+
" y_sub = y[:n]\n",
747+
" \n",
748+
" pred_mean, pred_cov = fit_and_predict(cov_func, X_sub, y_sub, x_gridded, meas_noise=mle_meas_noise)\n",
749+
" ax.scatter(X_sub, y_sub, color=\"black\", s=50)\n",
750+
" plot_spread(x_gridded, pred_mean, np.diag(pred_cov), ax=ax)\n",
751+
" ax.set_ylim([-2, 4])\n"
640752
]
641753
},
642754
{
643755
"cell_type": "markdown",
644756
"id": "a961ffe1",
645757
"metadata": {},
646758
"source": [
647-
"According to the prior with $\\sigma_{se} = 1$, there's only a $0.3\\%$ chance of the function taking on a value of $3$, yet that prior actually results in a relatively good fit. The posterior even states there's a reasonable chance the true function approaches $4$. Takeaway: the hyper parameters describe the prior we place on a function, but ultimately it can be the data that drives the posterior (depending of course on measurement noise, quantity and other factors)."
759+
"It still does a pretty good job and according to the prior, $\\sigma_{se} = 0.5$, there's only a $2 x 10^{-7}$ percent chance of the function taking on a value of $3$, yet we're seeing that happen. The point here is that the data can eventually override the prior. When we fit the model to a single data point the resulting predictions are very close to the prior, but ultimately the data drives the estimate. The prior is still very important, we saw some bad choices of parmeters earlier, but it's really the interaction of the prior and the data that matter."
648760
]
649761
},
650762
{
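As a quick check of the probability quoted above (a standalone sketch, reading the claim as the two-sided tail probability of a zero-mean normal with $\sigma_{se} = 0.5$ reaching a magnitude of $3$):

```python
from scipy.stats import norm

# P(|f| > 3) under a N(0, 0.5**2) marginal, expressed as a percentage.
tail_probability = 2 * norm.sf(3.0, loc=0.0, scale=0.5)
print(f"{100 * tail_probability:.1e} percent")  # on the order of 2e-07 percent
```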

tutorials/tutorial_utils.py

Lines changed: 32 additions & 19 deletions
Original file line number · Diff line number · Diff line change
@@ -4,6 +4,7 @@
44

55
from scipy.stats import ks_1samp, norm
66
from functools import partial
7+
from inspect import signature, Parameter
78

89
EXAMPLE_SLOPE_VALUE = np.sqrt(2.0)
910
EXAMPLE_CONSTANT_VALUE = 3.14159
@@ -21,6 +22,7 @@
2122

2223
x_gridded = np.linspace(LOWEST, HIGHEST, 301)
2324

25+
2426
def reshape_inputs(x):
2527
if x.ndim == 1:
2628
return x[:, None]
@@ -45,7 +47,7 @@ def sinc(xs):
4547

4648

4749
def truth(xs):
48-
return (EXAMPLE_SCALE_VALUE * sinc(xs - EXAMPLE_TRANSLATION_VALUE))
50+
return EXAMPLE_SCALE_VALUE * sinc(xs - EXAMPLE_TRANSLATION_VALUE)
4951

5052

5153
def generate_training_data(n=N):
@@ -131,30 +133,43 @@ def example_fit_and_predict(cov_func, X, y, x_star, meas_noise):
131133

132134

133135
def sinc(xs):
134-
return np.where(xs == 0, np.ones(xs.size), np.sin(xs) / xs)
136+
non_zero = np.nonzero(xs)[0]
137+
output = np.ones(xs.shape)
138+
output[non_zero] = np.sin(xs[non_zero]) / xs[non_zero]
139+
return output
135140

136141

137142
def truth(xs):
138-
return (EXAMPLE_SCALE_VALUE * sinc(xs - EXAMPLE_TRANSLATION_VALUE))
143+
return EXAMPLE_SCALE_VALUE * sinc(xs - EXAMPLE_TRANSLATION_VALUE)
139144

140145

141146
def plot_truth(xs):
142-
plt.plot(xs, truth(xs),
143-
lw=5,
144-
color="firebrick", label="truth")
147+
plt.plot(xs, truth(xs), lw=5, color="firebrick", label="truth")
145148

146149

147-
def plot_measurements(xs, ys):
148-
plt.scatter(xs, ys, s=50, color='black', label="measurements")
150+
def plot_measurements(xs, ys, color="black", label="measurements"):
151+
plt.scatter(xs, ys, s=50, color=color, label=label)
149152

150153

151-
def plot_spread(xs, mean, variances):
154+
def plot_spread(xs, mean, variances, ax=None):
155+
if ax is None:
156+
ax = plt.gca()
157+
xs = np.reshape(xs, -1)
158+
mean = np.reshape(mean, -1)
159+
variances = np.reshape(variances, -1)
152160
sd = np.sqrt(variances)
153-
plt.plot(xs, mean, lw=5, color='steelblue', label="prediction")
154-
plt.fill_between(xs, mean + 2*sd, mean - 2*sd,
155-
color='steelblue', alpha=0.2, label="uncertainty")
156-
plt.fill_between(xs, mean + sd, mean - sd,
157-
color='steelblue', alpha=0.5, label="uncertainty")
161+
ax.plot(xs, mean, lw=5, color="steelblue", label="prediction")
162+
ax.fill_between(
163+
xs,
164+
mean + 2 * sd,
165+
mean - 2 * sd,
166+
color="steelblue",
167+
alpha=0.2,
168+
label="uncertainty",
169+
)
170+
ax.fill_between(
171+
xs, mean + sd, mean - sd, color="steelblue", alpha=0.5, label="uncertainty"
172+
)
158173

159174

160175
def TEST_FIT_AND_PREDICT(f):
@@ -183,15 +198,13 @@ def TEST_FIT_AND_PREDICT(f):
183198
f"Incorrect covariance [.\n Expected: f{expected_cov} \n Actual: f{actual_cov}"
184199
)
185200

201+
186202
def example_fit(cov_func, X, y, meas_noise):
187203
K_yy = cov_func(X, X) + meas_noise * meas_noise * np.eye(y.size)
188204
L = np.linalg.cholesky(K_yy)
189205
v = scipy.linalg.cho_solve((L, True), y)
190-
191-
return {"train_locations": X,
192-
"information": v,
193-
"cholesky": L,
194-
"cov_func": cov_func}
206+
207+
return {"train_locations": X, "information": v, "cholesky": L, "cov_func": cov_func}
195208

196209

197210
def example_predict(fit_model, x_star):
