From 0bf496de21eb248d4f3a1a8b35a3a313670f79bd Mon Sep 17 00:00:00 2001
From: Madalin Tatarciuc <73785144+cetusian@users.noreply.github.com>
Date: Sun, 17 Mar 2024 13:35:39 +0200
Subject: [PATCH] Update 04_mnist_basics.ipynb

Fixed the mistyped 'nil' to 'null' on line 3940.

---
 04_mnist_basics.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/04_mnist_basics.ipynb b/04_mnist_basics.ipynb
index 675bb5b36..3040ad940 100644
--- a/04_mnist_basics.ipynb
+++ b/04_mnist_basics.ipynb
@@ -3937,7 +3937,7 @@
    "source": [
     "A very small change in the value of a weight will often not actually change the accuracy at all. This means it is not useful to use accuracy as a loss function—if we do, most of the time our gradients will actually be 0, and the model will not be able to learn from that number.\n",
     "\n",
-    "> S: In mathematical terms, accuracy is a function that is constant almost everywhere (except at the threshold, 0.5), so its derivative is nil almost everywhere (and infinity at the threshold). This then gives gradients that are 0 or infinite, which are useless for updating the model.\n",
+    "> S: In mathematical terms, accuracy is a function that is constant almost everywhere (except at the threshold, 0.5), so its derivative is null almost everywhere (and infinity at the threshold). This then gives gradients that are 0 or infinite, which are useless for updating the model.\n",
     "\n",
     "Instead, we need a loss function which, when our weights result in slightly better predictions, gives us a slightly better loss. So what does a \"slightly better prediction\" look like, exactly? Well, in this case, it means that if the correct answer is a 3 the score is a little higher, or if the correct answer is a 7 the score is a little lower.\n",
     "\n",
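The edited passage claims that thresholded accuracy is piecewise constant, so its gradient is zero almost everywhere and cannot drive learning. A minimal PyTorch sketch (not part of the patch; tensor values are illustrative) makes this concrete: the hard threshold breaks the autograd graph, while a smooth loss yields usable gradients.

```python
import torch

# Hypothetical predictions and binary targets, chosen for illustration.
preds = torch.tensor([0.3, 0.6, 0.9], requires_grad=True)
targets = torch.tensor([0.0, 1.0, 1.0])

# Accuracy via a hard threshold at 0.5: built from comparison ops,
# which are non-differentiable, so no gradient can flow back to preds.
acc = ((preds > 0.5).float() == targets).float().mean()
print(acc.requires_grad)  # False: the graph is disconnected at the threshold

# A smooth loss (here, mean absolute error) changes slightly when the
# predictions change slightly, so it produces nonzero gradients.
loss = (preds - targets).abs().mean()
loss.backward()
print(preds.grad)  # nonzero gradient for every prediction
```

This is exactly the distinction the passage draws: a "slightly better prediction" should produce a slightly better loss, which requires a loss function that varies continuously with the model's outputs.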