diff --git a/tutorials/W1D5_Microcircuits/W1D5_Tutorial1.ipynb b/tutorials/W1D5_Microcircuits/W1D5_Tutorial1.ipynb
index 1daf3b586..d27eaab2b 100644
--- a/tutorials/W1D5_Microcircuits/W1D5_Tutorial1.ipynb
+++ b/tutorials/W1D5_Microcircuits/W1D5_Tutorial1.ipynb
@@ -1166,6 +1166,22 @@
"interact(plot_kurtosis, theta_value = slider)"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "execution": {}
+ },
+ "source": [
+ "\n",
+ " Kurtosis value behaviour\n",
+ " You might notice that the kurtosis value first decreases (until roughly $\\theta = 140$) and then increases sharply (reflecting the desired sparsity property). Looking more closely at the kurtosis formula, it is the expected value (average) of the standardized data values raised to the 4th power. A data point lying within one standard deviation of the mean therefore contributes almost nothing (a number smaller than 1 raised to the fourth power is tiny), and most of the contribution comes from extreme outliers lying far outside that range. So the main characteristic kurtosis measures is the tailedness of the data: it is high when the contribution of the outliers outweighs that of the “simple” points (since kurtosis is an average over all points). For $\\theta \\le 120$, the outliers contribute relatively little, which is why the kurtosis value stays low.\n",
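+ "\n",
+ " In symbols, with $\\mu$ the mean and $\\sigma$ the standard deviation of the data (some implementations subtract a constant 3 so that a Gaussian scores 0, which does not change the argument), this is:\n",
+ "\n",
+ " $$\\text{Kurt}[x] = \\mathbb{E}\\left[\\left(\\frac{x - \\mu}{\\sigma}\\right)^{4}\\right]$$\n",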
+ " \n"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
diff --git a/tutorials/W1D5_Microcircuits/W1D5_Tutorial2.ipynb b/tutorials/W1D5_Microcircuits/W1D5_Tutorial2.ipynb
index 87749a478..68e2771af 100644
--- a/tutorials/W1D5_Microcircuits/W1D5_Tutorial2.ipynb
+++ b/tutorials/W1D5_Microcircuits/W1D5_Tutorial2.ipynb
@@ -638,6 +638,8 @@
"\n",
"$$\\hat{x} = \\frac{x}{f(||x||)}$$\n",
"\n",
+ "There are many options for the specific form of the denominator $f$; what we want to highlight here is the essentially divisive nature of the normalization.\n",
+ "\n",
"Evidence suggests that normalization provides a useful inductive bias in artificial and natural systems. However, do we need a dedicated computation that implements normalization?\n",
"\n",
"Let's explore if ReLUs can estimate a normalization-like function. Specifically, we will see if a fully-connected one-layer network can estimate $y=\\frac{1}{x+\\epsilon}$ function.\n",
diff --git a/tutorials/W1D5_Microcircuits/further_reading.md b/tutorials/W1D5_Microcircuits/further_reading.md
index ceaaeb12a..30dc812af 100644
--- a/tutorials/W1D5_Microcircuits/further_reading.md
+++ b/tutorials/W1D5_Microcircuits/further_reading.md
@@ -14,6 +14,7 @@
- [Flexible gating of contextual influences in natural vision](https://pubmed.ncbi.nlm.nih.gov/26436902/)
- [Normalization as a canonical neural computation](https://www.nature.com/articles/nrn3136)
- [Attention-related changes in correlated neuronal activity arise from normalization mechanisms](https://www.nature.com/articles/nn.4572)
+- [Spatially tuned normalization explains attention modulation variance within neurons](https://journals.physiology.org/doi/full/10.1152/jn.00218.2017)
## Tutorial 3: Attention
diff --git a/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial1.ipynb b/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial1.ipynb
index d2044f34f..84a06f3b0 100644
--- a/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial1.ipynb
+++ b/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial1.ipynb
@@ -1168,6 +1168,22 @@
"interact(plot_kurtosis, theta_value = slider)"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "execution": {}
+ },
+ "source": [
+ "\n",
+ " Kurtosis value behaviour\n",
+ " You might notice that the kurtosis value first decreases (until roughly $\\theta = 140$) and then increases sharply (reflecting the desired sparsity property). Looking more closely at the kurtosis formula, it is the expected value (average) of the standardized data values raised to the 4th power. A data point lying within one standard deviation of the mean therefore contributes almost nothing (a number smaller than 1 raised to the fourth power is tiny), and most of the contribution comes from extreme outliers lying far outside that range. So the main characteristic kurtosis measures is the tailedness of the data: it is high when the contribution of the outliers outweighs that of the “simple” points (since kurtosis is an average over all points). For $\\theta \\le 120$, the outliers contribute relatively little, which is why the kurtosis value stays low.\n",
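+ "\n",
+ " In symbols, with $\\mu$ the mean and $\\sigma$ the standard deviation of the data (some implementations subtract a constant 3 so that a Gaussian scores 0, which does not change the argument), this is:\n",
+ "\n",
+ " $$\\text{Kurt}[x] = \\mathbb{E}\\left[\\left(\\frac{x - \\mu}{\\sigma}\\right)^{4}\\right]$$\n",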
+ " \n"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
diff --git a/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial2.ipynb b/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial2.ipynb
index f3844dc28..cbd53c669 100644
--- a/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial2.ipynb
+++ b/tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial2.ipynb
@@ -638,6 +638,8 @@
"\n",
"$$\\hat{x} = \\frac{x}{f(||x||)}$$\n",
"\n",
+ "There are many options for the specific form of the denominator $f$; what we want to highlight here is the essentially divisive nature of the normalization.\n",
+ "\n",
"Evidence suggests that normalization provides a useful inductive bias in artificial and natural systems. However, do we need a dedicated computation that implements normalization?\n",
"\n",
"Let's explore if ReLUs can estimate a normalization-like function. Specifically, we will see if a fully-connected one-layer network can estimate $y=\\frac{1}{x+\\epsilon}$ function.\n",
diff --git a/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial1.ipynb b/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial1.ipynb
index 636c338fc..21261af2b 100644
--- a/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial1.ipynb
+++ b/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial1.ipynb
@@ -1151,6 +1151,22 @@
"interact(plot_kurtosis, theta_value = slider)"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "execution": {}
+ },
+ "source": [
+ "\n",
+ " Kurtosis value behaviour\n",
+ " You might notice that the kurtosis value first decreases (until roughly $\\theta = 140$) and then increases sharply (reflecting the desired sparsity property). Looking more closely at the kurtosis formula, it is the expected value (average) of the standardized data values raised to the 4th power. A data point lying within one standard deviation of the mean therefore contributes almost nothing (a number smaller than 1 raised to the fourth power is tiny), and most of the contribution comes from extreme outliers lying far outside that range. So the main characteristic kurtosis measures is the tailedness of the data: it is high when the contribution of the outliers outweighs that of the “simple” points (since kurtosis is an average over all points). For $\\theta \\le 120$, the outliers contribute relatively little, which is why the kurtosis value stays low.\n",
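+ "\n",
+ " In symbols, with $\\mu$ the mean and $\\sigma$ the standard deviation of the data (some implementations subtract a constant 3 so that a Gaussian scores 0, which does not change the argument), this is:\n",
+ "\n",
+ " $$\\text{Kurt}[x] = \\mathbb{E}\\left[\\left(\\frac{x - \\mu}{\\sigma}\\right)^{4}\\right]$$\n",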
+ " \n"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
diff --git a/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial2.ipynb b/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial2.ipynb
index 6b0ca0379..86896eaf7 100644
--- a/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial2.ipynb
+++ b/tutorials/W1D5_Microcircuits/student/W1D5_Tutorial2.ipynb
@@ -638,6 +638,8 @@
"\n",
"$$\\hat{x} = \\frac{x}{f(||x||)}$$\n",
"\n",
+ "There are many options for the specific form of the denominator $f$; what we want to highlight here is the essentially divisive nature of the normalization.\n",
+ "\n",
"Evidence suggests that normalization provides a useful inductive bias in artificial and natural systems. However, do we need a dedicated computation that implements normalization?\n",
"\n",
"Let's explore if ReLUs can estimate a normalization-like function. Specifically, we will see if a fully-connected one-layer network can estimate $y=\\frac{1}{x+\\epsilon}$ function.\n",