Process tutorial notebooks
actions-user committed Aug 12, 2024
1 parent 0dbaa74 commit f26167d
Showing 6 changed files with 33 additions and 3 deletions.
6 changes: 4 additions & 2 deletions tutorials/W1D5_Microcircuits/W1D5_Tutorial1.ipynb
@@ -1168,7 +1168,9 @@
},
{
"cell_type": "markdown",
-   "metadata": {},
+   "metadata": {
+    "execution": {}
+   },
"source": [
"<details>\n",
" <summary>Kurtosis value behaviour</summary>\n",
@@ -3985,7 +3987,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-   "version": "3.11.5"
+   "version": "3.9.19"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion tutorials/W1D5_Microcircuits/W1D5_Tutorial2.ipynb
@@ -3386,7 +3386,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-   "version": "3.11.5"
+   "version": "3.9.19"
}
},
"nbformat": 4,
12 changes: 12 additions & 0 deletions tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial1.ipynb
@@ -1168,6 +1168,18 @@
"interact(plot_kurtosis, theta_value = slider)"
]
},
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "execution": {}
+   },
+   "source": [
+    "<details>\n",
+    " <summary>Kurtosis value behaviour</summary>\n",
+    " You might notice that the kurtosis value first decreases (until around $\\theta = 140$) and then increases drastically (reflecting the desired sparsity property). A closer look at the formula shows that kurtosis is the expected value (average) of the standardized data values raised to the 4th power. Consequently, a data point lying within one standard deviation of the mean contributes almost nothing (a number smaller than 1 raised to the 4th power is tiny), and most of the contribution comes from extreme outliers far outside that range. Kurtosis thus measures the tailedness of the data: it is high when the contribution of the outliers outweighs that of the “simple” points (since kurtosis averages over all points). With $\\theta \\le 120$, the outliers contribute relatively little to the kurtosis.\n",
+    "</details>\n"
+   ]
+  },
{
"cell_type": "code",
"execution_count": null,
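The kurtosis behaviour described in the added cell can be checked with a quick numerical sketch (illustrative only, not part of the diff; the `kurtosis` helper is hand-rolled here):

```python
import numpy as np

def kurtosis(x):
    """Average of the standardized values raised to the 4th power."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4)

rng = np.random.default_rng(0)
base = rng.normal(size=10_000)     # near-Gaussian data: kurtosis close to 3
spiked = base.copy()
spiked[:10] += 20.0                # a handful of extreme outliers

# Points within one standard deviation barely register (z**4 < 1),
# while the few outliers dominate the 4th-power average.
print(kurtosis(base))
print(kurtosis(spiked))
```

Shifting just 10 of 10,000 points far into the tail raises the kurtosis by more than an order of magnitude, which is the tail-sensitivity the cell describes.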
2 changes: 2 additions & 0 deletions tutorials/W1D5_Microcircuits/instructor/W1D5_Tutorial2.ipynb
@@ -638,6 +638,8 @@
"\n",
"$$\\hat{x} = \\frac{x}{f(||x||)}$$\n",
"\n",
+    "There are many possible choices for the specific form of the denominator here; what we want to highlight is the essentially divisive nature of the normalization.\n",
+    "\n",
"Evidence suggests that normalization provides a useful inductive bias in artificial and natural systems. However, do we need a dedicated computation that implements normalization?\n",
"\n",
    "Let's explore if ReLUs can estimate a normalization-like function. Specifically, we will see if a fully-connected one-layer network can estimate the function $y=\\frac{1}{x+\\epsilon}$.\n",
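The claim that a one-layer ReLU network can estimate $y=\frac{1}{x+\epsilon}$ has a constructive flavour: any piecewise-linear interpolant of the target can be written as a layer of ReLU units with fixed biases. A minimal sketch, not from the tutorial; `relu_interpolant`, the knot grid, and $\epsilon = 0.1$ are all assumptions for illustration:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(g, knots):
    """One-layer ReLU approximation of g on [knots[0], knots[-1]]:
    g_hat(x) = g(knots[0]) + sum_i a_i * relu(x - knots[i]).
    Each unit has bias -knots[i]; weights a_i are the slope changes
    needed to linearly interpolate g between consecutive knots."""
    y = g(knots)
    slopes = np.diff(y) / np.diff(knots)
    a = np.diff(slopes, prepend=0.0)   # slope change introduced at each knot
    b = knots[:-1]
    return lambda x: y[0] + relu(np.subtract.outer(x, b)) @ a

eps = 0.1                              # assumed value for illustration
g = lambda x: 1.0 / (x + eps)          # the normalization-like target
g_hat = relu_interpolant(g, np.linspace(0.5, 5.0, 60))

x_test = np.linspace(0.5, 5.0, 501)
print(np.max(np.abs(g_hat(x_test) - g(x_test))))  # small on the knot range
```

Here the biases are hand-placed rather than learned; training a fully-connected layer, as the tutorial does, searches for a similar piecewise-linear fit by gradient descent.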
12 changes: 12 additions & 0 deletions tutorials/W1D5_Microcircuits/student/W1D5_Tutorial1.ipynb
@@ -1151,6 +1151,18 @@
"interact(plot_kurtosis, theta_value = slider)"
]
},
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "execution": {}
+   },
+   "source": [
+    "<details>\n",
+    " <summary>Kurtosis value behaviour</summary>\n",
+    " You might notice that the kurtosis value first decreases (until around $\\theta = 140$) and then increases drastically (reflecting the desired sparsity property). A closer look at the formula shows that kurtosis is the expected value (average) of the standardized data values raised to the 4th power. Consequently, a data point lying within one standard deviation of the mean contributes almost nothing (a number smaller than 1 raised to the 4th power is tiny), and most of the contribution comes from extreme outliers far outside that range. Kurtosis thus measures the tailedness of the data: it is high when the contribution of the outliers outweighs that of the “simple” points (since kurtosis averages over all points). With $\\theta \\le 120$, the outliers contribute relatively little to the kurtosis.\n",
+    "</details>\n"
+   ]
+  },
{
"cell_type": "code",
"execution_count": null,
2 changes: 2 additions & 0 deletions tutorials/W1D5_Microcircuits/student/W1D5_Tutorial2.ipynb
@@ -638,6 +638,8 @@
"\n",
"$$\\hat{x} = \\frac{x}{f(||x||)}$$\n",
"\n",
+    "There are many possible choices for the specific form of the denominator here; what we want to highlight is the essentially divisive nature of the normalization.\n",
+    "\n",
"Evidence suggests that normalization provides a useful inductive bias in artificial and natural systems. However, do we need a dedicated computation that implements normalization?\n",
"\n",
    "Let's explore if ReLUs can estimate a normalization-like function. Specifically, we will see if a fully-connected one-layer network can estimate the function $y=\\frac{1}{x+\\epsilon}$.\n",
