fixed info theo sections and ordering and aligned deep dive names
chriskolb committed Jan 15, 2024
1 parent 9b472ea commit ffc0a95
Showing 11 changed files with 54 additions and 12 deletions.
2 changes: 1 addition & 1 deletion content/chapters/02_supervised_regression/02-02-ols.md
@@ -1,5 +1,5 @@
 ---
-title: "Chapter 02.02: Deep Dive: Proof OLS Regression"
+title: "Chapter 02.02: Proof OLS Regression: Deep Dive"
 weight: 2002
 ---
 In this section, we provide you with a proof for the ordinary least squares (OLS) method.
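
For reference, the statement presumably proved in this deep dive is the standard OLS solution for the linear model $y = X\theta + \epsilon$ (notation assumed here, not taken from the slides):

$$\hat{\theta} = \arg\min_{\theta} \|y - X\theta\|_2^2 = (X^\top X)^{-1} X^\top y,$$

valid whenever $X^\top X$ is invertible.
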
2 changes: 1 addition & 1 deletion content/chapters/13_information_theory/13-01-entropy.md
@@ -1,5 +1,5 @@
 ---
-title: "Chapter 13.01: Entropy"
+title: "Chapter 13.01: Entropy I"
 weight: 13001
 ---
 We introduce entropy, which expresses the expected information for discrete random variables, as a central concept in information theory.
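
For context, the central quantity of Chapter 13.01 is the entropy of a discrete random variable $X$ with pmf $p$, in its standard form (notation assumed):

$$H(X) = -\sum_{x} p(x) \log p(x),$$

i.e., the expected information content $-\log p(X)$ under $p$.
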
@@ -1,5 +1,5 @@
 ---
-title: "Chapter 13.02: Entropy"
+title: "Chapter 13.02: Entropy II"
 weight: 13002
 ---
 We continue our discussion about entropy and introduce joint entropy, the uniqueness theorem and the maximum entropy principle.
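
As a reminder of the standard definitions behind this summary (notation assumed): the joint entropy of $(X, Y)$ and the maximum-entropy bound for a discrete $X$ with $k$ outcomes are

$$H(X, Y) = -\sum_{x, y} p(x, y) \log p(x, y), \qquad H(X) \le \log k,$$

with equality in the bound iff $X$ is uniform, which is the discrete maximum entropy principle.
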
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 13.06: Cross-Entropy, KL and Source Coding"
-weight: 13006
+title: "Chapter 13.05: Cross-Entropy and KL"
+weight: 13005
 ---
 We introduce cross-entropy as a further information-theoretic concept and discuss the connection between entropy, cross-entropy, and Kullback-Leibler divergence.
 
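
The connection referred to here is, in its usual form (notation assumed),

$$H(p, q) = -\sum_{x} p(x) \log q(x) = H(p) + D_{KL}(p \,\|\, q),$$

so cross-entropy exceeds the entropy of $p$ by exactly the KL divergence $D_{KL}(p \,\|\, q)$.
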
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 13.07: Information Theory for Machine Learning"
-weight: 13007
+title: "Chapter 13.06: Information Theory for Machine Learning"
+weight: 13006
 ---
 In this section, we discuss how information-theoretic concepts are used in machine learning and demonstrate the equivalence of KL minimization and maximum likelihood maximization, as well as how (cross-)entropy can be used as a loss function.
 
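
A sketch of the stated equivalence, assuming an empirical data distribution $p_{\text{data}}$ and a parametric model $q_\theta$ (notation not taken from the slides):

$$\arg\min_{\theta} D_{KL}(p_{\text{data}} \,\|\, q_\theta) = \arg\min_{\theta} \left[-\mathbb{E}_{x \sim p_{\text{data}}} \log q_\theta(x)\right] = \arg\max_{\theta} \sum_{i=1}^{n} \log q_\theta(x^{(i)}),$$

since the entropy of $p_{\text{data}}$ does not depend on $\theta$; the middle expression is the cross-entropy loss.
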
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 13.08: Joint Entropy and Mutual Information"
-weight: 13008
+title: "Chapter 13.07: Joint Entropy and Mutual Information I"
+weight: 13007
 ---
 Information theory also provides means of quantifying relations between two random variables that extend the concept of (linear) correlation. We discuss joint entropy, conditional entropy, and mutual information in this context.
 
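
The quantities named here are linked, in standard notation (assumed), by

$$I(X; Y) = H(X) - H(X \mid Y) = H(X) + H(Y) - H(X, Y),$$

so mutual information is the reduction in uncertainty about $X$ gained by observing $Y$, and it is zero iff $X$ and $Y$ are independent.
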
15 changes: 15 additions & 0 deletions content/chapters/13_information_theory/13-08-mutual-info2.md
@@ -0,0 +1,15 @@
+---
+title: "Chapter 13.08: Joint Entropy and Mutual Information II"
+weight: 13008
+---
+Information theory also provides means of quantifying relations between two random variables that extend the concept of (linear) correlation. We discuss joint entropy, conditional entropy, and mutual information in this context.
+
+<!--more-->
+
+### Lecture video
+
+{{< video id="" >}}
+
+### Lecture slides
+
+{{< pdfjs file="https://github.com/slds-lmu/lecture_sl/raw/main/slides-pdf/slides-info-mutual-info2.pdf" >}}
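
A complementary formulation (standard, notation assumed) that ties this part back to the KL divergence of the earlier sections is

$$I(X; Y) = D_{KL}\big(p(x, y) \,\|\, p(x)\, p(y)\big),$$

the divergence between the joint distribution and the product of its marginals.
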
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 13.05: Entropy and Optimal Code Length"
-weight: 13005
+title: "Chapter 13.09: Entropy and Optimal Code Length I"
+weight: 13009
 ---
 In this section, we introduce source coding and discuss how entropy can be understood as optimal code length.
 
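
The "entropy as optimal code length" statement is usually of the following form (notation assumed, entropy in bits): for prefix-free codes of a discrete source $X$, the expected length $L$ of an optimal code satisfies

$$H(X) \le \mathbb{E}[L] < H(X) + 1,$$

with ideal code lengths $l(x) = -\log_2 p(x)$.
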
15 changes: 15 additions & 0 deletions content/chapters/13_information_theory/13-10-sourcecoding2.md
@@ -0,0 +1,15 @@
+---
+title: "Chapter 13.10: Entropy and Optimal Code Length II"
+weight: 13010
+---
+In this section, we continue our discussion on source coding and its relation to entropy.
+
+<!--more-->
+
+### Lecture video
+
+{{< video id="" >}}
+
+### Lecture slides
+
+{{< pdfjs file="https://github.com/slds-lmu/lecture_sl/raw/main/slides-pdf/slides-info-sourcecoding2.pdf" >}}
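
One standard continuation of this result (assumed here, not read off the slides): coding a source $p$ with lengths optimized for a wrong distribution $q$ costs, in expectation,

$$\mathbb{E}_{x \sim p}\left[-\log_2 q(x)\right] = H(p) + D_{KL}(p \,\|\, q)$$

bits per symbol (with $H$ and $D_{KL}$ in bits), which is the coding interpretation of cross-entropy and KL divergence.
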
12 changes: 12 additions & 0 deletions content/chapters/13_information_theory/13-11-mi-deepdive.md
@@ -0,0 +1,12 @@
+---
+title: "Chapter 13.11: MI under Reparametrization: Deep Dive"
+weight: 13011
+---
+In this deep dive, we discuss the invariance of MI under certain reparametrizations.
+
+<!--more-->
+
+
+### Lecture slides
+
+{{< pdfjs file="https://github.com/slds-lmu/lecture_sl/raw/main/slides-pdf/slides-info-mi-deepdive.pdf" >}}
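
The invariance discussed in this deep dive is typically of the form (assuming bijective, e.g. strictly monotone, transformations $f$ and $g$; notation assumed)

$$I\big(f(X); g(Y)\big) = I(X; Y),$$

i.e., mutual information is unchanged under separate bijective reparametrizations of each variable, in contrast to (linear) correlation.
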
2 changes: 1 addition & 1 deletion content/chapters/15_regularization/15-09-lasso-deep.md
@@ -1,5 +1,5 @@
 ---
-title: "Chapter 15.09: Soft-thresholding and L1 regularization deep-dive"
+title: "Chapter 15.09: Soft-thresholding and L1 regularization: Deep Dive"
 weight: 15009
 ---
 In this section, we prove the previously stated proposition regarding soft-thresholding and L1 regularization.
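
The proposition proved here presumably involves the soft-thresholding operator, which in standard notation reads

$$S_{\lambda}(z) = \operatorname{sign}(z) \, \max(|z| - \lambda, 0),$$

the closed-form minimizer of the one-dimensional Lasso problem $\min_{\theta} \tfrac{1}{2}(z - \theta)^2 + \lambda |\theta|$.
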
