adapt titles and weights
tpielok committed Nov 10, 2023
1 parent 4185eba commit 0546ee7
Showing 9 changed files with 17 additions and 17 deletions.
@@ -1,6 +1,6 @@
 ---
 title: "Chapter 11.08: Logistic Regression: Deep Dive"
-weight: 11015
+weight: 11008
 ---
 In this segment, we derive the gradient and Hessian of logistic regression and show that logistic regression is a convex problem. This section is presented as a **deep-dive**. Please note that there are no videos accompanying this section.

4 changes: 2 additions & 2 deletions content/chapters/11_advriskmin/11-09-classification-brier.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.08: Brier Score"
-weight: 11008
+title: "Chapter 11.09: Brier Score"
+weight: 11009
 ---
 In this section, we introduce the Brier score and derive its risk minimizer and optimal constant model. We further discuss the connection between Brier score minimization and tree splitting according to the Gini index.

@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.09: Advanced Classification Losses"
-weight: 11009
+title: "Chapter 11.10: Advanced Classification Losses"
+weight: 11010
 ---
 In this section, we introduce and discuss the following advanced classification losses: (squared) hinge loss, \\(L2\\) loss on scores, exponential loss, and AUC loss.

@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.10: Optimal constant model for the empirical log loss risk"
-weight: 11010
+title: "Chapter 11.11: Optimal constant model for the empirical log loss risk"
+weight: 11011
 ---
 In this segment, we explore the derivation of the optimal constant model concerning the empirical log loss risk. This section is presented as a **deep-dive**. Please note that there are no videos accompanying this section.

4 changes: 2 additions & 2 deletions content/chapters/11_advriskmin/11-12-max-likelihood-l2.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.11: Maximum Likelihood Estimation vs Empirical Risk Minimization I"
-weight: 11011
+title: "Chapter 11.12: Maximum Likelihood Estimation vs Empirical Risk Minimization I"
+weight: 11012
 ---
 We discuss the connection between maximum likelihood estimation and risk minimization, then demonstrate the correspondence between a Gaussian error distribution and \\(L2\\) loss.

4 changes: 2 additions & 2 deletions content/chapters/11_advriskmin/11-13-max-likelihood-other.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.12: Maximum Likelihood Estimation vs Empirical Risk Minimization II"
-weight: 11012
+title: "Chapter 11.13: Maximum Likelihood Estimation vs Empirical Risk Minimization II"
+weight: 11013
 ---
 We discuss the connection between maximum likelihood estimation and risk minimization for further losses (\\(L1\\) loss, Bernoulli loss).

4 changes: 2 additions & 2 deletions content/chapters/11_advriskmin/11-14-losses-properties.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.13: Properties of Loss Functions"
-weight: 11013
+title: "Chapter 11.14: Properties of Loss Functions"
+weight: 11014
 ---
 We discuss the concept of robustness, the analytical and functional properties of loss functions, and how these properties may influence the convergence of optimizers.

@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.14: Bias Variance Decomposition"
-weight: 11014
+title: "Chapter 11.15: Bias Variance Decomposition"
+weight: 11015
 ---
 We discuss how to decompose the generalization error of a learner.

@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.15: Bias Variance Decomposition: Deep Dive"
-weight: 11015
+title: "Chapter 11.16: Bias Variance Decomposition: Deep Dive"
+weight: 11016
 ---
 In this segment, we discuss details of the decomposition of the generalization error of a learner. This section is presented as a **deep-dive**. Please note that there are no videos accompanying this section.

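Every change above is confined to each page's YAML front matter: the chapter number in `title` and the corresponding `weight`. As a minimal sketch (assuming the site is built with Hugo, where pages within a section are listed by ascending `weight`), the front matter of `content/chapters/11_advriskmin/11-09-classification-brier.md` after this commit looks like:

```yaml
---
title: "Chapter 11.09: Brier Score"
# The weights appear to follow chapter * 1000 + section, so 11.09 -> 11009;
# lower weights sort earlier, keeping the page order aligned with the titles.
weight: 11009
---
```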
