diff --git a/content/chapters/11_advriskmin/11-08-classification-logreg-deep-dive.md b/content/chapters/11_advriskmin/11-08-classification-logreg-deep-dive.md
index 65835ca..77643fe 100644
--- a/content/chapters/11_advriskmin/11-08-classification-logreg-deep-dive.md
+++ b/content/chapters/11_advriskmin/11-08-classification-logreg-deep-dive.md
@@ -1,6 +1,6 @@
 ---
 title: "Chapter 11.08: Logistic Regression: Deep Dive"
-weight: 11015
+weight: 11008
 ---
 
 In this segment, we derive the gradient and Hessian of logistic regression and show that logistic regression is a convex problem. This section is presented as a **deep-dive**. Please note that there are no videos accompanying this section.
diff --git a/content/chapters/11_advriskmin/11-09-classification-brier.md b/content/chapters/11_advriskmin/11-09-classification-brier.md
index 49939eb..b601581 100644
--- a/content/chapters/11_advriskmin/11-09-classification-brier.md
+++ b/content/chapters/11_advriskmin/11-09-classification-brier.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.08: Brier Score"
-weight: 11008
+title: "Chapter 11.09: Brier Score"
+weight: 11009
 ---
 
 In this section, we introduce the Brier score and derive its risk minimizer and optimal constant model. We further discuss the connection between Brier score minimization and tree splitting according to the Gini index.
diff --git a/content/chapters/11_advriskmin/11-10-classification-further-losses.md b/content/chapters/11_advriskmin/11-10-classification-further-losses.md
index 4d9c403..6354c1e 100644
--- a/content/chapters/11_advriskmin/11-10-classification-further-losses.md
+++ b/content/chapters/11_advriskmin/11-10-classification-further-losses.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.09: Advanced Classification Losses"
-weight: 11009
+title: "Chapter 11.10: Advanced Classification Losses"
+weight: 11010
 ---
 
 In this section, we introduce and discuss the following advanced classification losses: (squared) hinge loss, \\(L2\\) loss on scores, exponential loss, and AUC loss.
diff --git a/content/chapters/11_advriskmin/11-11-classification-deep-dive.md b/content/chapters/11_advriskmin/11-11-classification-deep-dive.md
index 481880d..f8f4a29 100644
--- a/content/chapters/11_advriskmin/11-11-classification-deep-dive.md
+++ b/content/chapters/11_advriskmin/11-11-classification-deep-dive.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.10: Optimal constant model for the empirical log loss risk"
-weight: 11010
+title: "Chapter 11.11: Optimal constant model for the empirical log loss risk"
+weight: 11011
 ---
 
 In this segment, we explore the derivation of the optimal constant model concerning the empirical log loss risk. This section is presented as a **deep-dive**. Please note that there are no videos accompanying this section.
diff --git a/content/chapters/11_advriskmin/11-12-max-likelihood-l2.md b/content/chapters/11_advriskmin/11-12-max-likelihood-l2.md
index 3fd26c9..8e9abf2 100644
--- a/content/chapters/11_advriskmin/11-12-max-likelihood-l2.md
+++ b/content/chapters/11_advriskmin/11-12-max-likelihood-l2.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.11: Maximum Likelihood Estimation vs Empirical Risk Minimization I"
-weight: 11011
+title: "Chapter 11.12: Maximum Likelihood Estimation vs Empirical Risk Minimization I"
+weight: 11012
 ---
 
 We discuss the connection between maximum likelihood estimation and risk minimization, then demonstrate the correspondence between a Gaussian error distribution and \\(L2\\) loss.
diff --git a/content/chapters/11_advriskmin/11-13-max-likelihood-other.md b/content/chapters/11_advriskmin/11-13-max-likelihood-other.md
index 506c6c0..aa1e0a3 100644
--- a/content/chapters/11_advriskmin/11-13-max-likelihood-other.md
+++ b/content/chapters/11_advriskmin/11-13-max-likelihood-other.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.12: Maximum Likelihood Estimation vs Empirical Risk Minimization II"
-weight: 11012
+title: "Chapter 11.13: Maximum Likelihood Estimation vs Empirical Risk Minimization II"
+weight: 11013
 ---
 
 We discuss the connection between maximum likelihood estimation and risk minimization for further losses (\\(L1\\) loss, Bernoulli loss).
diff --git a/content/chapters/11_advriskmin/11-14-losses-properties.md b/content/chapters/11_advriskmin/11-14-losses-properties.md
index bd9ff52..174c246 100644
--- a/content/chapters/11_advriskmin/11-14-losses-properties.md
+++ b/content/chapters/11_advriskmin/11-14-losses-properties.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.13: Properties of Loss Functions"
-weight: 11013
+title: "Chapter 11.14: Properties of Loss Functions"
+weight: 11014
 ---
 
 We discuss the concept of robustness, analytical and functional properties of loss functions and how they may influence the convergence of optimizers.
diff --git a/content/chapters/11_advriskmin/11-15-bias-variance-decomposition.md b/content/chapters/11_advriskmin/11-15-bias-variance-decomposition.md
index bab0275..397255e 100644
--- a/content/chapters/11_advriskmin/11-15-bias-variance-decomposition.md
+++ b/content/chapters/11_advriskmin/11-15-bias-variance-decomposition.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.14: Bias Variance Decomposition"
-weight: 11014
+title: "Chapter 11.15: Bias Variance Decomposition"
+weight: 11015
 ---
 
 We discuss how to decompose the generalization error of a learner.
diff --git a/content/chapters/11_advriskmin/11-16-bias-variance-deep-dive.md b/content/chapters/11_advriskmin/11-16-bias-variance-deep-dive.md
index a33d8cc..9cd0f25 100644
--- a/content/chapters/11_advriskmin/11-16-bias-variance-deep-dive.md
+++ b/content/chapters/11_advriskmin/11-16-bias-variance-deep-dive.md
@@ -1,6 +1,6 @@
 ---
-title: "Chapter 11.15: Bias Variance Decomposition: Deep Dive"
-weight: 11015
+title: "Chapter 11.16: Bias Variance Decomposition: Deep Dive"
+weight: 11016
 ---
 
 In this segment, we discuss details of the decomposition of the generalization error of a learner. This section is presented as a **deep-dive**. Please note that there are no videos accompanying this section.