## 4.8 Summary

Slides

Notes

General definitions (see the code sketch after this list):

* **Metric:** a single number that describes the performance of a model
* **Accuracy:** the fraction of correct predictions; it can be misleading
* **Precision and recall:** less misleading than accuracy when there is class imbalance
* **ROC curve:** a way to evaluate the performance of a model at all thresholds; it is safe to use with imbalanced classes
* **K-Fold CV:** a more reliable estimate of performance (mean + std across folds)
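As a quick reference, here is a minimal sketch of how these metrics can be computed. It assumes scikit-learn and uses small hypothetical arrays (`y_true`, `y_score`) that are not part of the course material:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    roc_curve,
    roc_auc_score,
)

# Hypothetical ground-truth labels and predicted probabilities, for illustration only
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.55, 0.3, 0.9, 0.2])

# Hard predictions at the usual 0.5 threshold
y_pred = (y_score >= 0.5).astype(int)

print('accuracy: ', accuracy_score(y_true, y_pred))
print('precision:', precision_score(y_true, y_pred))
print('recall:   ', recall_score(y_true, y_pred))

# The ROC curve evaluates TPR vs. FPR at every possible threshold;
# AUROC summarizes it as a single number
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print('AUROC:    ', roc_auc_score(y_true, y_score))
```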

In brief, this week was about different metrics for evaluating a binary classifier. These measures included accuracy, the confusion table, precision, recall, ROC curves (TPR, FPR, the random model, and the ideal model), and AUROC. We also talked about cross-validation as a different way to estimate the performance of the model and to tune its parameters.
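As an illustration of that last point, below is a minimal K-Fold cross-validation sketch. It assumes scikit-learn and substitutes a synthetic dataset for the course data, reporting the mean and standard deviation of AUC across the folds:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

# Synthetic data as a stand-in for the course dataset
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

kfold = KFold(n_splits=5, shuffle=True, random_state=1)
scores = []

for train_idx, val_idx in kfold.split(X):
    # Train a fresh model on this fold's training split
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])

    # Evaluate AUC on the held-out validation split
    y_val_score = model.predict_proba(X[val_idx])[:, 1]
    scores.append(roc_auc_score(y[val_idx], y_val_score))

# Mean and standard deviation across folds give a more reliable estimate
print(f'AUC: {np.mean(scores):.3f} +- {np.std(scores):.3f}')
```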

The code for this project is available in this Jupyter notebook.

Add notes from the video (PRs are welcome)

> ⚠️ The notes are written by the community.
> If you see an error here, please create a PR with a fix.

Navigation