add alternative roc inclass
giuseppec committed Nov 27, 2024
1 parent 4f7e025 commit 03ffcac
Showing 2 changed files with 84 additions and 0 deletions.
69 changes: 69 additions & 0 deletions exercises/evaluation-tex/ex_rnw/ex_roc-metrics-allocation.Rnw
@@ -0,0 +1,69 @@

\begin{enumerate}
\item \textbf{Confusion Matrix Completion and Explanation}
\begin{enumerate}
\item Below is an empty confusion matrix for a binary classification problem. Assign the correct terms (\( \text{TP} \), \( \text{FP} \), \( \text{FN} \), \( \text{TN} \)) to fields (A), (B), (C), (D) and explain what each term represents. % and how it is used to evaluate model performance.

\[
\begin{array}{c|c|c|}
\multicolumn{1}{c}{} & \textbf{Actual Positive} & \textbf{Actual Negative} \\
\hline
\textbf{Predicted Positive} & (A) & (B) \\
\hline
\textbf{Predicted Negative} & (C) & (D) \\
\hline
\end{array}
\]


\end{enumerate}

\item Below is a table that incorrectly assigns 11 metrics derived from the confusion matrix to formulas and descriptions.
The table contains the following columns:

\begin{itemize}
\item Metric names (lettered A--O): commonly used names (including alternative names) for each metric.
\item Formulas (lettered a--o): expressions in terms of \( \text{TP} \), \( \text{FP} \), \( \text{FN} \), \( \text{TN} \).
\item Descriptions (numbered 1--11): brief descriptions of metrics derived from the confusion matrix.
\end{itemize}
Your task is to:
\begin{itemize}
\item Correctly match metric names, formulas, and descriptions in a new table.
\item Identify metric names that are synonyms (i.e., names referring to the same metric).
\item Identify any formulas that do not correspond to a metric or description and list them separately.
\end{itemize}
% (An illustrative R sketch of these metrics, using hypothetical counts, follows the exercise below.)
%Below you find a table that wrongly assigns 11 ROC metrics to their formulas and their short descriptions of what they measure. Your task is to create new table by correctly assigning the metrics to their corresponding formulas and descriptions. %, then assign the correct formula to the metric.
%Start with identifying which metric names refer to the same metric and proceed with identifying formulas that cannot be assigned to a suitable metric name (and description).




\begin{tabular}{|l|l|p{6cm}|}
\hline
\textbf{Metric Name} & \textbf{Formula} & \textbf{Description} \\ \hline
A) True Positive Rate (TPR) & a) \( \frac{\text{FN}}{\text{TP} + \text{FN}} \) & 1) Proportion of actual positives correctly identified. \\ \hline
B) Recall & b) \( \frac{\text{TP}}{\text{TP} + \text{FN}} \) & 2) Proportion of actual negatives correctly identified. \\ \hline
C) Sensitivity & c) \( \frac{\text{TN}}{\text{TN} + \text{FP}} \) & 3) Proportion of positive predictions that are correct. \\ \hline
D) True Negative Rate (TNR) & d) \( \frac{\text{TP}}{\text{TP} + \text{FP}} \) & 4) Proportion of positive predictions that are incorrect. \\ \hline
E) Specificity & e) \( \frac{\text{FP}}{\text{TP} + \text{FP}} \) & 5) Proportion of actual negatives incorrectly classified as positive. \\ \hline
F) Precision & f) \( \frac{\text{FP}}{\text{FP} + \text{TN}} \) & 6) Overall proportion of correct predictions. \\ \hline
G) Positive Predictive Value (PPV) & g) \( \frac{\text{TP} + \text{TN}}{\text{TP} + \text{FP} + \text{FN} + \text{TN}} \) & 7) Combines precision and recall using their harmonic mean. \\ \hline
H) False Discovery Rate (FDR) & h) \( \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \) & 8) Proportion of actual positives in the dataset. \\ \hline
I) False Positive Rate (FPR) & i) \( \frac{2 \cdot (\text{Precision} + \text{Recall})}{\text{Precision} \cdot \text{Recall}} \) & 9) Proportion of negative predictions that are correct. \\ \hline
J) False Negative Rate (FNR) & j) \( \frac{\text{TP} + \text{FN}}{\text{TP} + \text{FP} + \text{FN} + \text{TN}} \) & 10) Proportion of negative predictions that are incorrect. \\ \hline
K) Negative Predictive Value (NPV) & k) \( \frac{\text{TN}}{\text{TN} + \text{FN}} \) & 11) Proportion of actual positives incorrectly classified as negative. \\ \hline
L) Accuracy & l) \( \frac{\text{FN}}{\text{TN} + \text{FN}} \) & \\ \hline
M) False Omission Rate (FOR) & m) \( \frac{\text{FN}}{\text{FP} + \text{FN}} \) & \\ \hline
N) Prevalence & n) \( \frac{\text{FN} + \text{FP}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \) & \\ \hline
O) F1 Score & o) \( \frac{\text{TP} - \text{FP}}{\text{TP} + \text{FN} + \text{FP} + \text{TN}} \) & \\ \hline
\end{tabular}

% \item \textbf{Reflection Questions}
% \begin{enumerate}
% \item Why might Precision and Recall be more informative than Accuracy in imbalanced datasets?
% \item How does the F1 Score balance Precision and Recall, and why is this balance important?
% \item Why are Specificity and False Positive Rate complementary to Recall and Precision in model evaluation?
% \item Provide an example where a high False Negative Rate would be critical to address.
% \item In real-world applications, how do you decide which metrics to prioritize when evaluating model performance?
% \end{enumerate}
\end{enumerate}
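
% The chunk below is a minimal R sketch of the quantities named in this exercise,
% using made-up counts (tp, fp, fn, tn) and made-up label vectors purely for
% illustration; it is neither evaluated nor shown when the sheet is knitted.
<<roc-metrics-sketch, eval = FALSE, include = FALSE>>=
# hypothetical confusion-matrix counts
tp <- 40; fp <- 10; fn <- 5; tn <- 45
n  <- tp + fp + fn + tn

tpr  <- tp / (tp + fn)               # true positive rate = recall = sensitivity
tnr  <- tn / (tn + fp)               # true negative rate = specificity
ppv  <- tp / (tp + fp)               # positive predictive value = precision
npv  <- tn / (tn + fn)               # negative predictive value
acc  <- (tp + tn) / n                # accuracy
prev <- (tp + fn) / n                # prevalence
f1   <- 2 * ppv * tpr / (ppv + tpr)  # harmonic mean of precision and recall

# a confusion matrix can also be tabulated directly from (hypothetical) labels
actual    <- factor(c(1, 1, 0, 1, 0, 0, 1, 0), levels = c(1, 0))
predicted <- factor(c(1, 0, 0, 1, 1, 0, 1, 0), levels = c(1, 0))
table(Predicted = predicted, Actual = actual)
@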
15 changes: 15 additions & 0 deletions exercises/evaluation-tex/ic_concepts_2.Rnw
@@ -0,0 +1,15 @@
% !Rnw weave = knitr

<<setup-child, include = FALSE>>=
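# load knitr and register the parent document so this child inherits its LaTeX preamble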
library('knitr')
knitr::set_parent("../../style/preamble_ueb.Rnw")
@


\kopfic{6}{Evaluation}


\aufgabe{}{
<<child="ex_rnw/ex_roc-metrics-allocation.Rnw">>=
@
}
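
% A minimal sketch of how this sheet might be compiled on its own, assuming the
% relative paths above resolve from this directory and a LaTeX toolchain is
% available; the chunk is neither evaluated nor shown when knitting.
<<compile-sketch, eval = FALSE, include = FALSE>>=
# knit and compile this child document; set_parent() above lets it inherit the
# preamble from ../../style/preamble_ueb.Rnw when knitted standalone
knitr::knit2pdf("ic_concepts_2.Rnw")
@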
