
Commit

Some new literature
michael-hoss committed Jun 26, 2023
1 parent f58c975 commit 0b91272
Showing 2 changed files with 9 additions and 2 deletions.
2 changes: 1 addition & 1 deletion literature
9 changes: 8 additions & 1 deletion rolling_review_updates.tex
@@ -356,6 +356,8 @@ \subsubsection{Relevance for Vehicle Safety}
\subsection{Object-Level Data-Driven Sensor Modeling}
\label{sec:sensor_modeling}

Lindenmaier et al. \cite{Lindenmaier2023sensor} perform object-level data-driven sensor modeling, including the modeling of existence uncertainty.
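As an illustration of what such an existence uncertainty can look like (our notation, not necessarily the formulation of \cite{Lindenmaier2023sensor}), an object-level sensor model may output an existence probability
\begin{equation*}
    p_{\exists}(x, s) = \Pr\bigl(\text{object appears in the sensor's object list} \mid \text{ground-truth state } x,\ \text{sensor pose } s\bigr),
\end{equation*}
which a data-driven model learns from recorded pairs of reference objects and sensor object lists.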

Innes and Ramamoorthy \cite{Innes2022testing} use a surrogate model of the OuT (``sensor model'' in this review) to forecast failures of the perception-control system in simulation.
By specifically sampling important scenarios in the simulation, they can calculate failure probabilities with a reduced test effort.
However, a failure observed by this test method can be attributed to either the perception or the planning subsystem, or the interaction of both.
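To sketch the underlying idea in our own notation (not necessarily the exact estimator of \cite{Innes2022testing}): if scenarios $x$ occur with nominal density $p(x)$ but are drawn from an importance density $q(x)$ that emphasizes critical scenarios, the failure probability can be estimated as
\begin{equation*}
    \hat{P}_{\mathrm{fail}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\bigl[\mathrm{fail}(x_i)\bigr]\, \frac{p(x_i)}{q(x_i)}, \qquad x_i \sim q,
\end{equation*}
so that rare failures are observed with far fewer simulation runs than under naive sampling from $p$.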
@@ -394,6 +396,11 @@ \section{Test Criteria and Metrics}
\label{sec:axis_criteria_metrics}

\subsection{\new{Classification of Metrics}}
\textit{\textbf{Task-oriented metrics}}, which consider the relevance for the downstream driving function, are likely the most important metric category for this review.

Madala and Avalos Gonzalez \cite{Madala2023metrics} propose new metrics for the SOTIF analysis of ML in AD. Many of their metrics are based on existing TP/FP/FN-based metrics, but are more fine-granular to reveal information that traditional metrics obscure (e.g., at which distance, speed, ODD condition, or DNN confidence failures occur).
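For illustration (our formulation, not necessarily theirs), such a fine-granular metric could be a distance-conditioned recall
\begin{equation*}
    \mathrm{TPR}(d) = \frac{\mathrm{TP}(d)}{\mathrm{TP}(d) + \mathrm{FN}(d)},
\end{equation*}
where $\mathrm{TP}(d)$ and $\mathrm{FN}(d)$ only count ground-truth objects within the distance bin $d$; analogous conditioning applies to speed, ODD condition, or DNN confidence.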


Sämann et al. \cite{Saemann2020strategy} propose a classification scheme for safety-relevant DNN perception metrics:
\begin{itemize}
\item Prediction quality metrics (PQM): typical metrics for perception algorithm benchmarking, see \ref{sec:perc_algo_benchm_metrics}.
@@ -402,7 +409,7 @@ \subsection{\new{Classification of Metrics}}
\item Data metrics: quality of the data sets. They describe coverage, reference data quality, realism of simulated data, etc.
\end{itemize}

Additionally, \textit{\textbf{task-oriented metrics}}, which consider the relevance for the downstream driving function, are likely the most important metric category for this review.


\subsection{Specification of Requirements and Criteria}
\label{sec:requirements}
