move hierarchichal SBI over ensembles to SBI section #174

Merged: 1 commit, Jul 20, 2023
HEPML.tex (2 additions, 2 deletions)
@@ -177,7 +177,7 @@
\\\textit{These approaches learn the density or perform generative modeling using transformer-based networks.}
\item \textbf{Physics-inspired}~\cite{Andreassen:2018apy,Andreassen:2019txo,1808876,Lai:2020byl,Barenboim:2021vzh}
\\\textit{A variety of methods have been proposed to use machine learning tools (e.g. neural networks) combined with physical components.}
-\item \textbf{Mixture Models}~\cite{Chen:2020uds,Burton:2021tsd,Graziani:2021vai,Liu:2022dem,Heinrich:2023bmt}
+\item \textbf{Mixture Models}~\cite{Chen:2020uds,Burton:2021tsd,Graziani:2021vai,Liu:2022dem}
\\\textit{A mixture model is a superposition of simple probability densities. For example, a Gaussian mixture model is a sum of normal probability densities. Mixture density networks are mixture models where the coefficients in front of the constituent densities as well as the density parameters (e.g. mean and variances of Gaussians) are parameterized by neural networks.}
\item \textbf{Phase space generation}~\cite{Bendavid:2017zhk,Bothmann:2020ywa,Gao:2020zvv,Gao:2020vdv,Klimek:2018mza,Carrazza:2020rdn,Nachman:2020fff,Chen:2020nfb,Verheyen:2020bjw,Backes:2020vka,Danziger:2021eeg,Yoon:2020zmb,Maitre:2022xle,Jinno:2022sbr,Heimel:2022wyj,Renteria-Estrada:2023buo,Singh:2023yvj}
\\\textit{Monte Carlo event generators integrate over a phase space that needs to be generated efficiently and this can be aided by machine learning methods.}
@@ -191,7 +191,7 @@
\item \textbf{Simulation-based (`likelihood-free') Inference}
\\\textit{Likelihood-based inference is the case where $p(x|\theta)$ is known and $\theta$ can be determined by maximizing the probability of the data. In high energy physics, $p(x|\theta)$ is often not known analytically, but it is often possible to sample from the density implicitly using simulations.}
\begin{itemize}
-\item \textbf{Parameter estimation}~\cite{Andreassen:2019nnm,Stoye:2018ovl,Hollingsworth:2020kjg,Brehmer:2018kdj,Brehmer:2018eca,Brehmer:2019xox,Brehmer:2018hga,Cranmer:2015bka,Andreassen:2020gtw,Coogan:2020yux,Flesher:2020kuy,Bieringer:2020tnw,Nachman:2021yvi,Chatterjee:2021nms,NEURIPS2020_a878dbeb,Mishra-Sharma:2021oxe,Barman:2021yfh,Bahl:2021dnc,Arganda:2022qzy,Kong:2022rnd,Arganda:2022zbs,Rizvi:2023mws}
+\item \textbf{Parameter estimation}~\cite{Andreassen:2019nnm,Stoye:2018ovl,Hollingsworth:2020kjg,Brehmer:2018kdj,Brehmer:2018eca,Brehmer:2019xox,Brehmer:2018hga,Cranmer:2015bka,Andreassen:2020gtw,Coogan:2020yux,Flesher:2020kuy,Bieringer:2020tnw,Nachman:2021yvi,Chatterjee:2021nms,NEURIPS2020_a878dbeb,Mishra-Sharma:2021oxe,Barman:2021yfh,Bahl:2021dnc,Arganda:2022qzy,Kong:2022rnd,Arganda:2022zbs,Rizvi:2023mws,Heinrich:2023bmt}
\\\textit{This can also be viewed as a regression problem, but there the goal is typically to do maximum likelihood estimation in contrast to directly minimizing the mean squared error between a function and the target.}
\item \textbf{Unfolding}~\cite{Andreassen:2019cjw,Datta:2018mwd,Bellagente:2019uyp,Gagunashvili:2010zw,Glazov:2017vni,Martschei:2012pr,Lindemann:1995ut,Zech2003BinningFreeUB,1800956,Vandegar:2020yvw,Howard:2021pos,Baron:2021vvl,Andreassen:2021zzk,Komiske:2021vym,H1:2021wkz,Arratia:2021otl,Wong:2021zvv,Arratia:2022wny,Backes:2022vmn,Chan:2023tbf,Shmakov:2023kjj}
\\\textit{This is the task of removing detector distortions. In contrast to parameter estimation, the goal is not to infer model parameters, but instead, the undistorted phase space probability density. This is often also called deconvolution.}
README.md (1 addition, 1 deletion)
@@ -1128,7 +1128,6 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A
* [Mixture Density Network Estimation of Continuous Variable Maximum Likelihood Using Discrete Training Samples](https://arxiv.org/abs/2103.13416)
* [A Neural-Network-defined Gaussian Mixture Model for particle identification applied to the LHCb fixed-target programme](https://arxiv.org/abs/2110.10259)
* [Geometry-aware Autoregressive Models for Calorimeter Shower Simulations](https://arxiv.org/abs/2212.08233)
-* [Hierarchical Neural Simulation-Based Inference Over Event Ensembles](https://arxiv.org/abs/2306.12584)

### Phase space generation

@@ -1283,6 +1282,7 @@ This review was built with the help of the HEP-ML community, the [INSPIRE REST A
* [New Machine Learning Techniques for Simulation-Based Inference: InferoStatic Nets, Kernel Score Estimation, and Kernel Likelihood Ratio Estimation](https://arxiv.org/abs/2210.01680)
* [Machine-Learned Exclusion Limits without Binning](https://arxiv.org/abs/2211.04806)
* [Learning Likelihood Ratios with Neural Network Classifiers](https://arxiv.org/abs/2305.10500)
+* [Hierarchical Neural Simulation-Based Inference Over Event Ensembles](https://arxiv.org/abs/2306.12584)

### Unfolding

docs/index.md (1 addition, 1 deletion)
Expand Up @@ -1285,7 +1285,6 @@ const expandElements = shouldExpand => {
* [Mixture Density Network Estimation of Continuous Variable Maximum Likelihood Using Discrete Training Samples](https://arxiv.org/abs/2103.13416)
* [A Neural-Network-defined Gaussian Mixture Model for particle identification applied to the LHCb fixed-target programme](https://arxiv.org/abs/2110.10259)
* [Geometry-aware Autoregressive Models for Calorimeter Shower Simulations](https://arxiv.org/abs/2212.08233)
-* [Hierarchical Neural Simulation-Based Inference Over Event Ensembles](https://arxiv.org/abs/2306.12584)


??? example "Phase space generation"
@@ -1463,6 +1462,7 @@ const expandElements = shouldExpand => {
* [New Machine Learning Techniques for Simulation-Based Inference: InferoStatic Nets, Kernel Score Estimation, and Kernel Likelihood Ratio Estimation](https://arxiv.org/abs/2210.01680)
* [Machine-Learned Exclusion Limits without Binning](https://arxiv.org/abs/2211.04806)
* [Learning Likelihood Ratios with Neural Network Classifiers](https://arxiv.org/abs/2305.10500)
+* [Hierarchical Neural Simulation-Based Inference Over Event Ensembles](https://arxiv.org/abs/2306.12584)


??? example "Unfolding"
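The "Mixture Models" bullet in the HEPML.tex diff above defines a mixture model as a superposition of simple probability densities, e.g. a Gaussian mixture as a sum of normal densities. As a minimal illustrative sketch (not code from this repository; all names here are hypothetical), a two-component Gaussian mixture density in NumPy:

```python
import numpy as np

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution evaluated at x."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def mixture_density(x, weights, means, stds):
    """Gaussian mixture: a weighted superposition of normal densities.

    The weights must be non-negative and sum to one, so the mixture
    is itself a normalized probability density.
    """
    weights, means, stds = map(np.asarray, (weights, means, stds))
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return sum(w * gaussian_pdf(x, m, s)
               for w, m, s in zip(weights, means, stds))

# Two-component example; like any density, the mixture integrates to one.
xs = np.linspace(-10.0, 10.0, 10001)
density = mixture_density(xs, weights=[0.3, 0.7],
                          means=[-2.0, 1.5], stds=[0.5, 1.0])
integral = np.sum(density) * (xs[1] - xs[0])  # Riemann sum, approx. 1.0
```

In a mixture density network, as described in that bullet, `weights`, `means`, and `stds` would instead be produced by a neural network conditioned on the input features.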