From 19906a9b55b539fe09e3c33db86f0f9db73f3839 Mon Sep 17 00:00:00 2001
From: Jonas Rembser
Date: Mon, 4 Mar 2024 10:33:39 +0100
Subject: [PATCH] [RF] Mention in the docs that the new CPU eval backend is the
 default

---
 roofit/roofitcore/src/RooAbsPdf.cxx | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/roofit/roofitcore/src/RooAbsPdf.cxx b/roofit/roofitcore/src/RooAbsPdf.cxx
index 282938551ff5b5..830613defca3dc 100644
--- a/roofit/roofitcore/src/RooAbsPdf.cxx
+++ b/roofit/roofitcore/src/RooAbsPdf.cxx
@@ -853,18 +853,19 @@ double RooAbsPdf::extendedTerm(RooAbsData const& data, bool weightSquared, bool
  * `EvalBackend(std::string const&)` Choose a likelihood evaluation backend:
  *
 * <table>
 * <tr><th> Backend <th> Description
- * <tr><td> **legacy** - *default* <td> The original likelihood evaluation method.
- *   Evaluates the PDF for each single data entry at a time before summing the negative log probabilities.
- *   This is the default if `EvalBackend()` is not passed.
- * <tr><td> **cpu** <td> New vectorized evaluation mode, using faster math functions and auto-vectorisation.
- *   If all RooAbsArg objects in the model support it, likelihood computations are 2 to 10 times faster,
- *   unless your dataset is so small that the vectorization is not worth it.
- *   The relative difference of the single log-likelihoods w.r.t. the legacy mode is usually better than \f$10^{-12}\f$,
+ * <tr><td> **cpu** - *default* <td> New vectorized evaluation mode, using faster math functions and auto-vectorisation.
+ *   Since ROOT 6.32, this is the default if `EvalBackend()` is not passed, succeeding the **legacy** backend.
+ *   If all RooAbsArg objects in the model support vectorized evaluation,
+ *   likelihood computations are 2 to 10 times faster than with the **legacy** backend,
+ *   unless your dataset is so small that the vectorization is not worth it.
+ *   The relative difference of the single log-likelihoods with respect to the legacy mode is usually better than \f$10^{-12}\f$,
 *   and for fit parameters it's usually better than \f$10^{-6}\f$. In past ROOT releases, this backend could be activated with the now deprecated `BatchMode()` option.
 * <tr><td> **cuda** <td> Evaluate the likelihood on a GPU that supports CUDA.
 *   This backend re-uses code from the **cpu** backend, but compiled in CUDA kernels.
 *   Hence, the results are expected to be identical, modulo some numerical differences that can arise from the different order in which the GPU sums the log probabilities.
 *   This backend can drastically speed up the fit if all RooAbsArg objects in the model support it.
+ * <tr><td> **legacy** <td> The original likelihood evaluation method.
+ *   Evaluates the PDF for each single data entry at a time before summing the negative log probabilities.
 * <tr><td> **codegen** <td> **Experimental** - Generates and compiles minimal C++ code for the NLL on-the-fly and wraps it in the returned RooAbsReal.
 *   Also generates and compiles the code for the gradient using Automatic Differentiation (AD) with [Clad](https://github.com/vgvassilev/clad).
 *   This analytic gradient is passed to the minimizer, which can result in significant speedups for many-parameter fits,