diff --git a/search.json b/search.json
index 593a5b5d..45367f4e 100644
--- a/search.json
+++ b/search.json
@@ -4,7 +4,7 @@
"href": "NEWS.html",
"title": "News",
"section": "",
- "text": "coef_rename() gets a poly=TRUE argument to rename poly(x, 2)-style coefficients. Thanks to @mccarthy-m-g for code contribution #778.\nget_gof(): logLik column converted to numeric for consistent types. Issue 649 reported on the mice Github.\nkableExtra update the siunitx commands for d columns.\nkableExtra escapes footnotes in HTML when output=\"kableExtra\"). Thanks to @dmurdoch and @michaelherndon97 for report #793.\nNew fmt_equivalence() function to implement the rounding suggestion of Astier & Wolak (2024). Thanks to Nicolas Astier for code prototype.\nFix partial match warnings for some datasummary_*() tables. No change in behavior. Thanks to @fkohrt for report #804.\n\nBugs:\n\nStars footnotes get properly escaped in some LaTeX configurations. Thanks to @etiennebacher for report #798.\ndatasummary_*() functions can be called as arguments in another datasummary_*() arguments, like add_columns. Thanks to @mronkko for report #799\n\n\n\n\n\nDocumentation improvements\nWarning when users use caption instead of title. Inconsistency with respect to tinytable.\nImproved documentation for title argument.\nhtest workaround.\n\nBugs:\n\ndatasummary_correlation() respects the escape argument. Issue #772.\ndatasummary_correlation() supports data.table objects. Issue #771.\n\n\n\n\nNew:\n\nmodelsummary() gets a gof_function argument which accepts functions to extract custom information from models.\nflextable: Support spanning column headers\ndatasummary_correlation() gets a star argument.\ndatasummary_correlation() accepts objects produced by the correlation package.\ndatasummary_balance(): formula can now include variables on the left-hand side to indicate the subset of columns to summarize: datasummary_balance(mpg + hp ~ am, data = mtcars) Thanks to @etiennebacher for feature request #751.\nUnnecessary text printed to screen on some F sta computations is now suppressed.\nUpdate to tinytable 0.3.0\n\nBugs:\n\nescape argument not respected in datasummary_df(). 
Thanks to @adamaltmejd for report #740\ndatasummary_correlation() supports data.table. Thanks to volatilehead on Twitter for report #737.\nAccepts named estimate argument when using shape and statistics are horizontal. Thanks to @iago-pssjd for report #745.\nLabelled values but no label for variable broke datasummary(). Thanks to @marklhc for report #752.\ncoef_map does not work when there is a group. Thanks to @mccarthy-m-g for report #757.\nkableExtra: fix spanning column headers when using the shape argument.\nMultiple footnotes and line breaks in footnotes are now allowed in tinytable output. Thanks to\n\n\n\n\nMAJOR BREAKING CHANGE: The default output format is now tinytable instead of kableExtra. Learn more about tinytable here:\nhttps://vincentarelbundock.github.io/tinytable/\nTo revert to the previous behavior persistently, users can call:\nlibrary(modelsummary) config_modelsummary(factory_default = “kableExtra”)\nOther breaking changes:\n\nThe statistic_override argument was replaced by vcov over 1 year ago, with appropriate deprecation warnings. It is now fully removed.\nThe group argument was replaced by shape several releases ago. It is now fully removed.\ndatasummary_skim()\n\nhistograms are only available with the tinytable backend. This allows a lot of code simplification and more customization.\nThe order of arguments type and output is switched for consistency with other functions.\nhistogram argument is deprecated.\n\n\nNew features:\n\ndatasummary_skim():\n\nNew type=\"all\" by default to display both numeric and categorical variables in a single table with distinct panels. 
This feature is only available with the tinytable backend (default).\nby argument allows summarizing numeric variables by group.\nfun_numeric argument accepts a list of functions to control the summary columns.\n\nmodelsummary():\n\nstatistic and estimate can be specified as named vectors to control the names of statistics when displayed in different columns using the shape argument. (Thanks to @mps9506 for bug report #722)\nmodelsummary(panels, shape = \"cbind\") automatically adds column-spanning labels when panels is a named nested list of models.\n\nconfig_modelsummary() gets a startup_message argument to silence the startup message persistently.\nImproved documentation and vignettes, providing clearer instructions and examples.\nUpdated tests and snapshots to ensure reliability and consistency across changes.\n\nBug fixes:\n\nFixed Issue #399: datasummary_balance() siunitx formatting.\nFixed Issue #782: Useless warning in some modelplot() calls. Thanks to @iago-pssjd for the report and @florence-laflamme for the fix.\nAddressed various bugs and made optimizations for better performance and user experience.\n\n\n\n\n\ntinytable supports histograms in datasummary_skim()\nconfig_modelsummary() supports tinytable factory.\n\n\n\n\n\nSupport the tinytable package as an output format (“factory”): https://vincentarelbundock.github.io/tinytable/\nQuarto: md output format is recognized.\noptions(modelsummary_factory_default) is respected, even in qmd->md documents.\n\nBugs:\n\nSome omitted coefficients with I() operator in formulas. Issue #693.\n\n\n\n\nMisc:\n\nDuplicate values in shape groups are removed automatically for cleaner labels.\n“Title” line no longer indented in markdown tables. Thanks to Ryan Briggs for report #671.\n\nBugs:\n\nSmall p values were not displayed properly in HTML output using kableExtra. 
Issue #669.\n\n\n\n\nNew:\n\nMinimal support for Typst output, with auto-detection in Quarto documents.\nstrip argument in dvnames.\ns.value statistic is now available whenever p.value is available. See Greenland (2019).\ndatasummary_skim() now includes histograms in gt tables.\n\nBugs:\n\nGOF term names get escaped in LaTeX. Thanks to @shreyasgm for reviving Issue #546.\nConflict with furrr generated errors for some models. Thanks to @sammo3182 for Issue #647.\n\n\n\n\nNew:\n\nfmt_sci() can now be used in the fmt argument for rounding with scientific notation.\n\nBugs:\n\nGroup separators respect add_rows with shape=\"rbind\". Thanks to @lrose1 for Report #626.\nBad column with horizontal models in shape and grouped estimates. Thanks to @iago-pssjd for Report #631.\ncoef_rename=TRUE bug with grouped estimates. Thanks to @iago-pssjd for Report #631.\nUpstream issue #881 in parameters meant that vcov was no longer used for confidence intervals.\n\n\n\n\n\nBuilt-in support for markdown tables.\nPackage no longer depends on kableExtra. Recommends an additional install for other formats.\nPersistent configuration of default output format: config_modelsummary(factory_default = \"gt\")\nshape = \"rcollapse\" and shape = \"rbind\"\nglance_custom() can drop GOF by assigning NA: https://stackoverflow.com/questions/75215355/assigning-different-coefficient-names-and-goodness-of-fit-statistics-to-differen\nWhen a statistic is not available, modelsummary prints an empty cell instead of returning an error.\n“\\label{tab:something}” works in title even when escape=TRUE\nMultiple fixest_multi objects supported.\noptions(modelsummary_future = FALSE) disables future parallelism.\n\nBug fixes:\n\nstatistic=NULL is now respected when shape=\"rbind\". Thanks to Panos Mavros for report #620.\nget_estimates() supports vcov string shortcuts and formulas. 
Thanks to @ethans-carl for report #605.\nQuarto and Rmarkdown documents include situnix in header automatically for decimal alignement with align=\"ddd\"\nescape is now respected by modelsummary with shape=\"rbind\". Thanks to @chickymonkeys for report #622.\n\n\n\n\nBreaking change:\n\nThe default column label style in modelsummary() has changed from “Model 1” to “(1)”. The benefits are: labels are no longer in English by default; use less horizontal space; eliminate the “Model” redundancy. Unfortunately, this could break code in some edge cases where users rely on column names to manipulate tables. The old behavior can be restored by calling: options(modelsummary_model_labels=\"model\")\n\nNew features:\n\nshape=\"rbind\" to stack multiple regression tables and create “panels” with labelled groups of models.\nfmt: new helper functions for different formatting styles\n\nfmt = fmt_decimal(2): decimal digits\nfmt = fmt_decimal(digits = 2, pdigits = 4): decimal digits with p value-specific setting\nfmt = fmt_sprintf(\"%.3f\"): sprintf() decimal\nfmt = fmt_sprintf(\"%.3e\"): sprintf() scientific\nfmt = fmt_significant(3): significant digits\nfmt = fmt_statistic(\"estimate\" = 2, \"std.error\" = 3): statistic-specific formatting\nfmt = fmt_term(\"(Intercept)\" = 2, \"hp\" = 3): term-specific formatting\nfmt = fmt_identity(): raw values\n\nNew styles for default column labels in modelsummary, such as Roman Numerals or letters in parentheses.\n\nSet the style with a global option: options(modelsummary_model_labels = \"roman\")\nSupported styles: “model”, “arabic”, “letters”, “roman”, “(arabic)”, “(letters)”, “(roman)””\n\nmodelplot(draw = FALSE) now returns a p.value column. This allows conditional aesthetics (see the modelplot vignette).\nBetter integration with the marginaleffects package.\n\nBugs:\n\nSome fixest models returns useless “group.x” and “group.y” columns. Isse #591. 
Thanks to Adam Altmejd for the report.\n\n\n\n\nBreaking change:\n\nWith the shape and output=\"dataframe\" arguments, there always used to be a group column. Now, this column has the same name as the variable in the shape formula (“response”, “component”, etc.).\n\nNew features:\n\nshape can include multiple groups.\ncoef_rename can be an unnamed vector of length equal to the number of terms in the final table, obtained after coef_map and coef_omit are applied and models are merged.\ncoef_omit accepts numeric indices. Positive values: coefficients to omit. Negative values: coefficients to keep.\ndatasummary_skim: Increased maximum number of variables to 250.\nQuarto notebooks compile to Word and Markdown automatically.\n\nBug fixes:\n\nOrder of notes preserved in some output format (Issue #577)\n\n\n\n\nBreaking change:\n\nRequires siunitx version 3.0.25 LaTeX package.\nThe title argument now respects the escape argument for all kableExtra output formats. This can break tables in which users manually escaped titles.\n\nNew features:\n\n“d” is accepted for decimal-alignment in the align argument for all output formats. modelsummary(mod, align = \"ld\")\nNew update_modelsummary() function makes it easy to install the dev versions of modelsummary and its dependencies (mostly useful for Vincent and people who report bugs).\nRounding: display at least one significant digit by default.\nAutomatic renaming of haven labels in modelsummary(), datasummary(), datasummary_skim()\nAllow output = \"filename.csv\"\nAllow output = \"filename.xlsx\"\nadd_columns argument supported in modelsummary()\ndatasummary_balance supports the stars argument.\nAllow stars and confidence intervals with align = \"d\" column.\n\nBug fixes:\n\nIn some locales, the HTML minus sign created problems in the output. We only use it in “known” locales.\nMany minor bug fixes\n\n\n\n\n\nMinor release to fix CRAN failure\n\n\n\n\n\nshape argument accepts interactions with the colon “:” character. 
This combines two columns into one, which can be useful to display terms and group names in a single column.\nParallelization using parallel::mclapply. See ?modelsummary\nmodelsummary no longer computes confidence intervals when not necessary, which can save some time. Also see: conf_level=NULL\nAdded log likelihood to GOF for lm and glm models.\nRemoved extraneous warnings\nBug fixes\n\n\n\n\nThis first major release accompanies the publication of an article in the Journal of Statistical Software:\nArel-Bundock, Vincent (2022). “modelsummary: Data and Model Summaries in R.” Journal of Statistical Software, 103(1), 1-23. doi:10.18637/jss.v103.i01 https://doi.org/10.18637/jss.v103.i01.’\nIf you like modelsummary, please cite the JSS article and tell your friends about it.\nMinor changes:\n\ngof_map=\"all\" includes all available statistics. gof_map=\"none\" excludes all statistics.\nBug fixes\n\n\n\n\n\nBetter printout for term names in mixed-effects models\n{brms} and {stanreg} models now extracted with diagnostic=NULL and test=NULL by default for speed.\n\n\n\n\nBreaking changes:\n\nmodelsummary_wide is no longer available. Use the shape argument of modelsummary instead.\nmodelsummary now uses the easystats packages (performance and parameters) to extract estimates and goodness-of-fit statistics instead of broom. This can be reverted by setting a global option: options(modelsummary_get=\"broom\"). This change aims to (1) increase consistency across models, (2) improve the developers’ ability to push bug fixes upstream when necessary, and (3) improve support for mixed effects, bayesian, and GAM models. 
The two main drawbacks are: (a) The set of printed statistics may be slightly different from previous versions of modelsummary (b) The group identifiers used in the shape formula will also be different for certain models (e.g., in nnet::multinom, y.level becomes response).\n\nNew features:\n\nThe shape argument accepts a formula and can reshape information in myriad ways. Deprecates the group argument. Examples:\n\n~ statistic: statistics are shown horizontally in distinct columns.\nmodel ~ term: models in rows and terms in columns.\nterm + y.level + statistic ~ model: grouped coefficients for multivariate outcome in nnet::multinom\ny.level ~ model: partial match is the same as the previous formula\n\nFormat distinct statistics differently by passing a named list to fmt:\n\nmodelsummary(mod, fmt = list(estimate = 2, std.error = 1, rmse = 4))\n\nUse glue to apply functions to numeric values by setting fmt = NULL. Example:\n\nmodelsummary(model, fmt = NULL, estimate = \"{log(estimate)}\")\n\nUpdate for breaking changes after fixest 0.10.4\n\nBug fixes:\n\ngroup_map rename issue\nResidual standard error mistakenly labelled “RMSE” in lm models.\ndatasummary_skim output to jpg should now works\nescape fixes\n\n\n\n\n\nNew exponentiate argument for modelsummary() and modelplot()\ngof_map accepts a vector such as c(\"rmse\", \"nobs\", \"r.squared\")\nDrop rlang dependency\nBug fixes\n\n\n\n\ndatasummary_balance:\n\nAccepts ~ 1 as a formula to summarize all data.\n\nMisc:\n\ndocumentation improvements\nRMSE included by default in models of class lm\n\n\n\n\nmodelsummary:\n\nvcov strings like HC1 and Robust are now case-insensitive\ngof_map now accepts a data.frame or tibble with a fmt list-column which includes functions (see Examples in docs)\nR2 is no longer computed by default for bayesian and mixed effects models. 
An informative one-time warning is printed about the metrics argument.\n\ndatasummary_skim:\n\nHistograms now work in Jupyter\nBugfix: harmless error message is no longer printed\n\nkableExtra factory:\n\nThe col.names argument can now be passed to kableExtra::kbl through the … ellipsis.\n\nMisc:\n\nMany small improvements to the vignettes and docs\noutput = \"github_document\" is now supported\n\n\n\n\n\nBug fix: siunitx and rounding NA\n\n\n\n\nmodelsummary:\n\nF statistic takes into account vcov argument\nSupport group = group ~ model + term\n\ndatasummary_balance:\n\nWeighted means and standard deviations are now supported. Counts and percentages are not, but raise a warning.\n\nMisc:\n\nBugfix: rounding in LaTeX w/ siunitx and NaN entries.\noutput=‘jupyter’ no longer prints an extraneous TRUE to the notebook\n\n\n\n\nmodelsummary:\n\nImproved vcov argument handling for fixest models (#357 by @grantmcdermott)\nFix display of fixest::i() variables and interactions (#361 by @grantmcdermott)\nConsistent display of clustered SEs (#356, #363 and #366 by @grantmcdermott)\n\ndatasummary_correlation:\n\nadd_rows and add_columns arguments are now available here.\n\nMisc:\n\nGlobal options for output factories are renamed: modelsummary_factory_default, modelsummary_factory_html, etc.\nHot fix for change in R-devel behavior or intersect\n\nBug fixes:\n\ndatasummary_balance: escape variable names when escape=TRUE\nBlogdown LaTeX dependency bug when output is HTML\n\n\n\n\nBreaking change:\n\nSupport for dcolumn for dot-aligned columns is deprecated. Use “d” in the align argument instead.\n\nOther changes:\n\nLaTeX output: Numeric entries are wrapped in the \\num{} function from the siunitx package by default. This produces much nicer formatting. This can be disabled with a global option. 
See ?modelsummary\nThe align argument accepts a “d” column for dot-alignment using the siunitx LaTeX package: align=\"ldd\".\nHTML tables display proper minus signs.\nNew escape argument in most table-building functions.\nLaTeX output accepts the threeparttable=TRUE argument through ...\nNo more dependency on tidyr\n\nmodelsummary:\n\ngroup: The order of terms in the formula determines the order of rows/columns\n\nmodelsummary_wide:\n\nNote: This function will eventually be deprecated\nBugfix with statistic=NULL.\n\nmodelplot:\n\nPreserves order of models in the user-supplied list\n\ndatasummary_crosstab:\n\nstatistic=NULL produces a very basic crosstab\n\ndatasummary_crosstab:\n\nDefault alignment “lrrrrr” consistent with other datasummary_* functions\n\n\n\n\nmodelsummary:\n\nDisable stars footnote with options(\"modelsummary_stars_note\" = FALSE)\nlongtable=TRUE works for LaTeX output\nInteractions with “:” are no longer converted to “x” when coef_map or coef_rename are used.\ngroup = model ~ term + group is now supported.\n\ndatasummary_skim:\n\ndatasummary_skim(\"categorical\") keeps NA by default. Users can convert variables to factors before calling datasummary_skim to exclude NA.\n\nOther:\n\nImproved warnings for bad calls: modelsummary(model1, model2)\ngt titles use the new caption argument in the gt 0.3.0 function\nBug fix: Overaggressive tests for glue strings prevented functions inside {}\n\n\n\n\nBreaking change:\n\nThe default significance markers stars=TRUE have been updated to be consistent with the default output from base R (e.g., in summary.lm). The new significance thresholds are: “+” p < 0.1, “” p < 0.05, ”” p < 0.01, ”” p < 0.001\n\ndatasummary_crosstab:\n\nNew function to produce cross-tabulations\n\ndatasummary:\n\nN is smart enough to return either the number of elements in a subset or the number of non-missing observations in a variable\n\ndatasummary_balance:\n\nKeeps NAs in factor variables by default. 
Users can convert their variables with the factor() function to omit NAs automatically.\n\nmodelsummary:\n\nthemes can be set using global options (experimental)\nnew vcov options: “bootstrap”, “HAC”, “NeweyWest”, “Andrews”, “panel-corrected”, “weave”, “outer-product”\nA valid get_gof (glance) is now optional.\n… is pushed through to sandwich, which allows things like: modelsummary(model, vcov = \"bootstrap\", R = 1000, cluster = \"firm\")\n\nOther:\n\nJupyter notebook support via output=\"jupyter\"\nBug fixes\n\n\n\n\nmodelsummary:\n\nnew arguments for modelsummary: group and group_map for grouped parameters (e.g., outcome levels in multinomial logit or components of gamlss model).\ndvnames() makes it easy to get dependent variable column titles (thanks to @NickCH-K)\noutput=\"modelsummary_list\" to save a lightweight list-based representation of the table which can be saved and fed to modelsummary once more to get a full table.\nvcov adds a row to note the type of standard errors.\nmodelsummary accepts a single model with multiple vcovs.\nget_gof forwards … to model_performance\ncoef_map accepts unnamed vectors for easy subsetting\nfixest::fixest_multi support\noptions(modelsummary_get) to set the order of extraction functions to use under the hood (broom vs. easystats vs. all)\nmetrics argument of performance::model_performance is available via modelsummary’s … ellipsis to limit the GOF statistics in Bayesian models.\nusers can omit the stars legend note by using glue strings: estimate=\"{estimate}{stars}\"\noutput=“html” can use gt by setting options(modelsummary_factory_html=\"gt\")\n\ndatasummary_correlation:\n\npasses ... 
forward\nnew function: datasummary_correlation_format\ndatasummary_correlation’s method argument accepts functions and “pearspear” (thanks to @joachim-gassen)\n\ndatasummary:\n\ndatasummary functions and rounding accept …, big.mark, etc.\n\ndatasummary_skim:\n\nnow works with haven_labeled numeric\nfaster tables with bayesian models.\n\nBug fixes and lints\n\n\n\n\nnew output format: latex_tabular\ntidy_custom allows partial term matches\nmodelsummary(coef_rename) accepts functions\nnew function coef_rename for use in modelsummary(coef_rename=coef_rename)\nmodelplot accepts add_rows to add reference categories\ninformative error message when estimate or statistic is not available\nbug fixes\n\n\n\n\n\nstatistic_override becomes vcov\nvcov accepts shortcuts: “robust”, “stata”, “HC0”, etc.\nvcov accepts formulas for clustered SEs: ~group\nmodelsummary_wide has a new “stacking” argument\nhtml horizontal rule to separate estimates form gof\ngof_map accepts list of lists. only needs 3 columns.\nsupport officedown Rmd\nestimate accepts a vector for per model estimates\noptions(modelsummary_default) can be markdown, html, latex\nbug: passing arguments through …\nbug: stars and rounding\n\n\n\n\n\nglue format for estimate and statistic\neasystats support for model info extraction\ndeprecate statistic_vertical\ndeprecate extract_models. 
Use modelsummary(output=“dataframe”) instead.\nmodelplot pushes … through to modelsummary(output=“dataframe”)\ndatasummary_skim(type=“dataset”)\ngof_map omits by default\ndatasummary_balance uses row percentages\nstatistic_override does not require a list\nstatistic_override accepts a single model\nN function for well formatted N in datasummary\nBug fixes\n\n\n\n\n\nnew function: modelsummary_wide\ncoef_omit and gof_omit use grepl(perl=TRUE)\nfmt accepts integer, string or function and respects options(OutDec=“,”)\nalign argument for modelsummary\nalign is more liberal to accept dcolumn alignment\nglance_custom methods for lfe and fixest\nbug fixes\n\n\n\n\n\nnew argument: coef_rename\nnew function: datasummary_df\npreserve term order in modelsummary\nrefactor datasummary_balance\ndatasummary_skim uses svg histograms instead of unicode\nremoved 5 dependencies\npass … to kableExtra::kbl for more customization\ntest improvements\ninternal code style\nbug fixes\n\n\n\n\n\nbug fixes\n\n\n\n\n\ndefault HTML output factory is now kableExtra\ninteraction “:” gsubbed by “0d7”\ndependencies: removed 1 depends, 3 imports, and 3 suggests\nword_document knitr works out-of-the-box\nbug fixes\n\n\n\n\n\nglance_custom.fixest ships with modelsummary\n\n\n\n\n\ndatasummary\ndatasummary_skim\ndatasummary_balance\ndatasummary_correlation\nmodelplot\nallow duplicate model names\nbug: can’t use coef_map with multiple statistics (thanks @sbw78)\nbug: wrong number of stars w/ statistic=‘p.value’ (thanks @torfason)\noutput=‘data.frame’. 
extract is no longer documented.\n\n\n\n\n\nadd_rows now accepts a data.frame with “position” and “section” columns\nadd_rows_location is deprecated\nbug in sanity_output prevented overwriting files\n\n\n\n\n\nhuxtable support\nflextable support\nestimate argument\nfixest tidiers\nwebsite and vignette improvements\ngof_map additions\nglance_custom\ntidy_custom\n\n\n\n\n\nOut-of-the-box Rmarkdown compilation to HTML, PDF, RTF\nkableExtra output format for LaTeX and Markdown\nSupport for threeparttable, colors, and many other LaTeX options\nDeprecated arguments: filename, subtitle\nDeprecated functions: clean_latex, knit_latex\npkgdown website and doc improvements\nmitools tidiers\nNew tests\n\n\n\n\n\nConvenience function to render markdown in row/column labels\nbug: breakage when all GOF were omitted\nClean up manual with @keywords internal\nbug: tidyr import\n\n\n\n\n\ngt is now available on CRAN\nnew latex_env argument for knit_latex and clean_latex\nbug when all gof omitted\nbug in statistic_override with functions\nbug caused by upstream changes in tab_style\nbug caused by upstream changes in filename=‘rtf’\nAllow multiple rows of uncertainty estimates per coefficient\nPreserve add_rows order\nDisplay uncertainty estimates next to the coefficient with statistic_vertical = FALSE\nBetter clean_latex function\nCan display R2 and confidence intervals for mice-imputed lm-models\nInternal functions have @keywords internal to avoid inclusion in docs\nStatistic override accepts pre-formatted character vectors\n\n\n\n\n\nInitial release (gt still needs to be installed from github)",
+ "text": "Bump minimum version requirement for tinytable, parameters, and insight dependencies.\ncoef_rename() gets a poly=TRUE argument to rename poly(x, 2)-style coefficients. Thanks to @mccarthy-m-g for code contribution #778.\nget_gof(): logLik column converted to numeric for consistent types. Issue 649 reported on the mice GitHub.\nkableExtra updates the siunitx commands for d columns.\nkableExtra escapes footnotes in HTML when output=\"kableExtra\". Thanks to @dmurdoch and @michaelherndon97 for report #793.\nNew fmt_equivalence() function to implement the rounding suggestion of Astier & Wolak (2024). Thanks to Nicolas Astier for the code prototype.\nFix partial match warnings for some datasummary_*() tables. No change in behavior. Thanks to @fkohrt for report #804.\n\nBugs:\n\nStars footnotes get properly escaped in some LaTeX configurations. Thanks to @etiennebacher for report #798.\ndatasummary_*() functions can be called as arguments of other datasummary_*() functions, like add_columns. Thanks to @mronkko for report #799.\n\n\n\n\n\nDocumentation improvements\nWarning when users use caption instead of title, an inconsistency with respect to tinytable.\nImproved documentation for the title argument.\nhtest workaround.\n\nBugs:\n\ndatasummary_correlation() respects the escape argument. Issue #772.\ndatasummary_correlation() supports data.table objects. 
Issue #771.\n\n\n\n\nNew:\n\nmodelsummary() gets a gof_function argument which accepts functions to extract custom information from models.\nflextable: Support spanning column headers\ndatasummary_correlation() gets a star argument.\ndatasummary_correlation() accepts objects produced by the correlation package.\ndatasummary_balance(): formula can now include variables on the left-hand side to indicate the subset of columns to summarize: datasummary_balance(mpg + hp ~ am, data = mtcars). Thanks to @etiennebacher for feature request #751.\nUnnecessary text printed to screen on some F statistic computations is now suppressed.\nUpdate to tinytable 0.3.0.\n\nBugs:\n\nescape argument not respected in datasummary_df(). Thanks to @adamaltmejd for report #740.\ndatasummary_correlation() supports data.table. Thanks to volatilehead on Twitter for report #737.\nAccepts a named estimate argument when using shape with horizontal statistics. Thanks to @iago-pssjd for report #745.\nLabelled values without a variable label broke datasummary(). Thanks to @marklhc for report #752.\ncoef_map did not work when there was a group. Thanks to @mccarthy-m-g for report #757.\nkableExtra: fix spanning column headers when using the shape argument.\nMultiple footnotes and line breaks in footnotes are now allowed in tinytable output. Thanks to\n\n\n\n\nMAJOR BREAKING CHANGE: The default output format is now tinytable instead of kableExtra. Learn more about tinytable here:\nhttps://vincentarelbundock.github.io/tinytable/\nTo revert to the previous behavior persistently, users can call:\nlibrary(modelsummary) config_modelsummary(factory_default = \"kableExtra\")\nOther breaking changes:\n\nThe statistic_override argument was replaced by vcov over a year ago, with appropriate deprecation warnings. It is now fully removed.\nThe group argument was replaced by shape several releases ago. It is now fully removed.\ndatasummary_skim()\n\nhistograms are only available with the tinytable backend. 
This allows a lot of code simplification and more customization.\nThe order of arguments type and output is switched for consistency with other functions.\nhistogram argument is deprecated.\n\n\nNew features:\n\ndatasummary_skim():\n\nNew type=\"all\" by default to display both numeric and categorical variables in a single table with distinct panels. This feature is only available with the tinytable backend (default).\nby argument allows summarizing numeric variables by group.\nfun_numeric argument accepts a list of functions to control the summary columns.\n\nmodelsummary():\n\nstatistic and estimate can be specified as named vectors to control the names of statistics when displayed in different columns using the shape argument. (Thanks to @mps9506 for bug report #722)\nmodelsummary(panels, shape = \"cbind\") automatically adds column-spanning labels when panels is a named nested list of models.\n\nconfig_modelsummary() gets a startup_message argument to silence the startup message persistently.\nImproved documentation and vignettes, providing clearer instructions and examples.\nUpdated tests and snapshots to ensure reliability and consistency across changes.\n\nBug fixes:\n\nFixed Issue #399: datasummary_balance() siunitx formatting.\nFixed Issue #782: Useless warning in some modelplot() calls. Thanks to @iago-pssjd for the report and @florence-laflamme for the fix.\nAddressed various bugs and made optimizations for better performance and user experience.\n\n\n\n\n\ntinytable supports histograms in datasummary_skim()\nconfig_modelsummary() supports tinytable factory.\n\n\n\n\n\nSupport the tinytable package as an output format (“factory”): https://vincentarelbundock.github.io/tinytable/\nQuarto: md output format is recognized.\noptions(modelsummary_factory_default) is respected, even in qmd->md documents.\n\nBugs:\n\nSome omitted coefficients with I() operator in formulas. 
Issue #693.\n\n\n\n\nMisc:\n\nDuplicate values in shape groups are removed automatically for cleaner labels.\n“Title” line no longer indented in markdown tables. Thanks to Ryan Briggs for report #671.\n\nBugs:\n\nSmall p values were not displayed properly in HTML output using kableExtra. Issue #669.\n\n\n\n\nNew:\n\nMinimal support for Typst output, with auto-detection in Quarto documents.\nstrip argument in dvnames.\ns.value statistic is now available whenever p.value is available. See Greenland (2019).\ndatasummary_skim() now includes histograms in gt tables.\n\nBugs:\n\nGOF term names get escaped in LaTeX. Thanks to @shreyasgm for reviving Issue #546.\nConflict with furrr generated errors for some models. Thanks to @sammo3182 for Issue #647.\n\n\n\n\nNew:\n\nfmt_sci() can now be used in the fmt argument for rounding with scientific notation.\n\nBugs:\n\nGroup separators respect add_rows with shape=\"rbind\". Thanks to @lrose1 for Report #626.\nBad column with horizontal models in shape and grouped estimates. Thanks to @iago-pssjd for Report #631.\ncoef_rename=TRUE bug with grouped estimates. Thanks to @iago-pssjd for Report #631.\nUpstream issue #881 in parameters meant that vcov was no longer used for confidence intervals.\n\n\n\n\n\nBuilt-in support for markdown tables.\nPackage no longer depends on kableExtra. 
Recommends an additional install for other formats.\nPersistent configuration of default output format: config_modelsummary(factory_default = \"gt\")\nshape = \"rcollapse\" and shape = \"rbind\"\nglance_custom() can drop GOF by assigning NA: https://stackoverflow.com/questions/75215355/assigning-different-coefficient-names-and-goodness-of-fit-statistics-to-differen\nWhen a statistic is not available, modelsummary prints an empty cell instead of returning an error.\n“\\label{tab:something}” works in title even when escape=TRUE\nMultiple fixest_multi objects supported.\noptions(modelsummary_future = FALSE) disables future parallelism.\n\nBug fixes:\n\nstatistic=NULL is now respected when shape=\"rbind\". Thanks to Panos Mavros for report #620.\nget_estimates() supports vcov string shortcuts and formulas. Thanks to @ethans-carl for report #605.\nQuarto and R Markdown documents include siunitx in the header automatically for decimal alignment with align=\"ddd\".\nescape is now respected by modelsummary with shape=\"rbind\". Thanks to @chickymonkeys for report #622.\n\n\n\n\nBreaking change:\n\nThe default column label style in modelsummary() has changed from “Model 1” to “(1)”. The benefits are: labels are no longer in English by default; they use less horizontal space; and they eliminate the “Model” redundancy. Unfortunately, this could break code in some edge cases where users rely on column names to manipulate tables. 
The old behavior can be restored by calling: options(modelsummary_model_labels=\"model\")\n\nNew features:\n\nshape=\"rbind\" to stack multiple regression tables and create “panels” with labelled groups of models.\nfmt: new helper functions for different formatting styles\n\nfmt = fmt_decimal(2): decimal digits\nfmt = fmt_decimal(digits = 2, pdigits = 4): decimal digits with p value-specific setting\nfmt = fmt_sprintf(\"%.3f\"): sprintf() decimal\nfmt = fmt_sprintf(\"%.3e\"): sprintf() scientific\nfmt = fmt_significant(3): significant digits\nfmt = fmt_statistic(\"estimate\" = 2, \"std.error\" = 3): statistic-specific formatting\nfmt = fmt_term(\"(Intercept)\" = 2, \"hp\" = 3): term-specific formatting\nfmt = fmt_identity(): raw values\n\nNew styles for default column labels in modelsummary, such as Roman numerals or letters in parentheses.\n\nSet the style with a global option: options(modelsummary_model_labels = \"roman\")\nSupported styles: “model”, “arabic”, “letters”, “roman”, “(arabic)”, “(letters)”, “(roman)”\n\nmodelplot(draw = FALSE) now returns a p.value column. This allows conditional aesthetics (see the modelplot vignette).\nBetter integration with the marginaleffects package.\n\nBugs:\n\nSome fixest models returned useless “group.x” and “group.y” columns. Issue #591. Thanks to Adam Altmejd for the report.\n\n\n\n\nBreaking change:\n\nWith the shape and output=\"dataframe\" arguments, there used to always be a group column. Now, this column has the same name as the variable in the shape formula (“response”, “component”, etc.).\n\nNew features:\n\nshape can include multiple groups.\ncoef_rename can be an unnamed vector of length equal to the number of terms in the final table, obtained after coef_map and coef_omit are applied and models are merged.\ncoef_omit accepts numeric indices. Positive values: coefficients to omit. 
Negative values: coefficients to keep.\ndatasummary_skim: Increased maximum number of variables to 250.\nQuarto notebooks compile to Word and Markdown automatically.\n\nBug fixes:\n\nOrder of notes preserved in some output formats (Issue #577)\n\n\n\n\nBreaking change:\n\nRequires version 3.0.25 of the siunitx LaTeX package.\nThe title argument now respects the escape argument for all kableExtra output formats. This can break tables in which users manually escaped titles.\n\nNew features:\n\n“d” is accepted for decimal-alignment in the align argument for all output formats. modelsummary(mod, align = \"ld\")\nNew update_modelsummary() function makes it easy to install the dev versions of modelsummary and its dependencies (mostly useful for Vincent and people who report bugs).\nRounding: display at least one significant digit by default.\nAutomatic renaming of haven labels in modelsummary(), datasummary(), datasummary_skim()\nAllow output = \"filename.csv\"\nAllow output = \"filename.xlsx\"\nadd_columns argument supported in modelsummary()\ndatasummary_balance supports the stars argument.\nAllow stars and confidence intervals with align = \"d\" column.\n\nBug fixes:\n\nIn some locales, the HTML minus sign created problems in the output. We only use it in “known” locales.\nMany minor bug fixes\n\n\n\n\n\nMinor release to fix CRAN failure\n\n\n\n\n\nshape argument accepts interactions with the colon “:” character. This combines two columns into one, which can be useful to display terms and group names in a single column.\nParallelization using parallel::mclapply. See ?modelsummary\nmodelsummary no longer computes confidence intervals when not necessary, which can save some time. Also see: conf_level=NULL\nAdded log likelihood to GOF for lm and glm models.\nRemoved extraneous warnings\nBug fixes\n\n\n\n\nThis first major release accompanies the publication of an article in the Journal of Statistical Software:\nArel-Bundock, Vincent (2022). 
“modelsummary: Data and Model Summaries in R.” Journal of Statistical Software, 103(1), 1-23. doi:10.18637/jss.v103.i01, https://doi.org/10.18637/jss.v103.i01.\nIf you like modelsummary, please cite the JSS article and tell your friends about it.\nMinor changes:\n\ngof_map=\"all\" includes all available statistics. gof_map=\"none\" excludes all statistics.\nBug fixes\n\n\n\n\n\nBetter printout for term names in mixed-effects models\n{brms} and {stanreg} models now extracted with diagnostic=NULL and test=NULL by default for speed.\n\n\n\n\nBreaking changes:\n\nmodelsummary_wide is no longer available. Use the shape argument of modelsummary instead.\nmodelsummary now uses the easystats packages (performance and parameters) to extract estimates and goodness-of-fit statistics instead of broom. This can be reverted by setting a global option: options(modelsummary_get=\"broom\"). This change aims to (1) increase consistency across models, (2) improve the developers’ ability to push bug fixes upstream when necessary, and (3) improve support for mixed effects, bayesian, and GAM models. The two main drawbacks are: (a) the set of printed statistics may be slightly different from previous versions of modelsummary; (b) the group identifiers used in the shape formula will also be different for certain models (e.g., in nnet::multinom, y.level becomes response).\n\nNew features:\n\nThe shape argument accepts a formula and can reshape information in myriad ways. Deprecates the group argument. 
Examples:\n\n~ statistic: statistics are shown horizontally in distinct columns.\nmodel ~ term: models in rows and terms in columns.\nterm + y.level + statistic ~ model: grouped coefficients for multivariate outcome in nnet::multinom\ny.level ~ model: partial match is the same as the previous formula\n\nFormat distinct statistics differently by passing a named list to fmt:\n\nmodelsummary(mod, fmt = list(estimate = 2, std.error = 1, rmse = 4))\n\nUse glue to apply functions to numeric values by setting fmt = NULL. Example:\n\nmodelsummary(model, fmt = NULL, estimate = \"{log(estimate)}\")\n\nUpdate for breaking changes after fixest 0.10.4\n\nBug fixes:\n\ngroup_map rename issue\nResidual standard error mistakenly labelled “RMSE” in lm models.\ndatasummary_skim output to jpg should now work\nescape fixes\n\n\n\n\n\nNew exponentiate argument for modelsummary() and modelplot()\ngof_map accepts a vector such as c(\"rmse\", \"nobs\", \"r.squared\")\nDrop rlang dependency\nBug fixes\n\n\n\n\ndatasummary_balance:\n\nAccepts ~ 1 as a formula to summarize all data.\n\nMisc:\n\ndocumentation improvements\nRMSE included by default in models of class lm\n\n\n\n\nmodelsummary:\n\nvcov strings like HC1 and Robust are now case-insensitive\ngof_map now accepts a data.frame or tibble with a fmt list-column which includes functions (see Examples in docs)\nR2 is no longer computed by default for bayesian and mixed effects models. 
An informative one-time warning is printed about the metrics argument.\n\ndatasummary_skim:\n\nHistograms now work in Jupyter\nBugfix: harmless error message is no longer printed\n\nkableExtra factory:\n\nThe col.names argument can now be passed to kableExtra::kbl through the … ellipsis.\n\nMisc:\n\nMany small improvements to the vignettes and docs\noutput = \"github_document\" is now supported\n\n\n\n\n\nBug fix: siunitx and rounding NA\n\n\n\n\nmodelsummary:\n\nF statistic takes the vcov argument into account\nSupport group = group ~ model + term\n\ndatasummary_balance:\n\nWeighted means and standard deviations are now supported. Counts and percentages are not, but raise a warning.\n\nMisc:\n\nBugfix: rounding in LaTeX w/ siunitx and NaN entries.\noutput=‘jupyter’ no longer prints an extraneous TRUE to the notebook\n\n\n\n\nmodelsummary:\n\nImproved vcov argument handling for fixest models (#357 by @grantmcdermott)\nFix display of fixest::i() variables and interactions (#361 by @grantmcdermott)\nConsistent display of clustered SEs (#356, #363 and #366 by @grantmcdermott)\n\ndatasummary_correlation:\n\nadd_rows and add_columns arguments are now available here.\n\nMisc:\n\nGlobal options for output factories are renamed: modelsummary_factory_default, modelsummary_factory_html, etc.\nHot fix for change in R-devel behavior of intersect\n\nBug fixes:\n\ndatasummary_balance: escape variable names when escape=TRUE\nBlogdown LaTeX dependency bug when output is HTML\n\n\n\n\nBreaking change:\n\nSupport for dcolumn for dot-aligned columns is deprecated. Use “d” in the align argument instead.\n\nOther changes:\n\nLaTeX output: Numeric entries are wrapped in the \\num{} function from the siunitx package by default. This produces much nicer formatting. This can be disabled with a global option. 
See ?modelsummary\nThe align argument accepts a “d” column for dot-alignment using the siunitx LaTeX package: align=\"ldd\".\nHTML tables display proper minus signs.\nNew escape argument in most table-building functions.\nLaTeX output accepts the threeparttable=TRUE argument through ...\nNo more dependency on tidyr\n\nmodelsummary:\n\ngroup: The order of terms in the formula determines the order of rows/columns\n\nmodelsummary_wide:\n\nNote: This function will eventually be deprecated\nBugfix with statistic=NULL.\n\nmodelplot:\n\nPreserves order of models in the user-supplied list\n\ndatasummary_crosstab:\n\nstatistic=NULL produces a very basic crosstab\n\ndatasummary_crosstab:\n\nDefault alignment “lrrrrr” consistent with other datasummary_* functions\n\n\n\n\nmodelsummary:\n\nDisable stars footnote with options(\"modelsummary_stars_note\" = FALSE)\nlongtable=TRUE works for LaTeX output\nInteractions with “:” are no longer converted to “×” when coef_map or coef_rename are used.\ngroup = model ~ term + group is now supported.\n\ndatasummary_skim:\n\ndatasummary_skim(\"categorical\") keeps NA by default. Users can convert variables to factors before calling datasummary_skim to exclude NA.\n\nOther:\n\nImproved warnings for bad calls: modelsummary(model1, model2)\ngt titles use the new caption argument introduced in gt 0.3.0\nBug fix: Overaggressive tests for glue strings prevented functions inside {}\n\n\n\n\nBreaking change:\n\nThe default significance markers stars=TRUE have been updated to be consistent with the default output from base R (e.g., in summary.lm). The new significance thresholds are: “+” p < 0.1, “*” p < 0.05, “**” p < 0.01, “***” p < 0.001\n\ndatasummary_crosstab:\n\nNew function to produce cross-tabulations\n\ndatasummary:\n\nN is smart enough to return either the number of elements in a subset or the number of non-missing observations in a variable\n\ndatasummary_balance:\n\nKeeps NAs in factor variables by default. 
Users can convert their variables with the factor() function to omit NAs automatically.\n\nmodelsummary:\n\nthemes can be set using global options (experimental)\nnew vcov options: “bootstrap”, “HAC”, “NeweyWest”, “Andrews”, “panel-corrected”, “weave”, “outer-product”\nA valid get_gof (glance) is now optional.\n… is pushed through to sandwich, which allows things like: modelsummary(model, vcov = \"bootstrap\", R = 1000, cluster = \"firm\")\n\nOther:\n\nJupyter notebook support via output=\"jupyter\"\nBug fixes\n\n\n\n\nmodelsummary:\n\nnew arguments for modelsummary: group and group_map for grouped parameters (e.g., outcome levels in multinomial logit or components of gamlss model).\ndvnames() makes it easy to get dependent variable column titles (thanks to @NickCH-K)\noutput=\"modelsummary_list\" to save a lightweight list-based representation of the table which can be saved and fed to modelsummary once more to get a full table.\nvcov adds a row to note the type of standard errors.\nmodelsummary accepts a single model with multiple vcovs.\nget_gof forwards … to model_performance\ncoef_map accepts unnamed vectors for easy subsetting\nfixest::fixest_multi support\noptions(modelsummary_get) to set the order of extraction functions to use under the hood (broom vs. easystats vs. all)\nmetrics argument of performance::model_performance is available via modelsummary’s … ellipsis to limit the GOF statistics in Bayesian models.\nusers can omit the stars legend note by using glue strings: estimate=\"{estimate}{stars}\"\noutput=“html” can use gt by setting options(modelsummary_factory_html=\"gt\")\n\ndatasummary_correlation:\n\npasses ... 
forward\nnew function: datasummary_correlation_format\ndatasummary_correlation’s method argument accepts functions and “pearspear” (thanks to @joachim-gassen)\n\ndatasummary:\n\ndatasummary functions and rounding accept …, big.mark, etc.\n\ndatasummary_skim:\n\nnow works with haven_labeled numeric\nfaster tables with bayesian models.\n\nBug fixes and lints\n\n\n\n\nnew output format: latex_tabular\ntidy_custom allows partial term matches\nmodelsummary(coef_rename) accepts functions\nnew function coef_rename for use in modelsummary(coef_rename=coef_rename)\nmodelplot accepts add_rows to add reference categories\ninformative error message when estimate or statistic is not available\nbug fixes\n\n\n\n\n\nstatistic_override becomes vcov\nvcov accepts shortcuts: “robust”, “stata”, “HC0”, etc.\nvcov accepts formulas for clustered SEs: ~group\nmodelsummary_wide has a new “stacking” argument\nhtml horizontal rule to separate estimates from gof\ngof_map accepts list of lists. only needs 3 columns.\nsupport officedown Rmd\nestimate accepts a vector for per-model estimates\noptions(modelsummary_default) can be markdown, html, latex\nbug: passing arguments through …\nbug: stars and rounding\n\n\n\n\n\nglue format for estimate and statistic\neasystats support for model info extraction\ndeprecate statistic_vertical\ndeprecate extract_models. 
Use modelsummary(output=“dataframe”) instead.\nmodelplot pushes … through to modelsummary(output=“dataframe”)\ndatasummary_skim(type=“dataset”)\ngof_map omits by default\ndatasummary_balance uses row percentages\nstatistic_override does not require a list\nstatistic_override accepts a single model\nN function for well formatted N in datasummary\nBug fixes\n\n\n\n\n\nnew function: modelsummary_wide\ncoef_omit and gof_omit use grepl(perl=TRUE)\nfmt accepts integer, string or function and respects options(OutDec=“,”)\nalign argument for modelsummary\nalign is more liberal to accept dcolumn alignment\nglance_custom methods for lfe and fixest\nbug fixes\n\n\n\n\n\nnew argument: coef_rename\nnew function: datasummary_df\npreserve term order in modelsummary\nrefactor datasummary_balance\ndatasummary_skim uses svg histograms instead of unicode\nremoved 5 dependencies\npass … to kableExtra::kbl for more customization\ntest improvements\ninternal code style\nbug fixes\n\n\n\n\n\nbug fixes\n\n\n\n\n\ndefault HTML output factory is now kableExtra\ninteraction “:” gsubbed by “×”\ndependencies: removed 1 depends, 3 imports, and 3 suggests\nword_document knitr works out-of-the-box\nbug fixes\n\n\n\n\n\nglance_custom.fixest ships with modelsummary\n\n\n\n\n\ndatasummary\ndatasummary_skim\ndatasummary_balance\ndatasummary_correlation\nmodelplot\nallow duplicate model names\nbug: can’t use coef_map with multiple statistics (thanks @sbw78)\nbug: wrong number of stars w/ statistic=‘p.value’ (thanks @torfason)\noutput=‘data.frame’. 
extract is no longer documented.\n\n\n\n\n\nadd_rows now accepts a data.frame with “position” and “section” columns\nadd_rows_location is deprecated\nbug in sanity_output prevented overwriting files\n\n\n\n\n\nhuxtable support\nflextable support\nestimate argument\nfixest tidiers\nwebsite and vignette improvements\ngof_map additions\nglance_custom\ntidy_custom\n\n\n\n\n\nOut-of-the-box Rmarkdown compilation to HTML, PDF, RTF\nkableExtra output format for LaTeX and Markdown\nSupport for threeparttable, colors, and many other LaTeX options\nDeprecated arguments: filename, subtitle\nDeprecated functions: clean_latex, knit_latex\npkgdown website and doc improvements\nmitools tidiers\nNew tests\n\n\n\n\n\nConvenience function to render markdown in row/column labels\nbug: breakage when all GOF were omitted\nClean up manual with @keywords internal\nbug: tidyr import\n\n\n\n\n\ngt is now available on CRAN\nnew latex_env argument for knit_latex and clean_latex\nbug when all gof omitted\nbug in statistic_override with functions\nbug caused by upstream changes in tab_style\nbug caused by upstream changes in filename=‘rtf’\nAllow multiple rows of uncertainty estimates per coefficient\nPreserve add_rows order\nDisplay uncertainty estimates next to the coefficient with statistic_vertical = FALSE\nBetter clean_latex function\nCan display R2 and confidence intervals for mice-imputed lm-models\nInternal functions have @keywords internal to avoid inclusion in docs\nStatistic override accepts pre-formatted character vectors\n\n\n\n\n\nInitial release (gt still needs to be installed from github)",
"crumbs": [
"Get started",
"News"
@@ -15,7 +15,7 @@
"href": "NEWS.html#development",
"title": "News",
"section": "",
- "text": "coef_rename() gets a poly=TRUE argument to rename poly(x, 2)-style coefficients. Thanks to @mccarthy-m-g for code contribution #778.\nget_gof(): logLik column converted to numeric for consistent types. Issue 649 reported on the mice Github.\nkableExtra update the siunitx commands for d columns.\nkableExtra escapes footnotes in HTML when output=\"kableExtra\"). Thanks to @dmurdoch and @michaelherndon97 for report #793.\nNew fmt_equivalence() function to implement the rounding suggestion of Astier & Wolak (2024). Thanks to Nicolas Astier for code prototype.\nFix partial match warnings for some datasummary_*() tables. No change in behavior. Thanks to @fkohrt for report #804.\n\nBugs:\n\nStars footnotes get properly escaped in some LaTeX configurations. Thanks to @etiennebacher for report #798.\ndatasummary_*() functions can be called as arguments in another datasummary_*() arguments, like add_columns. Thanks to @mronkko for report #799",
+ "text": "Bump minimum version requirements for the tinytable, parameters, and insight dependencies.\ncoef_rename() gets a poly=TRUE argument to rename poly(x, 2)-style coefficients. Thanks to @mccarthy-m-g for code contribution #778.\nget_gof(): logLik column converted to numeric for consistent types. Issue 649 reported on the mice GitHub.\nkableExtra: update the siunitx commands for d columns.\nkableExtra escapes footnotes in HTML when output=\"kableExtra\". Thanks to @dmurdoch and @michaelherndon97 for report #793.\nNew fmt_equivalence() function to implement the rounding suggestion of Astier & Wolak (2024). Thanks to Nicolas Astier for the code prototype.\nFix partial match warnings for some datasummary_*() tables. No change in behavior. Thanks to @fkohrt for report #804.\n\nBugs:\n\nStars footnotes get properly escaped in some LaTeX configurations. Thanks to @etiennebacher for report #798.\ndatasummary_*() functions can be called in the arguments of other datasummary_*() calls, like add_columns. Thanks to @mronkko for report #799.",
"crumbs": [
"Get started",
"News"
@@ -609,7 +609,7 @@
"href": "vignettes/datasummary.html#arguments-weighted-mean",
"title": "Data Summaries",
"section": "Arguments: Weighted Mean",
- "text": "Arguments: Weighted Mean\nYou can use the Arguments mechanism to do various things, such as calculating weighted means:\n\nnewdata <- data.frame(\n x = rnorm(20),\n w = rnorm(20),\n y = rnorm(20))\n\ndatasummary(x + y ~ weighted.mean * Arguments(w = w),\n data = newdata)\n\n\n\n \n\n \n \n \n \n \n \n \n weighted.mean\n \n \n \n \n \n x\n -0.10\n \n \n y\n 0.09 \n \n \n \n \n\n\n\nWhich produces the same results as:\n\nweighted.mean(newdata$x, newdata$w)\n\n[1] -0.09821034\n\nweighted.mean(newdata$y, newdata$w)\n\n[1] 0.09157174\n\n\nBut different results from:\n\nmean(newdata$x)\n\n[1] 0.4488352\n\nmean(newdata$y)\n\n[1] -0.004286111",
+ "text": "Arguments: Weighted Mean\nYou can use the Arguments mechanism to do various things, such as calculating weighted means:\n\nnewdata <- data.frame(\n x = rnorm(20),\n w = rnorm(20),\n y = rnorm(20))\n\ndatasummary(x + y ~ weighted.mean * Arguments(w = w),\n data = newdata)\n\n\n\n \n\n \n \n \n \n \n \n \n weighted.mean\n \n \n \n \n \n x\n -2.63\n \n \n y\n 0.88 \n \n \n \n \n\n\n\nWhich produces the same results as:\n\nweighted.mean(newdata$x, newdata$w)\n\n[1] -2.62697\n\nweighted.mean(newdata$y, newdata$w)\n\n[1] 0.876995\n\n\nBut different results from:\n\nmean(newdata$x)\n\n[1] -0.5372682\n\nmean(newdata$y)\n\n[1] 0.08706249",
"crumbs": [
"Get started",
"Data Summaries"
@@ -653,7 +653,7 @@
"href": "vignettes/datasummary.html#add-columns",
"title": "Data Summaries",
"section": "Add columns",
- "text": "Add columns\n\nnew_cols <- data.frame('New Stat' = runif(2))\ndatasummary(flipper_length_mm + body_mass_g ~ species * (Mean + SD),\n data = penguins,\n add_columns = new_cols)\n\n\n\n \n\n \n \n \n \n\n \nAdelie\nChinstrap\nGentoo\n \n\n \n \n \n Mean\n SD\n Mean\n SD\n Mean\n SD\n New.Stat\n \n \n \n \n \n flipper_length_mm\n 189.95 \n 6.54 \n 195.82 \n 7.13 \n 217.19 \n 6.48 \n 0.21\n \n \n body_mass_g \n 3700.66\n 458.57\n 3733.09\n 384.34\n 5076.02\n 504.12\n 0.98",
+ "text": "Add columns\n\nnew_cols <- data.frame('New Stat' = runif(2))\ndatasummary(flipper_length_mm + body_mass_g ~ species * (Mean + SD),\n data = penguins,\n add_columns = new_cols)\n\n\n\n \n\n \n \n \n \n\n \nAdelie\nChinstrap\nGentoo\n \n\n \n \n \n Mean\n SD\n Mean\n SD\n Mean\n SD\n New.Stat\n \n \n \n \n \n flipper_length_mm\n 189.95 \n 6.54 \n 195.82 \n 7.13 \n 217.19 \n 6.48 \n 0.37\n \n \n body_mass_g \n 3700.66\n 458.57\n 3733.09\n 384.34\n 5076.02\n 504.12\n 0.53",
"crumbs": [
"Get started",
"Data Summaries"
@@ -752,7 +752,7 @@
"href": "vignettes/modelsummary.html#quarto",
"title": "Model Summaries",
"section": "Quarto",
- "text": "Quarto\nQuarto is an open source publishing system built on top of Pandoc. It was designed as a “successor” to Rmarkdown, and includes useful features for technical writing, such as built-in support for cross-references. modelsummary works automatically with Quarto. This is a minimal document with cross-references which should render automatically to PDF, HTML, and more:\n---\nformat: pdf\ntitle: Example\n---\n\n@tbl-mtcars shows that cars with high horse power get low miles per gallon.\n\n::: {#tbl-mtcars .cell tbl-cap='Horse Powers vs. Miles per Gallon'}\n\n```{.r .cell-code}\nlibrary(modelsummary)\nmod <- lm(mpg ~ hp, mtcars)\nmodelsummary(mod)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n<!-- preamble start -->\n\n <script>\n function styleCell_1624ke16egd51f5yfatz(i, j, css_id) {\n var table = document.getElementById(\"tinytable_1624ke16egd51f5yfatz\");\n table.rows[i].cells[j].classList.add(css_id);\n }\n function insertSpanRow(i, colspan, content) {\n var table = document.getElementById('tinytable_1624ke16egd51f5yfatz');\n var newRow = table.insertRow(i);\n var newCell = newRow.insertCell(0);\n newCell.setAttribute(\"colspan\", colspan);\n // newCell.innerText = content;\n // this may be unsafe, but innerText does not interpret <br>\n newCell.innerHTML = content;\n }\n function spanCell_1624ke16egd51f5yfatz(i, j, rowspan, colspan) {\n var table = document.getElementById(\"tinytable_1624ke16egd51f5yfatz\");\n const targetRow = table.rows[i];\n const targetCell = targetRow.cells[j];\n for (let r = 0; r < rowspan; r++) {\n // Only start deleting cells to the right for the first row (r == 0)\n if (r === 0) {\n // Delete cells to the right of the target cell in the first row\n for (let c = colspan - 1; c > 0; c--) {\n if (table.rows[i + r].cells[j + c]) {\n table.rows[i + r].deleteCell(j + c);\n }\n }\n }\n // For rows below the first, delete starting from the target column\n if (r > 0) {\n for (let c = colspan - 1; c >= 0; c--) {\n if 
(table.rows[i + r] && table.rows[i + r].cells[j]) {\n table.rows[i + r].deleteCell(j);\n }\n }\n }\n }\n // Set rowspan and colspan of the target cell\n targetCell.rowSpan = rowspan;\n targetCell.colSpan = colspan;\n }\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(0, 0, 'tinytable_css_idz4idv2pp31vzfozc0bl5') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(0, 1, 'tinytable_css_id5ugdwowpbgoxrr2cv7da') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(1, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(1, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(2, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(2, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(3, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(3, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(4, 0, 'tinytable_css_idpjgiw4iswko7v23oie2a') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(4, 1, 'tinytable_css_ido3fttuxjn7wx7bdvfvwj') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(5, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(5, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(6, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(6, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') 
})\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(7, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(7, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(8, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(8, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(9, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(9, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(10, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(10, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(11, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(11, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(12, 0, 'tinytable_css_idm2lj71ysn8vkiggdxjhn') })\nwindow.addEventListener('load', function () { styleCell_1624ke16egd51f5yfatz(12, 1, 'tinytable_css_idqp04jfukckfhql02iyt0') })\n </script>\n\n <style>\n .table td.tinytable_css_idz4idv2pp31vzfozc0bl5, .table th.tinytable_css_idz4idv2pp31vzfozc0bl5 { text-align: left; border-bottom: solid 0.1em #d3d8dc; }\n .table td.tinytable_css_id5ugdwowpbgoxrr2cv7da, .table th.tinytable_css_id5ugdwowpbgoxrr2cv7da { text-align: center; border-bottom: solid 0.1em #d3d8dc; }\n .table td.tinytable_css_idm2lj71ysn8vkiggdxjhn, .table th.tinytable_css_idm2lj71ysn8vkiggdxjhn { text-align: left; }\n .table 
td.tinytable_css_idqp04jfukckfhql02iyt0, .table th.tinytable_css_idqp04jfukckfhql02iyt0 { text-align: center; }\n .table td.tinytable_css_idpjgiw4iswko7v23oie2a, .table th.tinytable_css_idpjgiw4iswko7v23oie2a { border-bottom: solid 0.05em black; text-align: left; }\n .table td.tinytable_css_ido3fttuxjn7wx7bdvfvwj, .table th.tinytable_css_ido3fttuxjn7wx7bdvfvwj { border-bottom: solid 0.05em black; text-align: center; }\n </style>\n <div class=\"container\">\n <table class=\"table table-borderless\" id=\"tinytable_1624ke16egd51f5yfatz\" style=\"width: auto; margin-left: auto; margin-right: auto;\" data-quarto-disable-processing='true'>\n <thead>\n \n <tr>\n <th scope=\"col\"> </th>\n <th scope=\"col\">(1)</th>\n </tr>\n </thead>\n \n <tbody>\n <tr>\n <td>(Intercept)</td>\n <td>30.099 </td>\n </tr>\n <tr>\n <td> </td>\n <td>(1.634)</td>\n </tr>\n <tr>\n <td>hp </td>\n <td>-0.068 </td>\n </tr>\n <tr>\n <td> </td>\n <td>(0.010)</td>\n </tr>\n <tr>\n <td>Num.Obs. </td>\n <td>32 </td>\n </tr>\n <tr>\n <td>R2 </td>\n <td>0.602 </td>\n </tr>\n <tr>\n <td>R2 Adj. </td>\n <td>0.589 </td>\n </tr>\n <tr>\n <td>AIC </td>\n <td>181.2 </td>\n </tr>\n <tr>\n <td>BIC </td>\n <td>185.6 </td>\n </tr>\n <tr>\n <td>Log.Lik. </td>\n <td>-87.619</td>\n </tr>\n <tr>\n <td>F </td>\n <td>45.460 </td>\n </tr>\n <tr>\n <td>RMSE </td>\n <td>3.74 </td>\n </tr>\n </tbody>\n </table>\n </div>\n<!-- hack to avoid NA insertion in last line -->\n```\n\n:::\n:::",
+ "text": "Quarto\nQuarto is an open source publishing system built on top of Pandoc. It was designed as a “successor” to Rmarkdown, and includes useful features for technical writing, such as built-in support for cross-references. modelsummary works automatically with Quarto. This is a minimal document with cross-references which should render automatically to PDF, HTML, and more:\n---\nformat: pdf\ntitle: Example\n---\n\n@tbl-mtcars shows that cars with high horse power get low miles per gallon.\n\n::: {#tbl-mtcars .cell tbl-cap='Horse Powers vs. Miles per Gallon'}\n\n```{.r .cell-code}\nlibrary(modelsummary)\nmod <- lm(mpg ~ hp, mtcars)\nmodelsummary(mod)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n<!-- preamble start -->\n\n <script>\n function styleCell_eauek741bcovf9zf3iat(i, j, css_id) {\n var table = document.getElementById(\"tinytable_eauek741bcovf9zf3iat\");\n table.rows[i].cells[j].classList.add(css_id);\n }\n function insertSpanRow(i, colspan, content) {\n var table = document.getElementById('tinytable_eauek741bcovf9zf3iat');\n var newRow = table.insertRow(i);\n var newCell = newRow.insertCell(0);\n newCell.setAttribute(\"colspan\", colspan);\n // newCell.innerText = content;\n // this may be unsafe, but innerText does not interpret <br>\n newCell.innerHTML = content;\n }\n function spanCell_eauek741bcovf9zf3iat(i, j, rowspan, colspan) {\n var table = document.getElementById(\"tinytable_eauek741bcovf9zf3iat\");\n const targetRow = table.rows[i];\n const targetCell = targetRow.cells[j];\n for (let r = 0; r < rowspan; r++) {\n // Only start deleting cells to the right for the first row (r == 0)\n if (r === 0) {\n // Delete cells to the right of the target cell in the first row\n for (let c = colspan - 1; c > 0; c--) {\n if (table.rows[i + r].cells[j + c]) {\n table.rows[i + r].deleteCell(j + c);\n }\n }\n }\n // For rows below the first, delete starting from the target column\n if (r > 0) {\n for (let c = colspan - 1; c >= 0; c--) {\n if 
(table.rows[i + r] && table.rows[i + r].cells[j]) {\n table.rows[i + r].deleteCell(j);\n }\n }\n }\n }\n // Set rowspan and colspan of the target cell\n targetCell.rowSpan = rowspan;\n targetCell.colSpan = colspan;\n }\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(0, 0, 'tinytable_css_idyo1wv0dv8caq1mlesg9f') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(0, 1, 'tinytable_css_idrd5l45v9gty70e76nu8n') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(1, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(1, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(2, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(2, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(3, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(3, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(4, 0, 'tinytable_css_id4u9xjydzmtv2l4fi3b4v') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(4, 1, 'tinytable_css_idhyo5zrnikl49e19bwgv6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(5, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(5, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(6, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(6, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') 
})\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(7, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(7, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(8, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(8, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(9, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(9, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(10, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(10, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(11, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(11, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(12, 0, 'tinytable_css_idl547jsi7pj1o6t7l4qol') })\nwindow.addEventListener('load', function () { styleCell_eauek741bcovf9zf3iat(12, 1, 'tinytable_css_idvzso1xd21hf2h747n3p6') })\n </script>\n\n <style>\n .table td.tinytable_css_idyo1wv0dv8caq1mlesg9f, .table th.tinytable_css_idyo1wv0dv8caq1mlesg9f { text-align: left; border-bottom: solid 0.1em #d3d8dc; }\n .table td.tinytable_css_idrd5l45v9gty70e76nu8n, .table th.tinytable_css_idrd5l45v9gty70e76nu8n { text-align: center; border-bottom: solid 0.1em #d3d8dc; }\n .table td.tinytable_css_idl547jsi7pj1o6t7l4qol, .table th.tinytable_css_idl547jsi7pj1o6t7l4qol { text-align: left; }\n .table 
td.tinytable_css_idvzso1xd21hf2h747n3p6, .table th.tinytable_css_idvzso1xd21hf2h747n3p6 { text-align: center; }\n .table td.tinytable_css_id4u9xjydzmtv2l4fi3b4v, .table th.tinytable_css_id4u9xjydzmtv2l4fi3b4v { border-bottom: solid 0.05em black; text-align: left; }\n .table td.tinytable_css_idhyo5zrnikl49e19bwgv6, .table th.tinytable_css_idhyo5zrnikl49e19bwgv6 { border-bottom: solid 0.05em black; text-align: center; }\n </style>\n <div class=\"container\">\n <table class=\"table table-borderless\" id=\"tinytable_eauek741bcovf9zf3iat\" style=\"width: auto; margin-left: auto; margin-right: auto;\" data-quarto-disable-processing='true'>\n <thead>\n \n <tr>\n <th scope=\"col\"> </th>\n <th scope=\"col\">(1)</th>\n </tr>\n </thead>\n \n <tbody>\n <tr>\n <td>(Intercept)</td>\n <td>30.099 </td>\n </tr>\n <tr>\n <td> </td>\n <td>(1.634)</td>\n </tr>\n <tr>\n <td>hp </td>\n <td>-0.068 </td>\n </tr>\n <tr>\n <td> </td>\n <td>(0.010)</td>\n </tr>\n <tr>\n <td>Num.Obs. </td>\n <td>32 </td>\n </tr>\n <tr>\n <td>R2 </td>\n <td>0.602 </td>\n </tr>\n <tr>\n <td>R2 Adj. </td>\n <td>0.589 </td>\n </tr>\n <tr>\n <td>AIC </td>\n <td>181.2 </td>\n </tr>\n <tr>\n <td>BIC </td>\n <td>185.6 </td>\n </tr>\n <tr>\n <td>Log.Lik. </td>\n <td>-87.619</td>\n </tr>\n <tr>\n <td>F </td>\n <td>45.460 </td>\n </tr>\n <tr>\n <td>RMSE </td>\n <td>3.74 </td>\n </tr>\n </tbody>\n </table>\n </div>\n<!-- hack to avoid NA insertion in last line -->\n```\n\n:::\n:::",
"crumbs": [
"Get started",
"Model Summaries"
@@ -796,7 +796,7 @@
"href": "vignettes/modelsummary.html#bootstrap",
"title": "Model Summaries",
"section": "Bootstrap",
- "text": "Bootstrap\nUsers often want to use estimates or standard errors that have been obtained using a custom strategy. To achieve this in an automated and replicable way, it can be useful to use the tidy_custom strategy described above in the “Cutomizing Existing Models” section.\nFor example, we can use the modelr package to draw 500 resamples of a dataset, and compute bootstrap standard errors by taking the standard deviation of estimates computed in all of those resampled datasets. To do this, we defined tidy_custom.lm function that will automatically bootstrap any lm model supplied to modelsummary, and replace the values in the table automatically.\nNote that the tidy_custom_lm returns a data.frame with 3 columns: term, estimate, and std.error:\n\nlibrary(\"broom\")\nlibrary(\"tidyverse\")\nlibrary(\"modelr\")\n\ntidy_custom.lm <- function(x, ...) {\n # extract data from the model\n model.frame(x) %>%\n # draw 500 bootstrap resamples\n modelr::bootstrap(n = 500) %>%\n # estimate the model 500 times\n mutate(results = map(strap, ~ update(x, data = .))) %>%\n # extract results using `broom::tidy`\n mutate(results = map(results, tidy)) %>%\n # unnest and summarize\n unnest(results) %>%\n group_by(term) %>%\n summarize(std.error = sd(estimate),\n estimate = mean(estimate))\n}\n\nmod = list(\n lm(hp ~ mpg, mtcars) ,\n lm(hp ~ mpg + drat, mtcars))\n\nmodelsummary(mod)\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n (2)\n \n \n \n \n \n (Intercept)\n 325.951 \n 287.229 \n \n \n \n (29.682)\n (42.535)\n \n \n mpg \n -8.979 \n -10.002 \n \n \n \n (1.348) \n (2.384) \n \n \n drat \n \n 16.576 \n \n \n \n \n (20.544)\n \n \n Num.Obs. \n 32 \n 32 \n \n \n R2 \n 0.602 \n 0.614 \n \n \n R2 Adj. \n 0.589 \n 0.588 \n \n \n AIC \n 336.9 \n 337.9 \n \n \n BIC \n 341.3 \n 343.7 \n \n \n Log.Lik. \n -165.428\n -164.940\n \n \n F \n 45.460 \n 23.100 \n \n \n RMSE \n 42.55 \n 41.91",
+ "text": "Bootstrap\nUsers often want to use estimates or standard errors that have been obtained using a custom strategy. To achieve this in an automated and replicable way, it can be useful to use the tidy_custom strategy described above in the “Cutomizing Existing Models” section.\nFor example, we can use the modelr package to draw 500 resamples of a dataset, and compute bootstrap standard errors by taking the standard deviation of estimates computed in all of those resampled datasets. To do this, we defined tidy_custom.lm function that will automatically bootstrap any lm model supplied to modelsummary, and replace the values in the table automatically.\nNote that the tidy_custom_lm returns a data.frame with 3 columns: term, estimate, and std.error:\n\nlibrary(\"broom\")\nlibrary(\"tidyverse\")\nlibrary(\"modelr\")\n\ntidy_custom.lm <- function(x, ...) {\n # extract data from the model\n model.frame(x) %>%\n # draw 500 bootstrap resamples\n modelr::bootstrap(n = 500) %>%\n # estimate the model 500 times\n mutate(results = map(strap, ~ update(x, data = .))) %>%\n # extract results using `broom::tidy`\n mutate(results = map(results, tidy)) %>%\n # unnest and summarize\n unnest(results) %>%\n group_by(term) %>%\n summarize(std.error = sd(estimate),\n estimate = mean(estimate))\n}\n\nmod = list(\n lm(hp ~ mpg, mtcars) ,\n lm(hp ~ mpg + drat, mtcars))\n\nmodelsummary(mod)\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n (2)\n \n \n \n \n \n (Intercept)\n 327.540 \n 282.512 \n \n \n \n (30.471)\n (43.248)\n \n \n mpg \n -9.046 \n -10.075 \n \n \n \n (1.408) \n (2.352) \n \n \n drat \n \n 18.144 \n \n \n \n \n (21.005)\n \n \n Num.Obs. \n 32 \n 32 \n \n \n R2 \n 0.602 \n 0.614 \n \n \n R2 Adj. \n 0.589 \n 0.588 \n \n \n AIC \n 336.9 \n 337.9 \n \n \n BIC \n 341.3 \n 343.7 \n \n \n Log.Lik. \n -165.428\n -164.940\n \n \n F \n 45.460 \n 23.100 \n \n \n RMSE \n 42.55 \n 41.91",
"crumbs": [
"Get started",
"Model Summaries"
@@ -807,7 +807,7 @@
"href": "vignettes/modelsummary.html#fixest-fixed-effects-and-instrumental-variable-regression",
"title": "Model Summaries",
"section": "fixest: Fixed effects and instrumental variable regression",
- "text": "fixest: Fixed effects and instrumental variable regression\nOne common use-case for glance_custom is to include additional goodness-of-fit statistics. For example, in an instrumental variable estimation computed by the fixest package, we may want to include an IV-Wald statistic for the first-stage regression of each endogenous regressor:\n\nlibrary(fixest)\nlibrary(tidyverse)\n\n# create a toy dataset\nbase <- iris\nnames(base) <- c(\"y\", \"x1\", \"x_endo_1\", \"x_inst_1\", \"fe\")\nbase$x_inst_2 <- 0.2 * base$y + 0.2 * base$x_endo_1 + rnorm(150, sd = 0.5)\nbase$x_endo_2 <- 0.2 * base$y - 0.2 * base$x_inst_1 + rnorm(150, sd = 0.5)\n\n# estimate an instrumental variable model\nmod <- feols(y ~ x1 | fe | x_endo_1 + x_endo_2 ~ x_inst_1 + x_inst_2, base)\n\n# custom extractor function returns a one-row data.frame (or tibble)\nglance_custom.fixest <- function(x) {\n tibble(\n \"Wald (x_endo_1)\" = fitstat(x, \"ivwald\")[[1]]$stat,\n \"Wald (x_endo_2)\" = fitstat(x, \"ivwald\")[[2]]$stat\n )\n}\n\n# draw table\nmodelsummary(mod)\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n \n \n \n \n \n fit_x_endo_1 \n 1.243 \n \n \n \n (0.713) \n \n \n fit_x_endo_2 \n 2.067 \n \n \n \n (0.606) \n \n \n x1 \n 0.250 \n \n \n \n (0.494) \n \n \n Num.Obs. \n 150 \n \n \n R2 \n -0.415 \n \n \n R2 Adj. \n -0.464 \n \n \n R2 Within \n -2.710 \n \n \n R2 Within Adj. \n -2.788 \n \n \n AIC \n 432.1 \n \n \n BIC \n 450.2 \n \n \n RMSE \n 0.98 \n \n \n Std.Errors \n by: fe \n \n \n FE: fe \n X \n \n \n Wald (x_endo_1)\n 40.8150562320962\n \n \n Wald (x_endo_2)\n 9.50346988711168\n \n \n \n \n\n\n\n\nrm(\"glance_custom.fixest\")",
+ "text": "fixest: Fixed effects and instrumental variable regression\nOne common use-case for glance_custom is to include additional goodness-of-fit statistics. For example, in an instrumental variable estimation computed by the fixest package, we may want to include an IV-Wald statistic for the first-stage regression of each endogenous regressor:\n\nlibrary(fixest)\nlibrary(tidyverse)\n\n# create a toy dataset\nbase <- iris\nnames(base) <- c(\"y\", \"x1\", \"x_endo_1\", \"x_inst_1\", \"fe\")\nbase$x_inst_2 <- 0.2 * base$y + 0.2 * base$x_endo_1 + rnorm(150, sd = 0.5)\nbase$x_endo_2 <- 0.2 * base$y - 0.2 * base$x_inst_1 + rnorm(150, sd = 0.5)\n\n# estimate an instrumental variable model\nmod <- feols(y ~ x1 | fe | x_endo_1 + x_endo_2 ~ x_inst_1 + x_inst_2, base)\n\n# custom extractor function returns a one-row data.frame (or tibble)\nglance_custom.fixest <- function(x) {\n tibble(\n \"Wald (x_endo_1)\" = fitstat(x, \"ivwald\")[[1]]$stat,\n \"Wald (x_endo_2)\" = fitstat(x, \"ivwald\")[[2]]$stat\n )\n}\n\n# draw table\nmodelsummary(mod)\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n \n \n \n \n \n fit_x_endo_1 \n 0.475 \n \n \n \n (0.076) \n \n \n fit_x_endo_2 \n 0.257 \n \n \n \n (0.365) \n \n \n x1 \n 0.519 \n \n \n \n (0.160) \n \n \n Num.Obs. \n 150 \n \n \n R2 \n 0.840 \n \n \n R2 Adj. \n 0.834 \n \n \n R2 Within \n 0.579 \n \n \n R2 Within Adj. \n 0.571 \n \n \n AIC \n 105.5 \n \n \n BIC \n 123.6 \n \n \n RMSE \n 0.33 \n \n \n Std.Errors \n by: fe \n \n \n FE: fe \n X \n \n \n Wald (x_endo_1)\n 8.87294167022996\n \n \n Wald (x_endo_2)\n 19.5053192767895\n \n \n \n \n\n\n\n\nrm(\"glance_custom.fixest\")",
"crumbs": [
"Get started",
"Model Summaries"
@@ -818,7 +818,7 @@
"href": "vignettes/modelsummary.html#multiple-imputation",
"title": "Model Summaries",
"section": "Multiple imputation",
- "text": "Multiple imputation\nmodelsummary can pool and display analyses on several datasets imputed using the mice or Amelia packages. This code illustrates how:\n\nlibrary(mice)\n\nWarning in check_dep_version(): ABI version mismatch: \nlme4 was built with Matrix ABI version 2\nCurrent Matrix ABI version is 1\nPlease re-install lme4 from source or restore original 'Matrix' package\n\nlibrary(Amelia)\n\n# Download data from `Rdatasets`\nurl <- 'https://vincentarelbundock.github.io/Rdatasets/csv/HistData/Guerry.csv'\ndat <- read.csv(url)[, c('Clergy', 'Commerce', 'Literacy')]\n\n# Insert missing values\ndat$Clergy[sample(1:nrow(dat), 10)] <- NA\ndat$Commerce[sample(1:nrow(dat), 10)] <- NA\ndat$Literacy[sample(1:nrow(dat), 10)] <- NA\n\n# Impute with `mice` and `Amelia`\ndat_mice <- mice(dat, m = 5, printFlag = FALSE)\ndat_amelia <- amelia(dat, m = 5, p2s = 0)$imputations\n\n# Estimate models\nmod <- list()\nmod[['Listwise deletion']] <- lm(Clergy ~ Literacy + Commerce, dat)\nmod[['Mice']] <- with(dat_mice, lm(Clergy ~ Literacy + Commerce)) \nmod[['Amelia']] <- lapply(dat_amelia, function(x) lm(Clergy ~ Literacy + Commerce, x))\n\n# Pool results\nmod[['Mice']] <- mice::pool(mod[['Mice']])\nmod[['Amelia']] <- mice::pool(mod[['Amelia']])\n\n# Summarize\nmodelsummary(mod)\n\n\n\n \n\n \n \n \n \n \n \n \n Listwise deletion\n Mice\n Amelia\n \n \n \n \n \n (Intercept)\n 89.767 \n 92.551 \n 91.865 \n \n \n \n (14.901)\n (13.638)\n (15.252)\n \n \n Literacy \n -0.633 \n -0.648 \n -0.670 \n \n \n \n (0.229) \n (0.215) \n (0.251) \n \n \n Commerce \n -0.465 \n -0.524 \n -0.521 \n \n \n \n (0.168) \n (0.136) \n (0.156) \n \n \n Num.Obs. \n 58 \n 86 \n 86 \n \n \n Num.Imp. \n \n 5 \n 5 \n \n \n R2 \n 0.146 \n 0.189 \n 0.178 \n \n \n R2 Adj. \n 0.115 \n 0.170 \n 0.158 \n \n \n AIC \n 536.8 \n \n \n \n \n BIC \n 545.1 \n \n \n \n \n Log.Lik. \n -264.422\n \n \n \n \n RMSE \n 23.11",
+ "text": "Multiple imputation\nmodelsummary can pool and display analyses on several datasets imputed using the mice or Amelia packages. This code illustrates how:\n\nlibrary(mice)\n\nWarning in check_dep_version(): ABI version mismatch: \nlme4 was built with Matrix ABI version 2\nCurrent Matrix ABI version is 1\nPlease re-install lme4 from source or restore original 'Matrix' package\n\nlibrary(Amelia)\n\n# Download data from `Rdatasets`\nurl <- 'https://vincentarelbundock.github.io/Rdatasets/csv/HistData/Guerry.csv'\ndat <- read.csv(url)[, c('Clergy', 'Commerce', 'Literacy')]\n\n# Insert missing values\ndat$Clergy[sample(1:nrow(dat), 10)] <- NA\ndat$Commerce[sample(1:nrow(dat), 10)] <- NA\ndat$Literacy[sample(1:nrow(dat), 10)] <- NA\n\n# Impute with `mice` and `Amelia`\ndat_mice <- mice(dat, m = 5, printFlag = FALSE)\ndat_amelia <- amelia(dat, m = 5, p2s = 0)$imputations\n\n# Estimate models\nmod <- list()\nmod[['Listwise deletion']] <- lm(Clergy ~ Literacy + Commerce, dat)\nmod[['Mice']] <- with(dat_mice, lm(Clergy ~ Literacy + Commerce)) \nmod[['Amelia']] <- lapply(dat_amelia, function(x) lm(Clergy ~ Literacy + Commerce, x))\n\n# Pool results\nmod[['Mice']] <- mice::pool(mod[['Mice']])\nmod[['Amelia']] <- mice::pool(mod[['Amelia']])\n\n# Summarize\nmodelsummary(mod)\n\n\n\n \n\n \n \n \n \n \n \n \n Listwise deletion\n Mice\n Amelia\n \n \n \n \n \n (Intercept)\n 77.974 \n 77.857 \n 80.682 \n \n \n \n (14.040)\n (15.510)\n (11.807)\n \n \n Literacy \n -0.447 \n -0.493 \n -0.522 \n \n \n \n (0.222) \n (0.249) \n (0.189) \n \n \n Commerce \n -0.378 \n -0.363 \n -0.394 \n \n \n \n (0.163) \n (0.158) \n (0.134) \n \n \n Num.Obs. \n 61 \n 86 \n 85 \n \n \n Num.Imp. \n \n 5 \n 5 \n \n \n R2 \n 0.097 \n 0.098 \n 0.122 \n \n \n R2 Adj. \n 0.066 \n 0.074 \n 0.100 \n \n \n AIC \n 571.5 \n \n \n \n \n BIC \n 579.9 \n \n \n \n \n Log.Lik. \n -281.738\n \n \n \n \n RMSE \n 24.53",
"crumbs": [
"Get started",
"Model Summaries"
@@ -873,7 +873,7 @@
"href": "vignettes/modelsummary.html#how-can-i-speed-up-modelsummary",
"title": "Model Summaries",
"section": "How can I speed up modelsummary?",
- "text": "How can I speed up modelsummary?\nThe modelsummary function, by itself, is not slow: it should only take a couple seconds to produce a table in any output format. However, sometimes it can be computationally expensive (and long) to extract estimates and to compute goodness-of-fit statistics for your model.\nThe main options to speed up modelsummary are:\n\nSet gof_map=NA to avoid computing expensive goodness-of-fit statistics.\nUse the easystats extractor functions and the metrics argument to avoid computing expensive statistics (see below for an example).\nUse parallel computation if you are summarizing multiple models. See the “Parallel computation” section in the ?modelsummary documentation.\n\nTo diagnose the slowdown and find the bottleneck, you can try to benchmark the various extractor functions:\n\nlibrary(tictoc)\n\ndata(trade)\nmod <- lm(mpg ~ hp + drat, mtcars)\n\ntic(\"tidy\")\nx <- broom::tidy(mod)\ntoc()\n\ntidy: 0.003 sec elapsed\n\ntic(\"glance\")\nx <- broom::glance(mod)\ntoc()\n\nglance: 0.003 sec elapsed\n\ntic(\"parameters\")\nx <- parameters::parameters(mod)\ntoc()\n\nparameters: 0.021 sec elapsed\n\ntic(\"performance\")\nx <- performance::performance(mod)\ntoc()\n\nperformance: 0.011 sec elapsed\n\n\nIn my experience, the main bottleneck tends to be computing goodness-of-fit statistics. The performance extractor allows users to specify a metrics argument to select a subset of GOF to include. Using this can speedup things considerably.\nWe call modelsummary with the metrics argument:\n\nmodelsummary(mod, metrics = \"rmse\")\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n \n \n \n \n \n (Intercept)\n 10.790 \n \n \n \n (5.078)\n \n \n hp \n -0.052 \n \n \n \n (0.009)\n \n \n drat \n 4.698 \n \n \n \n (1.192)\n \n \n Num.Obs. \n 32 \n \n \n R2 \n 0.741 \n \n \n R2 Adj. \n 0.723 \n \n \n AIC \n 169.5 \n \n \n BIC \n 175.4 \n \n \n Log.Lik. \n -80.752\n \n \n F \n 41.522",
+ "text": "How can I speed up modelsummary?\nThe modelsummary function, by itself, is not slow: it should only take a couple seconds to produce a table in any output format. However, sometimes it can be computationally expensive (and long) to extract estimates and to compute goodness-of-fit statistics for your model.\nThe main options to speed up modelsummary are:\n\nSet gof_map=NA to avoid computing expensive goodness-of-fit statistics.\nUse the easystats extractor functions and the metrics argument to avoid computing expensive statistics (see below for an example).\nUse parallel computation if you are summarizing multiple models. See the “Parallel computation” section in the ?modelsummary documentation.\n\nTo diagnose the slowdown and find the bottleneck, you can try to benchmark the various extractor functions:\n\nlibrary(tictoc)\n\ndata(trade)\nmod <- lm(mpg ~ hp + drat, mtcars)\n\ntic(\"tidy\")\nx <- broom::tidy(mod)\ntoc()\n\ntidy: 0.002 sec elapsed\n\ntic(\"glance\")\nx <- broom::glance(mod)\ntoc()\n\nglance: 0.004 sec elapsed\n\ntic(\"parameters\")\nx <- parameters::parameters(mod)\ntoc()\n\nparameters: 0.021 sec elapsed\n\ntic(\"performance\")\nx <- performance::performance(mod)\ntoc()\n\nperformance: 0.011 sec elapsed\n\n\nIn my experience, the main bottleneck tends to be computing goodness-of-fit statistics. The performance extractor allows users to specify a metrics argument to select a subset of GOF to include. Using this can speedup things considerably.\nWe call modelsummary with the metrics argument:\n\nmodelsummary(mod, metrics = \"rmse\")\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n \n \n \n \n \n (Intercept)\n 10.790 \n \n \n \n (5.078)\n \n \n hp \n -0.052 \n \n \n \n (0.009)\n \n \n drat \n 4.698 \n \n \n \n (1.192)\n \n \n Num.Obs. \n 32 \n \n \n R2 \n 0.741 \n \n \n R2 Adj. \n 0.723 \n \n \n AIC \n 169.5 \n \n \n BIC \n 175.4 \n \n \n Log.Lik. \n -80.752\n \n \n F \n 41.522",
"crumbs": [
"Get started",
"Model Summaries"
@@ -895,7 +895,7 @@
"href": "vignettes/modelsummary.html#bayesian-models",
"title": "Model Summaries",
"section": "Bayesian models",
- "text": "Bayesian models\nMany bayesian models are supported out-of-the-box, including those produced by the rstanarm and brms packages. The statistics available for bayesian models are slightly different than those available for most frequentist models. Users can call get_estimates to see what is available:\n\nlibrary(rstanarm)\n\nThis is rstanarm version 2.32.1\n\n\n- See https://mc-stan.org/rstanarm/articles/priors for changes to default priors!\n\n\n- Default priors may change, so it's safest to specify priors, even if equivalent to the defaults.\n\n\n- For execution on a local, multicore CPU with excess RAM we recommend calling\n\n\n options(mc.cores = parallel::detectCores())\n\n\n\nAttaching package: 'rstanarm'\n\n\nThe following object is masked from 'package:fixest':\n\n se\n\nmod <- stan_glm(am ~ hp + drat, data = mtcars)\n\n\nget_estimates(mod)\n\n term estimate mad conf.level conf.low conf.high prior.distribution prior.location prior.scale group std.error statistic p.value\n1 (Intercept) -2.2120118509 0.604120394 0.95 -3.441993573 -0.939045750 normal 0.40625 1.24747729 NA NA NA\n2 hp 0.0006728326 0.001113702 0.95 -0.001664825 0.002824257 normal 0.00000 0.01819465 NA NA NA\n3 drat 0.7000544165 0.142063291 0.95 0.402271239 0.986980334 normal 0.00000 2.33313429 NA NA NA\n\n\nThis shows that there is no std.error column, but that there is a mad statistic (mean absolute deviation). So we can do:\n\nmodelsummary(mod, statistic = \"mad\")\n\nWarning: \n`modelsummary` uses the `performance` package to extract goodness-of-fit\nstatistics from models of this class. You can specify the statistics you wish\nto compute by supplying a `metrics` argument to `modelsummary`, which will then\npush it forward to `performance`. Acceptable values are: \"all\", \"common\",\n\"none\", or a character vector of metrics names. For example: `modelsummary(mod,\nmetrics = c(\"RMSE\", \"R2\")` Note that some metrics are computationally\nexpensive. 
See `?performance::performance` for details.\n This warning appears once per session.\n\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n \n \n \n \n \n (Intercept)\n -2.212 \n \n \n \n (0.604)\n \n \n hp \n 0.001 \n \n \n \n (0.001)\n \n \n drat \n 0.700 \n \n \n \n (0.142)\n \n \n Num.Obs. \n 32 \n \n \n R2 \n 0.499 \n \n \n R2 Adj. \n 0.415 \n \n \n Log.Lik. \n -12.065\n \n \n ELPD \n -15.3 \n \n \n ELPD s.e. \n 3.2 \n \n \n LOOIC \n 30.7 \n \n \n LOOIC s.e. \n 6.3 \n \n \n WAIC \n 30.4 \n \n \n RMSE \n 0.34 \n \n \n \n \n\n\n\nAs noted in the modelsummary() documentation, model results are extracted using the parameters package. Users can pass additional arguments to modelsummary(), which will then push forward those arguments to the parameters::parameters function to change the results. For example, the parameters documentation for bayesian models shows that there is a centrality argument, which allows users to report the mean and standard deviation of the posterior distribution, instead of the median and MAD:\n\nget_estimates(mod, centrality = \"mean\")\n\n term estimate std.dev conf.level conf.low conf.high prior.distribution prior.location prior.scale group std.error statistic p.value\n1 (Intercept) -2.2024688812 0.632078279 0.95 -3.441993573 -0.939045750 normal 0.40625 1.24747729 NA NA NA\n2 hp 0.0006342765 0.001148073 0.95 -0.001664825 0.002824257 normal 0.00000 0.01819465 NA NA NA\n3 drat 0.6991966324 0.146927309 0.95 0.402271239 0.986980334 normal 0.00000 2.33313429 NA NA NA\n\nmodelsummary(mod, statistic = \"std.dev\", centrality = \"mean\")\n\n\n\n \n\n \n \n \n \n \n \n \n (1)\n \n \n \n \n \n (Intercept)\n -2.202 \n \n \n \n (0.632)\n \n \n hp \n 0.001 \n \n \n \n (0.001)\n \n \n drat \n 0.699 \n \n \n \n (0.147)\n \n \n Num.Obs. \n 32 \n \n \n R2 \n 0.499 \n \n \n R2 Adj. \n 0.415 \n \n \n Log.Lik. \n -12.065\n \n \n ELPD \n -15.3 \n \n \n ELPD s.e. \n 3.2 \n \n \n LOOIC \n 30.7 \n \n \n LOOIC s.e. 
\n 6.3 \n \n \n WAIC \n 30.4 \n \n \n RMSE \n 0.34 \n \n \n \n \n\n\n\nWe can also get additional test statistics using the test argument:\n\nget_estimates(mod, test = c(\"pd\", \"rope\"))\n\n term estimate mad conf.level conf.low conf.high pd rope.percentage prior.distribution prior.location prior.scale group std.error statistic p.value\n1 (Intercept) -2.2120118509 0.604120394 0.95 -3.441993573 -0.939045750 0.99925 0 normal 0.40625 1.24747729 NA NA NA\n2 hp 0.0006728326 0.001113702 0.95 -0.001664825 0.002824257 0.72225 1 normal 0.00000 0.01819465 NA NA NA\n3 drat 0.7000544165 0.142063291 0.95 0.402271239 0.986980334 1.00000 0 normal 0.00000 2.33313429 NA NA NA",
+ "text": "Bayesian models\nMany bayesian models are supported out-of-the-box, including those produced by the rstanarm and brms packages. The statistics available for bayesian models are slightly different than those available for most frequentist models. Users can call get_estimates to see what is available:\n\nlibrary(rstanarm)\n\nThis is rstanarm version 2.32.1\n\n\n- See https://mc-stan.org/rstanarm/articles/priors for changes to default priors!\n\n\n- Default priors may change, so it's safest to specify priors, even if equivalent to the defaults.\n\n\n- For execution on a local, multicore CPU with excess RAM we recommend calling\n\n\n options(mc.cores = parallel::detectCores())\n\n\n\nAttaching package: 'rstanarm'\n\n\nThe following object is masked from 'package:fixest':\n\n se\n\nmod <- stan_glm(am ~ hp + drat, data = mtcars)\n\n\nget_estimates(mod)\n\n term estimate mad conf.level conf.low conf.high prior.distribution prior.location prior.scale group std.error statistic p.value\n1 (Intercept) -2.2136927177 0.571213351 0.95 -3.348469984 -1.043576095 normal 0.40625 1.24747729 NA NA NA\n2 hp 0.0006582996 0.001025414 0.95 -0.001467951 0.002783858 normal 0.00000 0.01819465 NA NA NA\n3 drat 0.7028675788 0.139272808 0.95 0.424442197 0.968567381 normal 0.00000 2.33313429 NA NA NA\n\n\nThis shows that there is no std.error column, but that there is a mad statistic (mean absolute deviation). So we can do:\n\nmodelsummary(mod, statistic = \"mad\")\n\nWarning: \n`modelsummary` uses the `performance` package to extract goodness-of-fit\nstatistics from models of this class. You can specify the statistics you wish\nto compute by supplying a `metrics` argument to `modelsummary`, which will then\npush it forward to `performance`. Acceptable values are: \"all\", \"common\",\n\"none\", or a character vector of metrics names. For example: `modelsummary(mod,\nmetrics = c(\"RMSE\", \"R2\")` Note that some metrics are computationally\nexpensive. 
See `?performance::performance` for details.\n This warning appears once per session.\n\n\n\n\n  \n\n  \n    \n      \n       \n      \n      \n       (1)\n      \n    \n    \n       \n    \n      (Intercept)\n      -2.214 \n    \n    \n                 \n      (0.571)\n    \n    \n      hp         \n      0.001  \n    \n    \n                 \n      (0.001)\n    \n    \n      drat       \n      0.703  \n    \n    \n                 \n      (0.139)\n    \n    \n      Num.Obs.   \n      32     \n    \n    \n      R2         \n      0.497  \n    \n    \n      R2 Adj.    \n      0.434  \n    \n    \n      Log.Lik.   \n      -12.042\n    \n    \n      ELPD       \n      -15.1  \n    \n    \n      ELPD s.e.  \n      3.1    \n    \n    \n      LOOIC      \n      30.1   \n    \n    \n      LOOIC s.e. \n      6.3    \n    \n    \n      WAIC       \n      29.9   \n    \n    \n      RMSE       \n      0.34   \n    \n  \n  \n\n\n\nAs noted in the modelsummary() documentation, model results are extracted using the parameters package. Users can pass additional arguments to modelsummary(), which will then push forward those arguments to the parameters::parameters function to change the results. For example, the parameters documentation for Bayesian models shows that there is a centrality argument, which allows users to report the mean and standard deviation of the posterior distribution, instead of the median and MAD:\n\nget_estimates(mod, centrality = \"mean\")\n\n         term      estimate     std.dev conf.level     conf.low    conf.high prior.distribution prior.location prior.scale group std.error statistic p.value\n1 (Intercept) -2.2040529913 0.588169690       0.95 -3.348469984 -1.043576095             normal        0.40625  1.24747729          NA        NA      NA\n2          hp  0.0006642347 0.001070159       0.95 -0.001467951  0.002783858             normal        0.00000  0.01819465          NA        NA      NA\n3        drat  0.6986792248 0.139201618       0.95  0.424442197  0.968567381             normal        0.00000  2.33313429          NA        NA      NA\n\nmodelsummary(mod, statistic = \"std.dev\", centrality = \"mean\")\n\n\n\n  \n\n  \n    \n      \n       \n      \n      \n       (1)\n      \n    \n    \n       \n    \n      (Intercept)\n      -2.204 \n    \n    \n                 \n      (0.588)\n    \n    \n      hp         \n      0.001  \n    \n    \n                 \n      (0.001)\n    \n    \n      drat       \n      0.699  \n    \n    \n                 \n      (0.139)\n    \n    \n      Num.Obs.   \n      32     \n    \n    \n      R2         \n      0.497  \n    \n    \n      R2 Adj.    \n      0.434  \n    \n    \n      Log.Lik.   \n      -12.042\n    \n    \n      ELPD       \n      -15.1  \n    \n    \n      ELPD s.e. 
\n 6.3 \n \n \n WAIC \n 29.9 \n \n \n RMSE \n 0.34 \n \n \n \n \n\n\n\nWe can also get additional test statistics using the test argument:\n\nget_estimates(mod, test = c(\"pd\", \"rope\"))\n\n term estimate mad conf.level conf.low conf.high pd rope.percentage prior.distribution prior.location prior.scale group std.error statistic p.value\n1 (Intercept) -2.2136927177 0.571213351 0.95 -3.348469984 -1.043576095 0.99950 0 normal 0.40625 1.24747729 NA NA NA\n2 hp 0.0006582996 0.001025414 0.95 -0.001467951 0.002783858 0.74325 1 normal 0.00000 0.01819465 NA NA NA\n3 drat 0.7028675788 0.139272808 0.95 0.424442197 0.968567381 1.00000 0 normal 0.00000 2.33313429 NA NA NA",
"crumbs": [
"Get started",
"Model Summaries"
diff --git a/vignettes/appearance.html b/vignettes/appearance.html
index 85426105..1a0f6df6 100644
--- a/vignettes/appearance.html
+++ b/vignettes/appearance.html
@@ -486,12 +486,12 @@