
Commit

Merge branch 'dev'
agranholm committed May 3, 2024
2 parents 9fa43ee + 039aa48 commit fa534be
Showing 91 changed files with 3,081 additions and 1,234 deletions.
10 changes: 5 additions & 5 deletions DESCRIPTION
@@ -1,7 +1,7 @@
Package: adaptr
Title: Adaptive Trial Simulator
Version: 1.3.2
Date: 2023-08-21
Version: 1.4.0
Date: 2024-05-03
Authors@R:
c(person("Anders", "Granholm",
email = "[email protected]",
@@ -22,8 +22,8 @@ Authors@R:
Description: Package that simulates adaptive (multi-arm, multi-stage) clinical
trials using adaptive stopping, adaptive arm dropping, and/or adaptive
randomisation. Developed as part of the INCEPT (Intensive Care Platform
Trial) project (<https://incept.dk/>), which is primarily supported by a
grant from Sygeforsikringen "danmark" (<https://www.sygeforsikring.dk/>).
Trial) project (<https://incept.dk/>), primarily supported by a grant
from Sygeforsikringen "danmark" (<https://www.sygeforsikring.dk/>).
License: GPL (>= 3)
Imports:
stats,
@@ -37,7 +37,7 @@ URL: https://inceptdk.github.io/adaptr/,
https://github.com/INCEPTdk/adaptr/,
https://incept.dk/
BugReports: https://github.com/INCEPTdk/adaptr/issues/
RoxygenNote: 7.2.3
RoxygenNote: 7.3.1
Suggests:
ggplot2,
covr,
1 change: 1 addition & 0 deletions NAMESPACE
@@ -25,6 +25,7 @@ export(setup_cluster)
export(setup_trial)
export(setup_trial_binom)
export(setup_trial_norm)
export(update_saved_calibration)
export(update_saved_trials)
import(parallel)
importFrom(stats,aggregate)
61 changes: 61 additions & 0 deletions NEWS.md
@@ -1,3 +1,64 @@
# adaptr 1.4.0

This is a minor release that implements new functionality and includes bug
fixes and updates to the documentation, argument checking, and test coverage.

### New features and major changes:

* Added the `rescale_probs` argument to the `setup_trial()` family of
functions, allowing automatic rescaling of fixed allocation probabilities
and/or minimum/maximum allocation probability limits when arms are dropped
in simulations of trial designs with `>2 arms`.

* The `extract_results()` function now also returns errors for each simulation
(in addition to squared errors), and the `check_performance()`,
`plot_convergence()`, and `summary()` functions (including their `print()`
methods) now calculate and present median absolute errors in addition to
root mean squared errors (see the sketch after this list).

* The `plot_metrics_ecdf()` function now supports plotting errors (raw,
squared, and absolute) and now accepts the additional arguments passed to
`extract_results()` that are used for arm selection in simulated trials not
stopped for superiority.

* Added the `update_saved_calibration()` function, which updates calibrated
trial objects (including the embedded trial specifications and results) saved
by `calibrate_trial()` under previous versions of the package.

* Rewrote the README and 'Overview' vignette to better reflect typical usage
and workflow.
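
The new error-based metrics can be inspected directly from simulation results.
A minimal sketch, assuming a small illustrative three-arm binomial design (all
values below are made up; the accepted values of the new `rescale_probs`
argument are not shown in this diff and are therefore not used here):

```r
library(adaptr)

# Small, illustrative three-arm binomial design (values are made up)
spec <- setup_trial_binom(
  arms = c("A", "B", "C"),
  true_ys = c(0.25, 0.20, 0.30),
  data_looks = seq(from = 300, to = 2000, by = 300)
)

# Run a handful of simulations (use more repetitions in real use)
res <- run_trials(spec, n_rep = 25, base_seed = 1)

# New in 1.4.0: extract_results() also returns raw errors ('err', 'err_te')
extr <- extract_results(res)
head(extr[, c("selected_arm", "err", "sq_err")])

# check_performance() now also reports 'mae' and 'mae_te' alongside
# 'rmse' and 'rmse_te'
check_performance(res)
```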

### Minor changes and bug fixes:

* The `setup_trial()` family of functions now stops with an error if fewer
than two `arms` are provided (see the sketch after this list).

* The `setup_trial()` family of functions now stops with an error if
`control_prob_fixed` is `"match"` and `fixed_probs` is provided for the
common control arm.

* Improved the error messages shown when the `true_ys` argument is missing in
`setup_trial_binom()` or when the `true_ys` or `sds` arguments are missing in
`setup_trial_norm()`.

* Changed the number of rows used in `plot_convergence()` and `plot_status()`
if the total number of plots is `<= 3` and `nrow` and `ncol` are `NULL`.

* Fixed a bug in `extract_results()` (and thus in all functions relying on it)
that caused arm selection to fail in inconclusive trial simulations stopped
for practical equivalence when more simulated patients were randomised than
included in the last analysis.

* Improved test coverage.

* Minor edits and clarification to package documentation.

* Added references to two open-access articles (with code) describing
simulation studies that used `adaptr` to assess the performance of adaptive
clinical trials under different follow-up/data collection lags
(<https://doi.org/10.1002/pst.2342>) and different sceptical priors
(<https://doi.org/10.1002/pst.2387>).
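
The stricter argument checking described above can be seen with a deliberately
invalid call; a sketch (the exact error message text is not part of this diff):

```r
library(adaptr)

# Fails in adaptr 1.4.0: fewer than two arms are no longer accepted
try(
  setup_trial_binom(arms = "A", true_ys = 0.25, data_looks = 500)
)
```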

# adaptr 1.3.2

This is a patch release with bug fixes and documentation updates.
16 changes: 11 additions & 5 deletions R/adaptr-package.R
@@ -1,9 +1,5 @@
#' adaptr: Adaptive Trial Simulator
#'
#' @docType package
#' @name adaptr-package
#' @aliases adaptr
#'
#' @description
#' \if{html}{
#' \figure{adaptr.png}{options: width="120" alt="logo"}
@@ -76,6 +72,16 @@
#'
#' [GitHub repository](https://github.com/INCEPTdk/adaptr/)
#'
#' **Examples of studies using `adaptr`:**
#'
#' Granholm A, Lange T, Harhay MO, Jensen AKG, Perner A, Møller MH, Kaas-Hansen
#' BS (2023). Effects of duration of follow-up and lag in data collection on the
#' performance of adaptive clinical trials. Pharm Stat. \doi{10.1002/pst.2342}
#'
#' Granholm A, Lange T, Harhay MO, Perner A, Møller MH, Kaas-Hansen BS (2024).
#' Effects of sceptical priors on the performance of adaptive clinical trials
#' with binary outcomes. Pharm Stat. \doi{10.1002/pst.2387}
#'
#'
#' @seealso
#' [setup_cluster()], [setup_trial()], [setup_trial_binom()],
@@ -84,4 +90,4 @@
#' [check_remaining_arms()], [plot_convergence()], [plot_metrics_ecdf()],
#' [print()], [plot_status()], [plot_history()].
#'
NULL
"_PACKAGE"
13 changes: 7 additions & 6 deletions R/calibrate_trial.R
@@ -555,11 +555,16 @@ calibrate_trial <- function(
# Check if a previous version should be loaded and returned (only if overwrite is FALSE)
if (ifelse(!is.null(path) & !overwrite, file.exists(path), FALSE)) {
prev <- readRDS(path)
# Compare previous/current trial_specs
# Compare previous/current objects
prev_spec_nofun <- prev$input_trial_spec
spec_nofun <- trial_spec
prev_spec_nofun$fun_y_gen <- prev_spec_nofun$fun_draws <- prev_spec_nofun$fun_raw_est <- spec_nofun$fun_y_gen <- spec_nofun$fun_draws <- spec_nofun$fun_raw_est <- NULL
if (!isTRUE(all.equal(prev_spec_nofun, spec_nofun)) |
# Compare
if ((prev$adaptr_version != .adaptr_version)) { # Check version
stop0("The object in path was created by a previous version of adaptr and ",
"cannot be used by this version of adaptr unless the object is updated. ",
"Type 'help(\"update_saved_calibration\")' for help on updating.")
} else if (!isTRUE(all.equal(prev_spec_nofun, spec_nofun)) | # Check spec besides version
!equivalent_funs(prev$input_trial_spec$fun_y_gen, trial_spec$fun_y_gen) |
!equivalent_funs(prev$input_trial_spec$fun_draws, trial_spec$fun_draws) |
!equivalent_funs(prev$input_trial_spec$fun_raw_est, trial_spec$fun_raw_est)) {
@@ -577,10 +582,6 @@
} else if (!equivalent_funs(fun, prev$fun)){
stop0("The calibration function (argument 'fun') in the object in path ",
"is different from the current calibration function.")
} else if ((prev$adaptr_version != .adaptr_version)) { # Check version
# Included for future use - later, stopping will only be needed when
# differences in behaviour of the used functions are required
stop0("the object in path was created by a previous version of adaptr.")
}
if (verbose) {
message(paste0(
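
With this change, `calibrate_trial()` checks the adaptr version of a previously
saved calibration object first and stops immediately if the object was created
by an older version, pointing to `update_saved_calibration()`. A sketch of the
suggested recovery path; the exact signature of `update_saved_calibration()` is
not shown in this diff, so passing only the file path is an assumption (and the
path itself is hypothetical):

```r
library(adaptr)

# Hypothetical path to a calibration object saved by an older adaptr version
path <- "binom_calibration.rds"

# Re-running the calibration with calibrate_trial(..., path = path) would now
# stop with the version error above; the suggested fix is to update the saved
# object first (assumed to take the file path) and then rerun the calibration
if (file.exists(path)) {
  update_saved_calibration(path)
}
```
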
15 changes: 11 additions & 4 deletions R/check_performance.R
@@ -63,9 +63,9 @@ calculate_idp <- function(sels, arms, true_ys, highest_is_best) {
#' SDs), `"err_mad"` (bootstrapped MAD-SDs, as described in [setup_trial()]
#' and [stats::mad()]), `"lo_ci"`, and `"hi_ci"`, the latter two corresponding
#' to the lower/upper limits of the percentile-based bootstrapped confidence
#' intervals. Bootstrap estimates are **not** calculated for the mininum
#' intervals. Bootstrap estimates are **not** calculated for the minimum
#' (`_p0`) and maximum values (`_p100`) of `size`, `sum_ys`, and `ratio_ys`,
#' as non-parametric bootstrapping for mininum/maximum values is not
#' as non-parametric bootstrapping for minimum/maximum values is not
#' sensible - bootstrap estimates for these values will be `NA`.\cr
#' The following performance metrics are calculated:
#' \itemize{
@@ -104,7 +104,10 @@ calculate_idp <- function(sels, arms, true_ys, highest_is_best) {
#' selection, according to the specified selection strategy. Contains one
#' element per `arm`, named `prob_select_arm_<arm name>` and
#' `prob_select_none` for the probability of selecting no arm.
#' \item `rmse`, `rmse_te`: the root mean squared error of the estimates for
#' \item `rmse`, `rmse_te`: the root mean squared errors of the estimates for
#' the selected arm and for the treatment effect, as described in
#' [extract_results()].
#' \item `mae`, `mae_te`: the median absolute errors of the estimates for
#' the selected arm and for the treatment effect, as described in
#' [extract_results()].
#' \item `idp`: the ideal design percentage (IDP; 0-100%), see **Details**.
@@ -228,7 +231,7 @@ check_performance <- function(object, select_strategy = "control if available",
"ratio_ys_p25", "ratio_ys_p75", "ratio_ys_p0", "ratio_ys_p100",
"prob_conclusive", "prob_superior", "prob_equivalence",
"prob_futility", "prob_max", paste0("prob_select_", c(paste0("arm_", arms), "none")),
"rmse", "rmse_te", "idp"),
"rmse", "rmse_te", "mae", "mae_te", "idp"),
est = NA, err_sd = NA, err_mad = NA, lo_ci = NA, hi_ci = NA)

# Restrict simulations summarised
@@ -255,6 +258,8 @@ check_performance <- function(object, select_strategy = "control if available",
mean(is.na(extr_res$selected_arm[restrict_idx])),
sqrt(mean(extr_res$sq_err[restrict_idx], na.rm = TRUE)) %f|% NA,
sqrt(mean(extr_res$sq_err_te[restrict_idx], na.rm = TRUE)) %f|% NA,
median(abs(extr_res$err[restrict_idx]), na.rm = TRUE) %f|% NA,
median(abs(extr_res$err_te[restrict_idx]), na.rm = TRUE) %f|% NA,
calculate_idp(extr_res$selected_arm[restrict_idx], arms, true_ys, highest_is_best) %f|% NA)

# Simply object or do bootstrapping
@@ -309,6 +314,8 @@ check_performance <- function(object, select_strategy = "control if available",
mean(is.na(extr_boot$selected_arm[restrict_idx])),
sqrt(mean(extr_boot$sq_err[restrict_idx], na.rm = TRUE)) %f|% NA,
sqrt(mean(extr_boot$sq_err_te[restrict_idx], na.rm = TRUE)) %f|% NA,
median(abs(extr_boot$err[restrict_idx]), na.rm = TRUE) %f|% NA,
median(abs(extr_boot$err_te[restrict_idx]), na.rm = TRUE) %f|% NA,
calculate_idp(extr_boot$selected_arm[restrict_idx], arms, true_ys, highest_is_best) %f|% NA)
}
boot_mat
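
The new `mae`/`mae_te` rows are computed by the lines added above as a median
of absolute errors, alongside the existing root mean squared errors. A
standalone sketch of the two summary statistics on a made-up error vector:

```r
# Made-up per-simulation errors (estimated effect minus true effect)
err <- c(0.02, -0.01, 0.05, -0.03, 0.00, 0.04)

# Root mean squared error, as in the existing 'rmse'/'rmse_te' metrics
rmse <- sqrt(mean(err^2, na.rm = TRUE))

# Median absolute error, as in the new 'mae'/'mae_te' metrics
mae <- median(abs(err), na.rm = TRUE)

round(c(rmse = rmse, mae = mae), 4)
```
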
2 changes: 1 addition & 1 deletion R/check_remaining_arms.R
@@ -6,7 +6,7 @@
#' with a common `control`) across multiple simulated trial results. The
#' function supplements the [extract_results()], [check_performance()], and
#' [summary()] functions, and is especially useful for designs with `> 2` arms,
#' where it provides details that the other functionality mentioned do not.
#' where it provides details that the other functions mentioned do not.
#'
#' @param object `trial_results` object, output from the [run_trials()]
#' function.
46 changes: 27 additions & 19 deletions R/extract_results.R
@@ -39,7 +39,9 @@ extract_results_batch <- function(trial_results,
final_status = vapply_str(1:n_rep, function(x) trial_results[[x]]$final_status),
superior_arm = NA,
selected_arm = NA,
err = NA,
sq_err = NA,
err_te = NA,
sq_err_te = NA,
stringsAsFactors = FALSE)

@@ -59,7 +61,7 @@
# Do not consider arms dropped for equivalence before final stop
if (cur_status == "equivalence") { # Stopped for equivalence
# Only consider equivalent arms declared equivalent at final look
tmp_sel <- tmp_sel[tmp_sel$final_status %in% c("equivalence", "control") & tmp_sel$status_look == df$final_n[[i]], ]
tmp_sel <- tmp_sel[tmp_sel$final_status %in% c("equivalence", "control") & tmp_sel$status_look == trial_results[[i]]$followed_n, ]
} else {
# Only consider arms not stopped for equivalence
tmp_sel <- tmp_sel[tmp_sel$final_status != "equivalence", ]
@@ -101,10 +103,12 @@
selected_index <- which(tmp_res$arms == cur_select)
selected_est_y <- tmp_res[[which_ests]][selected_index]
selected_true_y <- tmp_res$true_ys[selected_index]
df$err[i] <- selected_est_y - selected_true_y
df$sq_err[i] <- (selected_est_y - selected_true_y)^2
if (!is.null(te_comp)){
if (cur_select != te_comp){
te_comp_est_y <- tmp_res[[which_ests]][te_comp_index]
df$err_te[i] <- (selected_est_y - te_comp_est_y) - (selected_true_y - te_comp_true_y)
df$sq_err_te[i] <- ( (selected_est_y - te_comp_est_y) - (selected_true_y - te_comp_true_y) )^2
}
}
@@ -173,18 +177,18 @@ extract_results_batch <- function(trial_results,
#' @param te_comp character string, treatment-effect comparator. Can be either
#' `NULL` (the default) in which case the **first** `control` arm is used for
#' trial designs with a common control arm, or a string naming a single trial
#' `arm`. Will be used when calculating `sq_err_te` (the squared error of the
#' treatment effect comparing the selected arm to the comparator arm, as
#' described below).
#' `arm`. Will be used when calculating `err_te` and `sq_err_te` (the error
#' and the squared error of the treatment effect comparing the selected arm to
#' the comparator arm, as described below).
#' @param raw_ests single logical. If `FALSE` (default), the
#' posterior estimates (`post_ests` or `post_ests_all`, see [setup_trial()]
#' and [run_trial()]) will be used to calculate `sq_err` (the squared error of
#' the estimated compared to the specified effect in the selected arm) and
#' `sq_err_te` (the squared error of the treatment effect comparing the
#' selected arm to the comparator arm, as described for `te_comp` and below).
#' If `TRUE`, the raw estimates (`raw_ests` or `raw_ests_all`, see
#' [setup_trial()] and [run_trial()]) will be used instead of the posterior
#' estimates.
#' and [run_trial()]) will be used to calculate `err` and `sq_err` (the error
#' and the squared error of the estimated compared to the specified effect in
#' the selected arm) and `err_te` and `sq_err_te` (the error and the squared
#' error of the treatment effect comparing the selected arm to the comparator
#' arm, as described for `te_comp` and below). If `TRUE`, the raw estimates
#' (`raw_ests` or `raw_ests_all`, see [setup_trial()] and [run_trial()]) will
#' be used instead of the posterior estimates.
#' @param final_ests single logical. If `TRUE` (recommended) the final estimates
#' calculated using outcome data from all patients randomised when trials are
#' stopped are used (`post_ests_all` or `raw_ests_all`, see [setup_trial()]
@@ -231,17 +235,21 @@ extract_results_batch <- function(trial_results,
#' \item `selected_arm`: the final selected arm (as described above). Will
#' correspond to the `superior_arm` in simulations stopped for superiority
#' and be `NA` if no arm is selected. See `select_strategy` above.
#' \item `err`: the error of the estimate in the selected arm,
#' calculated as `estimated effect - true effect` for the selected
#' arm.
#' \item `sq_err:` the squared error of the estimate in the selected arm,
#' calculated as `(estimated effect - true effect)^2` for the selected
#' arms.
#' \item `sq_err_te`: the squared error of the treatment effect comparing
#' the selected arm to the comparator arm (as specified in `te_comp`).
#' Calculated as:\cr
#' `((estimated effect in the selected arm - estimated effect in the comparator arm) -`
#' `(true effect in the selected arm - true effect in the comparator arm))^2` \cr
#' Will be `NA` for simulations without a selected arm, with no
#' calculated as `err^2` for the selected arm, with `err` defined above.
#' \item `err_te`: the error of the treatment effect comparing the selected
#' arm to the comparator arm (as specified in `te_comp`). Calculated as:\cr
#' `(estimated effect in the selected arm - estimated effect in the comparator arm) -`
#' `(true effect in the selected arm - true effect in the comparator arm)`
#' \cr Will be `NA` for simulations without a selected arm, with no
#' comparator specified (see `te_comp` above), and when the selected arm
#' is the comparator arm.
#' \item `sq_err_te`: the squared error of the treatment effect comparing
#' the selected arm to the comparator arm (as specified in `te_comp`),
#' calculated as `err_te^2`, with `err_te` defined above.
#' }
#'
#' @examples
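
The four error columns documented above follow directly from the estimated and
true effects of the selected arm and the comparator; a small worked sketch with
made-up numbers (arm roles and values are illustrative):

```r
# Made-up values for one simulation: estimates and true effects for the
# selected arm and the treatment-effect comparator arm (te_comp)
est_selected  <- 0.22   # estimated effect in the selected arm
true_selected <- 0.20   # true effect in the selected arm
est_comp      <- 0.27   # estimated effect in the comparator arm
true_comp     <- 0.25   # true effect in the comparator arm

err    <- est_selected - true_selected                                # ~0.02
sq_err <- err^2                                                       # ~4e-04

err_te    <- (est_selected - est_comp) - (true_selected - true_comp)  # ~0
sq_err_te <- err_te^2                                                 # ~0

round(c(err = err, sq_err = sq_err, err_te = err_te, sq_err_te = sq_err_te), 6)
```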