From 9716122b9489a6badf82afdfcfa0bbbe66eab76f Mon Sep 17 00:00:00 2001
From: agranholm
Date: Fri, 3 May 2024 12:31:39 +0000
Subject: [PATCH] Deploying to gh-pages from @ INCEPTdk/adaptr@6204a660fcef665e073bbac93dc6bce528ea0721 🚀
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 articles/Overview.html | 4 ++--
 pkgdown.yml | 2 +-
 reference/summary.html | 2 +-
 search.json | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/articles/Overview.html b/articles/Overview.html
index 2e9cc4e..837b259 100644
--- a/articles/Overview.html
+++ b/articles/Overview.html
@@ -287,7 +287,7 @@
 Calibration#> Calibration/simulation details:
 #> * Total evaluations: 4 (previous + grid + iterations)
 #> * Repetitions: 1000
-#> * Calibration time: 54.9 secs
+#> * Calibration time: 53.9 secs
 #> * Base random seed: 4131
 #>
 #> See 'help("calibrate_trial")' for details.
@@ -410,7 +410,7 @@
 Summarising results#> * Ideal design percentage: not estimable
 #>
 #> Simulation details:
-#> * Simulation time: 20.4 secs
+#> * Simulation time: 20.1 secs
 #> * Base random seed: 4131
 #> * Credible interval width: 95%
 #> * Number of posterior draws: 5000
diff --git a/pkgdown.yml b/pkgdown.yml
index 03a3578..add5940 100644
--- a/pkgdown.yml
+++ b/pkgdown.yml
@@ -5,7 +5,7 @@ articles:
   Advanced-example: Advanced-example.html
   Basic-examples: Basic-examples.html
   Overview: Overview.html
-last_built: 2024-05-03T09:13Z
+last_built: 2024-05-03T12:29Z
 urls:
   reference: https://inceptdk.github.io/adaptr/reference
   article: https://inceptdk.github.io/adaptr/articles
diff --git a/reference/summary.html b/reference/summary.html
index 4e0ca0d..50b2113 100644
--- a/reference/summary.html
+++ b/reference/summary.html
@@ -288,7 +288,7 @@
Examples#> * Ideal design percentage: 70.4% #> #> Simulation details: -#> * Simulation time: 0.792 secs +#> * Simulation time: 0.695 secs #> * Base random seed: 12345 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 diff --git a/search.json b/search.json index 17b7ab9..165aeb4 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"GNU General Public License","title":"GNU General Public License","text":"Version 3, 29 June 2007Copyright © 2007 Free Software Foundation, Inc.  Everyone permitted copy distribute verbatim copies license document, changing allowed.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"preamble","dir":"","previous_headings":"","what":"Preamble","title":"GNU General Public License","text":"GNU General Public License free, copyleft license software kinds works. licenses software practical works designed take away freedom share change works. contrast, GNU General Public License intended guarantee freedom share change versions program–make sure remains free software users. , Free Software Foundation, use GNU General Public License software; applies also work released way authors. can apply programs, . speak free software, referring freedom, price. General Public Licenses designed make sure freedom distribute copies free software (charge wish), receive source code can get want , can change software use pieces new free programs, know can things. protect rights, need prevent others denying rights asking surrender rights. Therefore, certain responsibilities distribute copies software, modify : responsibilities respect freedom others. example, distribute copies program, whether gratis fee, must pass recipients freedoms received. must make sure , , receive can get source code. must show terms know rights. Developers use GNU GPL protect rights two steps: (1) assert copyright software, (2) offer License giving legal permission copy, distribute /modify . developers’ authors’ protection, GPL clearly explains warranty free software. users’ authors’ sake, GPL requires modified versions marked changed, problems attributed erroneously authors previous versions. devices designed deny users access install run modified versions software inside , although manufacturer can . fundamentally incompatible aim protecting users’ freedom change software. systematic pattern abuse occurs area products individuals use, precisely unacceptable. Therefore, designed version GPL prohibit practice products. problems arise substantially domains, stand ready extend provision domains future versions GPL, needed protect freedom users. Finally, every program threatened constantly software patents. States allow patents restrict development use software general-purpose computers, , wish avoid special danger patents applied free program make effectively proprietary. prevent , GPL assures patents used render program non-free. precise terms conditions copying, distribution modification follow.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_0-definitions","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"0. Definitions","title":"GNU General Public License","text":"“License” refers version 3 GNU General Public License. “Copyright” also means copyright-like laws apply kinds works, semiconductor masks. “Program” refers copyrightable work licensed License. licensee addressed “”. 
“Licensees” “recipients” may individuals organizations. “modify” work means copy adapt part work fashion requiring copyright permission, making exact copy. resulting work called “modified version” earlier work work “based ” earlier work. “covered work” means either unmodified Program work based Program. “propagate” work means anything , without permission, make directly secondarily liable infringement applicable copyright law, except executing computer modifying private copy. Propagation includes copying, distribution (without modification), making available public, countries activities well. “convey” work means kind propagation enables parties make receive copies. Mere interaction user computer network, transfer copy, conveying. interactive user interface displays “Appropriate Legal Notices” extent includes convenient prominently visible feature (1) displays appropriate copyright notice, (2) tells user warranty work (except extent warranties provided), licensees may convey work License, view copy License. interface presents list user commands options, menu, prominent item list meets criterion.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_1-source-code","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"1. Source Code","title":"GNU General Public License","text":"“source code” work means preferred form work making modifications . “Object code” means non-source form work. “Standard Interface” means interface either official standard defined recognized standards body, , case interfaces specified particular programming language, one widely used among developers working language. “System Libraries” executable work include anything, work whole, () included normal form packaging Major Component, part Major Component, (b) serves enable use work Major Component, implement Standard Interface implementation available public source code form. “Major Component”, context, means major essential component (kernel, window system, ) specific operating system () executable work runs, compiler used produce work, object code interpreter used run . “Corresponding Source” work object code form means source code needed generate, install, (executable work) run object code modify work, including scripts control activities. However, include work’s System Libraries, general-purpose tools generally available free programs used unmodified performing activities part work. example, Corresponding Source includes interface definition files associated source files work, source code shared libraries dynamically linked subprograms work specifically designed require, intimate data communication control flow subprograms parts work. Corresponding Source need include anything users can regenerate automatically parts Corresponding Source. Corresponding Source work source code form work.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_2-basic-permissions","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"2. Basic Permissions","title":"GNU General Public License","text":"rights granted License granted term copyright Program, irrevocable provided stated conditions met. License explicitly affirms unlimited permission run unmodified Program. output running covered work covered License output, given content, constitutes covered work. License acknowledges rights fair use equivalent, provided copyright law. may make, run propagate covered works convey, without conditions long license otherwise remains force. 
may convey covered works others sole purpose make modifications exclusively , provide facilities running works, provided comply terms License conveying material control copyright. thus making running covered works must exclusively behalf, direction control, terms prohibit making copies copyrighted material outside relationship . Conveying circumstances permitted solely conditions stated . Sublicensing allowed; section 10 makes unnecessary.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_3-protecting-users-legal-rights-from-anti-circumvention-law","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"3. Protecting Users’ Legal Rights From Anti-Circumvention Law","title":"GNU General Public License","text":"covered work shall deemed part effective technological measure applicable law fulfilling obligations article 11 WIPO copyright treaty adopted 20 December 1996, similar laws prohibiting restricting circumvention measures. convey covered work, waive legal power forbid circumvention technological measures extent circumvention effected exercising rights License respect covered work, disclaim intention limit operation modification work means enforcing, work’s users, third parties’ legal rights forbid circumvention technological measures.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_4-conveying-verbatim-copies","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"4. Conveying Verbatim Copies","title":"GNU General Public License","text":"may convey verbatim copies Program’s source code receive , medium, provided conspicuously appropriately publish copy appropriate copyright notice; keep intact notices stating License non-permissive terms added accord section 7 apply code; keep intact notices absence warranty; give recipients copy License along Program. may charge price price copy convey, may offer support warranty protection fee.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_5-conveying-modified-source-versions","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"5. Conveying Modified Source Versions","title":"GNU General Public License","text":"may convey work based Program, modifications produce Program, form source code terms section 4, provided also meet conditions: ) work must carry prominent notices stating modified , giving relevant date. b) work must carry prominent notices stating released License conditions added section 7. requirement modifies requirement section 4 “keep intact notices”. c) must license entire work, whole, License anyone comes possession copy. License therefore apply, along applicable section 7 additional terms, whole work, parts, regardless packaged. License gives permission license work way, invalidate permission separately received . d) work interactive user interfaces, must display Appropriate Legal Notices; however, Program interactive interfaces display Appropriate Legal Notices, work need make . compilation covered work separate independent works, nature extensions covered work, combined form larger program, volume storage distribution medium, called “aggregate” compilation resulting copyright used limit access legal rights compilation’s users beyond individual works permit. Inclusion covered work aggregate cause License apply parts aggregate.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_6-conveying-non-source-forms","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"6. 
Conveying Non-Source Forms","title":"GNU General Public License","text":"may convey covered work object code form terms sections 4 5, provided also convey machine-readable Corresponding Source terms License, one ways: ) Convey object code , embodied , physical product (including physical distribution medium), accompanied Corresponding Source fixed durable physical medium customarily used software interchange. b) Convey object code , embodied , physical product (including physical distribution medium), accompanied written offer, valid least three years valid long offer spare parts customer support product model, give anyone possesses object code either (1) copy Corresponding Source software product covered License, durable physical medium customarily used software interchange, price reasonable cost physically performing conveying source, (2) access copy Corresponding Source network server charge. c) Convey individual copies object code copy written offer provide Corresponding Source. alternative allowed occasionally noncommercially, received object code offer, accord subsection 6b. d) Convey object code offering access designated place (gratis charge), offer equivalent access Corresponding Source way place charge. need require recipients copy Corresponding Source along object code. place copy object code network server, Corresponding Source may different server (operated third party) supports equivalent copying facilities, provided maintain clear directions next object code saying find Corresponding Source. Regardless server hosts Corresponding Source, remain obligated ensure available long needed satisfy requirements. e) Convey object code using peer--peer transmission, provided inform peers object code Corresponding Source work offered general public charge subsection 6d. separable portion object code, whose source code excluded Corresponding Source System Library, need included conveying object code work. “User Product” either (1) “consumer product”, means tangible personal property normally used personal, family, household purposes, (2) anything designed sold incorporation dwelling. determining whether product consumer product, doubtful cases shall resolved favor coverage. particular product received particular user, “normally used” refers typical common use class product, regardless status particular user way particular user actually uses, expects expected use, product. product consumer product regardless whether product substantial commercial, industrial non-consumer uses, unless uses represent significant mode use product. “Installation Information” User Product means methods, procedures, authorization keys, information required install execute modified versions covered work User Product modified version Corresponding Source. information must suffice ensure continued functioning modified object code case prevented interfered solely modification made. convey object code work section , , specifically use , User Product, conveying occurs part transaction right possession use User Product transferred recipient perpetuity fixed term (regardless transaction characterized), Corresponding Source conveyed section must accompanied Installation Information. requirement apply neither third party retains ability install modified object code User Product (example, work installed ROM). requirement provide Installation Information include requirement continue provide support service, warranty, updates work modified installed recipient, User Product modified installed. 
Access network may denied modification materially adversely affects operation network violates rules protocols communication across network. Corresponding Source conveyed, Installation Information provided, accord section must format publicly documented (implementation available public source code form), must require special password key unpacking, reading copying.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_7-additional-terms","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"7. Additional Terms","title":"GNU General Public License","text":"“Additional permissions” terms supplement terms License making exceptions one conditions. Additional permissions applicable entire Program shall treated though included License, extent valid applicable law. additional permissions apply part Program, part may used separately permissions, entire Program remains governed License without regard additional permissions. convey copy covered work, may option remove additional permissions copy, part . (Additional permissions may written require removal certain cases modify work.) may place additional permissions material, added covered work, can give appropriate copyright permission. Notwithstanding provision License, material add covered work, may (authorized copyright holders material) supplement terms License terms: ) Disclaiming warranty limiting liability differently terms sections 15 16 License; b) Requiring preservation specified reasonable legal notices author attributions material Appropriate Legal Notices displayed works containing ; c) Prohibiting misrepresentation origin material, requiring modified versions material marked reasonable ways different original version; d) Limiting use publicity purposes names licensors authors material; e) Declining grant rights trademark law use trade names, trademarks, service marks; f) Requiring indemnification licensors authors material anyone conveys material (modified versions ) contractual assumptions liability recipient, liability contractual assumptions directly impose licensors authors. non-permissive additional terms considered “restrictions” within meaning section 10. Program received , part , contains notice stating governed License along term restriction, may remove term. license document contains restriction permits relicensing conveying License, may add covered work material governed terms license document, provided restriction survive relicensing conveying. add terms covered work accord section, must place, relevant source files, statement additional terms apply files, notice indicating find applicable terms. Additional terms, permissive non-permissive, may stated form separately written license, stated exceptions; requirements apply either way.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_8-termination","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"8. Termination","title":"GNU General Public License","text":"may propagate modify covered work except expressly provided License. attempt otherwise propagate modify void, automatically terminate rights License (including patent licenses granted third paragraph section 11). However, cease violation License, license particular copyright holder reinstated () provisionally, unless copyright holder explicitly finally terminates license, (b) permanently, copyright holder fails notify violation reasonable means prior 60 days cessation. 
Moreover, license particular copyright holder reinstated permanently copyright holder notifies violation reasonable means, first time received notice violation License (work) copyright holder, cure violation prior 30 days receipt notice. Termination rights section terminate licenses parties received copies rights License. rights terminated permanently reinstated, qualify receive new licenses material section 10.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_9-acceptance-not-required-for-having-copies","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"9. Acceptance Not Required for Having Copies","title":"GNU General Public License","text":"required accept License order receive run copy Program. Ancillary propagation covered work occurring solely consequence using peer--peer transmission receive copy likewise require acceptance. However, nothing License grants permission propagate modify covered work. actions infringe copyright accept License. Therefore, modifying propagating covered work, indicate acceptance License .","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_10-automatic-licensing-of-downstream-recipients","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"10. Automatic Licensing of Downstream Recipients","title":"GNU General Public License","text":"time convey covered work, recipient automatically receives license original licensors, run, modify propagate work, subject License. responsible enforcing compliance third parties License. “entity transaction” transaction transferring control organization, substantially assets one, subdividing organization, merging organizations. propagation covered work results entity transaction, party transaction receives copy work also receives whatever licenses work party’s predecessor interest give previous paragraph, plus right possession Corresponding Source work predecessor interest, predecessor can get reasonable efforts. may impose restrictions exercise rights granted affirmed License. example, may impose license fee, royalty, charge exercise rights granted License, may initiate litigation (including cross-claim counterclaim lawsuit) alleging patent claim infringed making, using, selling, offering sale, importing Program portion .","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_11-patents","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"11. Patents","title":"GNU General Public License","text":"“contributor” copyright holder authorizes use License Program work Program based. work thus licensed called contributor’s “contributor version”. contributor’s “essential patent claims” patent claims owned controlled contributor, whether already acquired hereafter acquired, infringed manner, permitted License, making, using, selling contributor version, include claims infringed consequence modification contributor version. purposes definition, “control” includes right grant patent sublicenses manner consistent requirements License. contributor grants non-exclusive, worldwide, royalty-free patent license contributor’s essential patent claims, make, use, sell, offer sale, import otherwise run, modify propagate contents contributor version. following three paragraphs, “patent license” express agreement commitment, however denominated, enforce patent (express permission practice patent covenant sue patent infringement). “grant” patent license party means make agreement commitment enforce patent party. 
convey covered work, knowingly relying patent license, Corresponding Source work available anyone copy, free charge terms License, publicly available network server readily accessible means, must either (1) cause Corresponding Source available, (2) arrange deprive benefit patent license particular work, (3) arrange, manner consistent requirements License, extend patent license downstream recipients. “Knowingly relying” means actual knowledge , patent license, conveying covered work country, recipient’s use covered work country, infringe one identifiable patents country reason believe valid. , pursuant connection single transaction arrangement, convey, propagate procuring conveyance , covered work, grant patent license parties receiving covered work authorizing use, propagate, modify convey specific copy covered work, patent license grant automatically extended recipients covered work works based . patent license “discriminatory” include within scope coverage, prohibits exercise , conditioned non-exercise one rights specifically granted License. may convey covered work party arrangement third party business distributing software, make payment third party based extent activity conveying work, third party grants, parties receive covered work , discriminatory patent license () connection copies covered work conveyed (copies made copies), (b) primarily connection specific products compilations contain covered work, unless entered arrangement, patent license granted, prior 28 March 2007. Nothing License shall construed excluding limiting implied license defenses infringement may otherwise available applicable patent law.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_12-no-surrender-of-others-freedom","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"12. No Surrender of Others’ Freedom","title":"GNU General Public License","text":"conditions imposed (whether court order, agreement otherwise) contradict conditions License, excuse conditions License. convey covered work satisfy simultaneously obligations License pertinent obligations, consequence may convey . example, agree terms obligate collect royalty conveying convey Program, way satisfy terms License refrain entirely conveying Program.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_13-use-with-the-gnu-affero-general-public-license","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"13. Use with the GNU Affero General Public License","title":"GNU General Public License","text":"Notwithstanding provision License, permission link combine covered work work licensed version 3 GNU Affero General Public License single combined work, convey resulting work. terms License continue apply part covered work, special requirements GNU Affero General Public License, section 13, concerning interaction network apply combination .","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_14-revised-versions-of-this-license","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"14. Revised Versions of this License","title":"GNU General Public License","text":"Free Software Foundation may publish revised /new versions GNU General Public License time time. new versions similar spirit present version, may differ detail address new problems concerns. version given distinguishing version number. 
Program specifies certain numbered version GNU General Public License “later version” applies , option following terms conditions either numbered version later version published Free Software Foundation. Program specify version number GNU General Public License, may choose version ever published Free Software Foundation. Program specifies proxy can decide future versions GNU General Public License can used, proxy’s public statement acceptance version permanently authorizes choose version Program. Later license versions may give additional different permissions. However, additional obligations imposed author copyright holder result choosing follow later version.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_15-disclaimer-of-warranty","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"15. Disclaimer of Warranty","title":"GNU General Public License","text":"WARRANTY PROGRAM, EXTENT PERMITTED APPLICABLE LAW. EXCEPT OTHERWISE STATED WRITING COPYRIGHT HOLDERS /PARTIES PROVIDE PROGRAM “” WITHOUT WARRANTY KIND, EITHER EXPRESSED IMPLIED, INCLUDING, LIMITED , IMPLIED WARRANTIES MERCHANTABILITY FITNESS PARTICULAR PURPOSE. ENTIRE RISK QUALITY PERFORMANCE PROGRAM . PROGRAM PROVE DEFECTIVE, ASSUME COST NECESSARY SERVICING, REPAIR CORRECTION.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_16-limitation-of-liability","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"16. Limitation of Liability","title":"GNU General Public License","text":"EVENT UNLESS REQUIRED APPLICABLE LAW AGREED WRITING COPYRIGHT HOLDER, PARTY MODIFIES /CONVEYS PROGRAM PERMITTED , LIABLE DAMAGES, INCLUDING GENERAL, SPECIAL, INCIDENTAL CONSEQUENTIAL DAMAGES ARISING USE INABILITY USE PROGRAM (INCLUDING LIMITED LOSS DATA DATA RENDERED INACCURATE LOSSES SUSTAINED THIRD PARTIES FAILURE PROGRAM OPERATE PROGRAMS), EVEN HOLDER PARTY ADVISED POSSIBILITY DAMAGES.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_17-interpretation-of-sections-15-and-16","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"17. Interpretation of Sections 15 and 16","title":"GNU General Public License","text":"disclaimer warranty limitation liability provided given local legal effect according terms, reviewing courts shall apply local law closely approximates absolute waiver civil liability connection Program, unless warranty assumption liability accompanies copy Program return fee. END TERMS CONDITIONS","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"how-to-apply-these-terms-to-your-new-programs","dir":"","previous_headings":"","what":"How to Apply These Terms to Your New Programs","title":"GNU General Public License","text":"develop new program, want greatest possible use public, best way achieve make free software everyone can redistribute change terms. , attach following notices program. safest attach start source file effectively state exclusion warranty; file least “copyright” line pointer full notice found. Also add information contact electronic paper mail. program terminal interaction, make output short notice like starts interactive mode: hypothetical commands show w show c show appropriate parts General Public License. course, program’s commands might different; GUI interface, use “box”. also get employer (work programmer) school, , sign “copyright disclaimer” program, necessary. information , apply follow GNU GPL, see . GNU General Public License permit incorporating program proprietary programs. 
program subroutine library, may consider useful permit linking proprietary applications library. want , use GNU Lesser General Public License instead License. first, please read .","code":" Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free software, and you are welcome to redistribute it under certain conditions; type 'show c' for details."},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"preamble","dir":"Articles","previous_headings":"","what":"Preamble","title":"Advanced example","text":"example, set trial three arms, one common control, undesirable binary outcome (e.g., mortality). examples creates custom version setup_trial_binom() function using non-flat priors event rates arm (setup_trial_binom() uses flat priors), returning event probabilities percentages (instead fractions), illustrate use custom function summarise raw outcome data. setup_trial() attempts validate custom functions assessing output trial specification, edge cases might elude validation. , therefore, urge users specifying custom functions carefully test complex functions actual use. go trouble writing nice set functions generating outcomes sampling posterior distributions, please consider adding package. way, others can benefit work helps validate . See GitHub page Contributing. Although user-written custom functions depend adaptr package, first thing load package: –set global seed ensure reproducible results vignette: define functions (illustration purposes sanity check) print outputs. , , invoked setup_trial() (final code chunk vignette).","code":"library(adaptr) #> Loading 'adaptr' package v1.4.0. #> For instructions, type 'help(\"adaptr\")' #> or see https://inceptdk.github.io/adaptr/. set.seed(89)"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"functions-for-generating-outcomes","dir":"Articles","previous_headings":"","what":"Functions for generating outcomes","title":"Advanced example","text":"function take single argument (allocs), character vector containing allocations (names trial arms) patients included since last adaptive analysis. function must return numeric vector, regardless actual outcome type (, e.g., categorical outcomes must encoded numeric). returned numeric vector must length, values order allocs. , third element allocs specifies allocation third patient randomised since last adaptive analysis, (correspondingly) third element returned vector patient’s outcome. sounds complicated, becomes clearer actually specify function (essentially re-implementation built-function used setup_trial_binom()): illustrate function works, first generate random allocations 50 patients using equal allocation probabilities, default behaviour sample(). 
enclosing call parentheses, resulting allocations printed: Next, generate random outcomes patients:","code":"get_ys_binom_custom <- function(allocs) { # Binary outcome coded as 0/1 - prepare returned vector of appropriate length y <- integer(length(allocs)) # Specify trial arms and true event probabilities for each arm # These values should exactly match those supplied to setup_trial # NB! This is not validated, so this is the user's responsibility arms <- c(\"Control\", \"Experimental arm A\", \"Experimental arm B\") true_ys <- c(0.25, 0.27, 0.20) # Loop through arms and generate outcomes for (i in seq_along(arms)) { # Indices of patients allocated to the current arm ii <- which(allocs == arms[i]) # Generate outcomes for all patients allocated to current arm y[ii] <- rbinom(length(ii), 1, true_ys[i]) } # Return outcome vector y } (allocs <- sample(c(\"Control\", \"Experimental arm A\", \"Experimental arm B\"), size = 50, replace = TRUE)) #> [1] \"Control\" \"Experimental arm B\" \"Experimental arm B\" #> [4] \"Experimental arm B\" \"Experimental arm B\" \"Experimental arm A\" #> [7] \"Experimental arm A\" \"Experimental arm A\" \"Experimental arm A\" #> [10] \"Experimental arm A\" \"Experimental arm B\" \"Experimental arm B\" #> [13] \"Experimental arm B\" \"Control\" \"Experimental arm B\" #> [16] \"Control\" \"Experimental arm A\" \"Experimental arm A\" #> [19] \"Experimental arm A\" \"Experimental arm B\" \"Control\" #> [22] \"Control\" \"Experimental arm B\" \"Control\" #> [25] \"Experimental arm A\" \"Control\" \"Experimental arm A\" #> [28] \"Control\" \"Experimental arm B\" \"Experimental arm B\" #> [31] \"Control\" \"Experimental arm B\" \"Control\" #> [34] \"Control\" \"Experimental arm A\" \"Experimental arm B\" #> [37] \"Control\" \"Experimental arm A\" \"Experimental arm A\" #> [40] \"Experimental arm A\" \"Experimental arm B\" \"Experimental arm A\" #> [43] \"Control\" \"Experimental arm B\" \"Control\" #> [46] \"Control\" \"Control\" \"Control\" #> [49] \"Experimental arm A\" \"Experimental arm A\" (ys <- get_ys_binom_custom(allocs)) #> [1] 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 #> [39] 0 0 0 1 1 0 0 1 0 1 1 0"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"functions-for-drawing-posterior-samples","dir":"Articles","previous_headings":"","what":"Functions for drawing posterior samples","title":"Advanced example","text":"setup_trial_binom() function uses beta-binomial conjugate prior models arm, beta(1, 1) priors. priors uniform (≈ “non-informative”) probability scale corresponds amount information provided 2 patients (1 1 without event), described greater detail , e.g., Ryan et al, 2019 (10.1136/bmjopen-2018-024256). custom function generating posterior draws also uses beta-binomial conjugate prior models, informative priors. Informative priors may prevent undue influence random, early fluctuations trial pulling posterior estimates closer prior limited data available. 
seek relatively weakly informative priors centred previous knowledge (beliefs), can actually define function generating posterior draws based informative priors, need derive prior.","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"informative-priors","dir":"Articles","previous_headings":"Functions for drawing posterior samples","what":"Informative priors","title":"Advanced example","text":"assume prior knowledge corresponding belief best estimate true event probability control arm 0.25 (25%), true event probability 0.15 0.35 (15-35%) 95% probability. mean beta distribution simply [number events]/[number patients]. derive beta distribution reflects prior belief, use find_beta_params(), helper function included adaptr (see ?find_beta_params details): thus see prior belief prior roughly corresponds previous randomisation 60 patients 15 (alpha) experienced event 45 (beta) . Even though may expect event probabilities differ non-control arms, example consider prior appropriate arms consider event probabilities smaller/larger represented prior unlikely. , illustrate effects prior compared default beta(1, 1) prior used setup_trial_binom() single trial arm 20 patients randomised, 12 events 8 non-events. corresponds estimated event probability 0.6 (60%), far expected 0.25 (25%). come random fluctuations patients randomised, even prior beliefs correct. Next, illustrate effects prior 200 patients randomised arm, 56 events 144 non-events, corresponds estimated event probability 0.28 (28%), similar expected event probability. comparing previous plot, clearly see patients randomised, larger sample observed data starts dominate posterior, prior exerts less influence posterior distribution (posterior distributions alike despite different prior distributions).","code":"find_beta_params( theta = 0.25, # Event probability boundary = \"lower\", boundary_target = 0.15, interval_width = 0.95 ) #> alpha beta p2.5 p50.0 p97.5 #> 1 15 45 0.1498208 0.2472077 0.3659499"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"defining-the-function-to-generate-posterior-draws","dir":"Articles","previous_headings":"Functions for drawing posterior samples","what":"Defining the function to generate posterior draws","title":"Advanced example","text":"number important things aware specifying function. First, must accept following arguments (exact names, even used function): arms: character vector currently active arms trial. allocs: character vector allocations (trial arms) patients randomised trial, including randomised arms longer active. ys: numeric vector outcomes patients randomised trial, including randomised arms longer active. control: single character, current control arm; NULL trials without common control. n_draws: single integer, number posterior draws generate arm. Alternatively, unused arguments can left ellipsis (...) included final argument function. Second, order allocs ys must match: fifth element allocs represents allocation fifth patient, fifth element ys represent outcome patient. Third, allocs ys provided patients, including randomised arms longer active. done users situations may want use data generating posterior draws (currently active) arms. Fourth, adaptr restrict posterior samples drawn. Consequently, Markov chain Monte Carlo- variational inference-based methods may used, packages supplying functionality may called user-provided functions. 
However, using complex methods simple conjugate models substantially increases simulation run time. Consequently, simpler models well-suited use simulations. Fifth, function must return matrix numeric values length(arms) columns n_draws rows, currently active arms column names. , row must contain one posterior draw arm. NA’s allowed, even patients randomised arm yet, valid numeric values returned (e.g., drawn prior another diffuse posterior distribution). Even outcome truly numeric, vector outcomes provided function (ys) returned matrix posterior draws must encoded numeric. mind, ready specify function: now call function using previously generated allocs ys. avoid cluttering, generate 10 posterior draws arm example: Importantly, less 100 posterior draws arm allowed setting trial specification, avoid unstable results (see setup_trial_binom()).","code":"get_draws_binom_custom <- function(arms, allocs, ys, control, n_draws) { # Setup a list to store the posterior draws for each arm draws <- list() # Loop through the ACTIVE arms and generate posterior draws for (a in arms) { # Indices of patients allocated to the current arm ii <- which(allocs == a) # Sum the number of events in the current arm n_events <- sum(ys[ii]) # Compute the number of patients in the current arm n_patients <- length(ii) # Generate draws using the number of events, the number of patients # and the prior specified above: beta(15, 45) # Saved using the current arms' name in the list, ensuring that the # resulting matrix has column names corresponding to the ACTIVE arms draws[[a]] <- rbeta(n_draws, 15 + n_events, 45 + n_patients - n_events) } # Bind all elements of the list column-wise to form a matrix with # 1 named column per ACTIVE arm and 1 row per posterior draw. # Multiply result with 100, as we're using percentages and not proportions # in this example (just to correspond to the illustrated custom function to # generate RAW outcome estimates below) do.call(cbind, draws) * 100 } get_draws_binom_custom( # Only currently ACTIVE arms, but all are considered active at this time arms = c(\"Control\", \"Experimental arm A\", \"Experimental arm B\"), allocs = allocs, # Generated above ys = ys, # Generated above # Input control arm, argument is supplied even if not used in the function control = \"Control\", # Input number of draws (for brevity, only 10 draws here) n_draws = 10 ) #> Control Experimental arm A Experimental arm B #> [1,] 30.96555 29.34973 29.26143 #> [2,] 30.47382 23.22668 25.08249 #> [3,] 31.04807 31.76577 19.81416 #> [4,] 17.00712 24.30809 16.36256 #> [5,] 21.31251 27.74615 22.63147 #> [6,] 25.50944 24.16283 30.29049 #> [7,] 16.60420 29.49526 28.75436 #> [8,] 25.17899 33.29374 30.87149 #> [9,] 23.72043 27.78537 29.89836 #> [10,] 30.50004 28.43694 26.62115"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"specifying-the-function-to-calculate-raw-outcome-estimates","dir":"Articles","previous_headings":"","what":"Specifying the function to calculate raw outcome estimates","title":"Advanced example","text":"Finally, custom function may specified calculate raw summary estimates arm; raw estimates posterior estimates, can considered maximum likelihood point estimates example. function must take numeric vector (outcomes arm) return single numeric value. function called separately arm. 
express results percentages proportions example, function simply calculates outcome percentage arm: now call function outcomes \"Control\" arm, example:","code":"fun_raw_est_custom <- function(ys) { mean(ys) * 100 } cat(sprintf( \"Raw outcome percentage estimate in the 'Control' group: %.1f%%\", fun_raw_est_custom(ys[allocs == \"Control\"]) )) #> Raw outcome percentage estimate in the 'Control' group: 29.4%"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"setup-the-trial-specification","dir":"Articles","previous_headings":"","what":"Setup the trial specification","title":"Advanced example","text":"functions defined, can now setup trial specification. stated , validation custom functions carried trial setup: setup_trial() runs errors warnings, custom trial successfully specified may run run_trial() run_trials() calibrated calibrate_trial(). custom functions provided setup_trial() calls custom functions (uses objects defined user outside functions) functions loaded non-base R packages used, please aware exporting objects/functions prefixing namespace necessary simulations conducted using multiple cores. See setup_cluster() run_trial() additional details export necessary functions objects.","code":"setup_trial( arms = c(\"Control\", \"Experimental arm A\", \"Experimental arm B\"), # true_ys, true outcome percentages (since posterior draws and raw estimates # are returned as percentages, this must be a percentage as well, even if # probabilities are specified as proportions internally in the outcome # generating function specified above true_ys = c(25, 27, 20), # Supply the functions to generate outcomes and posterior draws fun_y_gen = get_ys_binom_custom, fun_draws = get_draws_binom_custom, # Define looks max_n = 2000, look_after_every = 100, # Define control and allocation strategy control = \"Control\", control_prob_fixed = \"sqrt-based\", # Define equivalence assessment - drop non-control arms at > 90% probability # of equivalence, defined as an absolute difference of 10 %-points # (specified on the percentage-point scale as the rest of probabilities in # the example) equivalence_prob = 0.9, equivalence_diff = 10, equivalence_only_first = TRUE, # Input the function used to calculate raw outcome estimates fun_raw_est = fun_raw_est_custom, # Description and additional information description = \"custom trial [binary outcome, weak priors]\", add_info = \"Trial using beta-binomial conjugate prior models and beta(15, 45) priors in each arm.\" ) #> Trial specification: custom trial [binary outcome, weak priors] #> * Undesirable outcome #> * Common control arm: Control #> * Control arm probability fixed at 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: Experimental arm B #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Control 25 0.414 0.414 NA NA #> Experimental arm A 27 0.293 NA NA NA #> Experimental arm B 20 0.293 NA NA NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 20 #> Planned looks after every 100 #> patients have reached follow-up until final look after 2000 patients #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (only checked for first control) #> Absolute equivalence 
difference: 10 #> No futility threshold #> Soften power for all analyses: 1 (no softening) #> #> Additional info: Trial using beta-binomial conjugate prior models and beta(15, 45) priors in each arm."},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"trial-designs-without-a-common-control-arm","dir":"Articles","previous_headings":"","what":"Trial designs without a common control arm","title":"Basic examples","text":"section, several examples trials without common control arm provided. General settings applicable trial designs (including trial specifications without common control arm) covered section.","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-1-general-settings","dir":"Articles","previous_headings":"Trial designs without a common control arm","what":"Example 1: general settings","title":"Basic examples","text":"","code":"setup_trial_binom( # Four arms arms = c(\"A\", \"B\", \"C\", \"D\"), # Set true outcomes (in this example event probabilities) for all arms true_ys = c(0.3, 0.35, 0.31, 0.27), # 30%, 34%, 31% and 27%, respectively # Set starting allocation probabilities # Defaults to equal allocation if not specified start_probs = c(0.3, 0.3, 0.2, 0.2), # Set fixed allocation probability for first arm # NA corresponds to no limits for specific arms # Default (NULL) corresponds to no limits for all arms fixed_probs = c(0.3, NA, NA, NA), # Set minimum and maximum probability limits for some arms # NA corresponds to no limits for specific arms # Default (NULL) corresponds to no limits for all arms # Must be NA for arms with fixed_probs (first arm in this example) # sum(fixed_probs) + sum(min_probs) must not exceed 1 # sum(fixed_probs) + sum(max_probs) may be greater than 1, and must be at least # 1 if specified for all arms min_probs = c(NA, 0.2, NA, NA), max_probs = c(NA, 0.7, NA, NA), # Set looks - alternatively, specify both max_n AND look_after_every data_looks = seq(from = 300, to = 1000, by = 100), # No common control arm (as default, but explicitly specified in this example) control = NULL, # Set inferiority/superiority thresholds (different values than the defaults) # (see also the calibrate_trial() function) inferiority = 0.025, superiority = 0.975, # Define that the outcome is desirable (as opposed to the default setting) highest_is_best = TRUE, # No softening (the default setting, but made explicit here) soften_power = 1, # Use different simulation/summary settings than default cri_width = 0.89, # 89% credible intervals n_draws = 1000, # Only 1000 posterior draws in each arm robust = TRUE, # Summarise posteriors using medians/MAD-SDs (as default) # Trial description (used by print methods) description = \"example trial specification 1\" ) #> Trial specification: example trial specification 1 #> * Desirable outcome #> * No common control arm #> * Best arm: B #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.30 0.3 0.3 NA NA #> B 0.35 0.3 NA 0.2 0.7 #> C 0.31 0.2 NA NA NA #> D 0.27 0.2 NA NA NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 8 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.975 (all analyses) #> Inferiority threshold: 0.025 (all analyses) #> No equivalence threshold #> No futility 
threshold (not relevant - no common control) #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-2-equivalence-testing-decreasing-softening","dir":"Articles","previous_headings":"Trial designs without a common control arm","what":"Example 2: equivalence testing, decreasing softening","title":"Basic examples","text":"common control arm Equivalence testing Different softening powers (decreasing softening trial progresses) Default settings many unspecified arguments","code":"setup_trial_binom( # Specify arms and true outcome probabilities (undesirable outcome as default) arms = c(\"A\", \"B\", \"C\", \"D\"), true_ys = c(0.2, 0.22, 0.24, 0.18), # Specify adaptive analysis looks using max_n and look_after_every # max_n does not need to be a multiple of look_after_every - a final look # will be conducted at max_n regardless max_n = 1250, # Maximum 1250 patients look_after_every = 100, # Look after every 100 patients # Assess equivalence between all arms: stop if >90 % probability that the # absolute difference between the best and worst arms is < 5 %-points # Note: equivalence_only_first must be NULL (default) in designs without a # common control arm (such as this trial) equivalence_prob = 0.9, equivalence_diff = 0.05, # Different softening powers at each look (13 possible looks in total) # Starts at 0 (softens all allocation probabilities to be equal) and ends at # 1 (no softening) for the final possible look in the trial soften_power = seq(from = 0, to = 1, length.out = 13) ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.25 NA NA NA #> B 0.22 0.25 NA NA NA #> C 0.24 0.25 NA NA NA #> D 0.18 0.25 NA NA NA #> #> Maximum sample size: 1250 #> Maximum number of data looks: 13 #> Planned looks after every 100 #> patients have reached follow-up until final look after 1250 patients #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1250 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for each consequtive analysis: 0, 0.083, 0.167, 0.25, 0.333, 0.417, 0.5, 0.583, 0.667, 0.75, 0.833, 0.917, 1"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"trial-designs-with-a-common-control-arm","dir":"Articles","previous_headings":"","what":"Trial designs with a common control arm","title":"Basic examples","text":"section, several examples trials common control arm provided focus mostly options specific trial designs common control arm.","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-3-common-control-and-sqrt-based-fixed-allocation","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 3: common control and sqrt-based fixed allocation","title":"Basic examples","text":"common control arm square-root-transformation-based fixed allocation probabilities (see description setup_trial()) Assessment equivalence futility compared initial control (assessed superior arms become 
subsequent controls)","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), # Specify control arm control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Fixed, square-root-transformation-based allocation throughout control_prob_fixed = \"sqrt-based fixed\", # Assess equivalence: drop non-control arms if > 90% probability that they # are equivalent to the common control, defined as an absolute difference of # < 3 %-points equivalence_prob = 0.9, equivalence_diff = 0.03, # Only assess against the initial control (i.e., not assessed if an arm is # declared superior to the initial control and becomes the new control) equivalence_only_first = TRUE, # Assess futility: drop non-control arms if > 80% probability that they are # < 10 %-points better (in this case lower because outcome is undesirable in # this example) compared to the common control futility_prob = 0.8, futility_diff = 0.1, # Only assessed for the initial control, as described above futility_only_first = TRUE ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability fixed at 0.366 (for 4 arms), 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.366 0.366 NA NA #> B 0.22 0.211 0.211 NA NA #> C 0.24 0.211 0.211 NA NA #> D 0.18 0.211 0.211 NA NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (only checked for first control) #> Absolute equivalence difference: 0.03 #> Futility threshold: 0.8 (all analyses) (only checked for first control) #> Absolute futility difference (in beneficial direction): 0.1 #> Soften power for all analyses: 1 (no softening - all arms fixed)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-4-sqrt-based-initial-allocation-and-restricted-rar","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 4: sqrt-based initial allocation and restricted RAR","title":"Basic examples","text":"Square-root-transformation-based initial allocation probabilities Square-root-transformation-based allocation control arm (including subsequent controls, non-control arm declared superior initial control) Restricted response-adaptive randomisation non-control arms","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Square-root-transformation-based control arm allocation including for # subsequent controls and initial equal allocation to the non-control arms, # followed by response-adaptive randomisation control_prob_fixed = \"sqrt-based\", # Restricted response-adaptive randomisation # Minimum probabilities of 20% for non-control arms, must be NA for the # control arm with fixed allocation probability # Limits are ignored for arms that become subsequent controls # Limits are rescaled (i.e., increased proportionally) when arms are dropped min_probs = c(NA, 
0.2, 0.2, 0.2), rescale_probs = \"limits\", # Constant softening of 0.5 (= square-root transformation) soften_power = 0.5 ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability fixed at 0.366 (for 4 arms), 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits (min/max_probs rescaled): #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.366 0.366 NA NA #> B 0.22 0.211 NA 0.2 NA #> C 0.24 0.211 NA 0.2 NA #> D 0.18 0.211 NA 0.2 NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.5"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-5-sqrt-based-allocation-only-to-initial-control-arm","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 5: sqrt-based allocation only to initial control arm","title":"Basic examples","text":"example similar (different restriction settings), use square-root-transformation-based allocation probabilities initial control arm. Hence, apply another arm declared superior becomes new control.","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Square-root-transformation-based control arm allocation for the initial # control only and initial equal allocation to the non-control arms, followed # by response-adaptive randomisation control_prob_fixed = \"sqrt-based start\", # Restrict response-adaptive randomisation # Minimum probabilities of 20% for all non-control arms # - must be NA for the initial control arm with fixed allocation probability min_probs = c(NA, 0.2, 0.2, 0.2), # Maximum probabilities of 65% for all non-control arms # - must be NA for the initial control arm with fixed allocation probability max_probs = c(NA, 0.65, 0.65, 0.65), soften_power = 0.75 ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability fixed at 0.366 #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.366 0.366 NA NA #> B 0.22 0.211 NA 0.2 0.65 #> C 0.24 0.211 NA 0.2 0.65 #> D 0.18 0.211 NA 0.2 0.65 #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.75"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-6-restricted-rar-matched-control-arm-allocation","dir":"Articles","previous_headings":"Trial designs with a common 
control arm","what":"Example 6: restricted RAR, matched control-arm allocation","title":"Basic examples","text":"Restricted response-adaptive randomisation Control-arm allocation probability matched highest non-control arm (re-scaling necessary) Applies initial subsequent control arms","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Specify starting probabilities # When \"match\" is specified below in control_prob_fixed, the initial control # arm's initial allocation probability must match the highest initial # non-control arm allocation probability start_probs = c(0.3, 0.3, 0.2, 0.2), control_prob_fixed = \"match\", # Restrict response-adaptive randomisation # - these are applied AFTER \"matching\" when calculating new allocation # probabilities # - min_probs must be NA for the initial control arm when using matching min_probs = c(NA, 0.2, 0.2, 0.2), soften_power = 0.7 ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability matched to best non-control arm #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.3 NA NA NA #> B 0.22 0.3 NA 0.2 NA #> C 0.24 0.2 NA 0.2 NA #> D 0.18 0.2 NA 0.2 NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.7"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-7-follow-up-and-data-collection-lag","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 7: follow-up and data collection lag","title":"Basic examples","text":"example uses randomised_at_looks argument specify follow-/data collection lag. 
real use cases, usually considered, may affect relative performance different trial designs extent ‘final’ results patients reached follow-analysed may differ results adaptive analyses randomised patients included due outcome data available yet patients.","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), # Analyses conducted every time 100 patients have follow-up data available data_looks = seq(from = 100, to = 1000, by = 100), # Specify the number of patients randomised at each look - in this case, 200 # more patients are randomised than the number of patients that # have follow-up data available at each look randomised_at_looks = seq(from = 300, to = 1200, by = 100) ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.25 NA NA NA #> B 0.22 0.25 NA NA NA #> C 0.24 0.25 NA NA NA #> D 0.18 0.25 NA NA NA #> #> Maximum sample size: 1200 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-8-different-probability-thresholds-over-time","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 8: different probability thresholds over time","title":"Basic examples","text":"example, specify different probability thresholds superiority inferiority stopping rules different adaptive analyses. Varying probability thresholds may similarly specified stopping rules equivalence futility. Importantly, probability thresholds must specified subsequent threshold never stricter previous threshold. 
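As a supplement to the point just above about varying thresholds for equivalence and futility, a minimal hedged sketch follows. It assumes, as the surrounding text indicates but the original examples do not show, that equivalence_prob accepts one value per planned analysis in the same way as superiority/inferiority; the arm names, outcome rates and threshold values are purely illustrative.

# Sketch (not from the original vignette): equivalence thresholds that start
# stricter and become less strict over the planned analyses, assuming
# equivalence_prob accepts one value per analysis (10 looks = 10 values)
setup_trial_binom(
  arms = c("A", "B", "C", "D"),
  control = "A",
  true_ys = c(0.2, 0.22, 0.24, 0.18),
  data_looks = seq(from = 100, to = 1000, by = 100),
  # Stricter equivalence threshold at the first four analyses, then relaxed;
  # later thresholds are never stricter than earlier ones, as required
  equivalence_prob = c(rep(0.99, 4), rep(0.90, 6)),
  equivalence_diff = 0.03,
  equivalence_only_first = TRUE
)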
Varying thresholds may also used make stopping rules first function later analyses (e.g., long stopping threshold superiority 1 stopping threshold inferiority 0, trials stopped arms dropped due rules).","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), # Analyses conducted every time 100 patients have follow-up data available data_looks = seq(from = 100, to = 1000, by = 100), # Specify varying inferiority/superiority thresholds # When specifying varying thresholds, the number of thresholds must match # the number of analyses, and thresholds may never be stricter than the # threshold used in the previous analysis # Superiority threshold decreasing from 0.99 to 0.95 during the first five # analyses, and remains stationary at 0.95 after that superiority = c(seq(from = 0.99, to = 0.95, by = -0.01), rep(0.95, 5)), # Similarly for inferiority thresholds, but in the opposite direction inferiority = c(seq(from = 0.01, to = 0.05, by = 0.01), rep(0.05, 5)), ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.25 NA NA NA #> B 0.22 0.25 NA NA NA #> C 0.24 0.25 NA NA NA #> D 0.18 0.25 NA NA NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority thresholds: #> 0.99, 0.98, 0.97, 0.96, 0.95, 0.95, 0.95, 0.95, 0.95, 0.95 #> Inferiority thresholds: #> 0.01, 0.02, 0.03, 0.04, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05 #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-9-minimum-allocation-probabilities-rescaled-when-arms-are-dropped","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 9: minimum allocation probabilities rescaled when arms are dropped","title":"Basic examples","text":"example, trial design four arms restricted RAR (minimum allocation limits) specified, additional specification minimum allocation limits rescaled proportionally arms dropped (rescaling can similarly applied fixed allocation probabilities):","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.2, 0.2, 0.2), min_probs = rep(0.15, 4), # Specify initial minimum allocation probabilities # Rescale allocation probability limits as arms are dropped rescale_probs = \"limits\", data_looks = seq(from = 100, to = 1000, by = 100) ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> #> * Best arms: A and B and C and D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits (min/max_probs rescaled): #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.2 0.25 NA 0.15 NA #> B 0.2 0.25 NA 0.15 NA #> C 0.2 0.25 NA 0.15 NA #> D 0.2 0.25 NA 0.15 NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of 
patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"usage-and-workflow-overview","dir":"Articles","previous_headings":"","what":"Usage and workflow overview","title":"Overview","text":"central functionality adaptr typical workflow illustrated .","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"setup","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Setup","title":"Overview","text":"First, package loaded cluster parallel workers initiated setup_cluster() function facilitate parallel computing: Parallelisation supported many adaptr functions, cluster parallel workers can setup entire session using setup_cluster() early script example. Alternatively, parallelisation can controlled global \"mc.cores\" option (set calling options(mc.cores = )) cores argument many functions.","code":"library(adaptr) #> Loading 'adaptr' package v1.4.0. #> For instructions, type 'help(\"adaptr\")' #> or see https://inceptdk.github.io/adaptr/. setup_cluster(2)"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"specify-trial-design","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Specify trial design","title":"Overview","text":"Setup trial specification (defining trial design scenario) using general setup_trial() function, one special case variants using default priors setup_trial_binom() (binary, binomially distributed outcomes; used example) setup_trial_norm() (continuous, normally distributed outcomes). example trial specification following characteristics: binary, binomially distributed, undesirable (default) outcome Three arms designated common control Identical underlying outcome probabilities 25% arm Analyses conducted specific number patients outcome data available, patients randomised last look (lag due follow-data collection/verification) explicitly defined stopping thresholds inferiority superiority (default thresholds < 1% > 99%, respectively, apply) Equivalence stopping rule defined > 90% probability (equivalence_prob) -arm differences remaining arms < 5 %-points Response-adaptive randomisation minimum allocation probabilities 20% softening allocation ratios constant factor (soften_power) See ?setup_trial() details arguments vignette(\"Basic-examples\", \"adaptr\") basic example trial specifications thorough review general trial specification settings, vignette(\"Advanced-example\", \"adaptr\") advanced example including details specify user-written functions generating outcomes posterior draws. , trial specification setup human-readable overview printed: default, () probabilities shown 3 decimals. 
can changed explicitly print()ing specification prob_digits arguments, example:","code":"binom_trial <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.25, 0.25), min_probs = rep(0.20, 3), data_looks = seq(from = 300, to = 2000, by = 100), randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), equivalence_prob = 0.9, equivalence_diff = 0.05, soften_power = 0.5 ) print(binom_trial, prob_digits = 3) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arms: Arm A and Arm B and Arm C #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.333 NA 0.2 NA #> Arm B 0.25 0.333 NA 0.2 NA #> Arm C 0.25 0.333 NA 0.2 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 0.5 print(binom_trial, prob_digits = 2) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arms: Arm A and Arm B and Arm C #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.33 NA 0.2 NA #> Arm B 0.25 0.33 NA 0.2 NA #> Arm C 0.25 0.33 NA 0.2 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 0.5"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"calibration","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Calibration","title":"Overview","text":"example trial specification, true -arm differences, stopping rules inferiority superiority explicitly defined. intentional, stopping rules calibrated obtain desired probability stopping superiority scenario -arm differences (corresponding Bayesian type 1 error rate). Trial specifications necessarily calibrated. Instead,simulations can run directly using run_trials() function covered (run_trial() single simulation). can followed assessment performance metrics, manually changing specification (including stopping rules) performance metrics considered acceptable. example, full calibration procedure performed. 
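Before the calibration itself, a brief hedged sketch of the uncalibrated route described just above: simulations are run directly with run_trials() using the specification's default stopping thresholds, and the resulting probability of stopping for superiority (the Bayesian type 1 error analogue in this no-difference scenario) is inspected before deciding whether to change the specification manually or calibrate it. Object names and repetition counts are illustrative; only functions and arguments shown elsewhere in this document are used.

# Manual, uncalibrated route (sketch, not part of the original vignette)
uncalibrated_sims <- run_trials(
  binom_trial,   # the trial specification set up above
  n_rep = 1000,  # more repetitions are generally recommended
  base_seed = 4131
)
# Inspect performance metrics, e.g. prob_superior, under the default thresholds
check_performance(uncalibrated_sims, select_strategy = "best")
# If prob_superior is considered too high, the specification could be changed
# manually (e.g., stricter thresholds) or calibrated as shown below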
Calibration trial specification done using calibrate_trial() function, defaults calibrate constant, symmetrical stopping rules inferiority superiority (expecting trial specification identical outcomes arm), can used calibrate parameter trial specification towards performance metric user-defined calibration function (fun) specified. perform calibration, target value, search_range, tolerance value (tol), allowed direction tolerance value (dir) must specified (alternatively, defaults can used). note, number simulations calibration step lower generally recommended (reduce time required build vignette): calibration successful (, results used, calibration settings changed calibration repeated). calibrated, constant stopping threshold superiority printed results (0.9830921) can extracted using calibrated_binom_trial$best_x. Using default calibration functionality, calibrated, constant stopping threshold inferiority symmetrical, .e., 1 - stopping threshold superiority (0.0169079). calibrated trial specification may extracted using calibrated_binom_trial$best_trial_spec , printed, also include calibrated stopping thresholds. Calibration results may saved reloaded using path argument, avoid unnecessary repeated simulations.","code":"# Calibrate the trial specification calibrated_binom_trial <- calibrate_trial( trial_spec = binom_trial, n_rep = 1000, # 1000 simulations for each step (more generally recommended) base_seed = 4131, # Base random seed (for reproducible results) target = 0.05, # Target value for calibrated metric (default value) search_range = c(0.9, 1), # Search range for superiority stopping threshold tol = 0.01, # Tolerance range dir = -1 # Tolerance range only applies below target ) # Print result (to check if calibration is successful) calibrated_binom_trial #> Trial calibration: #> * Result: calibration successful #> * Best x: 0.9830921 #> * Best y: 0.045 #> #> Central settings: #> * Target: 0.05 #> * Tolerance: 0.01 (at or below target, range: 0.04 to 0.05) #> * Search range: 0.9 to 1 #> * Gaussian process controls: #> * - resolution: 5000 #> * - kappa: 0.5 #> * - pow: 1.95 #> * - lengthscale: 1 (constant) #> * - x scaled: yes #> * Noisy: no #> * Narrowing: yes #> #> Calibration/simulation details: #> * Total evaluations: 4 (previous + grid + iterations) #> * Repetitions: 1000 #> * Calibration time: 54.9 secs #> * Base random seed: 4131 #> #> See 'help(\"calibrate_trial\")' for details."},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"summarising-results","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Summarising results","title":"Overview","text":"results simulations using calibrated trial specification conducted calibration procedure may extracted using calibrated_binom_trial$best_sims. results can summarised several functions. functions support different ‘selection strategies’ simulations ending superiority, .e., performance metrics can calculated assuming different arms used clinical practice arm ultimately superior. check_performance() function summarises performance metrics tidy data.frame, uncertainty measures (bootstrapped confidence intervals) requested. , performance metrics calculated considering ‘best’ arm (.e., one highest probability overall best) selected simulations ending superiority: Similar results list format (without uncertainty measures) can obtained using summary() method (known , e.g., regression models inR), comes print() method providing formatted results. 
simulation results printed directly, function called default arguments (arguments, e.g., selection strategies may also directly supplied print() method). Individual simulation results can extracted tidy data.frame using extract_results(): Finally, probabilities different remaining arms statuses (uncertainty) last adaptive analysis can summarised using check_remaining_arms() function (dropped arms shown empty text string):","code":"# Calculate performance metrics with uncertainty measures binom_trial_performance <- check_performance( calibrated_binom_trial$best_sims, select_strategy = \"best\", uncertainty = TRUE, # Calculate uncertainty measures n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, # 95% confidence intervals (default) boot_seed = \"base\" # Use same random seed for bootstrapping as for simulations ) # Print results print(binom_trial_performance, digits = 2) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.00 0.00 0.00 1000.00 1000.00 #> 2 size_mean 1757.20 11.26 11.12 1736.20 1779.10 #> 3 size_sd 370.74 9.31 9.34 353.87 389.70 #> 4 size_median 2000.00 0.00 0.00 2000.00 2000.00 #> 5 size_p25 1500.00 47.25 0.00 1400.00 1500.00 #> 6 size_p75 2000.00 0.00 0.00 2000.00 2000.00 #> 7 size_p0 400.00 NA NA NA NA #> 8 size_p100 2000.00 NA NA NA NA #> 9 sum_ys_mean 440.16 2.90 2.91 434.50 445.89 #> 10 sum_ys_sd 95.56 2.34 2.41 91.15 100.14 #> 11 sum_ys_median 487.00 1.36 0.74 484.00 489.00 #> 12 sum_ys_p25 366.00 9.63 8.90 353.00 387.00 #> 13 sum_ys_p75 506.00 1.09 1.48 504.00 508.00 #> 14 sum_ys_p0 88.00 NA NA NA NA #> 15 sum_ys_p100 572.00 NA NA NA NA #> 16 ratio_ys_mean 0.25 0.00 0.00 0.25 0.25 #> 17 ratio_ys_sd 0.01 0.00 0.00 0.01 0.01 #> 18 ratio_ys_median 0.25 0.00 0.00 0.25 0.25 #> 19 ratio_ys_p25 0.24 0.00 0.00 0.24 0.24 #> 20 ratio_ys_p75 0.26 0.00 0.00 0.26 0.26 #> 21 ratio_ys_p0 0.19 NA NA NA NA #> 22 ratio_ys_p100 0.30 NA NA NA NA #> 23 prob_conclusive 0.42 0.02 0.01 0.39 0.45 #> 24 prob_superior 0.04 0.01 0.01 0.03 0.06 #> 25 prob_equivalence 0.38 0.02 0.01 0.35 0.41 #> 26 prob_futility 0.00 0.00 0.00 0.00 0.00 #> 27 prob_max 0.58 0.02 0.01 0.55 0.61 #> 28 prob_select_arm_Arm A 0.35 0.01 0.01 0.32 0.38 #> 29 prob_select_arm_Arm B 0.33 0.01 0.01 0.30 0.36 #> 30 prob_select_arm_Arm C 0.32 0.01 0.01 0.29 0.35 #> 31 prob_select_none 0.00 0.00 0.00 0.00 0.00 #> 32 rmse 0.02 0.00 0.00 0.02 0.02 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.01 0.00 0.00 0.01 0.01 #> 35 mae_te NA NA NA NA NA #> 36 idp NA NA NA NA NA binom_trial_summary <- summary( calibrated_binom_trial$best_sims, select_strategy = \"best\" ) print(binom_trial_summary, digits = 2) #> Multiple simulation results: generic binomially distributed outcome trial #> * Undesirable outcome #> * Number of simulations: 1000 #> * Number of simulations summarised: 1000 (all trials) #> * No common control arm #> * Selection strategy: best remaining available #> * Treatment effect compared to: no comparison #> #> Performance metrics (using posterior estimates from final analysis [all patients]): #> * Sample sizes: mean 1757.20 (SD: 370.74) | median 2000.00 (IQR: 1500.00 to 2000.00) [range: 400.00 to 2000.00] #> * Total summarised outcomes: mean 440.16 (SD: 95.56) | median 487.00 (IQR: 366.00 to 506.00) [range: 88.00 to 572.00] #> * Total summarised outcome rates: mean 0.2503 (SD: 0.0109) | median 0.2500 (IQR: 0.2435 to 0.2573) [range: 0.1900 to 0.2950] #> * Conclusive: 42.50% #> * Superiority: 4.50% #> * Equivalence: 38.00% #> * Futility: 0.00% [not assessed] #> * Inconclusive at max 
sample size: 57.50% #> * Selection probabilities: Arm A: 35.10% | Arm B: 32.90% | Arm C: 32.00% | None: 0.00% #> * RMSE / MAE: 0.01767 / 0.01164 #> * RMSE / MAE treatment effect: not estimated / not estimated #> * Ideal design percentage: not estimable #> #> Simulation details: #> * Simulation time: 20.4 secs #> * Base random seed: 4131 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Estimation method: posterior medians with MAD-SDs binom_trial_results <- extract_results( calibrated_binom_trial$best_sims, select_strategy = \"best\" ) nrow(binom_trial_results) # Number of rows/simulations #> [1] 1000 head(binom_trial_results) # Print the first rows #> sim final_n sum_ys ratio_ys final_status superior_arm selected_arm #> 1 1 2000 478 0.2390 equivalence Arm A #> 2 2 2000 488 0.2440 max Arm A #> 3 3 2000 521 0.2605 max Arm C #> 4 4 2000 500 0.2500 max Arm C #> 5 5 2000 471 0.2355 max Arm A #> 6 6 2000 503 0.2515 max Arm B #> err sq_err err_te sq_err_te #> 1 -0.0134029565 1.796392e-04 NA NA #> 2 -0.0118977741 1.415570e-04 NA NA #> 3 0.0004940695 2.441046e-07 NA NA #> 4 -0.0127647255 1.629382e-04 NA NA #> 5 -0.0232813002 5.420189e-04 NA NA #> 6 -0.0154278469 2.380185e-04 NA NA check_remaining_arms( calibrated_binom_trial$best_sims, ci_width = 0.95 # 95% confidence intervals (default) ) #> arm_Arm A arm_Arm B arm_Arm C n prop se lo_ci #> 1 active active active 528 0.528 0.02172556 0.48541868 #> 2 equivalence equivalence 121 0.121 0.02964793 0.06289112 #> 3 equivalence equivalence 120 0.120 0.02966479 0.06185807 #> 4 equivalence equivalence 108 0.108 0.02986637 0.04946299 #> 5 equivalence equivalence equivalence 31 0.031 0.03112876 0.00000000 #> 6 superior 22 0.022 0.03127299 0.00000000 #> 7 superior 14 0.014 0.03140064 0.00000000 #> 8 superior 9 0.009 0.03148015 0.00000000 #> hi_ci #> 1 0.57058132 #> 2 0.17910888 #> 3 0.17814193 #> 4 0.16653701 #> 5 0.09201126 #> 6 0.08329394 #> 7 0.07554412 #> 8 0.07069997"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"visualising-results","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Visualising results","title":"Overview","text":"Several visualisation functions included (optional, require ggplot2 package installed). Convergence stability one performance metrics may visually assessed using plot_convergence() function: Plotting metrics possible; see plot_convergence() documentation. simulation results may also split separate, consecutive batches assessing convergence, assess stability: status probabilities overall trial according trial progress can visualised using plot_status() function: Similarly, status probabilities one specific trial arms can visualised: Finally, various metrics may summarised progress one multiple trial simulations using plot_history() function, requires non-sparse results (sparse argument must FALSE calibrate_trials(), run_trials(), run_trial(), leading additional results saved - functions work sparse results). 
illustrated .","code":"plot_convergence( calibrated_binom_trial$best_sims, metrics = c(\"size mean\", \"prob_superior\", \"prob_equivalence\"), # select_strategy can be specified, but does not affect the chosen metrics ) plot_convergence( calibrated_binom_trial$best_sims, metrics = c(\"size mean\", \"prob_superior\", \"prob_equivalence\"), n_split = 4 ) plot_status( calibrated_binom_trial$best_sims, x_value = \"total n\" # Total number of randomised patients at X-axis ) plot_status( calibrated_binom_trial$best_sims, x_value = \"total n\", arm = NA # NA for all arms or character vector for specific arms )"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"use-calibrated-stopping-thresholds-in-another-scenario","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Use calibrated stopping thresholds in another scenario","title":"Overview","text":"calibrated stopping thresholds (calibrated scenario -arm differences) may used run simulations overall trial specification, according different scenario (.e., -arm differences present) assess performance metrics (including Bayesian analogue power). First, new trial specification setup using settings , except -arm differences calibrated stopping thresholds: Simulations using trial specification calibrated stopping thresholds differences present can conducted using run_trials() function. , specify non-sparse results returned (illustrate plot_history() function). , simulations may saved reloaded using path argument. calculate performance metrics : Similarly, overall trial statuses scenario differences visualised: Statuses arm scenario also visualised: can plot median interquartile ranges allocation probabilities arm time using plot_history() function (requiring non-sparse results, leading substantially larger objects files saved): Similarly, median (interquartile range) number patients allocated arm trial progresses can visualised: Plotting metrics possible; see plot_history() documentation.","code":"binom_trial_calib_diff <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.20, 0.30), # Different outcomes in the arms min_probs = rep(0.20, 3), data_looks = seq(from = 300, to = 2000, by = 100), randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), # Stopping rules for inferiority/superiority explicitly defined # using the calibration results inferiority = 1 - calibrated_binom_trial$best_x, superiority = calibrated_binom_trial$best_x, equivalence_prob = 0.9, equivalence_diff = 0.05, soften_power = 0.5 ) binom_trial_diff_sims <- run_trials( binom_trial_calib_diff, n_rep = 1000, # 1000 simulations (more generally recommended) base_seed = 1234, # Reproducible results sparse = FALSE # Return additional results for visualisation ) check_performance( binom_trial_diff_sims, select_strategy = \"best\", uncertainty = TRUE, n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, boot_seed = \"base\" ) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.000 0.000 0.000 1000.000 1000.000 #> 2 size_mean 1245.100 16.618 17.272 1215.185 1277.702 #> 3 size_sd 510.702 7.414 7.436 496.194 525.386 #> 4 size_median 1200.000 46.824 0.000 1200.000 1300.000 #> 5 size_p25 800.000 35.902 0.000 800.000 900.000 #> 6 size_p75 1700.000 46.345 0.000 1600.000 1700.000 #> 7 size_p0 400.000 NA NA NA NA #> 8 size_p100 2000.000 NA NA NA NA #> 9 sum_ys_mean 287.066 3.697 3.827 280.241 294.549 #> 10 sum_ys_sd 113.660 1.697 1.650 110.337 116.954 #> 11 
sum_ys_median 286.000 5.981 7.413 274.500 295.000 #> 12 sum_ys_p25 194.750 7.147 7.413 180.731 207.756 #> 13 sum_ys_p75 382.250 6.961 7.042 370.000 395.250 #> 14 sum_ys_p0 85.000 NA NA NA NA #> 15 sum_ys_p100 518.000 NA NA NA NA #> 16 ratio_ys_mean 0.233 0.000 0.001 0.232 0.234 #> 17 ratio_ys_sd 0.016 0.000 0.000 0.015 0.016 #> 18 ratio_ys_median 0.232 0.001 0.001 0.231 0.233 #> 19 ratio_ys_p25 0.222 0.001 0.001 0.220 0.224 #> 20 ratio_ys_p75 0.243 0.001 0.001 0.241 0.244 #> 21 ratio_ys_p0 0.196 NA NA NA NA #> 22 ratio_ys_p100 0.298 NA NA NA NA #> 23 prob_conclusive 0.882 0.011 0.010 0.862 0.902 #> 24 prob_superior 0.719 0.015 0.015 0.690 0.747 #> 25 prob_equivalence 0.163 0.012 0.012 0.139 0.185 #> 26 prob_futility 0.000 0.000 0.000 0.000 0.000 #> 27 prob_max 0.118 0.011 0.010 0.098 0.138 #> 28 prob_select_arm_Arm A 0.033 0.005 0.004 0.023 0.043 #> 29 prob_select_arm_Arm B 0.967 0.005 0.004 0.957 0.977 #> 30 prob_select_arm_Arm C 0.000 0.000 0.000 0.000 0.000 #> 31 prob_select_none 0.000 0.000 0.000 0.000 0.000 #> 32 rmse 0.020 0.001 0.001 0.019 0.022 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.011 0.000 0.000 0.010 0.012 #> 35 mae_te NA NA NA NA NA #> 36 idp 98.350 0.264 0.222 97.849 98.850 plot_status(binom_trial_diff_sims, x_value = \"total n\") plot_status(binom_trial_diff_sims, x_value = \"total n\", arm = NA) plot_history( binom_trial_diff_sims, x_value = \"total n\", y_value = \"prob\" ) plot_history( binom_trial_diff_sims, x_value = \"total n\", y_value = \"n all\" )"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"citation","dir":"Articles","previous_headings":"","what":"Citation","title":"Overview","text":"use package, please consider citing :","code":"citation(package = \"adaptr\") #> To cite package 'adaptr' in publications use: #> #> Granholm A, Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: an R #> package for simulating and comparing adaptive clinical trials. #> Journal of Open Source Software, 7(72), 4284. URL #> https://doi.org/10.21105/joss.04284. #> #> A BibTeX entry for LaTeX users is #> #> @Article{, #> title = {{adaptr}: an R package for simulating and comparing adaptive clinical trials}, #> author = {Anders Granholm and Aksel Karl Georg Jensen and Theis Lange and Benjamin Skov Kaas-Hansen}, #> journal = {Journal of Open Source Software}, #> year = {2022}, #> volume = {7}, #> number = {72}, #> pages = {4284}, #> url = {https://doi.org/10.21105/joss.04284}, #> doi = {10.21105/joss.04284}, #> }"},{"path":"https://inceptdk.github.io/adaptr/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Anders Granholm. Author, maintainer. Benjamin Skov Kaas-Hansen. Author. Aksel Karl Georg Jensen. Contributor. Theis Lange. Contributor.","code":""},{"path":"https://inceptdk.github.io/adaptr/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Granholm , Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: R package simulating comparing adaptive clinical trials. Journal Open Source Software, 7(72), 4284. 
URL https://doi.org/10.21105/joss.04284.","code":"@Article{, title = {{adaptr}: an R package for simulating and comparing adaptive clinical trials}, author = {Anders Granholm and Aksel Karl Georg Jensen and Theis Lange and Benjamin Skov Kaas-Hansen}, journal = {Journal of Open Source Software}, year = {2022}, volume = {7}, number = {72}, pages = {4284}, url = {https://doi.org/10.21105/joss.04284}, doi = {10.21105/joss.04284}, }"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"adaptr-","dir":"","previous_headings":"","what":"Adaptive Trial Simulator","title":"Adaptive Trial Simulator","text":"adaptr package simulates adaptive (multi-arm, multi-stage) clinical trials using adaptive stopping, adaptive arm dropping /response-adaptive randomisation. package developed part INCEPT (Intensive Care Platform Trial) project, primarily supported grant Sygeforsikringen “danmark”.","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"resources","dir":"","previous_headings":"","what":"Resources","title":"Adaptive Trial Simulator","text":"Website - stand-alone website full package documentation adaptr: R package simulating comparing adaptive clinical trials - article Journal Open Source Software describing package overview methodological considerations regarding adaptive stopping, arm dropping randomisation clinical trials - article Journal Clinical Epidemiology describing key methodological considerations adaptive trials description workflow simulation-based example using package Examples: Effects duration follow-lag data collection performance adaptive clinical trials - article Pharmaceutical Statistics describing simulation study (code) using adaptr assess performance adaptive clinical trials according different follow-/data collection lags. Effects sceptical priors performance adaptive clinical trials binary outcomes - article Pharmaceutical Statistics describing simulation study (code) using adaptr assess performance adaptive clinical trials according different sceptical priors.","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Adaptive Trial Simulator","text":"easiest way install CRAN directly: Alternatively, can install development version GitHub - requires remotes-package installed. development version may contain additional features yet available CRAN version, may stable fully documented:","code":"install.packages(\"adaptr\") # install.packages(\"remotes\") remotes::install_github(\"INCEPTdk/adaptr@dev\")"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"usage-and-workflow-overview","dir":"","previous_headings":"","what":"Usage and workflow overview","title":"Adaptive Trial Simulator","text":"central functionality adaptr typical workflow illustrated .","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"setup","dir":"","previous_headings":"Usage and workflow overview","what":"Setup","title":"Adaptive Trial Simulator","text":"First, package loaded cluster parallel workers initiated setup_cluster() function facilitate parallel computing:","code":"library(adaptr) #> Loading 'adaptr' package v1.4.0. #> For instructions, type 'help(\"adaptr\")' #> or see https://inceptdk.github.io/adaptr/. 
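# Added note (a sketch, not part of the original README output): as an
# alternative to the setup_cluster() call just below, parallelisation can,
# per the Overview vignette in this document, also be controlled via the
# global 'mc.cores' option or the 'cores' argument of many adaptr
# functions, e.g.:
# options(mc.cores = 2)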
setup_cluster(2)"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"specify-trial-design","dir":"","previous_headings":"Usage and workflow overview","what":"Specify trial design","title":"Adaptive Trial Simulator","text":"Setup trial specification (defining trial design scenario) using general setup_trial() function, one special case variants using default priors setup_trial_binom() (binary, binomially distributed outcomes; used example) setup_trial_norm() (continuous, normally distributed outcomes).","code":"# Setup a trial using a binary, binomially distributed, undesirable outcome binom_trial <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), # Scenario with identical outcomes in all arms true_ys = c(0.25, 0.25, 0.25), # Response-adaptive randomisation with minimum 20% allocation in all arms min_probs = rep(0.20, 3), # Number of patients with data available at each analysis data_looks = seq(from = 300, to = 2000, by = 100), # Number of patients randomised at each analysis (higher than the numbers # with data, except at last look, due to follow-up/data collection lag) randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), # Stopping rules for inferiority/superiority not explicitly defined # Stop for equivalence at > 90% probability of differences < 5 %-points equivalence_prob = 0.9, equivalence_diff = 0.05 ) # Print trial specification print(binom_trial, prob_digits = 3) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arms: Arm A and Arm B and Arm C #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.333 NA 0.2 NA #> Arm B 0.25 0.333 NA 0.2 NA #> Arm C 0.25 0.333 NA 0.2 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"calibration","dir":"","previous_headings":"Usage and workflow overview","what":"Calibration","title":"Adaptive Trial Simulator","text":"example trial specification, true -arm differences, stopping rules inferiority superiority explicitly defined. intentional, stopping rules calibrated obtain desired probability stopping superiority scenario -arm differences (corresponding Bayesian type 1 error rate). Trial specifications necessarily calibrated, simulations can run directly using run_trials() function covered (run_trial() single simulation). Calibration trial specification done using calibrate_trial() function, defaults calibrate constant, symmetrical stopping rules inferiority superiority (expecting trial specification identical outcomes arm), can used calibrate parameter trial specification towards performance metric. 
calibration successful - calibrated, constant stopping threshold superiority printed results (0.9814318) can extracted using calibrated_binom_trial$best_x. Using default calibration functionality, calibrated, constant stopping threshold inferiority symmetrical, .e., 1 - stopping threshold superiority (0.0185682). calibrated trial specification may extracted using calibrated_binom_trial$best_trial_spec , printed, also include calibrated stopping thresholds. Calibration results may saved (reloaded) using path argument, avoid unnecessary repeated simulations.","code":"# Calibrate the trial specification calibrated_binom_trial <- calibrate_trial( trial_spec = binom_trial, n_rep = 1000, # 1000 simulations for each step (more generally recommended) base_seed = 4131, # Base random seed (for reproducible results) target = 0.05, # Target value for calibrated metric (default value) search_range = c(0.9, 1), # Search range for superiority stopping threshold tol = 0.01, # Tolerance range dir = -1 # Tolerance range only applies below target ) # Print result (to check if calibration is successful) calibrated_binom_trial #> Trial calibration: #> * Result: calibration successful #> * Best x: 0.9814318 #> * Best y: 0.048 #> #> Central settings: #> * Target: 0.05 #> * Tolerance: 0.01 (at or below target, range: 0.04 to 0.05) #> * Search range: 0.9 to 1 #> * Gaussian process controls: #> * - resolution: 5000 #> * - kappa: 0.5 #> * - pow: 1.95 #> * - lengthscale: 1 (constant) #> * - x scaled: yes #> * Noisy: no #> * Narrowing: yes #> #> Calibration/simulation details: #> * Total evaluations: 7 (previous + grid + iterations) #> * Repetitions: 1000 #> * Calibration time: 3.66 mins #> * Base random seed: 4131 #> #> See 'help(\"calibrate_trial\")' for details."},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"summarising-results","dir":"","previous_headings":"Usage and workflow overview","what":"Summarising results","title":"Adaptive Trial Simulator","text":"results simulations using calibrated trial specification conducted calibration procedure may extracted using calibrated_binom_trial$best_sims. results can summarised several functions. functions support different ‘selection strategies’ simulations ending superiority, .e., performance metrics can calculated assuming different arms used clinical practice arm ultimately superior. check_performance() function summarises performance metrics tidy data.frame, uncertainty measures (bootstrapped confidence intervals) requested. , performance metrics calculated considering ‘best’ arm (.e., one highest probability overall best) selected simulations ending superiority: Similar results list format (without uncertainty measures) can obtained using summary() method, comes print() method providing formatted results: Individual simulation results may extracted tidy data.frame using extract_results(). 
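A short hedged sketch of post-processing the tidy data.frame returned by extract_results(), complementing the summary functions above. The columns used below (final_status, selected_arm) are those shown in the extract_results() output earlier in this document; the input object is the calibrated specification's stored simulations.

# Sketch (not from the original README): simple base-R post-processing of the
# per-simulation results returned by extract_results()
res <- extract_results(
  calibrated_binom_trial$best_sims,
  select_strategy = "best"
)
# Overall distribution of final trial statuses
table(res$final_status)
# Proportion of simulations ending inconclusively at the maximum sample size
mean(res$final_status == "max")
# Distribution of selected arms across simulations
table(res$selected_arm, useNA = "ifany")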
Finally, probabilities different remaining arms statuses (uncertainty) last adaptive analysis can summarised using check_remaining_arms() function.","code":"# Calculate performance metrics with uncertainty measures binom_trial_performance <- check_performance( calibrated_binom_trial$best_sims, select_strategy = \"best\", uncertainty = TRUE, # Calculate uncertainty measures n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, # 95% confidence intervals (default) boot_seed = \"base\" # Use same random seed for bootstrapping as for simulations ) # Print results print(binom_trial_performance, digits = 2) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.00 0.00 0.00 1000.00 1000.00 #> 2 size_mean 1749.60 11.36 10.97 1727.20 1772.10 #> 3 size_sd 373.74 9.64 9.74 355.15 392.58 #> 4 size_median 2000.00 0.00 0.00 2000.00 2000.00 #> 5 size_p25 1400.00 52.43 0.00 1400.00 1500.00 #> 6 size_p75 2000.00 0.00 0.00 2000.00 2000.00 #> 7 size_p0 400.00 NA NA NA NA #> 8 size_p100 2000.00 NA NA NA NA #> 9 sum_ys_mean 438.69 2.95 2.85 432.74 444.66 #> 10 sum_ys_sd 96.20 2.42 2.37 91.28 100.79 #> 11 sum_ys_median 486.00 1.98 2.97 483.00 490.00 #> 12 sum_ys_p25 364.75 10.95 9.64 352.00 395.00 #> 13 sum_ys_p75 506.00 1.15 1.48 504.00 508.00 #> 14 sum_ys_p0 88.00 NA NA NA NA #> 15 sum_ys_p100 565.00 NA NA NA NA #> 16 ratio_ys_mean 0.25 0.00 0.00 0.25 0.25 #> 17 ratio_ys_sd 0.01 0.00 0.00 0.01 0.01 #> 18 ratio_ys_median 0.25 0.00 0.00 0.25 0.25 #> 19 ratio_ys_p25 0.24 0.00 0.00 0.24 0.24 #> 20 ratio_ys_p75 0.26 0.00 0.00 0.26 0.26 #> 21 ratio_ys_p0 0.20 NA NA NA NA #> 22 ratio_ys_p100 0.30 NA NA NA NA #> 23 prob_conclusive 0.43 0.02 0.01 0.40 0.46 #> 24 prob_superior 0.05 0.01 0.01 0.04 0.06 #> 25 prob_equivalence 0.38 0.02 0.01 0.35 0.41 #> 26 prob_futility 0.00 0.00 0.00 0.00 0.00 #> 27 prob_max 0.57 0.02 0.01 0.54 0.60 #> 28 prob_select_arm_Arm A 0.32 0.02 0.01 0.29 0.35 #> 29 prob_select_arm_Arm B 0.31 0.01 0.01 0.28 0.34 #> 30 prob_select_arm_Arm C 0.37 0.02 0.02 0.34 0.40 #> 31 prob_select_none 0.00 0.00 0.00 0.00 0.00 #> 32 rmse 0.02 0.00 0.00 0.02 0.02 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.01 0.00 0.00 0.01 0.01 #> 35 mae_te NA NA NA NA NA #> 36 idp NA NA NA NA NA binom_trial_summary <- summary( calibrated_binom_trial$best_sims, select_strategy = \"best\" ) print(binom_trial_summary) #> Multiple simulation results: generic binomially distributed outcome trial #> * Undesirable outcome #> * Number of simulations: 1000 #> * Number of simulations summarised: 1000 (all trials) #> * No common control arm #> * Selection strategy: best remaining available #> * Treatment effect compared to: no comparison #> #> Performance metrics (using posterior estimates from final analysis [all patients]): #> * Sample sizes: mean 1749.6 (SD: 373.7) | median 2000.0 (IQR: 1400.0 to 2000.0) [range: 400.0 to 2000.0] #> * Total summarised outcomes: mean 438.7 (SD: 96.2) | median 486.0 (IQR: 364.8 to 506.0) [range: 88.0 to 565.0] #> * Total summarised outcome rates: mean 0.251 (SD: 0.011) | median 0.250 (IQR: 0.244 to 0.258) [range: 0.198 to 0.295] #> * Conclusive: 42.9% #> * Superiority: 4.8% #> * Equivalence: 38.1% #> * Futility: 0.0% [not assessed] #> * Inconclusive at max sample size: 57.1% #> * Selection probabilities: Arm A: 31.8% | Arm B: 31.0% | Arm C: 37.2% | None: 0.0% #> * RMSE / MAE: 0.01730 / 0.01102 #> * RMSE / MAE treatment effect: not estimated / not estimated #> * Ideal design percentage: not estimable #> #> Simulation details: #> * Simulation time: 33.1 secs #> * Base 
random seed: 4131 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Estimation method: posterior medians with MAD-SDs"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"visualising-results","dir":"","previous_headings":"Usage and workflow overview","what":"Visualising results","title":"Adaptive Trial Simulator","text":"Several visualisation functions included (optional, require ggplot2 package installed). Convergence stability one performance metrics may visually assessed using plot_convergence() function: empirical cumulative distribution functions continuous performance metrics may also visualised: status probabilities overall trial (specific arms) according trial progress can visualised using plot_status() function: Finally, various metrics may summarised progress one multiple trial simulations using plot_history() function, requires non-sparse results (sparse argument must FALSE calibrate_trials(), run_trials(), run_trial(), leading additional results saved).","code":"plot_convergence( calibrated_binom_trial$best_sims, metrics = c(\"size mean\", \"prob_superior\", \"prob_equivalence\"), # select_strategy can be specified, but does not affect the chosen metrics ) plot_metrics_ecdf( calibrated_binom_trial$best_sims, metrics = \"size\" ) # Overall trial status probabilities plot_status( calibrated_binom_trial$best_sims, x_value = \"total n\" # Total number of randomised patients at X-axis )"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"use-calibrated-stopping-thresholds-in-another-scenario","dir":"","previous_headings":"Usage and workflow overview","what":"Use calibrated stopping thresholds in another scenario","title":"Adaptive Trial Simulator","text":"calibrated stopping thresholds (calibrated scenario -arm differences) may used run simulations overall trial specification, according different scenario (.e., -arm differences present) assess performance metrics (including Bayesian analogue power). First, new trial specification setup using settings , except -arm differences calibrated stopping thresholds: Simulations using trial specification calibrated stopping thresholds differences present can conducted using run_trials() function performance metrics calculated : , simulations may saved reloaded using path argument. 
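A hedged sketch of the path argument mentioned just above, which stores simulation results on disk so that re-running the same script reloads them instead of re-simulating. The file name is arbitrary/hypothetical, and binom_trial_calib_diff refers to the difference-scenario specification set up in the accompanying code below.

# Sketch (not from the original README): save/reload simulations via 'path'
binom_trial_diff_sims <- run_trials(
  binom_trial_calib_diff,  # difference-scenario specification defined below
  n_rep = 1000,
  base_seed = 1234,
  path = "binom_trial_diff_sims.rds"  # hypothetical file name
)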
Similarly, overall trial statuses scenario differences can visualised:","code":"binom_trial_calib_diff <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.20, 0.30), # Different outcomes in the arms min_probs = rep(0.20, 3), data_looks = seq(from = 300, to = 2000, by = 100), randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), # Stopping rules for inferiority/superiority explicitly defined # using the calibration results inferiority = 1 - calibrated_binom_trial$best_x, superiority = calibrated_binom_trial$best_x, equivalence_prob = 0.9, equivalence_diff = 0.05 ) binom_trial_diff_sims <- run_trials( binom_trial_calib_diff, n_rep = 1000, # 1000 simulations (more generally recommended) base_seed = 1234 # Reproducible results ) check_performance( binom_trial_diff_sims, select_strategy = \"best\", uncertainty = TRUE, n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, boot_seed = \"base\" ) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.000 0.000 0.000 1000.000 1000.000 #> 2 size_mean 1242.300 16.620 16.976 1209.895 1273.025 #> 3 size_sd 531.190 7.251 7.604 516.617 544.091 #> 4 size_median 1200.000 22.220 0.000 1200.000 1300.000 #> 5 size_p25 800.000 36.095 0.000 700.000 800.000 #> 6 size_p75 1700.000 42.453 0.000 1700.000 1800.000 #> 7 size_p0 400.000 NA NA NA NA #> 8 size_p100 2000.000 NA NA NA NA #> 9 sum_ys_mean 284.999 3.695 3.726 277.724 291.991 #> 10 sum_ys_sd 117.265 1.701 1.732 113.765 120.311 #> 11 sum_ys_median 279.000 5.268 4.448 269.500 289.512 #> 12 sum_ys_p25 186.000 6.682 7.413 174.000 197.019 #> 13 sum_ys_p75 390.000 7.633 7.413 374.000 402.250 #> 14 sum_ys_p0 81.000 NA NA NA NA #> 15 sum_ys_p100 519.000 NA NA NA NA #> 16 ratio_ys_mean 0.232 0.000 0.001 0.231 0.233 #> 17 ratio_ys_sd 0.016 0.000 0.000 0.015 0.017 #> 18 ratio_ys_median 0.230 0.001 0.000 0.230 0.232 #> 19 ratio_ys_p25 0.221 0.000 0.000 0.220 0.222 #> 20 ratio_ys_p75 0.242 0.001 0.001 0.240 0.243 #> 21 ratio_ys_p0 0.195 NA NA NA NA #> 22 ratio_ys_p100 0.298 NA NA NA NA #> 23 prob_conclusive 0.877 0.011 0.010 0.857 0.898 #> 24 prob_superior 0.731 0.014 0.015 0.706 0.759 #> 25 prob_equivalence 0.146 0.011 0.011 0.125 0.167 #> 26 prob_futility 0.000 0.000 0.000 0.000 0.000 #> 27 prob_max 0.123 0.011 0.010 0.102 0.143 #> 28 prob_select_arm_Arm A 0.038 0.006 0.006 0.026 0.049 #> 29 prob_select_arm_Arm B 0.962 0.006 0.006 0.951 0.974 #> 30 prob_select_arm_Arm C 0.000 0.000 0.000 0.000 0.000 #> 31 prob_select_none 0.000 0.000 0.000 0.000 0.000 #> 32 rmse 0.020 0.001 0.001 0.019 0.022 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.011 0.000 0.000 0.010 0.012 #> 35 mae_te NA NA NA NA NA #> 36 idp 98.100 0.306 0.297 97.549 98.700 plot_status(binom_trial_diff_sims, x_value = \"total n\")"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"issues-and-enhancements","dir":"","previous_headings":"","what":"Issues and enhancements","title":"Adaptive Trial Simulator","text":"use GitHub issue tracker bug/issue reports proposals enhancements.","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"contributing","dir":"","previous_headings":"","what":"Contributing","title":"Adaptive Trial Simulator","text":"welcome contributions directly code improve performance well new functionality. latter, please first explain motivate issue. 
Changes code base follow steps: Fork repository Make branch appropriate name fork Implement changes fork, make sure passes R CMD check (neither errors, warnings, notes) add bullet top NEWS.md short description change, GitHub handle id pull request implementing change (check NEWS.md file see formatting) Create pull request dev branch adaptr","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Adaptive Trial Simulator","text":"use package, please consider citing :","code":"citation(package = \"adaptr\") #> #> To cite package 'adaptr' in publications use: #> #> Granholm A, Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: an R #> package for simulating and comparing adaptive clinical trials. #> Journal of Open Source Software, 7(72), 4284. URL #> https://doi.org/10.21105/joss.04284. #> #> A BibTeX entry for LaTeX users is #> #> @Article{, #> title = {{adaptr}: an R package for simulating and comparing adaptive clinical trials}, #> author = {Anders Granholm and Aksel Karl Georg Jensen and Theis Lange and Benjamin Skov Kaas-Hansen}, #> journal = {Journal of Open Source Software}, #> year = {2022}, #> volume = {7}, #> number = {72}, #> pages = {4284}, #> url = {https://doi.org/10.21105/joss.04284}, #> doi = {10.21105/joss.04284}, #> }"},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":null,"dir":"Reference","previous_headings":"","what":"adaptr: Adaptive Trial Simulator — adaptr-package","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"Adaptive Trial Simulator adaptr package simulates adaptive (multi-arm, multi-stage) randomised clinical trials using adaptive stopping, adaptive arm dropping /response-adaptive randomisation. package developed part INCEPT (Intensive Care Platform Trial) project, funded primarily grant Sygeforsikringen \"danmark\".","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"adaptr package contains following primary functions (order typical use): setup_cluster() initiates parallel computation cluster can used run simulations post-processing parallel, increasing speed. Details parallelisation options running adaptr functions parallel described setup_cluster() documentation. setup_trial() function general function sets trial specification. simpler, special-case functions setup_trial_binom() setup_trial_norm() may used easier specification trial designs using binary, binomially distributed continuous, normally distributed outcomes, respectively, limitations flexibility. calibrate_trial() function calibrates trial specification obtain certain value performance metric (typically used calibrate Bayesian type 1 error rate scenario -arm differences), using functions . run_trial() run_trials() functions used conduct single multiple simulations, respectively, according trial specification setup described #2. extract_results(), check_performance() summary() functions used extract results multiple trial simulations, calculate performance metrics, summarise results. plot_convergence() function assesses stability performance metrics according number simulations conducted. plot_metrics_ecdf() function plots empirical cumulative distribution functions numerical performance metrics. 
check_remaining_arms() function summarises combinations remaining arms across multiple trials simulations. plot_status() plot_history() functions used plot overall trial/arm statuses multiple simulated trials history trial metrics time single/multiple simulated trials, respectively. information see documentation function Overview vignette (vignette(\"Overview\", package = \"adaptr\")) example functions work combination. examples guidance setting trial specifications, see setup_trial() documentation, Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")). using package, please consider citing using citation(package = \"adaptr\").","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"Granholm , Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: R package simulating comparing adaptive clinical trials. Journal Open Source Software, 7(72), 4284. doi:10.21105/joss.04284 Granholm , Kaas-Hansen BS, Lange T, Schjørring OL, Andersen LW, Perner , Jensen AKG, Møller MH (2022). overview methodological considerations regarding adaptive stopping, arm dropping randomisation clinical trials. J Clin Epidemiol. doi:10.1016/j.jclinepi.2022.11.002 Website/manual GitHub repository Examples studies using adaptr: Granholm , Lange T, Harhay MO, Jensen AKG, Perner , Møller MH, Kaas-Hansen BS (2023). Effects duration follow-lag data collection performance adaptive clinical trials. Pharm Stat. doi:10.1002/pst.2342 Granholm , Lange T, Harhay MO, Perner , Møller MH, Kaas-Hansen BS (2024). Effects sceptical priors performance adaptive clinical trials binary outcomes. Pharm Stat. doi:10.1002/pst.2387","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"Maintainer: Anders Granholm andersgran@gmail.com (ORCID) Authors: Benjamin Skov Kaas-Hansen epiben@hey.com (ORCID) contributors: Aksel Karl Georg Jensen akje@sund.ku.dk (ORCID) [contributor] Theis Lange thlan@sund.ku.dk (ORCID) [contributor]","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":null,"dir":"Reference","previous_headings":"","what":"Check availability of required packages — assert_pkgs","title":"Check availability of required packages — assert_pkgs","text":"Used internally, helper function check SUGGESTED packages available. 
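To illustrate how the primary functions listed above fit together, here is a minimal sketch of a typical adaptr workflow. The function names and arguments follow the documentation above; the specific design (two arms, assumed event probabilities, look schedule, seed and number of simulations) is purely illustrative and not taken from the original text.

library(adaptr)

# Set up a trial specification with a binary outcome (illustrative design)
binom_trial <- setup_trial_binom(
  arms = c("A", "B"),
  true_ys = c(0.25, 0.20),   # assumed true event probabilities
  data_looks = 1:5 * 200     # adaptive analyses after every 200 patients
)

# Run multiple simulations with a reproducible base seed
res <- run_trials(binom_trial, n_rep = 100, base_seed = 12345)

# Summarise performance metrics (bootstrapped uncertainty is optional)
check_performance(res, uncertainty = TRUE, n_boot = 1000, boot_seed = "base")
summary(res)

# Assess stability of metrics and visualise overall trial statuses
# (the plotting functions require the suggested ggplot2 package)
plot_convergence(res)
plot_status(res, x_value = "total n")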
halt execution queried packages available provide installation instructions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check availability of required packages — assert_pkgs","text":"","code":"assert_pkgs(pkgs = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check availability of required packages — assert_pkgs","text":"pkgs, character vector name(s) package(s) check.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check availability of required packages — assert_pkgs","text":"TRUE packages available, otherwise execution halted error.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the ideal design percentage — calculate_idp","title":"Calculate the ideal design percentage — calculate_idp","text":"Used internally check_performance(), calculates ideal design percentage described function's documentation.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the ideal design percentage — calculate_idp","text":"","code":"calculate_idp(sels, arms, true_ys, highest_is_best)"},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the ideal design percentage — calculate_idp","text":"sels character vector specifying selected arms (according selection strategies described extract_results()). arms character vector unique names trial arms. true_ys numeric vector specifying true outcomes (e.g., event probabilities, mean values, etc.) trial arms. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the ideal design percentage — calculate_idp","text":"single numeric value 0 100 corresponding ideal design percentage.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Calibrate trial specification — calibrate_trial","title":"Calibrate trial specification — calibrate_trial","text":"function calibrates trial specification using Gaussian process-based Bayesian optimisation algorithm. function calibrates input trial specification object (using repeated calls run_trials() adjusting trial specification) target value within search_range single input dimension (x) order find optimal value (y). default (expectedly common use case) calibrate trial specification adjust superiority inferiority thresholds obtain certain probability superiority; used trial specification identical underlying outcomes (-arm differences), probability estimate Bayesian analogue total type-1 error rate outcome driving adaptations, -arm differences present, corresponds estimate Bayesian analogue power. 
default perform calibration varying single, constant, symmetric thresholds superiority / inferiority throughout trial design, described Details, default values chosen function well case. Advanced users may use function calibrate trial specifications according metrics - see Details specify custom function used modify (recreate) trial specification object calibration process. underlying Gaussian process model control hyperparameters described Details, model partially based code Gramacy 2020 (permission; see References).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calibrate trial specification — calibrate_trial","text":"","code":"calibrate_trial( trial_spec, n_rep = 1000, cores = NULL, base_seed = NULL, fun = NULL, target = 0.05, search_range = c(0.9, 1), tol = target/10, dir = 0, init_n = 2, iter_max = 25, resolution = 5000, kappa = 0.5, pow = 1.95, lengthscale = 1, scale_x = TRUE, noisy = is.null(base_seed), narrow = !noisy & !is.null(base_seed), prev_x = NULL, prev_y = NULL, path = NULL, overwrite = FALSE, version = NULL, compress = TRUE, sparse = TRUE, progress = NULL, export = NULL, export_envir = parent.frame(), verbose = FALSE, plot = FALSE )"},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calibrate trial specification — calibrate_trial","text":"trial_spec trial_spec object, generated validated setup_trial(), setup_trial_binom() setup_trial_norm() function. n_rep single integer, number simulations run evaluation. Values < 100 permitted; values < 1000 permitted recommended . cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details. base_seed single integer NULL (default); random seed used basis simulation runs (see run_trials()) random number generation within rest calibration process; used, global random seed restored function run.Note: providing base_seed highly recommended, generally lead faster stable calibration. fun NULL (default), case trial specification calibrated using default process described Details; otherwise user-supplied function used calibration process, structure described Details. target single finite numeric value (defaults 0.05); target value y calibrate trial_spec object . search_range finite numeric vector length 2; lower upper boundaries search best x. Defaults c(0.9, 1.0). tol single finite numeric value (defaults target / 10); accepted tolerance (direction(s) specified dir) accepted; y-value within accepted tolerance target obtained, calibration stops.Note: tol specified sensible considering n_rep; e.g., probability superiority targeted n_rep == 1000, tol 0.01 correspond 10 simulated trials. 
low tol relative n_rep may lead slow calibration calibration succeed regardless number iterations.Important: even large number simulations conducted, using low tol may lead calibration succeeding may also affected factors, e.g., total number simulated patients, possible maximum differences simulated outcomes, number posterior draws (n_draws setup_trial() family functions), affects minimum differences posterior probabilities simulating trials thus can affect calibration, including using default calibration function. Increasing number posterior draws number repetitions attempted desired tolerance achieved lower numbers. dir single numeric value; specifies direction(s) tolerance range. 0 (default) tolerance range target - tol target + tol. < 0, range target - tol target, > 0, range target target + tol. init_n single integer >= 2. number initial evaluations evenly spread search_range, one evaluation boundary (thus, default value 2 minimum permitted value; calibrating according different target default, higher value may sensible). iter_max single integer > 0 (default 25). maximum number new evaluations initial grid (size specified init_n) set . calibration unsuccessful maximum number iterations, prev_x prev_y arguments (described ) may used start new calibration process re-using previous evaluations. resolution single integer (defaults 5000), size grid predictions used select next value evaluate made.Note: memory use substantially increase higher values. See also narrow argument . kappa single numeric value > 0 (default 0.5); corresponding width uncertainty bounds used find next target evaluate. See Details. pow single numerical value [1, 2] range (default 1.95), controlling smoothness Gaussian process. See Details. lengthscale single numerical value (defaults 1) numerical vector length 2; values must finite non-negative. single value provided, used lengthscale hyperparameter; numerical vector length 2 provided, second value must higher first optimal lengthscale range found using optimisation algorithm. value 0, small amount noise added lengthscales must > 0. Controls smoothness combination pow. See Details. scale_x single logical value; TRUE (default) x-values scaled [0, 1] range according minimum/maximum values provided. FALSE, model use original scale. distances original scale small, scaling may preferred. returned values always original scale. See Details. noisy single logical value; FALSE, noiseless process assumed, interpolation values performed (.e., uncertainty x-values assumed). TRUE, y-values assumed come noisy process, regression performed (.e., uncertainty evaluated x-values assumed included predictions). Specifying FALSE requires base_seed supplied, generally recommended, usually lead faster stable calibration. low n_rep used (trials calibrated metrics default), specifying TRUE may necessary even using valid base_seed. Defaults TRUE base_seed supplied FALSE . narrow single logical value. FALSE, predictions evenly spread full x-range. TRUE, prediction grid spread evenly interval consisting two x-values corresponding y-values closest target opposite directions. Can TRUE base_seed provided noisy FALSE (default value TRUE case, otherwise FALSE), function can safely assumed monotonically increasing decreasing (generally reasonable default used fun), case lead faster search smoother prediction grid relevant region without increasing memory use. prev_x, prev_y numeric vectors equal lengths, corresponding previous evaluations. 
provided, used calibration process (added initial grid setup, values grid matching values prev_x leading evaluations skipped). path single character string NULL (default); valid file path provided, calibration results either saved path (file exist overwrite TRUE, see ) previous results loaded returned (file exists, overwrite FALSE, input trial_spec central control settings identical previous run, otherwise error produced). Results saved/loaded using saveRDS() / readRDS() functions. overwrite single logical, defaults FALSE, case previous results loaded valid file path provided path object path contains input trial_spec previous calibration used central control settings (otherwise, function errors). TRUE valid file path provided path, complete calibration function run results saved using saveRDS(), regardless whether previous result saved path. version passed saveRDS() saving calibration results, defaults NULL (saveRDS()), means current default version used. Ignored calibration results saved. compress passed saveRDS() saving calibration results, defaults TRUE (saveRDS()), see saveRDS() options. Ignored calibration results saved. sparse, progress, export, export_envir passed run_trials(), see description . verbose single logical, defaults FALSE. TRUE, function print details calibration progress. plot single logical, defaults FALSE. TRUE, function print plots Gaussian process model predictions return part final object; requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calibrate trial specification — calibrate_trial","text":"list special class \"trial_calibration\", contains following elements can extracted using $ [[: success: single logical, TRUE calibration succeeded best result within tolerance range, FALSE calibration process ended allowed iterations without obtaining result within tolerance range. best_x: single numerical value, x-value (original, input scale) best y-value found, regardless success. best_y: single numerical value, best y-value obtained, regardless success. best_trial_spec: best calibrated version original trial_spec object supplied, regardless success (.e., returned trial specification object adequately calibrated success TRUE). best_sims: trial simulation results (run_trials()) leading best y-value, regardless success. new simulations conducted (e.g., best y-value one prev_y-values), NULL. evaluations: two-column data.frame containing variables x y, corresponding x-values y-values (including values supplied prev_x/prev_y). input_trial_spec: unaltered, uncalibrated, original trial_spec-object provided function. elapsed_time: total run time calibration process. control: list central settings provided function. fun: function used calibration; NULL supplied starting calibration, default function (described Details) returned used function. adaptr_version: version adaptr package used run calibration process. plots: list containing ggplot2 plot objects Gaussian process suggestion step, included plot TRUE.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Calibrate trial specification — calibrate_trial","text":"Default calibration fun NULL (default), default calibration strategy employed. 
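As a brief, hypothetical sketch of how the returned trial_calibration object may be used (re-using the illustrative specification from the Examples section, and wrapped in if (FALSE) as there because calibration is computationally demanding):

if (FALSE) {
  binom_trial <- setup_trial_binom(arms = c("A", "B"), true_ys = c(0.25, 0.25),
                                   data_looks = 1:5 * 200)
  res <- calibrate_trial(binom_trial, n_rep = 1000, base_seed = 23)

  res$success           # TRUE if the best y value was within the tolerance range
  res$best_x            # calibrated stopping threshold for superiority
                        # (1 - res$best_x is the matching inferiority threshold)
  res$best_y            # performance metric value obtained at best_x
  calib_spec <- res$best_trial_spec   # calibrated trial specification

  # If unsuccessful within iter_max iterations, a new run may re-use the
  # previous evaluations (matching x values in the grid are then skipped)
  res2 <- calibrate_trial(binom_trial, n_rep = 1000, base_seed = 23,
                          prev_x = res$evaluations$x,
                          prev_y = res$evaluations$y)
}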
, target y probability superiority (described check_performance() summary()), function calibrate constant stopping thresholds superiority inferiority (described setup_trial(), setup_trial_binom(), setup_trial_norm()), corresponds Bayesian analogues type 1 error rate differences arms trial specification, expect common use case, power, differences arms trial specification. stopping calibration process , default case, use input x stopping threshold superiority 1 - x stopping threshold inferiority, respectively, .e., stopping thresholds constant symmetric. underlying default function calibrated typically essentially noiseless high enough number simulations used appropriate random base_seed, generally monotonically decreasing. default values control hyperparameters set normally work well case (including init_n, kappa, pow, lengthscale, narrow, scale_x, etc.). Thus, initial grid evaluations used case, base_seed provided, noiseless process assumed narrowing search range iteration performed, uncertainty bounds used acquisition function (corresponding quantiles posterior predictive distribution) relatively narrow. Specifying calibration functions user-specified calibration function following structure: Note: changes trial specification validated; users define calibration function need ensure changes calibrated trial specifications lead invalid values; otherwise, procedure prone error simulations run. Especially, users aware changing true_ys trial specification generated using simplified setup_trial_binom() setup_trial_norm() functions requires changes multiple places object, including functions used generate random outcomes, cases (otherwise doubt) re-generating trial_spec instead modifying preferred safer leads proper validation. Note: y values corresponding certain x values known, user may directly return values without running simulations (e.g., default case x 1 require >100% <0% probabilities stopping rules, impossible, hence y value case definition 1). Gaussian process optimisation function control hyperparameters calibration function uses relatively simple Gaussian optimisation function settings work well default calibration function, can changed required, considered calibrating according targets (effects using settings may evaluated greater detail setting verbose plot TRUE). function may perform interpolation (.e., assuming noiseless, deterministic process uncertainty values already evaluated) regression (.e., assuming noisy, stochastic process), controlled noisy argument. covariance matrix (kernel) defined : exp(-||x - x'||^pow / lengthscale) ||x -x'|| corresponding matrix containing absolute Euclidean distances values x (values prediction grid), scaled [0, 1] range scale_x TRUE original scale FALSE. Scaling generally recommended (leads comparable predictable effects pow lengthscale, regardless true scale), also recommended range values smaller range. absolute distances raised power pow, must value [1, 2] range. Together lengthscale, pow controls smoothness Gaussian process model, 1 corresponding less smoothing (.e., piecewise straight lines evaluations lengthscale 1) values > 1 corresponding smoothing. raising absolute distances chosen power pow, resulting matrix divided lengthscale. default 1 (change), values < 1 leads faster decay correlations thus less smoothing (wiggly fits), values > 1 leads smoothing (less wiggly fits). single specific value supplied lengthscale used; range values provided, secondary optimisation process determines value use within range. 
minimal noise (\"jitter\") always added diagonals matrices relevant ensure numerical stability; noisy TRUE, \"nugget\" value determined using secondary optimisation process Predictions made equally spaced grid x values size resolution; narrow TRUE, grid spread x values corresponding y values closest closes target, respectively, leading finer grid range relevance (described , used processes assumed noiseless used process can safely assumed monotonically increasing decreasing within search_range). suggest next x value evaluations, function uses acquisition function based bi-directional uncertainty bounds (posterior predictive distributions) widths controlled kappa hyperparameter. Higher kappa/wider uncertainty bounds leads increased exploration (.e., algorithm prone select values high uncertainty, relatively far existing evaluations), lower kappa/narrower uncertainty bounds leads increased exploitation (.e., algorithm prone select values less uncertainty, closer best predicted mean values). value x grid leading one boundaries smallest absolute distance target chosen (within narrowed range, narrow TRUE). See Greenhill et al, 2020 References general description acquisition functions. IMPORTANT: recommend control hyperparameters explicitly specified, even default calibration function. Although default values sensible default calibration function, may change future. , generally recommend users perform small-scale comparisons (.e., fewer simulations final calibration) calibration process different hyperparameters specific use cases beyond default (possibly guided setting verbose plot options TRUE) running substantial number calibrations simulations, exact choices may important influence speed likelihood success calibration process. responsibility user specify sensible values settings hyperparameters.","code":"# The function must take the arguments x and trial_spec # trial_spec is the original trial_spec object which should be modified # (alternatively, it may be re-specified, but the argument should still # be included, even if ignored) function(x, trial_spec) { # Calibrate trial_spec, here as in the default function trial_spec$superiority <- x trial_spec$inferiority <- 1 - x # If relevant, known y values corresponding to specific x values may be # returned without running simulations (here done as in the default # function). In that case, a code block line the one below can be included, # with changed x/y values - of note, the other return values should not be # changed if (x == 1) { return(list(sims = NULL, trial_spec = trial_spec, y = 0)) } # Run simulations - this block should be included unchanged sims <- run_trials(trial_spec, n_rep = n_rep, cores = cores, base_seed = base_seed, sparse = sparse, progress = progress, export = export, export_envir = export_envir) # Return results - only the y value here should be changed # summary() or check_performance() will often be used here list(sims = sims, trial_spec = trial_spec, y = summary(sims)$prob_superior) }"},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Calibrate trial specification — calibrate_trial","text":"Gramacy RB (2020). Chapter 5: Gaussian Process Regression. : Surrogates: Gaussian Process Modeling, Design Optimization Applied Sciences. Chapman Hall/CRC, Boca Raton, Florida, USA. Available online. Greenhill S, Rana S, Gupta S, Vellanki P, Venkatesh S (2020). Bayesian Optimization Adaptive Experimental Design: Review. 
IEEE Access, 8, 13937-13948. doi:10.1109/ACCESS.2020.2966228","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calibrate trial specification — calibrate_trial","text":"","code":"if (FALSE) { # Setup a trial specification to calibrate # This trial specification has similar event rates in all arms # and as the default calibration settings are used, this corresponds to # assessing the Bayesian type 1 error rate for this design and scenario binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\"), true_ys = c(0.25, 0.25), data_looks = 1:5 * 200) # Run calibration using default settings for most parameters res <- calibrate_trial(binom_trial, n_rep = 1000, base_seed = 23) # Print calibration summary result res }"},{"path":"https://inceptdk.github.io/adaptr/reference/cat0.html","id":null,"dir":"Reference","previous_headings":"","what":"cat() with sep = ","title":"cat() with sep = ","text":"Used internally. Passes everything cat() enforces sep = \"\". Relates cat() paste0() relates paste().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/cat0.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"cat() with sep = ","text":"","code":"cat0(...)"},{"path":"https://inceptdk.github.io/adaptr/reference/cat0.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"cat() with sep = ","text":"... strings concatenated printed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Check performance metrics for trial simulations — check_performance","title":"Check performance metrics for trial simulations — check_performance","text":"Calculates performance metrics trial specification based simulation results run_trials() function, bootstrapped uncertainty measures requested. Uses extract_results(), may used directly extract key trial results without summarising. function also used summary() calculate performance metrics presented function.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check performance metrics for trial simulations — check_performance","text":"","code":"check_performance( object, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, uncertainty = FALSE, n_boot = 5000, ci_width = 0.95, boot_seed = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check performance metrics for trial simulations — check_performance","text":"object trial_results object, output run_trials() function. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. 
trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. restrict single character string NULL. NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. uncertainty single logical; FALSE (default) uncertainty measures calculated, TRUE, non-parametric bootstrapping used calculate uncertainty measures. n_boot single integer (default 5000); number bootstrap samples use uncertainty = TRUE. Values < 100 allowed values < 1000 lead warning, results likely unstable cases. 
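As a small illustration of the selection strategies and the restrict argument described above (re-using the illustrative design from the Examples section; only documented option strings are used):

library(adaptr)

binom_trial <- setup_trial_binom(arms = c("A", "B", "C", "D"), control = "A",
                                 true_ys = c(0.20, 0.18, 0.22, 0.24),
                                 data_looks = 1:20 * 100)
res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345)

# No arm considered selected in simulations not ending with superiority
check_performance(res, select_strategy = "none")

# Best remaining arm selected; summarise only simulations ending with superiority
check_performance(res, select_strategy = "best", restrict = "superior")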
ci_width single numeric >= 0 < 1, width percentile-based bootstrapped confidence intervals. Defaults 0.95, corresponding 95% confidence intervals. boot_seed single integer, NULL (default), \"base\". value provided, value used initiate random seeds bootstrapping global random seed restored function run. \"base\" specified, base_seed specified run_trials() used. Regardless whether simulations run sequentially parallel, bootstrapped results identical boot_seed specified. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check performance metrics for trial simulations — check_performance","text":"tidy data.frame added class trial_performance (control number digits printed, see print()), columns \"metric\" (described ), \"est\" (estimate metric), following four columns uncertainty = TRUE: \"err_sd\"(bootstrapped SDs), \"err_mad\" (bootstrapped MAD-SDs, described setup_trial() stats::mad()), \"lo_ci\", \"hi_ci\", latter two corresponding lower/upper limits percentile-based bootstrapped confidence intervals. Bootstrap estimates calculated minimum (_p0) maximum values (_p100) size, sum_ys, ratio_ys, non-parametric bootstrapping minimum/maximum values sensible - bootstrap estimates values NA. following performance metrics calculated: n_summarised: number simulations summarised. size_mean, size_sd, size_median, size_p25, size_p75, size_p0, size_p100: mean, standard deviation, median well 25-, 75-, 0- (min), 100- (max) percentiles sample sizes (number patients randomised simulated trial) summarised trial simulations. sum_ys_mean, sum_ys_sd, sum_ys_median, sum_ys_p25, sum_ys_p75, sum_ys_p0, sum_ys_p100: mean, standard deviation, median well 25-, 75-, 0- (min), 100- (max) percentiles total sum_ys across arms summarised trial simulations (e.g., total number events trials binary outcome, sums continuous values patients across arms trials continuous outcome). Always uses outcomes randomised patients regardless whether patients outcome data available time trial stopping (corresponding sum_ys_all results run_trial()). ratio_ys_mean, ratio_ys_sd, ratio_ys_median, ratio_ys_p25, ratio_ys_p75, ratio_ys_p0, ratio_ys_p100: mean, standard deviation, median well 25-, 75-, 0- (min), 100- (max) percentiles final ratio_ys (sum_ys described divided total number patients randomised) across arms summarised trial simulations. prob_conclusive: proportion (0 1) conclusive trial simulations, .e., simulations stopped maximum sample size without superiority, equivalence futility decision. prob_superior, prob_equivalence, prob_futility, prob_max: proportion (0 1) trial simulations stopped superiority, equivalence, futility inconclusive maximum allowed sample size, respectively.Note: metrics may make sense summarised simulation results restricted. prob_select_*: selection probabilities arm selection, according specified selection strategy. 
Contains one element per arm, named prob_select_arm_ prob_select_none probability selecting arm. rmse, rmse_te: root mean squared errors estimates selected arm treatment effect, described extract_results(). mae, mae_te: median absolute errors estimates selected arm treatment effect, described extract_results(). idp: ideal design percentage (IDP; 0-100%), see Details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check performance metrics for trial simulations — check_performance","text":"ideal design percentage (IDP) returned based Viele et al, 2020 doi:10.1177/1740774519877836 (also described Granholm et al, 2022 doi:10.1016/j.jclinepi.2022.11.002 , also describes performance measures) adapted work trials desirable/undesirable outcomes non-binary outcomes. Briefly, expected outcome calculated sum true outcomes arm multiplied corresponding selection probabilities (ignoring simulations selected arm). IDP calculated : desirable outcomes (highest_is_best TRUE):100 * (expected outcome - lowest true outcome) / (highest true outcome - lowest true outcome) undesirable outcomes (highest_is_best FALSE):100 - IDP calculated desirable outcomes","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check performance metrics for trial simulations — check_performance","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # Check performance measures, without assuming that any arm is selected in # the inconclusive simulations, with bootstrapped uncertainty measures # (unstable in this example due to the very low number of simulations # summarised): check_performance(res, select_strategy = \"none\", uncertainty = TRUE, n_boot = 1000, boot_seed = \"base\") #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 10.000 0.000 0.000 10.000 10.000 #> 2 size_mean 1840.000 162.458 237.216 1520.000 2000.000 #> 3 size_sd 505.964 297.470 250.048 0.000 772.873 #> 4 size_median 2000.000 66.847 0.000 2000.000 2000.000 #> 5 size_p25 2000.000 362.022 0.000 800.000 2000.000 #> 6 size_p75 2000.000 0.000 0.000 2000.000 2000.000 #> 7 size_p0 400.000 NA NA NA NA #> 8 size_p100 2000.000 NA NA NA NA #> 9 sum_ys_mean 369.900 33.912 36.324 293.050 419.500 #> 10 sum_ys_sd 105.352 46.692 56.287 19.191 162.759 #> 11 sum_ys_median 390.000 16.984 4.448 373.000 418.500 #> 12 sum_ys_p25 376.500 67.721 16.309 152.750 392.000 #> 13 sum_ys_p75 408.500 21.318 25.945 388.500 460.000 #> 14 sum_ys_p0 84.000 NA NA NA NA #> 15 sum_ys_p100 466.000 NA NA NA NA #> 16 ratio_ys_mean 0.202 0.005 0.005 0.193 0.212 #> 17 ratio_ys_sd 0.016 0.003 0.003 0.008 0.021 #> 18 ratio_ys_median 0.196 0.006 0.003 0.190 0.210 #> 19 ratio_ys_p25 0.194 0.005 0.003 0.181 0.200 #> 20 ratio_ys_p75 0.209 0.009 0.009 0.195 0.230 #> 21 ratio_ys_p0 0.180 NA NA NA NA #> 22 ratio_ys_p100 0.233 NA NA NA NA #> 23 prob_conclusive 0.100 0.102 0.148 0.000 0.300 #> 24 prob_superior 0.100 0.102 0.148 0.000 0.300 #> 25 prob_equivalence 0.000 0.000 0.000 0.000 0.000 #> 26 prob_futility 0.000 0.000 0.000 0.000 0.000 #> 27 prob_max 0.900 0.102 0.148 0.700 1.000 #> 28 
prob_select_arm_A 0.000 0.000 0.000 0.000 0.000 #> 29 prob_select_arm_B 0.100 0.102 0.148 0.000 0.300 #> 30 prob_select_arm_C 0.000 0.000 0.000 0.000 0.000 #> 31 prob_select_arm_D 0.000 0.000 0.000 0.000 0.000 #> 32 prob_select_none 0.900 0.102 0.148 0.700 1.000 #> 33 rmse 0.023 0.000 0.000 0.023 0.023 #> 34 rmse_te 0.182 0.000 0.000 0.182 0.182 #> 35 mae 0.023 0.000 0.000 0.023 0.023 #> 36 mae_te 0.182 0.000 0.000 0.182 0.182 #> 37 idp 100.000 0.000 0.000 100.000 100.000"},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":null,"dir":"Reference","previous_headings":"","what":"Check remaining arm combinations — check_remaining_arms","title":"Check remaining arm combinations — check_remaining_arms","text":"function summarises numbers proportions combinations remaining arms (.e., excluding arms dropped inferiority futility analysis, arms dropped equivalence earlier analyses trials common control) across multiple simulated trial results. function supplements extract_results(), check_performance(), summary() functions, especially useful designs > 2 arms, provides details functions mentioned .","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check remaining arm combinations — check_remaining_arms","text":"","code":"check_remaining_arms(object, ci_width = 0.95)"},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check remaining arm combinations — check_remaining_arms","text":"object trial_results object, output run_trials() function. ci_width single numeric >= 0 < 1, width approximate confidence intervals proportions combinations (calculated analytically). Defaults 0.95, corresponding 95% confidence intervals.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check remaining arm combinations — check_remaining_arms","text":"data.frame containing combinations remaining arms, sorted descending order , following columns: arm_*, one column per arm, named arm_. columns contain empty character string \"\" dropped arms (including arms dropped final analysis), otherwise \"superior\", \"control\", \"equivalence\" (equivalent final analysis), \"active\", described run_trial(). n integer vector, number trial simulations ending combination remaining arms specified preceding columns. prop numeric vector, proportion trial simulations ending combination remaining arms specified preceding columns. 
se,lo_ci,hi_ci: standard error prop confidence intervals width specified ci_width.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check remaining arm combinations — check_remaining_arms","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 200, equivalence_prob = 0.7, equivalence_diff = 0.03, equivalence_only_first = FALSE) # Run 35 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 25, base_seed = 12345) # Check remaining arms (printed with fewer digits) print(check_remaining_arms(res), digits = 3) #> arm_A arm_B arm_C arm_D n prop se lo_ci hi_ci #> 1 superior 5 0.20 0.179 0 0.551 #> 2 superior 5 0.20 0.179 0 0.551 #> 3 control active active active 5 0.20 0.179 0 0.551 #> 4 control equivalence 1 0.04 0.196 0 0.424"},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":null,"dir":"Reference","previous_headings":"","what":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"Used internally, estimates covariance matrices used Gaussian process optimisation function. Calculates pairwise absolute distances raised power (defaults 2) using pow_abs_dist() function, divides result lengthscale hyperparameter (defaults 1, .e., changes due division), subsequently returns inverse exponentiation resulting matrix.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"","code":"cov_mat(x1, x2 = x1, g = NULL, pow = 2, lengthscale = 1)"},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"x1 numeric vector, length corresponding number rows returned matrix. x2 numeric vector, length corresponding number columns returned matrix. specified, x1 used x2. g single numerical value; jitter/nugget value added diagonal NULL (default); supplied x1 x2, avoid potentially negative values matrix diagonal due numerical instability. pow single numeric value, power distances raised . Defaults 2, corresponding pairwise, squared, Euclidean distances. 
lengthscale single numerical value; lengthscale hyperparameter matrix returned pow_abs_dist() divided inverse exponentiation done.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"Covariance matrix length(x1) rows length(x2) columns used Gaussian process optimiser.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate single trial after setting seed — dispatch_trial_runs","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"Helper function dispatch running several trials lapply() parallel::parLapply(), setting seeds correctly base_seed used calling run_trials(). Used internally calls run_trials() function.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"","code":"dispatch_trial_runs(is, trial_spec, seeds, sparse, cores, cl = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"vector integers, simulation numbers/indices. trial_spec trial specification provided setup_trial(), setup_trial_binom() setup_trial_norm(). sparse single logical, described run_trial(); defaults TRUE running multiple simulations, case data necessary summarise simulations saved simulation. FALSE, detailed data simulation saved, allowing detailed printing individual trial results plotting using plot_history() (plot_status() require non-sparse results). cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details. cl NULL (default) running sequentially, otherwise parallel cluster parallel computation cores > 1.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"Single trial simulation object, described run_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":null,"dir":"Reference","previous_headings":"","what":"Assert equivalent functions — equivalent_funs","title":"Assert equivalent functions — equivalent_funs","text":"Used internally. 
Compares definitions two functions (ignoring environments, bytecodes, etc., comparing function arguments bodies, using deparse()).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Assert equivalent functions — equivalent_funs","text":"","code":"equivalent_funs(fun1, fun2)"},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Assert equivalent functions — equivalent_funs","text":"fun1, fun2 functions compare.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Assert equivalent functions — equivalent_funs","text":"Single logical.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract history — extract_history","title":"Extract history — extract_history","text":"Used internally. Extracts relevant parameters conducted adaptive analysis single trial.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract history — extract_history","text":"","code":"extract_history(object, metric = \"prob\")"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract history — extract_history","text":"object single trial_result run_trial(), works run argument sparse = FALSE. metric either \"prob\" (default), case allocation probabilities adaptive analysis returned; \"n\"/\"n \", case total number patients available follow-data (\"n\") allocated (\"n \") arm adaptive analysis returned; \"pct\"/\"pct \" case proportions patients allocated available follow-data (\"pct\") allocated total (\"pct \") arm total number patients returned; \"sum ys\"/\"sum ys \", case total summed available outcome data (\"sum ys\") total summed outcome data including outcomes patients randomised necessarily reached follow-yet (\"sum ys \") arm adaptive analysis returned; \"ratio ys\"/\"ratio ys \", case total summed outcomes specified \"sum ys\"/\"sum ys \" divided number patients analysis adaptive returned.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract history — extract_history","text":"tidy data.frame (one row per arm per look) containing following columns: look: consecutive numbers (integers) interim look. look_ns: total number patients (integers) outcome data available current adaptive analysis look arms trial. look_ns_all: total number patients (integers) randomised current adaptive analysis look arms trial. arm: current arm trial. value: described metric.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract simulation results — extract_results","title":"Extract simulation results — extract_results","text":"function extracts relevant information multiple simulations trial specification tidy data.frame (1 simulation per row). 
See also check_performance() summary() functions, uses output function summarise simulation results.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract simulation results — extract_results","text":"","code":"extract_results( object, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract simulation results — extract_results","text":"object trial_results object, output run_trials() function. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. 
TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract simulation results — extract_results","text":"data.frame containing following columns: sim: simulation number (1 total number simulations). final_n: final sample size simulation. sum_ys: sum total counts arms, e.g., total number events trials binary outcome (setup_trial_binom()) sum arm totals trials continuous outcome (setup_trial_norm()). Always uses outcome data randomised patients regardless whether patients outcome data available time trial stopping (corresponding sum_ys_all results run_trial()). ratio_ys: calculated sum_ys/final_n (described ). final_status: final trial status simulation, either \"superiority\", \"equivalence\", \"futility\", \"max\", described run_trial(). superior_arm: final superior arm simulations stopped superiority. NA simulations stopped superiority. selected_arm: final selected arm (described ). correspond superior_arm simulations stopped superiority NA arm selected. See select_strategy . err: squared error estimate selected arm, calculated estimated effect - true effect selected arm. sq_err: squared error estimate selected arm, calculated err^2 selected arm, err defined . err_te: error treatment effect comparing selected arm comparator arm (specified te_comp). Calculated :(estimated effect selected arm - estimated effect comparator arm) - (true effect selected arm - true effect comparator arm) NA simulations without selected arm, comparator specified (see te_comp ), selected arm comparator arm. 
sq_err_te: squared error treatment effect comparing selected arm comparator arm (specified te_comp), calculated err_te^2, err_te defined .","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Extract simulation results — extract_results","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # Extract results and Select the control arm if available # in simulations not ending with superiority extract_results(res, select_strategy = \"control\") #> sim final_n sum_ys ratio_ys final_status superior_arm selected_arm #> 1 1 2000 387 0.1935 max A #> 2 2 2000 391 0.1955 max A #> 3 3 2000 359 0.1795 max #> 4 4 2000 389 0.1945 max A #> 5 5 2000 373 0.1865 max A #> 6 6 400 84 0.2100 superiority B B #> 7 7 2000 395 0.1975 max A #> 8 8 2000 442 0.2210 max A #> 9 9 2000 413 0.2065 max A #> 10 10 2000 466 0.2330 max A #> err sq_err err_te sq_err_te #> 1 0.027072360 7.329127e-04 NA NA #> 2 0.027225919 7.412507e-04 NA NA #> 3 NA NA NA NA #> 4 0.028619492 8.190753e-04 NA NA #> 5 -0.014477338 2.095933e-04 NA NA #> 6 -0.022699865 5.152839e-04 -0.1820624 0.0331467 #> 7 0.009098866 8.278937e-05 NA NA #> 8 0.010663973 1.137203e-04 NA NA #> 9 0.015544164 2.416210e-04 NA NA #> 10 0.019152691 3.668256e-04 NA NA"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"Used internally extract_results(). Extracts results batch simulations simulation object multiple simulation results returned run_trials(), used facilitate parallelisation.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"","code":"extract_results_batch( trial_results, control = control, select_strategy = select_strategy, select_last_arm = select_last_arm, select_preferences = select_preferences, te_comp = te_comp, which_ests = which_ests, te_comp_index = te_comp_index, te_comp_true_y = te_comp_true_y )"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"trial_results list trial results summarise, current batch. control single character string, common control arm trial specification (NULL none). select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. 
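As a brief sketch of how the tidy data.frame returned by extract_results() can be post-processed directly (re-using the illustrative design from the example above; the manual calculations are assumptions intended to mirror the prob_conclusive and rmse metrics described for check_performance()):

library(adaptr)

binom_trial <- setup_trial_binom(arms = c("A", "B", "C", "D"), control = "A",
                                 true_ys = c(0.20, 0.18, 0.22, 0.24),
                                 data_looks = 1:20 * 100)
res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345)
extr <- extract_results(res, select_strategy = "control")

# Proportion of conclusive simulations (not stopped at the maximum sample size)
mean(extr$final_status != "max")

# Root mean squared error of the selected-arm estimates, which should
# correspond to the rmse metric reported by check_performance()
sqrt(mean(extr$sq_err, na.rm = TRUE))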
following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). which_ests single character string, combination raw_ests final_ests arguments extract_results(). te_comp_index single integer, index treatment effect comparator arm (NULL none). te_comp_true_y single numeric value, true y value treatment effect comparator arm (NULL none).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"data.frame containing columns returned extract_results() described function (sim start 1, changed relevant extract_results()).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract statuses — extract_statuses","title":"Extract statuses — extract_statuses","text":"Used internally. Extracts overall trial statuses statuses single arm multiple trial simulations. Works sparse results.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract statuses — extract_statuses","text":"","code":"extract_statuses(object, x_value, arm = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract statuses — extract_statuses","text":"object trial_results object run_trials(). 
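As a brief aside, the err and sq_err columns documented above can be aggregated into overall error summaries by hand; this is only an illustrative sketch that recreates the objects from the extract_results() example above (the aggregation lines are the only addition, and broadly similar summaries are also reported by check_performance()):
library(adaptr)
binom_trial <- setup_trial_binom(arms = c("A", "B", "C", "D"), control = "A",
                                 true_ys = c(0.20, 0.18, 0.22, 0.24),
                                 data_looks = 1:20 * 100)
res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345)
extr <- extract_results(res, select_strategy = "control")
# Root mean squared error of the estimates in the selected arms,
# ignoring simulations where no arm was selected
sqrt(mean(extr$sq_err, na.rm = TRUE))
# Median absolute error, using the err column directly
median(abs(extr$err), na.rm = TRUE)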
x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis. arm character vector containing one unique, valid arm names, NA, NULL (default). NULL, overall trial statuses plotted, otherwise specified arms arms (NA specified) plotted.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract statuses — extract_statuses","text":"tidy data.frame (one row possible status per look) containing following columns: x: look numbers total number patients look, specified x_value. status: possible status (\"Recruiting\", \"Inferiority\" (relevant individual arms), \"Futility\", \"Equivalence\", \"Superiority\", relevant). p: proportion (0-1) patients status value x. value: described metric.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":null,"dir":"Reference","previous_headings":"","what":"Find beta distribution parameters from thresholds — find_beta_params","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"Helper function find beta distribution parameters corresponding fewest possible patients events/non-events specified event proportion. Used Advanced example vignette (vignette(\"Advanced-example\", \"adaptr\")) derive beta prior distributions use beta-binomial conjugate models, based belief true event probability lies within specified percentile-based interval (defaults 95%). May similarly used users derive beta priors.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"","code":"find_beta_params( theta = NULL, boundary_target = NULL, boundary = \"lower\", interval_width = 0.95, n_dec = 0, max_n = 10000 )"},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"theta single numeric > 0 < 1, expected true event probability. boundary_target single numeric > 0 < 1, target lower upper boundary interval. boundary single character string, either \"lower\" (default) \"upper\", used select boundary use finding appropriate parameters beta distribution. interval_width width credible interval whose lower/upper boundary used (see boundary_target); must > 0 < 1; defaults 0.95. n_dec single non-negative integer; returned parameters rounded number decimals. Defaults 0, case parameters correspond whole number patients. max_n single integer > 0 (default 10000), maximum total sum parameters, corresponding maximum total number patients considered function finding optimal parameter values. 
Corresponds maximum number patients contributing information beta prior; default number patients unlikely used beta prior.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"single-row data.frame five columns: two shape parameters beta distribution (alpha, beta), rounded according n_dec, actual lower upper boundaries interval median (appropriate names, e.g. p2.5, p50, p97.5 95% interval), using rounded values.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":null,"dir":"Reference","previous_headings":"","what":"Format digits before printing — fmt_dig","title":"Format digits before printing — fmt_dig","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format digits before printing — fmt_dig","text":"","code":"fmt_dig(x, dig)"},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format digits before printing — fmt_dig","text":"x numeric, numeric value(s) format. dig single integer, number digits.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format digits before printing — fmt_dig","text":"Formatted character string.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":null,"dir":"Reference","previous_headings":"","what":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"","code":"fmt_pct(e, n, dec = 1)"},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"e integer, numerator (e.g., number events). n integer, denominator (e.g., total number patients). dec integer, number decimals percentage.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"Formatted character string.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate draws from posterior beta-binomial distributions — get_draws_binom","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"Used internally. 
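Because find_beta_params() (documented just above) is also intended for users deriving their own beta priors, a short usage sketch may help; the numbers are purely illustrative:
library(adaptr)
# Smallest beta prior whose 95% interval has a lower boundary of roughly 0.15
# when the expected (believed) true event probability is 0.25
find_beta_params(theta = 0.25, boundary_target = 0.15, boundary = "lower")
The returned alpha/beta values can then be used as prior parameters in a beta-binomial model, as done in the Advanced example vignette.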
function generates draws posterior distributions using separate beta-binomial models (binomial outcome, conjugate beta prior) arm, flat (beta(1, 1)) priors.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"","code":"get_draws_binom(arms, allocs, ys, control, n_draws)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"arms character vector, currently active arms specified setup_trial() / setup_trial_binom() / setup_trial_norm(). allocs character vector, allocations patients (including allocations currently inactive arms). ys numeric vector, outcomes patients order alloc (including outcomes patients currently inactive arms). control unused argument built-functions setup_trial_binom() setup_trial_norm, required argument supplied run_trial() function, may used user-defined functions used generate posterior draws. n_draws single integer, number posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"matrix (numeric values) length(arms) columns n_draws rows, arms column names.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_generic.html","id":null,"dir":"Reference","previous_headings":"","what":"Generic documentation for get_draws_* functions — get_draws_generic","title":"Generic documentation for get_draws_* functions — get_draws_generic","text":"Used internally. See setup_trial() function documentation additional details specify functions generate posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_generic.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generic documentation for get_draws_* functions — get_draws_generic","text":"arms character vector, currently active arms specified setup_trial() / setup_trial_binom() / setup_trial_norm(). allocs character vector, allocations patients (including allocations currently inactive arms). ys numeric vector, outcomes patients order alloc (including outcomes patients currently inactive arms). control unused argument built-functions setup_trial_binom() setup_trial_norm, required argument supplied run_trial() function, may used user-defined functions used generate posterior draws. n_draws single integer, number posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_generic.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generic documentation for get_draws_* functions — get_draws_generic","text":"matrix (numeric values) length(arms) columns n_draws rows, arms column names.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate draws from posterior normal distributions — get_draws_norm","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"Used internally. function generates draws posterior, normal distributions continuous outcomes. 
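For intuition, the beta-binomial posterior draws that get_draws_binom() is described as producing (independent conjugate models per arm with flat beta(1, 1) priors, returned as a matrix with one column per arm and one row per draw) can be mimicked in base R; this is only a conceptual sketch with made-up counts, not the package's internal code:
# Hypothetical event counts and sample sizes for two arms
events   <- c(A = 20, B = 12)
patients <- c(A = 100, B = 95)
n_draws  <- 5000
# One beta(1 + events, 1 + non-events) posterior per arm
draws <- sapply(names(events), function(a)
  rbeta(n_draws, 1 + events[a], 1 + patients[a] - events[a]))
dim(draws)      # n_draws rows, one column per arm
colnames(draws) # arm names, as in the matrix described above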
Technically, posteriors use priors (simulation speed), corresponding use improper flat priors. posteriors correspond (give similar results) using normal-normal models (normally distributed outcome, conjugate normal prior) arm, assuming non-informative, flat prior used. Thus, posteriors directly correspond normal distributions groups' mean mean groups' standard error standard deviation. necessary always return valid draws, cases < 2 patients randomised arm, posterior draws come extremely wide normal distribution mean corresponding mean included patients outcome data standard deviation corresponding difference highest lowest recorded outcomes patients available outcome data multiplied 1000.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"","code":"get_draws_norm(arms, allocs, ys, control, n_draws)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"arms character vector, currently active arms specified setup_trial() / setup_trial_binom() / setup_trial_norm(). allocs character vector, allocations patients (including allocations currently inactive arms). ys numeric vector, outcomes patients order alloc (including outcomes patients currently inactive arms). control unused argument built-functions setup_trial_binom() setup_trial_norm, required argument supplied run_trial() function, may used user-defined functions used generate posterior draws. n_draws single integer, number posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"matrix (numeric values) length(arms) columns n_draws rows, arms column names.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate binary outcomes from binomial distributions — get_ys_binom","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"Used internally. Function factory used generate function generates binary outcomes binomial distributions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"","code":"get_ys_binom(arms, event_probs)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"arms character vector arms specified setup_trial_binom(). 
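The function-factory behaviour described for get_ys_binom() (a generator that takes a character vector of allocations and returns randomly generated 0/1 outcomes using each arm's true event probability) can be sketched as follows; the helper name and probabilities are hypothetical and this is not the package's implementation:
# Hypothetical outcome generator mirroring the documented behaviour
make_ys_binom_sketch <- function(arms, event_probs) {
  names(event_probs) <- arms
  function(allocs) rbinom(length(allocs), size = 1, prob = event_probs[allocs])
}
get_ys <- make_ys_binom_sketch(c("A", "B"), c(0.20, 0.25))
get_ys(c("A", "A", "B", "A", "B"))  # 0/1 vector of the same length as allocs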
event_probs numeric vector true event probabilities arms specified setup_trial_binom().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"function takes argument allocs (character vector allocations) returns numeric vector similar length corresponding, randomly generated outcomes (0 1, binomial distribution).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate normally distributed continuous outcomes — get_ys_norm","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"Used internally. Function factory used generate function generates outcomes normal distributions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"","code":"get_ys_norm(arms, means, sds)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"arms character vector, arms specified setup_trial_norm(). means numeric vector, true means arms specified setup_trial_norm(). sds numeric vector, true standard deviations (sds) arms specified setup_trial_norm().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"function takes argument allocs (character vector allocations) returns numeric vector length corresponding, randomly generated outcomes (normal distributions).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":null,"dir":"Reference","previous_headings":"","what":"Gaussian process-based optimisation — gp_opt","title":"Gaussian process-based optimisation — gp_opt","text":"Used internally. Simple Gaussian process-based Bayesian optimisation function, used find next value evaluate (x) calibrate_trial() function. Uses single input dimension, may rescaled [0, 1] range function, covariance structure based absolute distances values, raised power (pow) subsequently divided lengthscale inverse exponentiation resulting matrix used. pow lengthscale hyperparameters consequently control smoothness controlling rate decay correlations distance. optimisation algorithm uses bi-directional uncertainty bounds acquisition function suggests next target evaluate, wider uncertainty bounds (higher kappa) leading increased 'exploration' (.e., function prone suggest new target values uncertainty high often best evaluation far) narrower uncertainty bounds leading increased 'exploitation' (.e., function prone suggest new target values relatively close mean predictions model). dir argument controls whether suggested value (based uncertainty bounds) value closest target either direction (dir = 0), target (dir > 0), target (dir < 0), , preferred. 
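The covariance structure described above for gp_opt() (absolute distances raised to pow, divided by the lengthscale, then inverse-exponentiated) can be written down directly; cov_sketch() below is a hypothetical stand-in for illustration only, not the package's cov_mat(), and it ignores the optional rescaling of x to the [0, 1] range:
# Correlations decay with absolute distance, controlled by pow and lengthscale
cov_sketch <- function(x1, x2 = x1, pow = 1.95, lengthscale = 1) {
  d <- abs(outer(x1, x2, "-"))  # pairwise absolute distances
  exp(-(d^pow) / lengthscale)   # inverse exponentiation of the scaled distances
}
round(cov_sketch(c(0, 0.25, 0.5, 1)), 3)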
function evaluated noise-free monotonically increasing decreasing, optimisation function can narrow range predictions based input evaluations (narrow = TRUE), leading finer grid potential new targets suggest compared predictions spaced full range. new value evaluate function suggested already evaluated, random noise added ensure evaluation new value (narrow FALSE, noise based random draw normal distribution current suggested value mean standard deviation x values SD, truncated range x-values; narrow TRUE, new value drawn uniform distribution within current narrowed range suggested. strategies, process repeated suggested value 'new'). Gaussian process model used partially based code Gramacy 2020 (permission), see References.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Gaussian process-based optimisation — gp_opt","text":"","code":"gp_opt( x, y, target, dir = 0, resolution = 5000, kappa = 1.96, pow = 1.95, lengthscale = 1, scale_x = TRUE, noisy = FALSE, narrow = FALSE )"},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Gaussian process-based optimisation — gp_opt","text":"x numeric vector, previous values function calibrated evaluated. y numeric vector, corresponding results previous evaluations x values (must length x). target single numeric value, desired target value calibration process. dir single numeric value (default 0), used selecting next value evaluate . See which_nearest() description. resolution single integer (default 5000), size grid predictions used select next value evaluate made.Note: memory use time substantially increase higher values. kappa single numeric value > 0 (default 1.96), used width uncertainty bounds (based Gaussian process posterior predictive distribution), used select next value evaluate . pow single numerical value, passed cov_mat() controls smoothness Gaussian process. 1 (smoothness, piecewise straight lines subsequent x/y-coordinate lengthscale described 1) 2; defaults 1.95, leads slightly faster decay correlations x values internally scaled [0, 1]-range compared 2. lengthscale single numerical value (default 1) numerical vector length 2; values must finite non-negative. single value provided, used lengthscale hyperparameter passed directly cov_mat(). numerical vector length 2 provided, second value must higher first optimal lengthscale range found using optimisation algorithm. value 0, minimum amount noise added lengthscales must > 0. Controls smoothness/decay combination pow. scale_x single logical value; TRUE (default) x-values scaled [0, 1] range according minimum/maximum values provided. FALSE, model use original scale. distances original scale small, scaling may preferred. returned values always original scale. noisy single logical value. FALSE (default), noiseless process assumed, interpolation values performed (.e., uncertainty evaluated x-values); TRUE, y-values assumed come noisy process, regression performed (.e., uncertainty evaluated x-values included predictions, amount estimated using optimisation algorithm). narrow single logical value. FALSE (default), predictions evenly spread full x-range. TRUE, prediction grid spread evenly interval consisting two x-values corresponding y-values closest target opposite directions. 
setting used noisy FALSE function can safely assumed monotonically increasing decreasing, case lead faster search smoother prediction grid relevant region without increasing memory use.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Gaussian process-based optimisation — gp_opt","text":"List containing two elements, next_x, single numerical value, suggested next x value evaluate function, predictions, data.frame resolution rows four columns: x, x grid values predictions made; y_hat, predicted means, lub uub, lower upper uncertainty bounds predictions according kappa.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Gaussian process-based optimisation — gp_opt","text":"Gramacy RB (2020). Chapter 5: Gaussian Process Regression. : Surrogates: Gaussian Process Modeling, Design Optimization Applied Sciences. Chapman Hall/CRC, Boca Raton, Florida, USA. Available online. Greenhill S, Rana S, Gupta S, Vellanki P, Venkatesh S (2020). Bayesian Optimization Adaptive Experimental Design: Review. IEEE Access, 8, 13937-13948. doi:10.1109/ACCESS.2020.2966228","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":null,"dir":"Reference","previous_headings":"","what":"Make x-axis scale for history/status plots — make_x_scale","title":"Make x-axis scale for history/status plots — make_x_scale","text":"Used internally. Prepares x-axis scale history/status plots. Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Make x-axis scale for history/status plots — make_x_scale","text":"","code":"make_x_scale(x_value)"},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Make x-axis scale for history/status plots — make_x_scale","text":"x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Make x-axis scale for history/status plots — make_x_scale","text":"appropriate scale ggplot2 plot x-axis according value specified x_value.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":null,"dir":"Reference","previous_headings":"","what":"Make y-axis scale for history/status plots — make_y_scale","title":"Make y-axis scale for history/status plots — make_y_scale","text":"Used internally. Prepares y-axis scale history/status plots. 
Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Make y-axis scale for history/status plots — make_y_scale","text":"","code":"make_y_scale(y_value)"},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Make y-axis scale for history/status plots — make_y_scale","text":"y_value single character string, determining values plotted y-axis. following options available: allocation probabilities (\"prob\", default), total number patients outcome data available (\"n\") randomised (\"n \") arm, percentage patients outcome data available (\"pct\") randomised (\"pct \") arm current total, sum available (\"sum ys\") outcome data outcome data randomised patients including outcome data available time current adaptive analysis (\"sum ys \"), ratio outcomes defined \"sum ys\"/\"sum ys \" divided corresponding number patients arm.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Make y-axis scale for history/status plots — make_y_scale","text":"appropriate scale ggplot2 plot y-axis according value specified y_value.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot convergence of performance metrics — plot_convergence","title":"Plot convergence of performance metrics — plot_convergence","text":"Plots performance metrics according number simulations conducted multiple simulated trials. simulated trial results may split number batches illustrate stability performance metrics across different simulations. Calculations done according specified selection restriction strategies described extract_results() check_performance(). Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot convergence of performance metrics — plot_convergence","text":"","code":"plot_convergence( object, metrics = \"size mean\", resolution = 100, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, n_split = 1, nrow = NULL, ncol = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot convergence of performance metrics — plot_convergence","text":"object trial_results object, output run_trials() function. metrics performance metrics plot, described check_performance(). Multiple metrics may plotted time. Valid metrics include: size_mean, size_sd, size_median, size_p25, size_p75, size_p0, size_p100, sum_ys_mean, sum_ys_sd, sum_ys_median, sum_ys_p25, sum_ys_p75, sum_ys_p0, sum_ys_p100, ratio_ys_mean, ratio_ys_sd, ratio_ys_median, ratio_ys_p25, ratio_ys_p75, ratio_ys_p0, ratio_ys_p100, prob_conclusive, prob_superior, prob_equivalence, prob_futility, prob_max, prob_select_* (* either \"arm_ arm names none), rmse, rmse_te, mae, mae_te, idp. may specified , case sensitive, either spaces underlines. Defaults \"size mean\". 
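Since the metrics names just listed are case sensitive but accept either spaces or underscores, the two calls below should be equivalent; a small sketch assuming a res_mult object from run_trials() as in the examples elsewhere on this page:
# Underscore and space spellings refer to the same performance metrics
plot_convergence(res_mult, metrics = c("prob_superior", "ratio_ys_mean"))
plot_convergence(res_mult, metrics = c("prob superior", "ratio ys mean"))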
resolution single positive integer, number points calculated plotted, defaults 100 must >= 10. Higher numbers lead smoother plots, increases computation time. value specified higher number simulations (simulations per split), maximum possible value used instead. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. 
See setup_trial() details estimates calculated. restrict single character string NULL. NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. n_split single positive integer, number consecutive batches simulation results split , plotted separately. Default 1 (splitting); maximum value number simulations summarised (restrictions) divided 10. nrow, ncol number rows columns plotting multiple metrics plot (using faceting ggplot2). Defaults NULL, case determined automatically. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot convergence of performance metrics — plot_convergence","text":"ggplot2 plot object.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot convergence of performance metrics — plot_convergence","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run multiple simulation with a fixed random base seed res_mult <- run_trials(binom_trial, n_rep = 25, base_seed = 678) # NOTE: the number of simulations in this example is smaller than # recommended - the plots reflect that, and show that performance metrics # are not stable and have likely not converged yet # Convergence plot of mean sample sizes plot_convergence(res_mult, metrics = \"size mean\") } if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Convergence plot of mean sample sizes and ideal design percentages, # with simulations split in 2 batches plot_convergence(res_mult, metrics = c(\"size mean\", \"idp\"), n_split = 2) }"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot trial metric history — plot_history","title":"Plot trial metric history — plot_history","text":"Plots history relevant metrics progress single multiple trial simulations. Simulated trials contribute time stopped, .e., trials stopped earlier others, contribute summary statistics later adaptive looks. Data individual arms trial contribute complete trial stopped. 
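The restrict argument described above can be combined with the error-based metrics, e.g. to follow the RMSE only across simulations where an arm was selected; a sketch assuming the same res_mult object (keeping in mind the caveat that some metrics change interpretation when restricted):
# Convergence of RMSE, restricted to simulations with a selected arm
plot_convergence(res_mult, metrics = "rmse", restrict = "selected")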
history plots require non-sparse results (sparse set FALSE; see run_trial() run_trials()) ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot trial metric history — plot_history","text":"","code":"plot_history(object, x_value = \"look\", y_value = \"prob\", line = NULL, ...) # S3 method for trial_result plot_history(object, x_value = \"look\", y_value = \"prob\", line = NULL, ...) # S3 method for trial_results plot_history( object, x_value = \"look\", y_value = \"prob\", line = NULL, ribbon = list(width = 0.5, alpha = 0.2), cores = NULL, ... )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot trial metric history — plot_history","text":"object trial_results object, output run_trials() function. x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis. y_value single character string, determining values plotted y-axis. following options available: allocation probabilities (\"prob\", default), total number patients outcome data available (\"n\") randomised (\"n \") arm, percentage patients outcome data available (\"pct\") randomised (\"pct \") arm current total, sum available (\"sum ys\") outcome data outcome data randomised patients including outcome data available time current adaptive analysis (\"sum ys \"), ratio outcomes defined \"sum ys\"/\"sum ys \" divided corresponding number patients arm. line list styling lines per ggplot2 conventions (e.g., linetype, linewidth). ... additional arguments, used. ribbon list, line appropriate trial_results objects (.e., multiple simulations run). Also allows specify width interval: must 0 1, 0.5 (default) showing inter-quartile ranges. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. 
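The line and ribbon lists of plot_history() take ordinary ggplot2 styling settings; a sketch assuming a res_mult object created with sparse = FALSE (as required for history plots), with illustrative styling values:
# Percentage of patients with available outcome data per arm, with thicker
# lines and a narrower, more transparent inter-percentile ribbon
plot_history(res_mult, x_value = "look", y_value = "pct",
             line = list(linewidth = 1),
             ribbon = list(width = 0.25, alpha = 0.1))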
See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot trial metric history — plot_history","text":"ggplot2 plot object.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot trial metric history — plot_history","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run a single simulation with a fixed random seed res <- run_trial(binom_trial, seed = 12345) # Plot total allocations to each arm according to overall total allocations plot_history(res, x_value = \"total n\", y_value = \"n\") } if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Run multiple simulation with a fixed random base seed # Notice that sparse = FALSE is required res_mult <- run_trials(binom_trial, n_rep = 15, base_seed = 12345, sparse = FALSE) # Plot allocation probabilities at each look plot_history(res_mult, x_value = \"look\", y_value = \"prob\") # Other y_value options are available but not shown in these examples }"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"Plots empirical cumulative distribution functions (ECDFs) numerical performance metrics across multiple simulations \"trial_results\" object returned run_trials(). Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"","code":"plot_metrics_ecdf( object, metrics = c(\"size\", \"sum_ys\", \"ratio_ys\"), select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, nrow = NULL, ncol = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"object trial_results object, output run_trials() function. metrics performance metrics plot, described extract_results(). Multiple metrics may plotted time. Valid metrics include: size, sum_ys, ratio_ys_mean, sq_err, sq_err_te, err, err_te, abs_err, abs_err_te, (described extract_results(), addition abs_err abs_err_te, absolute errors, .e., abs(err) abs(err_te)). may specified using either spaces underlines (case sensitive). Defaults plotting size, sum_ys, ratio_ys_mean. select_strategy single character string. 
trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. restrict single character string NULL. 
NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. nrow, ncol number rows columns plotting multiple metrics plot (using faceting ggplot2). Defaults NULL, case determined automatically. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"ggplot2 plot object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"Note arguments related arm selection error calculation relevant errors visualised.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run multiple simulation with a fixed random base seed res_mult <- run_trials(binom_trial, n_rep = 25, base_seed = 678) # NOTE: the number of simulations in this example is smaller than # recommended - the plots reflect that, and would likely be smoother if # a larger number of trials had been simulated # Plot ECDFs of continuous performance metrics plot_metrics_ecdf(res_mult) }"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot statuses — plot_status","title":"Plot statuses — plot_status","text":"Plots statuses time multiple simulated trials (overall one specific arms). 
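Beyond the default size/sum_ys/ratio_ys_mean metrics, plot_metrics_ecdf() accepts the error metrics listed above; a sketch assuming the res_mult object from the example above (errors are only defined for simulations with a selected arm):
# ECDFs of the absolute and squared errors of the estimates in the selected arms
plot_metrics_ecdf(res_mult, metrics = c("abs_err", "sq_err"))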
Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot statuses — plot_status","text":"","code":"plot_status( object, x_value = \"look\", arm = NULL, area = list(alpha = 0.5), nrow = NULL, ncol = NULL ) # S3 method for trial_results plot_status( object, x_value = \"look\", arm = NULL, area = list(alpha = 0.5), nrow = NULL, ncol = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot statuses — plot_status","text":"object trial_results object, output run_trials() function. x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis. arm character vector containing one unique, valid arm names, NA, NULL (default). NULL, overall trial statuses plotted, otherwise specified arms arms (NA specified) plotted. area list styling settings area per ggplot2 conventions (e.g., alpha, linewidth). default (list(alpha = 0.5)) sets transparency 50% overlain shaded areas visible. nrow, ncol number rows columns plotting statuses multiple arms plot (using faceting ggplot2). Defaults NULL, case determined automatically relevant.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot statuses — plot_status","text":"ggplot2 plot object.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot statuses — plot_status","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run multiple simulation with a fixed random base seed res_mult <- run_trials(binom_trial, n_rep = 25, base_seed = 12345) # Plot trial statuses at each look according to total allocations plot_status(res_mult, x_value = \"total n\") } if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Plot trial statuses for all arms plot_status(res_mult, arm = NA) }"},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"Used internally, calculates absolute distances values matrix possibly unequal dimensions, raises power.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"","code":"pow_abs_dist(x1, x2 = x1, pow = 2)"},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"x1 numeric vector, length 
corresponding number rows returned matrix. x2 numeric vector, length corresponding number columns returned matrix. specified, x1 used x2. pow single numeric value, power distances raised . Defaults 2, corresponding pairwise, squared, Euclidean distances.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"Matrix length(x1) rows length(x2) columns including calculated absolute pairwise distances raised pow.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":null,"dir":"Reference","previous_headings":"","what":"Print methods for adaptive trial objects — print","title":"Print methods for adaptive trial objects — print","text":"Prints contents first input x human-friendly way, see Details information.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print methods for adaptive trial objects — print","text":"","code":"# S3 method for trial_spec print(x, prob_digits = 3, ...) # S3 method for trial_result print(x, prob_digits = 3, ...) # S3 method for trial_performance print(x, digits = 3, ...) # S3 method for trial_results print( x, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, digits = 1, cores = NULL, ... ) # S3 method for trial_results_summary print(x, digits = 1, ...) # S3 method for trial_calibration print(x, ...)"},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print methods for adaptive trial objects — print","text":"x object print, see Details. prob_digits single integer (default 3), number digits used printing probabilities, allocation probabilities softening powers (2 extra digits added stopping rule probability thresholds trial specifications outcome rates summarised results multiple simulations). ... additional arguments, used. digits single integer, number digits used printing numeric results. Default 3 outputs check_performance() 1 outputs run_trials() accompanying summary() method. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. 
\"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. restrict single character string NULL. NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. 
See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Print methods for adaptive trial objects — print","text":"Invisibly returns x.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Print methods for adaptive trial objects — print","text":"behaviour depends class x: trial_spec: prints trial specification setup setup_trial(), setup_trial_binom() setup_trial_norm(). trial_result: prints results single trial simulated run_trial(). details saved trial_result object thus printed sparse argument run_trial() run_trials() set FALSE; TRUE, fewer details printed, omitted details available printing trial_spec object created setup_trial(), setup_trial_binom() setup_trial_norm(). trial_results: prints results multiple simulations generated using run_trials(). documentation multiple trials summarised printing can found summary() function documentation. trial_results_summary: print method summary multiple simulations trial specification, generated using summary() function object generated run_trials().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"methods-by-class-","dir":"Reference","previous_headings":"","what":"Methods (by class)","title":"Print methods for adaptive trial objects — print","text":"print(trial_spec): Trial specification print(trial_result): Single trial result print(trial_performance): Trial performance metrics print(trial_results): Multiple trial results print(trial_results_summary): Summary multiple trial results print(trial_calibration): Trial calibration","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the probability that all arms are practically equivalent — prob_all_equi","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"Used internally. function takes matrix calculated get_draws_binom(), get_draws_norm() corresponding custom function (specified using fun_draws argument setup_trial(); see get_draws_generic()), equivalence difference, calculates probability arms equivalent (absolute differences highest lowest value set posterior draws less difference considered practically equivalent).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"","code":"prob_all_equi(m, equivalence_diff = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"m matrix one column per trial arm (named arms) one row draw posterior distributions. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. 
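As a minimal sketch of how the print() arguments for multiple simulation results documented above can be combined (the arm names, outcome probabilities, number of simulations and seed below are illustrative and not taken from the package examples):

# Print summarised simulation results using a non-default selection strategy
library(adaptr)
binom_trial <- setup_trial_binom(arms = c("A", "B"),
                                 true_ys = c(0.25, 0.20),
                                 data_looks = 1:5 * 200)
res <- run_trials(binom_trial, n_rep = 10, base_seed = 1)
# "best" selects the best remaining arm in inconclusive simulations;
# digits controls the number of printed digits (default 1)
print(res, select_strategy = "best", digits = 2)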
designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"single numeric value corresponding probability arms practically equivalent.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the probabilities of each arm being the best — prob_best","title":"Calculate the probabilities of each arm being the best — prob_best","text":"Used internally. function takes matrix calculated get_draws_binom(), get_draws_norm() corresponding custom function (specified using fun_draws argument setup_trial(); see get_draws_generic()) calculates probabilities arm best (defined either highest lowest value, specified highest_is_best argument setup_trial(), setup_trial_binom() setup_trial_norm()).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the probabilities of each arm being the best — prob_best","text":"","code":"prob_best(m, highest_is_best = FALSE)"},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the probabilities of each arm being the best — prob_best","text":"m matrix one column per trial arm (named arms) one row draw posterior distributions. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the probabilities of each arm being the best — prob_best","text":"named numeric vector probabilities (names corresponding arms).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate probabilities of comparisons of arms against with common control — prob_better","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"Used internally. function takes matrix calculated get_draws_binom(), get_draws_norm() corresponding custom function (specified using fun_draws argument setup_trial(); see get_draws_generic()) single character specifying control arm, calculates probabilities arm better common control (defined either higher lower control, specified highest_is_best argument setup_trial(), setup_trial_binom() setup_trial_norm()). 
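The prob_best() and prob_all_equi() helpers documented above both operate on a matrix of posterior draws with one named column per arm and one row per draw. A minimal base-R sketch of that input and of the two probabilities they compute, assuming made-up beta posteriors for a two-arm binary outcome (this is not the package's internal implementation):

# Draws matrix: one named column per arm, one row per posterior draw
set.seed(42)
m <- cbind(A = rbeta(5000, 21, 81), B = rbeta(5000, 19, 83))
# Probability of each arm being the best (lowest value, as highest_is_best = FALSE)
prop.table(table(factor(colnames(m)[apply(m, 1, which.min)], levels = colnames(m))))
# Probability that all arms are practically equivalent for an equivalence_diff of 0.05
mean(apply(m, 1, max) - apply(m, 1, min) < 0.05)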
function also calculates equivalence futility probabilities compared common control arm, specified setup_trial(), setup_trial_binom() setup_trial_norm(), unless equivalence_diff futility_diff, respectively, set NULL (default).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"","code":"prob_better( m, control = NULL, highest_is_best = FALSE, equivalence_diff = NULL, futility_diff = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"m matrix one column per trial arm (named arms) one row draw posterior distributions. control single character string specifying common control arm. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"named (row names corresponding trial arms) matrix containing 1-3 columns: probs_better, probs_equivalence (equivalence_diff specified), probs_futile (futility_diff specified). columns contain NA control arm.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate breakpoints and other values for printing progress — prog_breaks","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"Used internally. Generates breakpoints, messages, 'batches' trial numbers simulate using run_trials() progress argument use. Breaks multiples number cores, repeated use values breaks avoided (, e.g., number breaks times number cores possible new trials run). 
Inputs validated run_trials().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"","code":"prog_breaks(progress, prev_n_rep, n_rep_new, cores)"},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"progress single numeric > 0 <= 1 NULL. NULL (default), progress printed console. Otherwise, progress messages printed control intervals proportional value specified progress.Note: printing possible within clusters multiple cores, function conducts batches simulations multiple cores (specified), intermittent printing statuses. Thus, cores finish running current assigned batches cores may proceed next batch. substantial differences simulation speeds across cores, using progress may thus increase total run time (especially small values). prev_n_rep single integer, previous number simulations run (add indices generated used). n_rep_new single integers, number new simulations run (.e., n_rep supplied run_trials() minus number previously run simulations grow used run_trials()). cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"List containing breaks (number patients break), start_mess prog_mess (basis first subsequent progress messages), batches (list entry corresponding simulation numbers batch).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":null,"dir":"Reference","previous_headings":"","what":"Update allocation probabilities — reallocate_probs","title":"Update allocation probabilities — reallocate_probs","text":"Used internally. function calculates new allocation probabilities arm, based information specified setup_trial(), setup_trial_binom() setup_trial_norm() calculated probabilities arm best prob_best().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Update allocation probabilities — reallocate_probs","text":"","code":"reallocate_probs( probs_best, fixed_probs, min_probs, max_probs, soften_power = 1, match_arm = NULL, rescale_fixed = FALSE, rescale_limits = FALSE, rescale_factor = 1, rescale_ignore = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Update allocation probabilities — reallocate_probs","text":"probs_best resulting named vector prob_best() function. fixed_probs numeric vector, fixed allocation probabilities arm. 
Must either numeric vector NA arms without fixed probabilities values 0 1 arms NULL (default), adaptive randomisation used arms one special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", \"match\") specified control_prob_fixed (described ). min_probs numeric vector, lower threshold adaptive allocation probabilities; lower probabilities rounded values. Must NA (default arms) lower threshold wanted arms using fixed allocation probabilities. max_probs numeric vector, upper threshold adaptive allocation probabilities; higher probabilities rounded values. Must NA (default arms) threshold wanted arms using fixed allocation probabilities. soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. match_arm index control arm. NULL (default), control arm allocation probability similar best non-control arm. Must NULL designs without common control arm. rescale_fixed logical indicating whether fixed_probs rescaled following arm dropping. rescale_limits logical indicating whether min/max_probs rescaled following arm dropping. rescale_factor numerical, rescale factor defined initial number arms/number active arms. rescale_ignore NULL index arm ignored rescale_fixed rescale_limits arguments.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Update allocation probabilities — reallocate_probs","text":"named (according arms) numeric vector updated allocation probabilities.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":null,"dir":"Reference","previous_headings":"","what":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"Used internally, helper function replaces non-finite (.e., NA, NaN, Inf, -Inf) values according .finite(), primarily used replace NaN/Inf/-Inf NA.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"","code":"a %f|% b"},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"atomic vector type. 
b single value replace non-finite values .","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"values non-finite, replaced b, otherwise left unchanged.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":null,"dir":"Reference","previous_headings":"","what":"Replace NULL with other value (NULL-OR-operator) — replace_null","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":"Used internally, primarily working list arguments, , e.g., list_name$element_name yields NULL unspecified.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":"","code":"a %||% b"},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":", b atomic values type.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":"NULL, b returned. Otherwise returned.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":null,"dir":"Reference","previous_headings":"","what":"Rescale numeric vector to sum to 1 — rescale","title":"Rescale numeric vector to sum to 1 — rescale","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Rescale numeric vector to sum to 1 — rescale","text":"","code":"rescale(x)"},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Rescale numeric vector to sum to 1 — rescale","text":"x numeric vector.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Rescale numeric vector to sum to 1 — rescale","text":"Numeric vector, x rescaled sum total 1.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate a single trial — run_trial","title":"Simulate a single trial — run_trial","text":"function conducts single trial simulation using trial specification specified setup_trial(), setup_trial_binom() setup_trial_norm(). simulation, function randomises \"patients\", randomly generates outcomes, calculates probabilities arm best (better control, ). followed checking inferiority, superiority, equivalence /futility desired; dropping arms, re-adjusting allocation probabilities according criteria specified trial specification. common control arm, trial simulation stopped final specified adaptive analysis, 1 arm superior others, arms considered equivalent (equivalence assessed). common control arm specified, arms compared , 1 pairwise comparisons crosses applicable superiority threshold adaptive analysis, arm become new control old control considered inferior dropped. 
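A small base-R sketch of what the two internal helper operators documented above do; these are illustrative re-implementations of the described behaviour, not the package's own definitions:

# Finite-OR operator: replace non-finite values (NA, NaN, Inf, -Inf) with a fallback
`%f|%` <- function(a, b) ifelse(is.finite(a), a, b)
# NULL-OR operator: replace NULL with a fallback value
`%||%` <- function(a, b) if (is.null(a)) b else a
c(1, NaN, Inf, -Inf) %f|% NA       # 1 NA NA NA
list(digits = 3)$robust %||% TRUE  # TRUE, as the unspecified element is NULL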
multiple non-control arms cross applicable superiority threshold adaptive analysis, one highest probability overall best become new control. Equivalence/futility also checked specified, equivalent futile arms dropped designs common control arm entire trial stopped remaining arms equivalent designs without common control arm. trial simulation stopped 1 arm left, final arms equivalent, final specified adaptive analysis. stopping (regardless reason), final analysis including outcome data patients randomised arms conducted (final control arm, , used control analysis). Results analysis saved, used regards adaptive stopping rules. particularly relevant less patients available outcome data last adaptive analyses total number patients randomised (specified setup_trial(), setup_trial_binom(), setup_trial_norm()), final analysis include patients randomised, may last adaptive analysis conducted.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate a single trial — run_trial","text":"","code":"run_trial(trial_spec, seed = NULL, sparse = FALSE)"},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate a single trial — run_trial","text":"trial_spec trial_spec object, generated validated setup_trial(), setup_trial_binom() setup_trial_norm() function. seed single integer NULL (default). value provided, value used random seed running global random seed restored function run, affected. sparse single logical; FALSE (default) everything listed included returned object. TRUE, limited amount data included returned object. can practical running many simulations saving results using run_trials() function (relies function), output file thus substantially smaller. However, printing individual trial results substantially less detailed sparse results non-sparse results required plot_history().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate a single trial — run_trial","text":"trial_result object containing everything listed sparse (described ) FALSE. Otherwise final_status, final_n, followed_n, trial_res, seed, sparse included. final_status: either \"superiority\", \"equivalence\", \"futility\", \"max\" (stopped last possible adaptive analysis), calculated adaptive analyses. final_n: total number patients randomised. followed_n: total number patients available outcome data last adaptive analysis conducted. max_n: pre-specified maximum number patients outcome data available last possible adaptive analysis. max_randomised: pre-specified maximum number patients randomised last possible adaptive analysis. looks: numeric vector, total number patients outcome data available conducted adaptive analysis. planned_looks: numeric vector, cumulated number patients planned outcome data available adaptive analysis, even conducted simulation stopped final possible analysis. randomised_at_looks: numeric vector, cumulated number patients randomised conducted adaptive analysis (including relevant numbers analyses actually conducted). start_control: character, initial common control arm (specified). final_control: character, final common control arm (relevant). control_prob_fixed: fixed common control arm probabilities (specified; see setup_trial()). 
inferiority, superiority, equivalence_prob, equivalence_diff, equivalence_only_first, futility_prob, futility_diff, futility_only_first, highest_is_best, soften_power: specified setup_trial(). best_arm: best arm(s), described setup_trial(). trial_res: data.frame containing information specified arm setup_trial() including true_ys (true outcomes specified setup_trial()) arm sum outcomes (sum_ys/sum_ys_all; .e., total number events binary outcomes totals continuous outcomes) sum patients (ns/ns_all), summary statistics raw outcome data (raw_ests/raw_ests_all, calculated specified setup_trial(), defaults mean values, .e., event rates binary outcomes means continuous outcomes) posterior estimates (post_ests/post_ests_all, post_errs/post_errs_all, lo_cri/lo_cri_all, hi_cri/hi_cri_all, calculated specified setup_trial()), final_status arm (\"inferior\", \"superior\", \"equivalence\", \"futile\", \"active\", \"control\" (currently active control arm, including current control stopped equivalence)), status_look (specifying cumulated number patients outcome data available adaptive analysis changed final_status \"superior\", \"inferior\", \"equivalence\", \"futile\"), status_probs, probability (last adaptive analysis arm) arm best/better common control arm ()/equivalent common control arm (stopped equivalence; NA control arm stopped due last remaining arm(s) stopped equivalence)/futile stopped futility last analysis included , final_alloc, final allocation probability arm last time patients randomised , including arms stopped maximum sample size, probs_best_last, probabilities remaining arm overall best last conducted adaptive analysis (NA previously dropped arms).Note: variables data.frame version including _all-suffix included, versions WITHOUT suffix calculated using patients available outcome data time analysis, versions _all-suffixes calculated using outcome data patients randomised time analysis, even reached time follow-yet (see setup_trial()). all_looks: list lists containing one list per conducted trial look (adaptive analysis). lists contain variables arms, old_status (status analysis current round conducted), new_status (specified , status current analysis conducted), sum_ys/sum_ys_all (described ), ns/ns_all (described ), old_alloc (allocation probability used look), probs_best (probabilities arm best current adaptive analysis), new_alloc (allocation probabilities updating current adaptive analysis; NA arms trial stopped adaptive analyses conducted), probs_better_first (common control provided, specifying probabilities arm better control first analysis conducted look), probs_better (probs_better_first, updated another arm becomes new control), probs_equivalence_first probs_equivalence (probs_better/probs_better_first, equivalence equivalence assessed). last variables NA arm active applicable adaptive analysis included next adaptive analysis. allocs: character vector containing allocations patients order randomization. ys: numeric vector containing outcomes patients order randomization (0 1 binary outcomes). seed: random seed used, specified. description, add_info, cri_width, n_draws, robust: specified setup_trial(), setup_trial_binom() setup_trial_norm(). 
sparse: single logical, corresponding sparse input.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simulate a single trial — run_trial","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run trial with a specified random seed res <- run_trial(binom_trial, seed = 12345) # Print results with 3 decimals print(res, digits = 3) #> Single simulation result: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> #> Final status: inconclusive, stopped at final allowed adaptive analysis #> Final/maximum allowed sample sizes: 2000/2000 (100.0%) #> Available outcome data at last adaptive analysis: 2000/2000 (100.0%) #> #> Trial results overview: #> arms true_ys final_status status_look status_probs final_alloc #> A 0.20 active NA NA 0.0232 #> B 0.18 active NA NA 0.8868 #> C 0.22 active NA NA 0.0900 #> D 0.24 inferior 300 0.0092 0.1078 #> #> Esimates from final analysis (all patients): #> arms sum_ys_all ns_all raw_ests_all post_ests_all post_errs_all lo_cri_all #> A 39 161 0.242 0.244 0.03355 0.184 #> B 297 1613 0.184 0.184 0.00987 0.165 #> C 39 180 0.217 0.218 0.03032 0.162 #> D 16 46 0.348 0.351 0.06806 0.224 #> hi_cri_all #> 0.316 #> 0.204 #> 0.279 #> 0.495 #> #> Estimates from last adaptive analysis including each arm: #> arms sum_ys ns raw_ests post_ests post_errs lo_cri hi_cri #> A 39 161 0.242 0.244 0.03461 0.180 0.315 #> B 297 1613 0.184 0.184 0.00938 0.166 0.204 #> C 39 180 0.217 0.219 0.03105 0.164 0.283 #> D 16 46 0.348 0.353 0.06967 0.226 0.492 #> #> Simulation details: #> * Random seed: 12345 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Posterior estimation method: medians with MAD-SDs"},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate multiple trials — run_trials","title":"Simulate multiple trials — run_trials","text":"function conducts multiple simulations using trial specification specified setup_trial(), setup_trial_binom() setup_trial_norm(). function essentially manages random seeds runs multiple simulation using run_trial() - additional details individual simulations provided function's description. function allows simulating trials parallel using multiple cores, automatically saving re-loading saved objects, \"growing\" already saved simulation files (.e., appending additional simulations file).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate multiple trials — run_trials","text":"","code":"run_trials( trial_spec, n_rep, path = NULL, overwrite = FALSE, grow = FALSE, cores = NULL, base_seed = NULL, sparse = TRUE, progress = NULL, version = NULL, compress = TRUE, export = NULL, export_envir = parent.frame() )"},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate multiple trials — run_trials","text":"trial_spec trial_spec object, generated validated setup_trial(), setup_trial_binom() setup_trial_norm() function. n_rep single integer; number simulations run. 
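As a brief, illustrative sketch of how the components of the returned trial_result object listed above can be inspected (component names are as documented; the trial specification repeats the example above):

library(adaptr)
binom_trial <- setup_trial_binom(arms = c("A", "B", "C", "D"),
                                 true_ys = c(0.20, 0.18, 0.22, 0.24),
                                 data_looks = 1:20 * 100)
res <- run_trial(binom_trial, seed = 12345)
res$final_status  # "superiority", "equivalence", "futility" or "max"
res$final_n       # total number of patients randomised
res$followed_n    # patients with outcome data at the last adaptive analysis
res$trial_res     # per-arm data.frame with estimates and final statuses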
path single character string; specified (defaults NULL), files written loaded path using saveRDS() / readRDS() functions. overwrite single logical; defaults FALSE, case previous simulations saved path re-loaded (trial specification used). TRUE, previous file overwritten (even trial specification used). grow TRUE, argument must set FALSE. grow single logical; defaults FALSE. TRUE valid path valid previous file containing less simulations n_rep, additional number simulations run (appropriately re-using base_seed, specified) appended file. cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details. base_seed single integer NULL (default); random seed used basis simulations. Regardless whether simulations run sequentially parallel, random number streams identical appropriate (see setup_cluster() details). sparse single logical, described run_trial(); defaults TRUE running multiple simulations, case data necessary summarise simulations saved simulation. FALSE, detailed data simulation saved, allowing detailed printing individual trial results plotting using plot_history() (plot_status() require non-sparse results). progress single numeric > 0 <= 1 NULL. NULL (default), progress printed console. Otherwise, progress messages printed control intervals proportional value specified progress.Note: printing possible within clusters multiple cores, function conducts batches simulations multiple cores (specified), intermittent printing statuses. Thus, cores finish running current assigned batches cores may proceed next batch. substantial differences simulation speeds across cores, using progress may thus increase total run time (especially small values). version passed saveRDS() saving simulations, defaults NULL (saveRDS()), means current default version used. Ignored simulations saved. compress passed saveRDS() saving simulations, defaults TRUE (saveRDS()), see saveRDS() options. Ignored simulations saved. export character vector names objects export parallel core running parallel; passed varlist argument parallel::clusterExport(). Defaults NULL (objects exported), ignored cores == 1. See Details . export_envir environment look objects defined export running parallel export NULL. Defaults environment function called.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate multiple trials — run_trials","text":"list special class \"trial_results\", contains trial_results (results simulations; note seed NULL individual simulations), trial_spec (trial specification), n_rep, base_seed, elapsed_time (total simulation run time), sparse (described ) adaptr_version (version adaptr package used run simulations). results may extracted, summarised, plotted using extract_results(), check_performance(), summary(), print.trial_results(), plot_convergence(), check_remaining_arms(), plot_status(), plot_history() functions. 
See definitions functions additional details details additional arguments used select arms simulations ending superiority summary choices.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate multiple trials — run_trials","text":"Exporting objects using multiple cores setup_trial() used define trial specification custom functions (fun_y_gen, fun_draws, fun_raw_est arguments setup_trial()) run_trials() run cores > 1, necessary export additional functions objects used functions defined user outside function definitions provided. Similarly, functions external packages loaded using library() require() must exported called prefixed namespace, .e., package::function. export export_envir arguments used export objects calling parallel::clusterExport()-function. See also setup_cluster(), may used setup cluster export required objects per session.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simulate multiple trials — run_trials","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # See ?extract_results, ?check_performance, ?summary and ?print for details # on extracting results, summarising and printing"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"function sets up (removes) default cluster use parallelised functions adaptr using parallel package. function also exports objects available cluster sets random number generator appropriately. See Details info adaptr handles sequential/parallel computation.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"","code":"setup_cluster(cores, export = NULL, export_envir = parent.frame())"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"cores can either unspecified, NULL, single integer > 0. NULL 1, existing default cluster removed (), default subsequently run functions sequentially main process cores = 1, according getOption(\"mc.cores\") NULL (unless otherwise specified individual functions calls). parallel::detectCores() function may used see number available cores, although comes caveats (described function documentation), including number cores may always returned may match number cores available use. general, using less cores available may preferable processes run machine time. export character vector names objects export parallel core running parallel; passed varlist argument parallel::clusterExport(). Defaults NULL (objects exported), ignored cores == 1. See Details . 
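A hedged sketch of the path/grow workflow documented above, saving simulations to a file and later appending additional simulations while re-using the base seed (the temporary file path, arms, outcomes and seed are illustrative):

library(adaptr)
binom_trial <- setup_trial_binom(arms = c("A", "B"),
                                 true_ys = c(0.25, 0.20),
                                 data_looks = 1:5 * 200)
path <- tempfile(fileext = ".rds")  # illustrative file path
res10 <- run_trials(binom_trial, n_rep = 10, base_seed = 12345, path = path)
# Re-running with a larger n_rep and grow = TRUE appends 15 further simulations
res25 <- run_trials(binom_trial, n_rep = 25, base_seed = 12345, path = path,
                    grow = TRUE)
summary(res25)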
export_envir environment look objects defined export running parallel export NULL. Defaults environment function called.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"Invisibly returns default parallel cluster NULL, appropriate. may used functions parallel package advanced users, example load certain libraries cluster prior calling run_trials().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"Using sequential parallel computing adaptr parallelised adaptr functions cores argument defaults NULL. non-NULL integer > 0 provided cores argument (except setup_cluster()), package run calculations sequentially main process cores = 1, otherwise initiate new cluster size cores removed function completes, regardless whether default cluster global \"mc.cores\" option specified. cores NULL adaptr function (except setup_cluster()), package use default cluster one exists run computations sequentially setup_cluster() last called cores = 1. setup_cluster() called last called cores = NULL, package check global \"mc.cores\" option specified (using options(mc.cores = )). option set value > 1, new, temporary cluster size setup, used, removed function completes. option set set 1, computations run sequentially main process. Generally, recommend using setup_cluster() function avoids overhead re-initiating new clusters every call one parallelised adaptr functions. especially important exporting many large objects parallel cluster, can done (option export objects cluster calling run_trials()). Type clusters used random number generation adaptr package solely uses parallel socket clusters (using parallel::makePSOCKcluster()) thus use forking (available operating systems may cause crashes situations). , user-defined objects used adaptr functions run parallel need exported using either setup_cluster() run_trials(), included generated trial_spec object. adaptr package uses \"L'Ecuyer-CMRG\" kind (see RNGkind()) safe random number generation parallelised functions. also case running adaptr functions sequentially seed provided, ensure results obtained regardless whether sequential parallel computation used. 
functions restore random number generator kind global random seed use called seed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"","code":"# Setup a cluster using 2 cores setup_cluster(cores = 2) # Get existing default cluster (printed here as invisibly returned) print(setup_cluster()) #> socket cluster with 2 nodes on host ‘localhost’ # Remove existing default cluster setup_cluster(cores = NULL) # Specify preference for running computations sequentially setup_cluster(cores = 1) # Remove default cluster preference setup_cluster(cores = NULL) # Set global option to default to using 2 new clusters each time # (only used if no default cluster preference is specified) options(mc.cores = 2)"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup a generic trial specification — setup_trial","title":"Setup a generic trial specification — setup_trial","text":"Specifies design adaptive trial type outcome validates inputs. Use calibrate_trial() calibrate trial specification obtain specific value certain performance metric (e.g., Bayesian type 1 error rate). Use run_trial() run_trials() conduct single/multiple simulations specified trial, respectively. See setup_trial_binom() setup_trial_norm() simplified setup trial designs common outcome types. additional trial specification examples, see Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup a generic trial specification — setup_trial","text":"","code":"setup_trial( arms, true_ys, fun_y_gen = NULL, fun_draws = NULL, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, fun_raw_est = mean, cri_width = 0.95, n_draws = 5000, robust = TRUE, description = NULL, add_info = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup a generic trial specification — setup_trial","text":"arms character vector unique names trial arms. true_ys numeric vector specifying true outcomes (e.g., event probabilities, mean values, etc.) trial arms. fun_y_gen function, generates outcomes. See setup_trial() Details information specify function.Note: function called setup validate output (global random seed restored afterwards). fun_draws function, generates posterior draws. See setup_trial() Details information specify function.Note: function called three times setup validate output (global random seed restored afterwards). start_probs numeric vector, allocation probabilities arm beginning trial. 
default (NULL) automatically generates equal randomisation probabilities arm. fixed_probs numeric vector, fixed allocation probabilities arm. Must either numeric vector NA arms without fixed probabilities values 0 1 arms NULL (default), adaptive randomisation used arms one special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", \"match\") specified control_prob_fixed (described ). min_probs numeric vector, lower threshold adaptive allocation probabilities; lower probabilities rounded values. Must NA (default arms) lower threshold wanted arms using fixed allocation probabilities. max_probs numeric vector, upper threshold adaptive allocation probabilities; higher probabilities rounded values. Must NA (default arms) threshold wanted arms using fixed allocation probabilities. rescale_probs NULL (default) one either \"fixed\", \"limits\", \"\". Rescales fixed_probs (\"fixed\" \"\") min_probs/max_probs (\"limits\" \"\") arm dropping trial specifications >2 arms using rescale_factor defined initial number arms/number active arms. \"fixed_probs min_probs rescaled initial value * rescale factor, except fixed_probs controlled control_prob_fixed argument, never rescaled. max_probs rescaled 1 - ( (1 - initial value) * rescale_factor). Must NULL 2 arms control_prob_fixed \"sqrt-based fixed\". NULL, one valid non-NA values must specified either min_probs/max_probs fixed_probs (counting fixed value original control control_prob_fixed \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\").Note: using argument specific combinations values arguments may lead invalid combined (total) allocation probabilities arm dropping, case probabilities ultimately rescaled sum 1. responsibility user ensure rescaling fixed allocation probabilities minimum/maximum allocation probability limits lead invalid unexpected allocation probabilities arm dropping. Finally, initial values overwritten control_prob_fixed argument arm dropping rescaled. data_looks vector increasing integers, specifies conduct adaptive analyses (= total number patients available outcome data adaptive analysis). last number vector represents final adaptive analysis, .e., final analysis superiority, inferiority, practical equivalence, futility can claimed. Instead specifying data_looks, max_n look_after_every arguments can used combination (case data_looks must NULL, default value). max_n single integer, number patients available outcome data last possible adaptive analysis (defaults NULL). Must specified data_looks NULL. Requires specification look_after_every argument. look_after_every single integer, specified together max_n. Adaptive analyses conducted every look_after_every patients available outcome data, total sample size specified max_n (max_n need multiple look_after_every). specified, data_looks must NULL (default). randomised_at_looks vector increasing integers NULL, specifying number patients randomised time adaptive analysis, new patients randomised using current allocation probabilities said analysis. NULL (default), number patients randomised analysis match number patients available outcome data said analysis, specified data_looks max_n look_after_every, .e., outcome data available immediately randomisation patients. NULL, vector must length number adaptive analyses specified data_looks max_n look_after_every, values must larger equal number patients available outcome data analysis. control single character string, name one arms NULL (default). 
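An illustrative sketch of the two equivalent ways of specifying analysis timings described above, using either data_looks or max_n combined with look_after_every; this assumes that setup_trial_binom() accepts the same look-specification arguments as setup_trial(), as its documentation refers to setup_trial() for details:

library(adaptr)
spec_a <- setup_trial_binom(arms = c("A", "B"), true_ys = c(0.25, 0.20),
                            data_looks = 1:20 * 100)
spec_b <- setup_trial_binom(arms = c("A", "B"), true_ys = c(0.25, 0.20),
                            max_n = 2000, look_after_every = 100)
# Both plan adaptive analyses after every 100 patients up to 2000 patients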
specified, arm serve common control arm, arms compared inferiority/superiority/equivalence thresholds (see ) comparisons. See setup_trial() Details information behaviour respect comparisons. control_prob_fixed common control arm specified, can set NULL (default), case control arm allocation probability fixed control arms change (allocation probability first control arm may still fixed using fixed_probs, 'reused' new control arm). NULL, vector probabilities either length 1 number arms - 1 can provided, one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\". See setup_trial() Details details affects trial behaviour. inferiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) inferiority (default 0.01). values must >= 0 <= 1, multiple values supplied, values may lower preceding value. common controlis used, values must < 1 / number arms. arm considered inferior dropped probability best (comparing arms) better control arm (common control used) drops inferiority threshold adaptive analysis. superiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) superiority (default 0.99). values must >= 0 <= 1, multiple values supplied, values may higher preceding value. probability arm best (comparing arms) better control arm (common control used) exceeds superiority threshold adaptive analysis, said arm declared winner trial stopped (common control used last comparator dropped design common control) become new control trial continue (common control specified). equivalence_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding equivalence assessment), specifying probability threshold(s) equivalence. NULL, values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped equivalence probability either () equivalence compared common control (b) equivalence arms remaining (designs without common control) exceeds equivalence threshold adaptive analysis. Requires specification equivalence_diff equivalence_only_first. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. equivalence_only_first single logical trial specifications equivalence_prob equivalence_diff specified common control arm included, otherwise NULL (default). common control arm used, specifies whether equivalence assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. futility_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding futility assessment), specifying probability threshold(s) futility. values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped futility probability futility compared common control exceeds futility threshold adaptive analysis. Requires common control arm (otherwise argument must NULL), specification futility_diff, futility_only_first. 
futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior. futility_only_first single logical trial specifications designs futility_prob futility_diff specified, otherwise NULL (default required designs without common control arm). Specifies whether futility assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. fun_raw_est function takes numeric vector returns single numeric value, used calculate raw summary estimate outcomes arm. Defaults mean(), always used setup_trial_binom() setup_trial_norm() functions.Note: function called one time per arm setup validate output structure. cri_width single numeric >= 0 < 1, width percentile-based credible intervals used summarising individual trial results. Defaults 0.95, corresponding 95% credible intervals. n_draws single integer, number draws posterior distributions arm used running trial. Defaults 5000; can reduced speed gain (potential loss stability results low) increased increased precision (increasing simulation time). Values < 100 allowed values < 1000 recommended warned . robust single logical, TRUE (default) medians median absolute deviations (scaled comparable standard deviation normal distributions; MAD_SDs, see stats::mad()) used summarise posterior distributions; FALSE, means standard deviations (SDs) used instead (slightly faster, may less appropriate posteriors skewed natural scale). description optional single character string describing trial design, used print functions NULL (default). add_info optional single string containing additional information regarding trial design specifications, used print functions NULL (default).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup a generic trial specification — setup_trial","text":"trial_spec object used run simulations run_trial() run_trials(). output essentially list containing input values (combined data.frame called trial_arms), class signals inputs validated inappropriate combinations settings ruled . Also contains best_arm, holding arm(s) best value(s) true_ys. 
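A base-R sketch of the "softening" that the soften_power argument described above applies to non-fixed allocation probabilities (illustrative only; adaptr performs this internally):

# Raise the probabilities to soften_power and rescale them to sum to 1
p <- c(A = 0.70, B = 0.20, C = 0.10)  # illustrative adaptive allocation probabilities
soften_power <- 0.5
p_soft <- p^soften_power / sum(p^soften_power)
round(p_soft, 3)  # A 0.523, B 0.279, C 0.198 - less extreme, still sums to 1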
Use str() peruse actual content returned object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Setup a generic trial specification — setup_trial","text":"specify fun_y_gen function function must take following arguments: allocs: character vector, trial arms new patients allocated since last adaptive analysis randomised . function must return single numeric vector, corresponding outcomes patients allocated since last adaptive analysis, order allocs. See Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")) example details. specify fun_draws function function must take following arguments: arms: character vector, unique trial arms, order , currently active arms included function called. allocs: vector allocations patients, corresponding trial arms, including patients allocated currently active inactive arms called. ys: vector outcomes patients order allocs, including outcomes patients allocated currently active inactive arms called. control: single character, current control arm, NULL designs without common control arm, required regardless argument supplied run_trial()/run_trials(). n_draws: single integer, number posterior draws arm. function must return matrix (containing numeric values) arms named columns n_draws rows. matrix must columns currently active arms (called). row contain single posterior draw arm original outcome scale: estimated , e.g., log(odds), estimates must transformed probabilities similarly measures. Important: matrix contain NAs, even patients randomised arm yet. See provided example one way alleviate . See Advanced examples vignette (vignette(\"Advanced-example\", package = \"adaptr\")) example details. Notes Different estimation methods prior distributions may used; complex functions lead slower simulations compared simpler methods obtaining posterior draws, including specified using setup_trial_binom() setup_trial_norm() functions. Technically, using log relative effect measures — e.g. log(odds ratios) log(risk ratios) - differences compared reference arm (e.g., mean differences absolute risk differences) instead absolute values arm work extent (cautious!): Stopping superiority/inferiority/max sample sizes work. Stopping equivalence/futility may used relative effect measures log scale, thresholds adjusted accordingly. Several summary statistics run_trial() (sum_ys posterior estimates) may nonsensical relative effect measures used (depending calculation method; see raw_ests argument relevant functions). vein, extract_results() (sum_ys, sq_err, sq_err_te), summary() (sum_ys_mean/sd/median/q25/q75/q0/q100, rmse, rmse_te) may equally nonsensical calculated relative scale (see raw_ests argument relevant functions. Using additional custom functions loaded packages custom functions fun_y_gen, fun_draws, fun_raw_est functions calls user-specified functions (uses objects defined user outside functions setup_trial()-call) functions external packages simulations conducted multiple cores, objects functions must prefixed namespaces (.e., package::function()) exported, described setup_cluster() run_trials(). 
information arguments control: one treatment arms superior control arm (.e., passes superiority threshold defined ), arm become new control (multiple arms superior, one highest probability overall best become new control), previous control dropped inferiority, remaining arms immediately compared new control adaptive analysis dropped inferior (possibly equivalent/futile, see ) compared new control arm. applies trials common control. control_prob_fixed: length 1, allocation probability used control group (including new arm becomes control original control dropped). multiple values specified first value used arms active, second one arm dropped, forth. 1 values specified, previously set fixed_probs, min_probs max_probs new control arms ignored. allocation probabilities sum 1 (e.g, due multiple limits) rescaled . Can also set one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\" (written exactly one , case sensitive). requires start_probs NULL relevant fixed_probs NULL (NA control arm). one \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\" options used, function set square-root-transformation-based starting allocation probabilities. defined :square root number non-control arms 1-ratio arms scaled sum 1, generally increase power comparisons common control, discussed , e.g., Park et al, 2020 doi:10.1016/j.jclinepi.2020.04.025 . \"sqrt-based\" \"sqrt-based fixed\", square-root-transformation-based allocation probabilities used initially also new controls arms dropped (probabilities always calculated based number active non-control arms). \"sqrt-based\", response-adaptive randomisation used non-control arms, non-control arms use fixed, square-root based allocation probabilities times (probabilities always calculated based number active non-control arms). \"sqrt-based start\", control arm allocation probability fixed square-root based probability times calculated according initial number arms (probability also used new control(s) original control dropped). \"match\" specified, control group allocation probability always matched similar highest non-control arm allocation probability. Superiority inferiority trial designs without common control arm, superiority inferiority assessed comparing currently active groups. means \"final\" analysis trial without common control > 2 arms conducted including arms (often done practice) adaptive trial stopped, final probabilities best arm superior may differ slightly. example, trial three arms common control arm, one arm may dropped early inferiority defined < 1% probability overall best arm. trial may continue two remaining arms, stopped one declared superior defined > 99% probability overall best arm. final analysis conducted including arms, final probability best arm overall superior generally slightly lower probability first dropped arm best often > 0%, even low inferiority threshold. less relevant trial designs common control, pairwise assessments superiority/inferiority compared common control influenced similarly previously dropped arms (previously dropped arms may included analyses, even posterior distributions returned ). Similarly, actual clinical trials randomised_at_looks specified numbers higher number patients available outcome data analysis, final probabilities may change somewhat patients completed follow-included final analysis. Equivalence Equivalence assessed inferiority superiority assessed (case superiority, assessed new control arm designs common control, specified - see ). 
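A base-R sketch of the square-root-transformation-based control arm allocation described above for control_prob_fixed; it is purely illustrative but reproduces the 0.414/0.293/0.293 starting probabilities shown in the setup_trial() example output further below:

# Control gets sqrt(k) : 1 relative to each of the k non-control arms,
# rescaled to sum to 1
k <- 2                                  # two non-control arms in a three-arm design
probs <- c(control = sqrt(k), rep(1, k)) / (sqrt(k) + k)
round(probs, 3)                         # control 0.414, each non-control arm 0.293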
Futility Futility assessed inferiority, superiority, equivalence assessed (case superiority, assessed new control arm designs common control, specified - see ). Arms thus dropped equivalence futility. Varying probability thresholds Different probability thresholds (superiority, inferiority, equivalence, futility) may specified different adaptive analyses. may used, e.g., apply strict probability thresholds earlier analyses (make one stopping rules apply earlier analyses), similar use monitoring boundaries different thresholds used interim analyses conventional, frequentist group sequential trial designs. See Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) example.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup a generic trial specification — setup_trial","text":"","code":"# Setup a custom trial specification with right-skewed, log-normally # distributed continuous outcomes (higher values are worse) # Define the function that will generate the outcomes in each arm # Notice: contents should match arms/true_ys in the setup_trial() call below get_ys_lognorm <- function(allocs) { y <- numeric(length(allocs)) # arms (names and order) and values (except for exponentiation) should match # those used in setup_trial (below) means <- c(\"Control\" = 2.2, \"Experimental A\" = 2.1, \"Experimental B\" = 2.3) for (arm in names(means)) { ii <- which(allocs == arm) y[ii] <- rlnorm(length(ii), means[arm], 1.5) } y } # Define the function that will generate posterior draws # In this example, the function uses no priors (corresponding to improper # flat priors) and calculates results on the log-scale, before exponentiating # back to the natural scale, which is required for assessments of # equivalence, futility and general interpretation get_draws_lognorm <- function(arms, allocs, ys, control, n_draws) { draws <- list() logys <- log(ys) for (arm in arms){ ii <- which(allocs == arm) n <- length(ii) if (n > 1) { # Necessary to avoid errors if too few patients randomised to this arm draws[[arm]] <- exp(rnorm(n_draws, mean = mean(logys[ii]), sd = sd(logys[ii])/sqrt(n - 1))) } else { # Too few patients randomised to this arm - extreme uncertainty draws[[arm]] <- exp(rnorm(n_draws, mean = mean(logys), sd = 1000 * (max(logys) - min(logys)))) } } do.call(cbind, draws) } # The actual trial specification is then defined lognorm_trial <- setup_trial( # arms should match those above arms = c(\"Control\", \"Experimental A\", \"Experimental B\"), # true_ys should match those above true_ys = exp(c(2.2, 2.1, 2.3)), fun_y_gen = get_ys_lognorm, # as specified above fun_draws = get_draws_lognorm, # as specified above max_n = 5000, look_after_every = 200, control = \"Control\", # Square-root-based, fixed control group allocation ratio # and response-adaptive randomisation for other arms control_prob_fixed = \"sqrt-based\", # Equivalence assessment equivalence_prob = 0.9, equivalence_diff = 0.5, equivalence_only_first = TRUE, highest_is_best = FALSE, # Summarise raw results by taking the mean on the # log scale and back-transforming fun_raw_est = function(x) exp(mean(log(x))) , # Summarise posteriors using medians with MAD-SDs, # as distributions will not be normal on the actual scale robust = TRUE, # Description/additional info used when printing description = \"continuous, log-normally distributed outcome\", add_info = \"SD on the log scale for all arms: 1.5\" ) # Print 
trial specification with 3 digits for all probabilities print(lognorm_trial, prob_digits = 3) #> Trial specification: continuous, log-normally distributed outcome #> * Undesirable outcome #> * Common control arm: Control #> * Control arm probability fixed at 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: Experimental A #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Control 9.03 0.414 0.414 NA NA #> Experimental A 8.17 0.293 NA NA NA #> Experimental B 9.97 0.293 NA NA NA #> #> Maximum sample size: 5000 #> Maximum number of data looks: 25 #> Planned looks after every 200 #> patients have reached follow-up until final look after 5000 patients #> Number of patients randomised at each look: 200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400, 2600, 2800, 3000, 3200, 3400, 3600, 3800, 4000, 4200, 4400, 4600, 4800, 5000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (only checked for first control) #> Absolute equivalence difference: 0.5 #> No futility threshold #> Soften power for all analyses: 1 (no softening) #> #> Additional info: SD on the log scale for all arms: 1.5"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"Specifies design adaptive trial binary, binomially distributed outcome validates inputs. Uses beta-binomial conjugate models beta(1, 1) prior distributions, corresponding uniform prior (addition 2 patients, 1 event 1 without, arm) trial. Use calibrate_trial() calibrate trial specification obtain specific value certain performance metric (e.g., Bayesian type 1 error rate). Use run_trial() run_trials() conduct single/multiple simulations specified trial, respectively. Note: add_info specified setup_trial() set NULL trial specifications setup function.details: please see setup_trial(). See setup_trial_norm() simplified setup trials normally distributed continuous outcome. 
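As a rough illustration of the beta-binomial conjugate model described above (a sketch of the general idea only, not the package's internal implementation), posterior draws for a single arm under a beta(1, 1) prior could be obtained as follows, using made-up counts:

# Beta-binomial posterior draws under a beta(1, 1) prior (made-up counts,
# for illustration only)
n_draws <- 5000
events <- 24; n <- 100                 # assumed data for one arm
draws <- rbeta(n_draws, 1 + events, 1 + n - events)
quantile(draws, c(0.025, 0.5, 0.975))  # posterior median and 95% credible interval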
For additional trial specification examples, see the Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) and the Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"","code":"setup_trial_binom( arms, true_ys, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, cri_width = 0.95, n_draws = 5000, robust = TRUE, description = \"generic binomially distributed outcome trial\" )"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"arms: a character vector with unique names for the trial arms. true_ys: a numeric vector with the true probabilities (between 0 and 1) of the outcomes in all trial arms. start_probs: a numeric vector of allocation probabilities for each arm at the beginning of the trial. The default (NULL) automatically generates equal randomisation probabilities for each arm. fixed_probs: a numeric vector of fixed allocation probabilities for each arm. Must be either a numeric vector with NA for arms without fixed probabilities and values between 0 and 1 for the other arms, or NULL (the default), in which case adaptive randomisation is used for all arms, unless one of the special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", or \"match\") is specified in control_prob_fixed (described below). min_probs: a numeric vector of lower thresholds for the adaptive allocation probabilities; lower probabilities are rounded up to these values. Must be NA (the default for all arms) for arms without a lower threshold and for arms using fixed allocation probabilities. max_probs: a numeric vector of upper thresholds for the adaptive allocation probabilities; higher probabilities are rounded down to these values. Must be NA (the default for all arms) for arms without an upper threshold and for arms using fixed allocation probabilities. rescale_probs: NULL (the default) or one of \"fixed\", \"limits\", or \"both\". Rescales fixed_probs (if \"fixed\" or \"both\") and min_probs/max_probs (if \"limits\" or \"both\") after arm dropping in trial specifications with > 2 arms, using a rescale_factor defined as the initial number of arms / the number of active arms. fixed_probs and min_probs are rescaled as the initial value * rescale_factor, except fixed_probs controlled by the control_prob_fixed argument, which are never rescaled. max_probs are rescaled as 1 - ((1 - initial value) * rescale_factor). Must be NULL with only 2 arms or if control_prob_fixed is \"sqrt-based fixed\". If not NULL, one or more valid non-NA values must be specified for either min_probs/max_probs or fixed_probs (not counting the fixed value for the original control arm if control_prob_fixed is \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\"). Note: using this argument with specific combinations of values in the other arguments may lead to invalid combined (total) allocation probabilities after arm dropping, in which case all probabilities will ultimately be rescaled to sum to 1. It is the responsibility of the user to ensure that rescaling fixed allocation probabilities and minimum/maximum allocation probability limits does not lead to invalid or unexpected allocation probabilities after arm dropping. 
Finally, initial values overwritten by the control_prob_fixed argument after arm dropping are not rescaled. data_looks: a vector of increasing integers specifying when to conduct the adaptive analyses (= the total number of patients with available outcome data at each adaptive analysis). The last number in the vector represents the final adaptive analysis, i.e., the final analysis at which superiority, inferiority, practical equivalence, or futility can be claimed. Instead of specifying data_looks, the max_n and look_after_every arguments can be used in combination (in which case data_looks must be NULL, the default value). max_n: a single integer, the number of patients with available outcome data at the last possible adaptive analysis (defaults to NULL). Must only be specified if data_looks is NULL, and requires specification of the look_after_every argument. look_after_every: a single integer, specified together with max_n. Adaptive analyses are conducted after every look_after_every patients have available outcome data, up to the total sample size specified by max_n (max_n does not need to be a multiple of look_after_every). If specified, data_looks must be NULL (the default). randomised_at_looks: a vector of increasing integers or NULL, specifying the number of patients randomised at the time of each adaptive analysis; new patients are randomised using the current allocation probabilities from said analysis. If NULL (the default), the number of patients randomised at each analysis matches the number of patients with available outcome data at said analysis, as specified by data_looks or max_n and look_after_every, i.e., outcome data are available immediately after randomisation for all patients. If not NULL, the vector must have the same length as the number of adaptive analyses specified by data_looks or max_n and look_after_every, and all values must be larger than or equal to the number of patients with available outcome data at each analysis. control: a single character string, the name of one of the arms, or NULL (the default). If specified, this arm serves as the common control arm to which all other arms are compared, and the inferiority/superiority/equivalence thresholds (see below) apply to these comparisons. See setup_trial() Details for more information on trial behaviour with respect to these comparisons. control_prob_fixed: if a common control arm is specified, this can be set to NULL (the default), in which case the control arm allocation probability is not fixed if control arms change (the allocation probability of the first control arm may still be fixed using fixed_probs, but is not 'reused' for the new control arm). If not NULL, a vector of probabilities of either length 1 or the number of arms - 1 can be provided, or one of the special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" or \"match\". See setup_trial() Details for details on how this affects trial behaviour. inferiority: a single numeric value or a vector of numeric values with the same length as the maximum number of possible adaptive analyses, specifying the probability threshold(s) for inferiority (default 0.01). All values must be >= 0 and <= 1, and if multiple values are supplied, no value may be lower than the preceding value. If no common control is used, all values must be < 1 / the number of arms. An arm is considered inferior and dropped if the probability that it is the best (when comparing all arms) or better than the control arm (when a common control is used) drops below the inferiority threshold at an adaptive analysis. superiority: a single numeric value or a vector of numeric values with the same length as the maximum number of possible adaptive analyses, specifying the probability threshold(s) for superiority (default 0.99). All values must be >= 0 and <= 1, and if multiple values are supplied, no value may be higher than the preceding value. If the probability that an arm is the best (when comparing all arms) or better than the control arm (when a common control is used) exceeds the superiority threshold at an adaptive analysis, that arm is declared the winner and the trial is stopped (if no common control is used, or if the last comparator is dropped in a design with a common control) or the arm becomes the new control and the trial continues (if a common control is specified). equivalence_prob: a single numeric value, a vector of numeric values with the same length as the maximum number of possible adaptive analyses, or NULL (the default, corresponding to no equivalence assessment), specifying the probability threshold(s) for equivalence. 
If not NULL, all values must be > 0 and <= 1, and if multiple values are supplied, no value may be higher than the preceding value. If not NULL, arms are dropped for equivalence if the probability of either (a) equivalence compared with the common control or (b) equivalence between all remaining arms (in designs without a common control) exceeds the equivalence threshold at an adaptive analysis. Requires specification of equivalence_diff and equivalence_only_first. equivalence_diff: a single numeric value (> 0) or NULL (the default, corresponding to no equivalence assessment). If a numeric value is specified, estimated absolute differences smaller than this threshold are considered equivalent. For designs with a common control arm, the differences between each non-control arm and the control arm are used; for trials without a common control arm, the difference between the highest and lowest estimated outcome rates is used, and the trial is stopped for equivalence if all remaining arms are equivalent. equivalence_only_first: a single logical for trial specifications where equivalence_prob and equivalence_diff are specified and a common control arm is included, otherwise NULL (the default). If a common control arm is used, this specifies whether equivalence is assessed only for the first control (TRUE) or also for subsequent control arms (FALSE) if one arm is superior to the first control and becomes the new control. futility_prob: a single numeric value, a vector of numeric values with the same length as the maximum number of possible adaptive analyses, or NULL (the default, corresponding to no futility assessment), specifying the probability threshold(s) for futility. All values must be > 0 and <= 1, and if multiple values are supplied, no value may be higher than the preceding value. If not NULL, arms are dropped for futility if the probability of futility compared with the common control exceeds the futility threshold at an adaptive analysis. Requires a common control arm (otherwise this argument must be NULL) and specification of futility_diff and futility_only_first. futility_diff: a single numeric value (> 0) or NULL (the default, corresponding to no futility assessment). If a numeric value is specified, estimated differences smaller than this threshold in the beneficial direction (as specified by highest_is_best) are considered futile when assessing futility in designs with a common control arm. If only 1 arm remains after dropping arms for futility, the trial is stopped without declaring the last arm superior. futility_only_first: a single logical for trial specifications/designs where futility_prob and futility_diff are specified, otherwise NULL (the default, and required for designs without a common control arm). Specifies whether futility is assessed only against the first control (TRUE) or also against subsequent control arms (FALSE) if one arm is superior to the first control and becomes the new control. highest_is_best: a single logical specifying whether larger estimates of the outcome are favourable; defaults to FALSE, corresponding to, e.g., an undesirable binary outcome (e.g., mortality) or a continuous outcome where lower numbers are preferred (e.g., hospital length of stay). soften_power: either a single numeric value or a numeric vector of exactly the same length as the maximum number of looks/adaptive analyses. Values must be between 0 and 1 (the default); if < 1, the re-allocated non-fixed allocation probabilities are raised to this power (followed by rescaling to sum to 1) to make the adaptive allocation probabilities less extreme, which in turn is used to redistribute the remaining probability while respecting the limits defined by min_probs and/or max_probs. If 1, no softening is applied. cri_width: a single numeric value >= 0 and < 1, the width of the percentile-based credible intervals used when summarising individual trial results. Defaults to 0.95, corresponding to 95% credible intervals. n_draws: a single integer, the number of draws from the posterior distributions of each arm used when running the trial. Defaults to 5000; can be reduced for a speed gain (with a potential loss of stability of the results if too low) or increased for greater precision (at the cost of simulation time). Values < 100 are not allowed, and values < 1000 are not recommended and will trigger a warning. 
robust single logical, TRUE (default) medians median absolute deviations (scaled comparable standard deviation normal distributions; MAD_SDs, see stats::mad()) used summarise posterior distributions; FALSE, means standard deviations (SDs) used instead (slightly faster, may less appropriate posteriors skewed natural scale). description character string, default \"generic binomially distributed outcome trial\". See arguments setup_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"trial_spec object used run simulations run_trial() run_trials(). output essentially list containing input values (combined data.frame called trial_arms), class signals inputs validated inappropriate combinations settings ruled . Also contains best_arm, holding arm(s) best value(s) true_ys. Use str() peruse actual content returned object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"","code":"# Setup a trial specification using a binary, binomially # distributed, undesirable outcome binom_trial <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.20, 0.30), # Minimum allocation of 15% in all arms min_probs = rep(0.15, 3), data_looks = seq(from = 300, to = 2000, by = 100), # Stop for equivalence if > 90% probability of # absolute differences < 5 percentage points equivalence_prob = 0.9, equivalence_diff = 0.05, soften_power = 0.5 # Limit extreme allocation ratios ) # Print using 3 digits for probabilities print(binom_trial, prob_digits = 3) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arm: Arm B #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.333 NA 0.15 NA #> Arm B 0.20 0.333 NA 0.15 NA #> Arm C 0.30 0.333 NA 0.15 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 0.5"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"Specifies design adaptive trial continuous, normally distributed outcome validates inputs. 
Uses normally distributed posterior distributions for the mean values in each trial arm; technically, no priors are used (using normal-normal conjugate prior models with extremely wide uniform priors would give similar results to the simple, unadjusted estimates used here). This corresponds to the use of improper, flat priors, although these are not explicitly specified. Use calibrate_trial() to calibrate the trial specification and obtain a specific value of a certain performance metric (e.g., the Bayesian type 1 error rate). Use run_trial() or run_trials() to conduct single/multiple simulations of the specified trial, respectively. Note: add_info as specified in setup_trial() is set to the arms and standard deviations used for trials specified using this function. For details: please see setup_trial(). See setup_trial_binom() for a simplified setup of trials with binomially distributed binary outcomes. For additional trial specification examples, see the Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) and the Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"","code":"setup_trial_norm( arms, true_ys, sds, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, cri_width = 0.95, n_draws = 5000, robust = FALSE, description = \"generic normally distributed outcome trial\" )"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"arms: a character vector with unique names for the trial arms. true_ys: a numeric vector with the simulated mean values of the outcome in all trial arms. sds: a numeric vector with the true standard deviations (must be > 0) of the outcome in all trial arms. start_probs: a numeric vector of allocation probabilities for each arm at the beginning of the trial. The default (NULL) automatically generates equal randomisation probabilities for each arm. fixed_probs: a numeric vector of fixed allocation probabilities for each arm. Must be either a numeric vector with NA for arms without fixed probabilities and values between 0 and 1 for the other arms, or NULL (the default), in which case adaptive randomisation is used for all arms, unless one of the special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", or \"match\") is specified in control_prob_fixed (described below). min_probs: a numeric vector of lower thresholds for the adaptive allocation probabilities; lower probabilities are rounded up to these values. Must be NA (the default for all arms) for arms without a lower threshold and for arms using fixed allocation probabilities. max_probs: a numeric vector of upper thresholds for the adaptive allocation probabilities; higher probabilities are rounded down to these values. Must be NA (the default for all arms) for arms without an upper threshold and for arms using fixed allocation probabilities. rescale_probs: NULL (the default) or one of \"fixed\", \"limits\", or \"both\". Rescales fixed_probs (if \"fixed\" or \"both\") and min_probs/max_probs (if \"limits\" or \"both\") after arm dropping in trial specifications with > 2 arms, using a rescale_factor defined as the initial number of arms / the number of active arms. 
\"fixed_probs min_probs rescaled initial value * rescale factor, except fixed_probs controlled control_prob_fixed argument, never rescaled. max_probs rescaled 1 - ( (1 - initial value) * rescale_factor). Must NULL 2 arms control_prob_fixed \"sqrt-based fixed\". NULL, one valid non-NA values must specified either min_probs/max_probs fixed_probs (counting fixed value original control control_prob_fixed \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\").Note: using argument specific combinations values arguments may lead invalid combined (total) allocation probabilities arm dropping, case probabilities ultimately rescaled sum 1. responsibility user ensure rescaling fixed allocation probabilities minimum/maximum allocation probability limits lead invalid unexpected allocation probabilities arm dropping. Finally, initial values overwritten control_prob_fixed argument arm dropping rescaled. data_looks vector increasing integers, specifies conduct adaptive analyses (= total number patients available outcome data adaptive analysis). last number vector represents final adaptive analysis, .e., final analysis superiority, inferiority, practical equivalence, futility can claimed. Instead specifying data_looks, max_n look_after_every arguments can used combination (case data_looks must NULL, default value). max_n single integer, number patients available outcome data last possible adaptive analysis (defaults NULL). Must specified data_looks NULL. Requires specification look_after_every argument. look_after_every single integer, specified together max_n. Adaptive analyses conducted every look_after_every patients available outcome data, total sample size specified max_n (max_n need multiple look_after_every). specified, data_looks must NULL (default). randomised_at_looks vector increasing integers NULL, specifying number patients randomised time adaptive analysis, new patients randomised using current allocation probabilities said analysis. NULL (default), number patients randomised analysis match number patients available outcome data said analysis, specified data_looks max_n look_after_every, .e., outcome data available immediately randomisation patients. NULL, vector must length number adaptive analyses specified data_looks max_n look_after_every, values must larger equal number patients available outcome data analysis. control single character string, name one arms NULL (default). specified, arm serve common control arm, arms compared inferiority/superiority/equivalence thresholds (see ) comparisons. See setup_trial() Details information behaviour respect comparisons. control_prob_fixed common control arm specified, can set NULL (default), case control arm allocation probability fixed control arms change (allocation probability first control arm may still fixed using fixed_probs, 'reused' new control arm). NULL, vector probabilities either length 1 number arms - 1 can provided, one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\". See setup_trial() Details details affects trial behaviour. inferiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) inferiority (default 0.01). values must >= 0 <= 1, multiple values supplied, values may lower preceding value. common controlis used, values must < 1 / number arms. arm considered inferior dropped probability best (comparing arms) better control arm (common control used) drops inferiority threshold adaptive analysis. 
superiority: a single numeric value or a vector of numeric values with the same length as the maximum number of possible adaptive analyses, specifying the probability threshold(s) for superiority (default 0.99). All values must be >= 0 and <= 1, and if multiple values are supplied, no value may be higher than the preceding value. If the probability that an arm is the best (when comparing all arms) or better than the control arm (when a common control is used) exceeds the superiority threshold at an adaptive analysis, that arm is declared the winner and the trial is stopped (if no common control is used, or if the last comparator is dropped in a design with a common control) or the arm becomes the new control and the trial continues (if a common control is specified). equivalence_prob: a single numeric value, a vector of numeric values with the same length as the maximum number of possible adaptive analyses, or NULL (the default, corresponding to no equivalence assessment), specifying the probability threshold(s) for equivalence. If not NULL, all values must be > 0 and <= 1, and if multiple values are supplied, no value may be higher than the preceding value. If not NULL, arms are dropped for equivalence if the probability of either (a) equivalence compared with the common control or (b) equivalence between all remaining arms (in designs without a common control) exceeds the equivalence threshold at an adaptive analysis. Requires specification of equivalence_diff and equivalence_only_first. equivalence_diff: a single numeric value (> 0) or NULL (the default, corresponding to no equivalence assessment). If a numeric value is specified, estimated absolute differences smaller than this threshold are considered equivalent. For designs with a common control arm, the differences between each non-control arm and the control arm are used; for trials without a common control arm, the difference between the highest and lowest estimated outcome rates is used, and the trial is stopped for equivalence if all remaining arms are equivalent. equivalence_only_first: a single logical for trial specifications where equivalence_prob and equivalence_diff are specified and a common control arm is included, otherwise NULL (the default). If a common control arm is used, this specifies whether equivalence is assessed only for the first control (TRUE) or also for subsequent control arms (FALSE) if one arm is superior to the first control and becomes the new control. futility_prob: a single numeric value, a vector of numeric values with the same length as the maximum number of possible adaptive analyses, or NULL (the default, corresponding to no futility assessment), specifying the probability threshold(s) for futility. All values must be > 0 and <= 1, and if multiple values are supplied, no value may be higher than the preceding value. If not NULL, arms are dropped for futility if the probability of futility compared with the common control exceeds the futility threshold at an adaptive analysis. Requires a common control arm (otherwise this argument must be NULL) and specification of futility_diff and futility_only_first. futility_diff: a single numeric value (> 0) or NULL (the default, corresponding to no futility assessment). If a numeric value is specified, estimated differences smaller than this threshold in the beneficial direction (as specified by highest_is_best) are considered futile when assessing futility in designs with a common control arm. If only 1 arm remains after dropping arms for futility, the trial is stopped without declaring the last arm superior. futility_only_first: a single logical for trial specifications/designs where futility_prob and futility_diff are specified, otherwise NULL (the default, and required for designs without a common control arm). Specifies whether futility is assessed only against the first control (TRUE) or also against subsequent control arms (FALSE) if one arm is superior to the first control and becomes the new control. highest_is_best: a single logical specifying whether larger estimates of the outcome are favourable; defaults to FALSE, corresponding to, e.g., an undesirable binary outcome (e.g., mortality) or a continuous outcome where lower numbers are preferred (e.g., hospital length of stay). soften_power: either a single numeric value or a numeric vector of exactly the same length as the maximum number of looks/adaptive analyses. 
Values must be between 0 and 1 (the default); if < 1, the re-allocated non-fixed allocation probabilities are raised to this power (followed by rescaling to sum to 1) to make the adaptive allocation probabilities less extreme, which in turn is used to redistribute the remaining probability while respecting the limits defined by min_probs and/or max_probs. If 1, no softening is applied. cri_width: a single numeric value >= 0 and < 1, the width of the percentile-based credible intervals used when summarising individual trial results. Defaults to 0.95, corresponding to 95% credible intervals. n_draws: a single integer, the number of draws from the posterior distributions of each arm used when running the trial. Defaults to 5000; can be reduced for a speed gain (with a potential loss of stability of the results if too low) or increased for greater precision (at the cost of simulation time). Values < 100 are not allowed, and values < 1000 are not recommended and will trigger a warning. robust: a single logical; if TRUE, medians and median absolute deviations (scaled to be comparable to the standard deviation for normal distributions; MAD_SDs, see stats::mad()) are used to summarise the posterior distributions; if FALSE (the default for this function, see Details), means and standard deviations (SDs) are used instead (slightly faster, but may be less appropriate if the posteriors are skewed on the natural scale). description: a character string, default \"generic normally distributed outcome trial\". For the other arguments, see setup_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"A trial_spec object used to run simulations with run_trial() or run_trials(). The output is essentially a list containing the input values (some combined in a data.frame called trial_arms), but its class signals that the inputs have been validated and that inappropriate combinations of settings have been ruled out. It also contains best_arm, holding the arm(s) with the best value(s) in true_ys. Use str() to peruse the actual content of the returned object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"As the posteriors used in this type of trial (generic, continuous, normally distributed outcomes) are by definition normally distributed, FALSE is used as the default value for the robust argument.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"","code":"# Setup a trial specification using a continuous, normally distributed, desirable outcome norm_trial <- setup_trial_norm( arms = c(\"Control\", \"New A\", \"New B\", \"New C\"), true_ys = c(15, 20, 14, 13), sds = c(2, 2.5, 1.9, 1.8), # SDs in each arm max_n = 500, look_after_every = 50, control = \"Control\", # Common control arm # Square-root-based, fixed control group allocation ratios control_prob_fixed = \"sqrt-based fixed\", # Desirable outcome highest_is_best = TRUE, soften_power = 0.5 # Limit extreme allocation ratios ) # Print using 3 digits for probabilities print(norm_trial, prob_digits = 3) #> Trial specification: generic normally distributed outcome trial #> * Desirable outcome #> * Common control arm: Control #> * Control arm probability fixed at 0.366 (for 4 arms), 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: New A #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Control 15 0.366 0.366 NA NA #> 
New A 20 0.211 0.211 NA NA #> New B 14 0.211 0.211 NA NA #> New C 13 0.211 0.211 NA NA #> #> Maximum sample size: 500 #> Maximum number of data looks: 10 #> Planned looks after every 50 #> patients have reached follow-up until final look after 500 patients #> Number of patients randomised at each look: 50, 100, 150, 200, 250, 300, 350, 400, 450, 500 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.5 #> #> Additional info: Arm SDs - Control: 2; New A: 2.5; New B: 1.9; New C: 1.8."},{"path":"https://inceptdk.github.io/adaptr/reference/stop0_warning0.html","id":null,"dir":"Reference","previous_headings":"","what":"stop() and warning() with call. = FALSE — stop0_warning0","title":"stop() and warning() with call. = FALSE — stop0_warning0","text":"Used internally. Calls stop0() warning() enforces call. = FALSE, suppress call error/warning.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/stop0_warning0.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"stop() and warning() with call. = FALSE — stop0_warning0","text":"","code":"stop0(...) warning0(...)"},{"path":"https://inceptdk.github.io/adaptr/reference/stop0_warning0.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"stop() and warning() with call. = FALSE — stop0_warning0","text":"... zero objects can coerced character (pasted together separator) single condition object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":null,"dir":"Reference","previous_headings":"","what":"Summarise distribution — summarise_dist","title":"Summarise distribution — summarise_dist","text":"Used internally, summarise posterior distributions, logic apply distribution (thus, name).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summarise distribution — summarise_dist","text":"","code":"summarise_dist(x, robust = TRUE, interval_width = 0.95)"},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summarise distribution — summarise_dist","text":"x numeric vector posterior draws. robust single logical. TRUE (default) median median absolute deviation (MAD-SD; scaled comparable standard deviation normal distributions) used summarise distribution; FALSE, mean standard deviation (SD) used instead (slightly faster, may less appropriate skewed distribution). 
interval_width: a single numeric value (> 0 and < 1); the width of the interval; defaults to 0.95, corresponding to 95% percentile-based credible intervals for posterior distributions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summarise distribution — summarise_dist","text":"A numeric vector with four named elements: est (median/mean), err (MAD-SD/SD), and lo and hi (the lower and upper boundaries of the interval).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Summarise distribution — summarise_dist","text":"MAD-SDs are scaled to correspond to SDs if the distributions are normal, similarly to the stats::mad() function; see further details regarding the calculation in that function's description.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":null,"dir":"Reference","previous_headings":"","what":"Summarise numeric vector — summarise_num","title":"Summarise numeric vector — summarise_num","text":"Used internally, to summarise numeric vectors.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summarise numeric vector — summarise_num","text":"","code":"summarise_num(x)"},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summarise numeric vector — summarise_num","text":"x: a numeric vector.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summarise numeric vector — summarise_num","text":"A numeric vector with seven named elements: mean, sd, median, p25, p75, p0, and p100, corresponding to the mean, standard deviation, median, and the 25-/75-/0-/100-percentiles.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary of simulated trial results — summary","title":"Summary of simulated trial results — summary","text":"Summarises the simulation results from the run_trials() function. Uses extract_results() and check_performance(), which may be used directly to extract key trial results without summarising or to calculate performance metrics (with uncertainty measures, if desired) and return them in a tidy data.frame.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary of simulated trial results — summary","text":"","code":"# S3 method for trial_results summary( object, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, cores = NULL, ... )"},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary of simulated trial results — summary","text":"object: a trial_results object, the output of the run_trials() function. select_strategy: a single character string. For trials not stopped due to superiority (or with only 1 arm remaining, if select_last_arm is set to TRUE for trial designs with a common control arm; see below), this parameter specifies which arm is considered selected when calculating the trial design performance metrics, as described below; this corresponds to the consequence of an inconclusive trial, i.e., which arm would be used in practice. 
The following options are available and must be written exactly as specified (case sensitive, cannot be abbreviated): \"control if available\" (the default): selects the first control arm for trials with a common control arm if this arm is still active at the end of the trial, otherwise no arm is selected; for trial designs without a common control, no arm is selected. \"none\": selects no arm in trials not ending with superiority. \"control\": similar to \"control if available\", but throws an error if used for trial designs without a common control arm. \"final control\": selects the final control arm regardless of whether the trial was stopped for practical equivalence, futility, or at the maximum sample size; this strategy can only be specified for trial designs with a common control arm. \"control or best\": selects the first control arm if still active at the end of the trial, otherwise selects the best remaining arm (defined as the remaining arm with the highest probability of being the best in the last adaptive analysis conducted); only works for trial designs with a common control arm. \"best\": selects the best remaining arm (as described under \"control or best\"). \"list or best\": selects the first remaining arm from a specified list (specified using select_preferences, technically a character vector); if none of these arms are active at the end of the trial, the best remaining arm is selected (as described above). \"list\": as specified above, but if none of the arms provided in the list remain active at the end of the trial, no arm is selected. select_last_arm: a single logical, defaults to FALSE. If TRUE, the only remaining active arm (the last control) is selected in trials with a common control arm ending with equivalence or futility, before considering the options specified in select_strategy. Must be FALSE for trial designs without a common control arm. select_preferences: a character vector specifying a number of arms used for selection if one of the \"list or best\" or \"list\" options is specified in select_strategy. Can only contain valid arms available in the trial. te_comp: a character string, the treatment-effect comparator. Can be either NULL (the default), in which case the first control arm is used for trial designs with a common control arm, or a string naming a single trial arm. Used when calculating err_te and sq_err_te (the error and the squared error of the treatment effect comparing the selected arm to the comparator arm, as described below). raw_ests: a single logical. If FALSE (the default), the posterior estimates (post_ests or post_ests_all, see setup_trial() and run_trial()) are used to calculate err and sq_err (the error and the squared error of the estimated compared with the specified effect in the selected arm) and err_te and sq_err_te (the error and the squared error of the treatment effect comparing the selected arm to the comparator arm, as described for te_comp above). If TRUE, the raw estimates (raw_ests or raw_ests_all, see setup_trial() and run_trial()) are used instead of the posterior estimates. final_ests: a single logical. If TRUE (recommended), the final estimates calculated using outcome data from all patients randomised when the trials were stopped are used (post_ests_all or raw_ests_all, see setup_trial() and run_trial()); if FALSE, the estimates calculated for each arm when the arm was stopped (or at the last adaptive analysis), using data only from patients who had reached the follow-up time point and not all randomised patients, are used (post_ests or raw_ests, see setup_trial() and run_trial()). If NULL (the default), this argument is set to FALSE if outcome data are available immediately after randomisation for all patients (for backwards compatibility, as the final posterior estimates may vary slightly in this situation, even when based on the same data); otherwise it is set to TRUE. See setup_trial() for more details on how these estimates are calculated. restrict: a single character string or NULL. If NULL (the default), results are summarised for all simulations; if \"superior\", results are summarised only for simulations ending with superiority; if \"selected\", results are summarised only for simulations ending with a selected arm (according to the specified arm selection strategy for simulations not ending with superiority). Some summary measures (e.g., prob_conclusive) have substantially different interpretations if restricted, but are calculated nonetheless. cores: NULL or a single integer. 
If NULL, the default value set by setup_cluster() is used to control whether extractions of simulation results are done in parallel on a default cluster or sequentially in the main process; if no value has been specified in setup_cluster(), cores is then set to the value stored in the global \"mc.cores\" option (if previously set by options(mc.cores = )), or to 1 if that option has not been specified. If cores = 1, the computations are run sequentially in the primary process, and if cores > 1, a new parallel cluster is set up using the parallel library and removed once the function completes. See setup_cluster() for details. ...: additional arguments, not used.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summary of simulated trial results — summary","text":"A \"trial_results_summary\" object containing the following values: n_rep: the number of simulations. n_summarised: as described in check_performance(). highest_is_best: as specified in setup_trial(). elapsed_time: the total simulation time. size_mean, size_sd, size_median, size_p25, size_p75, size_p0, size_p100, sum_ys_mean, sum_ys_sd, sum_ys_median, sum_ys_p25, sum_ys_p75, sum_ys_p0, sum_ys_p100, ratio_ys_mean, ratio_ys_sd, ratio_ys_median, ratio_ys_p25, ratio_ys_p75, ratio_ys_p0, ratio_ys_p100, prob_conclusive, prob_superior, prob_equivalence, prob_futility, prob_max, prob_select_* (where * is either \"arm_\" followed by each arm name, or \"none\"), rmse, rmse_te, mae, mae_te, idp: performance metrics as described in check_performance(). Note that the sum_ys_ and ratio_ys_ measures use outcome data from all randomised patients, regardless of whether they had outcome data available at the last analysis or not, as described in extract_results(). select_strategy, select_last_arm, select_preferences, te_comp, raw_ests, final_ests, restrict: as specified above. control: the control arm specified in setup_trial(), setup_trial_binom() or setup_trial_norm(); NULL if there is no control. equivalence_assessed, futility_assessed: single logicals, specifying whether the trial design specification includes assessments of equivalence and/or futility. base_seed: as specified in run_trials(). 
cri_width, n_draws, robust, description, add_info: specified setup_trial(), setup_trial_binom() setup_trial_norm().","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summary of simulated trial results — summary","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # Summarise simulations - select the control arm if available in trials not # ending with a superiority decision res_sum <- summary(res, select_strategy = \"control\") # Print summary print(res_sum, digits = 1) #> Multiple simulation results: generic binomially distributed outcome trial #> * Undesirable outcome #> * Number of simulations: 10 #> * Number of simulations summarised: 10 (all trials) #> * Common control arm: A #> * Selection strategy: first control if available (otherwise no selection) #> * Treatment effect compared to: no comparison #> #> Performance metrics (using posterior estimates from last adaptive analysis): #> * Sample sizes: mean 1840.0 (SD: 506.0) | median 2000.0 (IQR: 2000.0 to 2000.0) [range: 400.0 to 2000.0] #> * Total summarised outcomes: mean 369.9 (SD: 105.4) | median 390.0 (IQR: 376.5 to 408.5) [range: 84.0 to 466.0] #> * Total summarised outcome rates: mean 0.202 (SD: 0.016) | median 0.196 (IQR: 0.194 to 0.209) [range: 0.180 to 0.233] #> * Conclusive: 10.0% #> * Superiority: 10.0% #> * Equivalence: 0.0% [not assessed] #> * Futility: 0.0% [not assessed] #> * Inconclusive at max sample size: 90.0% #> * Selection probabilities: A: 80.0% | B: 10.0% | C: 0.0% | D: 0.0% | None: 10.0% #> * RMSE / MAE: 0.02061 / 0.01915 #> * RMSE / MAE treatment effect: 0.18206 / 0.18206 #> * Ideal design percentage: 70.4% #> #> Simulation details: #> * Simulation time: 0.792 secs #> * Base random seed: 12345 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Estimation method: posterior medians with MAD-SDs"},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":null,"dir":"Reference","previous_headings":"","what":"Update previously saved calibration result — update_saved_calibration","title":"Update previously saved calibration result — update_saved_calibration","text":"function updates previously saved \"trial_calibration\"-object created saved calibrate_trial() using previous version adaptr, including embedded trial specification trial results objects (internally using update_saved_trials() function). allows use calibration results, including calibrated trial specification best simulations results calibration process, used without errors version package. function run per saved simulation object issue warning object already date. 
An overview of the changes made according to the adaptr package version used to generate the original object is provided in Details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Update previously saved calibration result — update_saved_calibration","text":"","code":"update_saved_calibration(path, version = NULL, compress = TRUE)"},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Update previously saved calibration result — update_saved_calibration","text":"path: a single character string; the path to a saved \"trial_calibration\" object containing the calibration result saved by calibrate_trial(). version: passed to saveRDS() when saving the updated object; defaults to NULL (as in saveRDS()), which means that the current default version is used. compress: passed to saveRDS() when saving the updated object; defaults to TRUE (as in saveRDS()), see saveRDS() for other options.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Update previously saved calibration result — update_saved_calibration","text":"Invisibly returns the updated \"trial_calibration\" object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Update previously saved calibration result — update_saved_calibration","text":"The following changes are made according to the version of adaptr used to generate the original \"trial_calibration\" object: v1.3.0+: updates the version number of the \"trial_calibration\" object and updates the embedded \"trial_results\" object (saved in $best_sims, if any) and \"trial_spec\" objects (saved in $input_trial_spec and $best_trial_spec) as described in update_saved_trials().","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":null,"dir":"Reference","previous_headings":"","what":"Update previously saved simulation results — update_saved_trials","title":"Update previously saved simulation results — update_saved_trials","text":"This function updates a previously saved \"trial_results\" object created and saved by run_trials() using a previous version of adaptr, allowing the results of previous simulations to be post-processed (including performance metric calculation, printing, and plotting) without errors by this version of the package. The function only needs to be run once per saved simulation object and will issue a warning if the object is already up to date. 
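For illustration, updating previously saved objects as described here is a single call per file; a brief sketch with hypothetical file paths (the paths are made up for this example):

# Update a previously saved calibration result in place (hypothetical path)
update_saved_calibration("calibration/binom_trial_calibration.rds")
# The equivalent for simulation results previously saved by run_trials():
update_saved_trials("simulations/binom_trial_sims.rds")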
An overview of the changes made according to the adaptr package version used to generate the original object is provided in Details. NOTE: some values that cannot be updated are set to NA (the posterior estimates from the 'final' analysis conducted after the last adaptive analysis and including outcome data from all patients); thus, using raw_ests = TRUE or final_ests = TRUE in the extract_results() and summary() functions will lead to missing values for these values when calculated from updated simulation objects. NOTE: other objects created by the adaptr package, i.e., trial specifications generated by setup_trial() / setup_trial_binom() / setup_trial_norm() and single simulation results from run_trials() not included as part of the returned output of run_trials(), should be re-created by re-running the relevant code using the updated version of adaptr; if manually re-loaded from previous sessions, they may cause errors and problems with the updated version of the package.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Update previously saved simulation results — update_saved_trials","text":"","code":"update_saved_trials(path, version = NULL, compress = TRUE)"},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Update previously saved simulation results — update_saved_trials","text":"path: a single character string; the path to a saved \"trial_results\" object containing the simulations saved by run_trials(). version: passed to saveRDS() when saving the updated object; defaults to NULL (as in saveRDS()), which means that the current default version is used. compress: passed to saveRDS() when saving the updated object; defaults to TRUE (as in saveRDS()), see saveRDS() for other options.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Update previously saved simulation results — update_saved_trials","text":"Invisibly returns the updated \"trial_results\" object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Update previously saved simulation results — update_saved_trials","text":"The following changes are made according to the version of adaptr used to generate the original \"trial_results\" object: v1.2.0+: updates the version number and the reallocate_probs argument in the embedded trial specification. v1.1.1 or earlier: updates the version number, everything related to the follow-up/data collection lag (in those versions, the randomised_at_looks argument in the setup_trial() functions did not exist and was, for all practical purposes, identical to the number of patients with available data at each look), and the reallocate_probs argument in the embedded trial specification.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate trial specification — validate_trial","title":"Validate trial specification — validate_trial","text":"Used internally. 
Validates the inputs that are common to all trial specifications, as specified in setup_trial(), setup_trial_binom() and setup_trial_norm().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate trial specification — validate_trial","text":"","code":"validate_trial( arms, true_ys, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, cri_width = 0.95, n_draws = 5000, robust = FALSE, description = NULL, add_info = NULL, fun_y_gen, fun_draws, fun_raw_est )"},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate trial specification — validate_trial","text":"arms: a character vector with unique names for the trial arms. true_ys: a numeric vector specifying the true outcomes (e.g., event probabilities, mean values, etc.) in all trial arms. start_probs: a numeric vector of allocation probabilities for each arm at the beginning of the trial. The default (NULL) automatically generates equal randomisation probabilities for each arm. fixed_probs: a numeric vector of fixed allocation probabilities for each arm. Must be either a numeric vector with NA for arms without fixed probabilities and values between 0 and 1 for the other arms, or NULL (the default), in which case adaptive randomisation is used for all arms, unless one of the special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", or \"match\") is specified in control_prob_fixed (described below). min_probs: a numeric vector of lower thresholds for the adaptive allocation probabilities; lower probabilities are rounded up to these values. Must be NA (the default for all arms) for arms without a lower threshold and for arms using fixed allocation probabilities. max_probs: a numeric vector of upper thresholds for the adaptive allocation probabilities; higher probabilities are rounded down to these values. Must be NA (the default for all arms) for arms without an upper threshold and for arms using fixed allocation probabilities. rescale_probs: NULL (the default) or one of \"fixed\", \"limits\", or \"both\". Rescales fixed_probs (if \"fixed\" or \"both\") and min_probs/max_probs (if \"limits\" or \"both\") after arm dropping in trial specifications with > 2 arms, using a rescale_factor defined as the initial number of arms / the number of active arms. fixed_probs and min_probs are rescaled as the initial value * rescale_factor, except fixed_probs controlled by the control_prob_fixed argument, which are never rescaled. max_probs are rescaled as 1 - ((1 - initial value) * rescale_factor). Must be NULL with only 2 arms or if control_prob_fixed is \"sqrt-based fixed\". If not NULL, one or more valid non-NA values must be specified for either min_probs/max_probs or fixed_probs (not counting the fixed value for the original control arm if control_prob_fixed is \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\"). Note: using this argument with specific combinations of values in the other arguments may lead to invalid combined (total) allocation probabilities after arm dropping, in which case all probabilities will ultimately be rescaled to sum to 1. It is the responsibility of the user to ensure that rescaling fixed allocation probabilities and minimum/maximum allocation probability limits does not lead to invalid or unexpected allocation probabilities after arm dropping. Finally, initial values overwritten by the control_prob_fixed argument after arm dropping are not rescaled. data_looks: a vector of increasing integers specifying when to conduct the adaptive analyses (= the total number of patients with available outcome data at each adaptive analysis). 
last number vector represents final adaptive analysis, .e., final analysis superiority, inferiority, practical equivalence, futility can claimed. Instead specifying data_looks, max_n look_after_every arguments can used combination (case data_looks must NULL, default value). max_n single integer, number patients available outcome data last possible adaptive analysis (defaults NULL). Must specified data_looks NULL. Requires specification look_after_every argument. look_after_every single integer, specified together max_n. Adaptive analyses conducted every look_after_every patients available outcome data, total sample size specified max_n (max_n need multiple look_after_every). specified, data_looks must NULL (default). randomised_at_looks vector increasing integers NULL, specifying number patients randomised time adaptive analysis, new patients randomised using current allocation probabilities said analysis. NULL (default), number patients randomised analysis match number patients available outcome data said analysis, specified data_looks max_n look_after_every, .e., outcome data available immediately randomisation patients. NULL, vector must length number adaptive analyses specified data_looks max_n look_after_every, values must larger equal number patients available outcome data analysis. control single character string, name one arms NULL (default). specified, arm serve common control arm, arms compared inferiority/superiority/equivalence thresholds (see ) comparisons. See setup_trial() Details information behaviour respect comparisons. control_prob_fixed common control arm specified, can set NULL (default), case control arm allocation probability fixed control arms change (allocation probability first control arm may still fixed using fixed_probs, 'reused' new control arm). NULL, vector probabilities either length 1 number arms - 1 can provided, one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\". See setup_trial() Details details affects trial behaviour. inferiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) inferiority (default 0.01). values must >= 0 <= 1, multiple values supplied, values may lower preceding value. common controlis used, values must < 1 / number arms. arm considered inferior dropped probability best (comparing arms) better control arm (common control used) drops inferiority threshold adaptive analysis. superiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) superiority (default 0.99). values must >= 0 <= 1, multiple values supplied, values may higher preceding value. probability arm best (comparing arms) better control arm (common control used) exceeds superiority threshold adaptive analysis, said arm declared winner trial stopped (common control used last comparator dropped design common control) become new control trial continue (common control specified). equivalence_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding equivalence assessment), specifying probability threshold(s) equivalence. NULL, values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped equivalence probability either () equivalence compared common control (b) equivalence arms remaining (designs without common control) exceeds equivalence threshold adaptive analysis. 
Requires specification equivalence_diff equivalence_only_first. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. equivalence_only_first single logical trial specifications equivalence_prob equivalence_diff specified common control arm included, otherwise NULL (default). common control arm used, specifies whether equivalence assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. futility_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding futility assessment), specifying probability threshold(s) futility. values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped futility probability futility compared common control exceeds futility threshold adaptive analysis. Requires common control arm (otherwise argument must NULL), specification futility_diff, futility_only_first. futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior. futility_only_first single logical trial specifications designs futility_prob futility_diff specified, otherwise NULL (default required designs without common control arm). Specifies whether futility assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. cri_width single numeric >= 0 < 1, width percentile-based credible intervals used summarising individual trial results. Defaults 0.95, corresponding 95% credible intervals. n_draws single integer, number draws posterior distributions arm used running trial. Defaults 5000; can reduced speed gain (potential loss stability results low) increased increased precision (increasing simulation time). Values < 100 allowed values < 1000 recommended warned . robust single logical, TRUE (default) medians median absolute deviations (scaled comparable standard deviation normal distributions; MAD_SDs, see stats::mad()) used summarise posterior distributions; FALSE, means standard deviations (SDs) used instead (slightly faster, may less appropriate posteriors skewed natural scale). description optional single character string describing trial design, used print functions NULL (default). 
add_info optional single string containing additional information regarding trial design specifications, used print functions NULL (default). fun_y_gen function, generates outcomes. See setup_trial() Details information specify function.Note: function called setup validate output (global random seed restored afterwards). fun_draws function, generates posterior draws. See setup_trial() Details information specify function.Note: function called three times setup validate output (global random seed restored afterwards). fun_raw_est function takes numeric vector returns single numeric value, used calculate raw summary estimate outcomes arm. Defaults mean(), always used setup_trial_binom() setup_trial_norm() functions.Note: function called one time per arm setup validate output structure.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate trial specification — validate_trial","text":"object class trial_spec containing validated trial specification.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/vapply_helpers.html","id":null,"dir":"Reference","previous_headings":"","what":"vapply helpers — vapply_helpers","title":"vapply helpers — vapply_helpers","text":"Used internally. Helpers simplifying code invoking vapply().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/vapply_helpers.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"vapply helpers — vapply_helpers","text":"","code":"vapply_num(X, FUN, ...) vapply_int(X, FUN, ...) vapply_str(X, FUN, ...) vapply_lgl(X, FUN, ...)"},{"path":"https://inceptdk.github.io/adaptr/reference/vapply_helpers.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"vapply helpers — vapply_helpers","text":"X vector (atomic list) expression object. objects (including classed objects) coerced base::.list. FUN function applied element X: see ‘Details’. case functions like +, %*%, function name must backquoted quoted. ... optional arguments FUN.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":null,"dir":"Reference","previous_headings":"","what":"Verify input is single integer (potentially within range) — verify_int","title":"Verify input is single integer (potentially within range) — verify_int","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Verify input is single integer (potentially within range) — verify_int","text":"","code":"verify_int(x, min_value = -Inf, max_value = Inf, open = \"no\")"},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Verify input is single integer (potentially within range) — verify_int","text":"x value check. min_value, max_value single integers (), lower upper bounds x lie. open single character, determines whether min_value max_value excluded . 
Valid values: \"\" (= closed interval; min_value max_value included; default value), \"right\", \"left\", \"yes\" (= open interval, min_value max_value excluded).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Verify input is single integer (potentially within range) — verify_int","text":"Single logical.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":null,"dir":"Reference","previous_headings":"","what":"Find the index of value nearest to a target value — which_nearest","title":"Find the index of value nearest to a target value — which_nearest","text":"Used internally, find index value vector nearest target value, possibly specific preferred direction.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find the index of value nearest to a target value — which_nearest","text":"","code":"which_nearest(values, target, dir)"},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find the index of value nearest to a target value — which_nearest","text":"values numeric vector, values considered. target single numeric value, target find value closest . dir single numeric value. 0 (default), finds index value closest target, regardless direction. < 0 > 0, finds index value closest target, considers values /target, respectfully, (otherwise returns closest value regardless direction).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Find the index of value nearest to a target value — which_nearest","text":"Single integer, index value closest target according dir.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-140","dir":"Changelog","previous_headings":"","what":"adaptr 1.4.0","title":"adaptr 1.4.0","text":"minor release implementing new functionality, including bug fixes, updates documentation, argument checking test coverage.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"new-features-and-major-changes-1-4-0","dir":"Changelog","previous_headings":"","what":"New features and major changes:","title":"adaptr 1.4.0","text":"Added rescale_probs argument setup_trial() family functions, allowing automatic rescaling fixed allocation probabilities minimum/maximum allocation probability limits arms dropped simulations trial designs >2 arms. extract_results() function now also returns errors simulation (addition squared errors) check_performance(), plot_convergence(), summary() functions (including print() methods) now calculate present median absolute errors (addition root mean squared errors). plot_metrics_ecdf() function now supports plotting errors (raw, squared, absolute), now takes necessary additional arguments passed extract_results() used arm selection simulated trials stopped superiority. Added update_saved_calibration() function update calibrated trial objects (including embedded trial specifications results) saved calibrate_trial() function using previous versions package. 
Rewritten README ‘Overview’ vignette better reflect typical usage workflow.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"minor-changes-and-bug-fixes-1-4-0","dir":"Changelog","previous_headings":"","what":"Minor changes and bug fixes:","title":"adaptr 1.4.0","text":"setup_trial() family functions now stops error less two arms provided. setup_trial() family functions now stops error control_prob_fixed \"match\" fixed_probs provided common control arm. Improved error message true_ys-argument missing setup_trial_binom() true_ys- sds-argument missing setup_trial_norm(). Changed number rows used plot_convergence() plot_status() total number plots <= 3 nrow ncol NULL. Fixed bug extract_results() (thus functions relying ), causing arm selection inconclusive trial simulations error stopped practical equivalence simulated patients randomised included last analysis. Improved test coverage. Minor edits clarification package documentation. Added references two open access articles (code) simulation studies using adaptr assess performance adaptive clinical trials according different follow-/data collection lags (https://doi.org/10.1002/pst.2342) different sceptical priors (https://doi.org/10.1002/pst.2387)","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-132","dir":"Changelog","previous_headings":"","what":"adaptr 1.3.2","title":"adaptr 1.3.2","text":"CRAN release: 2023-08-21 patch release bug fixes documentation updates. Fixed bug check_performance() caused proportion conclusive trial simulations (prob_conclusive) calculated incorrectly restricted simulations ending superiority selected arm according selection strategy used restrict. bug also affected summary() method multiple simulations (relies check_performance()). Fixed bug plot_convergence() caused arm selection probabilities incorrectly calculated plotted (bug affect functions calculating summarising simulation results). Corrections plot_convergence() summary() method documentation arm selection probability extraction. Fixed inconsistency argument names documentation internal %f|% function (renamed arguments consistency internal %||% function).","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-131","dir":"Changelog","previous_headings":"","what":"adaptr 1.3.1","title":"adaptr 1.3.1","text":"CRAN release: 2023-05-02 patch release triggered CRAN request fix failing test also includes minor documentation updates. Fixed single test failed CRAN due update testthat dependency waldo. Fixed erroneous duplicated text README thus also GitHub package website. Minor edits/clarifications documentation including function documentation, README, vignettes.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-130","dir":"Changelog","previous_headings":"","what":"adaptr 1.3.0","title":"adaptr 1.3.0","text":"CRAN release: 2023-03-31 release implements new functionality (importantly trial calibration), improved parallelism, single important bug fix, multiple minor fixes, changes, improvements.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"new-features-and-major-changes-1-3-0","dir":"Changelog","previous_headings":"","what":"New features and major changes:","title":"adaptr 1.3.0","text":"Added calibrate_trial() function, can used calibrate trial specification obtain (approximately) desired value certain performance characteristic. 
Typically, used calibrate trial specifications control overall Bayesian type-1 error rates scenario -arm differences, function extensible may used calibrate trial specifications performance metrics. function uses quite efficient Gaussian process-based Bayesian optimisation algorithm, based part code Robert Gramacy (Surrogates chapter 5, see: https://bookdown.org/rbg/surrogates/chap5.html), permission. better parallelism. functions extract_results(), check_performance(), plot_convergence(), plot_history(), summary() print() methods trial_results objects may now run parallel via cores argument described . Please note, functions parallelised, already fast time took copy data clusters meant parallel versions functions actually slower original ones, even run results 10-100K simulations. setup_cluster() function added can now used setup use parallel cluster throughout session, avoiding overhead setting stopping new clusters time. default value cores argument functions now NULL; actual value supplied, always used initiate new, temporary cluster size, left NULL defaults defined setup_cluster() used (), otherwise \"mc.cores\" global option used (new, temporary clusters size) specified options(mc.cores = ), otherwise 1. Finally, adaptr now always uses parallel (forked) clusters default parallel works operating systems. Better (safer, correct) random number generation. Previously, random number generation managed ad-hoc fashion produce similar results sequentially parallel; influence minimal, package now uses \"L'Ecuyer-CMRG\" random number generator (see base::RNGkind()) appropriately manages random number streams across parallel workers, also run sequentially, ensuring identical results regardless use parallelism . Important: Due change, simulation results run_trials() bootstrapped uncertainty measures check_performance() identical generated previous versions package. addition, individual trial_result objects trial_results object returned run_trials() longer contain individual seed values, instead NULL. Added plot_metrics_ecdf() function, plots empirical cumulative distribution functions numerical performance metrics across multiple trial simulations. Added check_remaining_arms() function, summarises combinations remaining arms across multiple simulations.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"bug-fixes-1-3-0","dir":"Changelog","previous_headings":"","what":"Bug fixes:","title":"adaptr 1.3.0","text":"Fixed bug extract_results() (thus also functionality relying : check_performance(), plot_convergence(), summary() method multiple simulated trials) caused incorrect total event counts event rates calculated trial specification follow-/outcome-data lag (total event count last adaptive analysis incorrectly used, ratios divided total number patients randomised). fixed documentation relevant functions updated clarify behaviour . bug affect results simulations without follow-/outcome- data lag. Values inferiority must now less 1 / number arms common control group used, setup_trial() family functions now throws error case. Larger values invalid lead simultaneous dropping arms, caused run_trial() crash. 
print() method results check_performance() respect digits argument; fixed.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"minor-changes-1-3-0","dir":"Changelog","previous_headings":"","what":"Minor changes:","title":"adaptr 1.3.0","text":"Now includes min/max values summarising numerical performance metrics check_performance() summary(), may plotted using plot_convergence() well. setup_trial() functions now accepts equivalence_prob futility_prob thresholds 1. run_trial() stops drops arms equivalence/futility probabilities exceed current threshold, values 1 makes stopping impossible. values, however, may used sequence thresholds effectively prevent early stopping equivalence/futility allowing later. overwrite TRUE run_trials(), previous object overwritten, even previous object used different trial specification. Various minor updates, corrections, clarifications, structural changes package documentation (including package description website). Changed size linewidth examples plot_status() plot_history() describing arguments may passed ggplot2 due deprecation/change aesthetic names ggplot2 3.4.0. Documentation plot_convergence(), plot_status(), plot_history() now prints plots rendering documentation ggplot2 installed (include example plots website). setup_trial() functions longer prints message informing single best arm. Various minor changes print() methods (including changed number digits stopping rule probability thresholds). setup_trial() family functions now restores global random seed run outcome generator/draws generator functions called validation, involving random number generation. always documented, seems preferable restore global random seed trial setup functions validated. Always explicitly uses inherits = FALSE calls base::get(), base::exists(), base::assign() ensure .Random.seed checked/used/assigned global environment. unlikely ever cause errors done, serves extra safety.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-120","dir":"Changelog","previous_headings":"","what":"adaptr 1.2.0","title":"adaptr 1.2.0","text":"CRAN release: 2022-12-13 minor release implementing new functionality, updating documentation, fixing multiple minor issues, mostly validation supplied arguments.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"new-features-1-2-0","dir":"Changelog","previous_headings":"","what":"New features:","title":"adaptr 1.2.0","text":"Simulate follow-(data collection) lag: added option different numbers simulated patients outcome data available compared total number simulated patients randomised adaptive analysis (randomised_at_looks argument setup_trial() family functions). Defaults behaviour previously (.e., assuming outcome data immediately available following randomisation). consequence, run_trial() now always conducts final analysis last adaptive analysis (including final posterior ‘raw’ estimates), including outcome data patients randomised arms, regardless many outcome data available last conducted adaptive analysis. sets results saved printed individual simulations; extract_results(), summary() print() methods multiple simulations gained additional argument final_ests controls whether results final analysis last relevant adaptive analysis including arm used calculating performance metrics (defaults set ensure backwards compatibility otherwise use final estimates situations patients included final adaptive analysis). 
example added Basic examples vignette illustrating use argument. Updated plot_history() plot_status() add possibility plot different metrics according number patients randomised specified new randomised_at_looks argument setup_trial() functions described . Added update_saved_trials() function, ‘updates’ multiple trial simulation objects saved run_trials() using previous versions adaptr. reformats objects work updated functions. values can added previously saved simulation results without re-running; values replaced NAs, - used - may lead printing plotting missing values. However, function allows re-use data previous simulations without re-run (mostly relevant time-consuming simulations). Important: please notice objects (.e., objects returned setup_trial() family functions single simulations returned run_trial()) may create problems errors functions created previous versions package manually reloaded; objects updated re-running code using newest version package. Similarly, manually reloaded results run_trials() updated using function may cause errors/problems used. Added check_performance() function (corresponding print() method) calculates performance metrics can used calculate uncertainty measures using non-parametric bootstrapping. function now used internally summary() method multiple trial objects. Added plot_convergence() function plots performance metrics according number simulations conducted multiple simulated trials (possibly splitting simulations batches), used assess stability performance metrics. Added possibility define different probability thresholds different adaptive analyses setup_trials() family functions (inferiority, superiority, equivalence, futility probability thresholds), according updates run_trial() print() method trial specifications. Updated plot_status(); multiple arms may now simultaneously plotted specifying one valid arm NA (lead statuses arms plotted) arm argument. addition, arm name(s) now always included plots.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"documentation-bug-fixes-and-other-changes-1-2-0","dir":"Changelog","previous_headings":"","what":"Documentation, bug fixes, and other changes:","title":"adaptr 1.2.0","text":"Added reference open access article describing key methodological considerations adaptive clinical trials using adaptive stopping, arm dropping, randomisation package documentation (https://doi.org/10.1016/j.jclinepi.2022.11.002). proportion conclusive trials restricting trials summarised (extract_results()) may now calculated summary() method multiple trial simulations new check_performance() function, even measure may difficult interpret trials summarised restricted. Minor fixes, updates, added clarification documentation multiple places, including vignettes, also updated illustrate new functions added. Minor fix print() method individual trial results, correctly print additional information trials. Fixed bug number patients included used subsequent data_looks setup_trial() family functions; now produces error. Added internal vapply_lgl() helper function; internal vapply() helper functions now used consistently simplify code. Added multiple internal (non-exported) helper functions simplify code throughout package: stop0(), warning0(), %f|%, summarise_num(). Added names = FALSE argument quantile() calls summary() method trial_results objects avoid unnecessary naming components subsequently extracted returned object. 
Ideal design percentages may calculated NaN, Inf -Inf scenarios differences; now converted NA returned various functions. Minor edits/clarifications several errors/warnings/messages. Minor fix internal verify_int() function; supplied , e.g., character vector, execution stopped error instead returning FALSE, needed print proper error messages checks. Minor fix plot_status(), upper area (representing trials/arms still recruiting) sometimes erroneously plotted due floating point issue summed proportions sometimes slightly exceed 1. Added additional tests test increase coverage existing new functions. Minor fix internal reallocate_probs() function, \"match\"-ing control arm allocation highest probability non-control arm probabilities initially 0, returned vector lacked names, now added. Minor fixes internal validate_trial() function order : give error multiple values supplied control_prob_fixed argument; give correct error multiple values provided equivalence_diff futility_diff; give error NA supplied futility_only_first; add tolerance checks data_looks randomised_at_looks avoid errors due floating point imprecision specified using multiplication similar; correct errors decimal numbers patient count arguments supplied; additional minor updates errors/messages.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-111","dir":"Changelog","previous_headings":"","what":"adaptr 1.1.1","title":"adaptr 1.1.1","text":"CRAN release: 2022-08-16 patch release triggered CRAN request updates. Minor formatting changes adaptr-package help page comply CRAN request use HTML5 (used R >=4.2.0). Minor bug fixes print() methods trial specifications summaries multiple trial results. Minor updates messages setup_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-110","dir":"Changelog","previous_headings":"","what":"adaptr 1.1.0","title":"adaptr 1.1.0","text":"CRAN release: 2022-06-17 Minor release: Updates run_trials() function allow exporting objects clusters running simulations multiple cores. Updates internal function verify_int() due updates R >= 4.2.0, avoid incorrect error messages future versions due changed behaviour && function used arguments length > 1 (https://stat.ethz.ch/pipermail/r-announce/2022/000683.html). Minor documentation edits updated citation info (reference software paper published Journal Open Source Software, https://doi.org/10.21105/joss.04284).","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-100","dir":"Changelog","previous_headings":"","what":"adaptr 1.0.0","title":"adaptr 1.0.0","text":"CRAN release: 2022-03-15 First release.","code":""}] +[{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"GNU General Public License","title":"GNU General Public License","text":"Version 3, 29 June 2007Copyright © 2007 Free Software Foundation, Inc.  Everyone permitted copy distribute verbatim copies license document, changing allowed.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"preamble","dir":"","previous_headings":"","what":"Preamble","title":"GNU General Public License","text":"GNU General Public License free, copyleft license software kinds works. licenses software practical works designed take away freedom share change works. contrast, GNU General Public License intended guarantee freedom share change versions program–make sure remains free software users. 
, Free Software Foundation, use GNU General Public License software; applies also work released way authors. can apply programs, . speak free software, referring freedom, price. General Public Licenses designed make sure freedom distribute copies free software (charge wish), receive source code can get want , can change software use pieces new free programs, know can things. protect rights, need prevent others denying rights asking surrender rights. Therefore, certain responsibilities distribute copies software, modify : responsibilities respect freedom others. example, distribute copies program, whether gratis fee, must pass recipients freedoms received. must make sure , , receive can get source code. must show terms know rights. Developers use GNU GPL protect rights two steps: (1) assert copyright software, (2) offer License giving legal permission copy, distribute /modify . developers’ authors’ protection, GPL clearly explains warranty free software. users’ authors’ sake, GPL requires modified versions marked changed, problems attributed erroneously authors previous versions. devices designed deny users access install run modified versions software inside , although manufacturer can . fundamentally incompatible aim protecting users’ freedom change software. systematic pattern abuse occurs area products individuals use, precisely unacceptable. Therefore, designed version GPL prohibit practice products. problems arise substantially domains, stand ready extend provision domains future versions GPL, needed protect freedom users. Finally, every program threatened constantly software patents. States allow patents restrict development use software general-purpose computers, , wish avoid special danger patents applied free program make effectively proprietary. prevent , GPL assures patents used render program non-free. precise terms conditions copying, distribution modification follow.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_0-definitions","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"0. Definitions","title":"GNU General Public License","text":"“License” refers version 3 GNU General Public License. “Copyright” also means copyright-like laws apply kinds works, semiconductor masks. “Program” refers copyrightable work licensed License. licensee addressed “”. “Licensees” “recipients” may individuals organizations. “modify” work means copy adapt part work fashion requiring copyright permission, making exact copy. resulting work called “modified version” earlier work work “based ” earlier work. “covered work” means either unmodified Program work based Program. “propagate” work means anything , without permission, make directly secondarily liable infringement applicable copyright law, except executing computer modifying private copy. Propagation includes copying, distribution (without modification), making available public, countries activities well. “convey” work means kind propagation enables parties make receive copies. Mere interaction user computer network, transfer copy, conveying. interactive user interface displays “Appropriate Legal Notices” extent includes convenient prominently visible feature (1) displays appropriate copyright notice, (2) tells user warranty work (except extent warranties provided), licensees may convey work License, view copy License. 
interface presents list user commands options, menu, prominent item list meets criterion.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_1-source-code","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"1. Source Code","title":"GNU General Public License","text":"“source code” work means preferred form work making modifications . “Object code” means non-source form work. “Standard Interface” means interface either official standard defined recognized standards body, , case interfaces specified particular programming language, one widely used among developers working language. “System Libraries” executable work include anything, work whole, () included normal form packaging Major Component, part Major Component, (b) serves enable use work Major Component, implement Standard Interface implementation available public source code form. “Major Component”, context, means major essential component (kernel, window system, ) specific operating system () executable work runs, compiler used produce work, object code interpreter used run . “Corresponding Source” work object code form means source code needed generate, install, (executable work) run object code modify work, including scripts control activities. However, include work’s System Libraries, general-purpose tools generally available free programs used unmodified performing activities part work. example, Corresponding Source includes interface definition files associated source files work, source code shared libraries dynamically linked subprograms work specifically designed require, intimate data communication control flow subprograms parts work. Corresponding Source need include anything users can regenerate automatically parts Corresponding Source. Corresponding Source work source code form work.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_2-basic-permissions","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"2. Basic Permissions","title":"GNU General Public License","text":"rights granted License granted term copyright Program, irrevocable provided stated conditions met. License explicitly affirms unlimited permission run unmodified Program. output running covered work covered License output, given content, constitutes covered work. License acknowledges rights fair use equivalent, provided copyright law. may make, run propagate covered works convey, without conditions long license otherwise remains force. may convey covered works others sole purpose make modifications exclusively , provide facilities running works, provided comply terms License conveying material control copyright. thus making running covered works must exclusively behalf, direction control, terms prohibit making copies copyrighted material outside relationship . Conveying circumstances permitted solely conditions stated . Sublicensing allowed; section 10 makes unnecessary.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_3-protecting-users-legal-rights-from-anti-circumvention-law","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"3. Protecting Users’ Legal Rights From Anti-Circumvention Law","title":"GNU General Public License","text":"covered work shall deemed part effective technological measure applicable law fulfilling obligations article 11 WIPO copyright treaty adopted 20 December 1996, similar laws prohibiting restricting circumvention measures. 
convey covered work, waive legal power forbid circumvention technological measures extent circumvention effected exercising rights License respect covered work, disclaim intention limit operation modification work means enforcing, work’s users, third parties’ legal rights forbid circumvention technological measures.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_4-conveying-verbatim-copies","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"4. Conveying Verbatim Copies","title":"GNU General Public License","text":"may convey verbatim copies Program’s source code receive , medium, provided conspicuously appropriately publish copy appropriate copyright notice; keep intact notices stating License non-permissive terms added accord section 7 apply code; keep intact notices absence warranty; give recipients copy License along Program. may charge price price copy convey, may offer support warranty protection fee.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_5-conveying-modified-source-versions","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"5. Conveying Modified Source Versions","title":"GNU General Public License","text":"may convey work based Program, modifications produce Program, form source code terms section 4, provided also meet conditions: ) work must carry prominent notices stating modified , giving relevant date. b) work must carry prominent notices stating released License conditions added section 7. requirement modifies requirement section 4 “keep intact notices”. c) must license entire work, whole, License anyone comes possession copy. License therefore apply, along applicable section 7 additional terms, whole work, parts, regardless packaged. License gives permission license work way, invalidate permission separately received . d) work interactive user interfaces, must display Appropriate Legal Notices; however, Program interactive interfaces display Appropriate Legal Notices, work need make . compilation covered work separate independent works, nature extensions covered work, combined form larger program, volume storage distribution medium, called “aggregate” compilation resulting copyright used limit access legal rights compilation’s users beyond individual works permit. Inclusion covered work aggregate cause License apply parts aggregate.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_6-conveying-non-source-forms","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"6. Conveying Non-Source Forms","title":"GNU General Public License","text":"may convey covered work object code form terms sections 4 5, provided also convey machine-readable Corresponding Source terms License, one ways: ) Convey object code , embodied , physical product (including physical distribution medium), accompanied Corresponding Source fixed durable physical medium customarily used software interchange. b) Convey object code , embodied , physical product (including physical distribution medium), accompanied written offer, valid least three years valid long offer spare parts customer support product model, give anyone possesses object code either (1) copy Corresponding Source software product covered License, durable physical medium customarily used software interchange, price reasonable cost physically performing conveying source, (2) access copy Corresponding Source network server charge. c) Convey individual copies object code copy written offer provide Corresponding Source. 
alternative allowed occasionally noncommercially, received object code offer, accord subsection 6b. d) Convey object code offering access designated place (gratis charge), offer equivalent access Corresponding Source way place charge. need require recipients copy Corresponding Source along object code. place copy object code network server, Corresponding Source may different server (operated third party) supports equivalent copying facilities, provided maintain clear directions next object code saying find Corresponding Source. Regardless server hosts Corresponding Source, remain obligated ensure available long needed satisfy requirements. e) Convey object code using peer--peer transmission, provided inform peers object code Corresponding Source work offered general public charge subsection 6d. separable portion object code, whose source code excluded Corresponding Source System Library, need included conveying object code work. “User Product” either (1) “consumer product”, means tangible personal property normally used personal, family, household purposes, (2) anything designed sold incorporation dwelling. determining whether product consumer product, doubtful cases shall resolved favor coverage. particular product received particular user, “normally used” refers typical common use class product, regardless status particular user way particular user actually uses, expects expected use, product. product consumer product regardless whether product substantial commercial, industrial non-consumer uses, unless uses represent significant mode use product. “Installation Information” User Product means methods, procedures, authorization keys, information required install execute modified versions covered work User Product modified version Corresponding Source. information must suffice ensure continued functioning modified object code case prevented interfered solely modification made. convey object code work section , , specifically use , User Product, conveying occurs part transaction right possession use User Product transferred recipient perpetuity fixed term (regardless transaction characterized), Corresponding Source conveyed section must accompanied Installation Information. requirement apply neither third party retains ability install modified object code User Product (example, work installed ROM). requirement provide Installation Information include requirement continue provide support service, warranty, updates work modified installed recipient, User Product modified installed. Access network may denied modification materially adversely affects operation network violates rules protocols communication across network. Corresponding Source conveyed, Installation Information provided, accord section must format publicly documented (implementation available public source code form), must require special password key unpacking, reading copying.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_7-additional-terms","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"7. Additional Terms","title":"GNU General Public License","text":"“Additional permissions” terms supplement terms License making exceptions one conditions. Additional permissions applicable entire Program shall treated though included License, extent valid applicable law. additional permissions apply part Program, part may used separately permissions, entire Program remains governed License without regard additional permissions. 
convey copy covered work, may option remove additional permissions copy, part . (Additional permissions may written require removal certain cases modify work.) may place additional permissions material, added covered work, can give appropriate copyright permission. Notwithstanding provision License, material add covered work, may (authorized copyright holders material) supplement terms License terms: ) Disclaiming warranty limiting liability differently terms sections 15 16 License; b) Requiring preservation specified reasonable legal notices author attributions material Appropriate Legal Notices displayed works containing ; c) Prohibiting misrepresentation origin material, requiring modified versions material marked reasonable ways different original version; d) Limiting use publicity purposes names licensors authors material; e) Declining grant rights trademark law use trade names, trademarks, service marks; f) Requiring indemnification licensors authors material anyone conveys material (modified versions ) contractual assumptions liability recipient, liability contractual assumptions directly impose licensors authors. non-permissive additional terms considered “restrictions” within meaning section 10. Program received , part , contains notice stating governed License along term restriction, may remove term. license document contains restriction permits relicensing conveying License, may add covered work material governed terms license document, provided restriction survive relicensing conveying. add terms covered work accord section, must place, relevant source files, statement additional terms apply files, notice indicating find applicable terms. Additional terms, permissive non-permissive, may stated form separately written license, stated exceptions; requirements apply either way.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_8-termination","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"8. Termination","title":"GNU General Public License","text":"may propagate modify covered work except expressly provided License. attempt otherwise propagate modify void, automatically terminate rights License (including patent licenses granted third paragraph section 11). However, cease violation License, license particular copyright holder reinstated () provisionally, unless copyright holder explicitly finally terminates license, (b) permanently, copyright holder fails notify violation reasonable means prior 60 days cessation. Moreover, license particular copyright holder reinstated permanently copyright holder notifies violation reasonable means, first time received notice violation License (work) copyright holder, cure violation prior 30 days receipt notice. Termination rights section terminate licenses parties received copies rights License. rights terminated permanently reinstated, qualify receive new licenses material section 10.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_9-acceptance-not-required-for-having-copies","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"9. Acceptance Not Required for Having Copies","title":"GNU General Public License","text":"required accept License order receive run copy Program. Ancillary propagation covered work occurring solely consequence using peer--peer transmission receive copy likewise require acceptance. However, nothing License grants permission propagate modify covered work. actions infringe copyright accept License. 
Therefore, modifying propagating covered work, indicate acceptance License .","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_10-automatic-licensing-of-downstream-recipients","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"10. Automatic Licensing of Downstream Recipients","title":"GNU General Public License","text":"time convey covered work, recipient automatically receives license original licensors, run, modify propagate work, subject License. responsible enforcing compliance third parties License. “entity transaction” transaction transferring control organization, substantially assets one, subdividing organization, merging organizations. propagation covered work results entity transaction, party transaction receives copy work also receives whatever licenses work party’s predecessor interest give previous paragraph, plus right possession Corresponding Source work predecessor interest, predecessor can get reasonable efforts. may impose restrictions exercise rights granted affirmed License. example, may impose license fee, royalty, charge exercise rights granted License, may initiate litigation (including cross-claim counterclaim lawsuit) alleging patent claim infringed making, using, selling, offering sale, importing Program portion .","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_11-patents","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"11. Patents","title":"GNU General Public License","text":"“contributor” copyright holder authorizes use License Program work Program based. work thus licensed called contributor’s “contributor version”. contributor’s “essential patent claims” patent claims owned controlled contributor, whether already acquired hereafter acquired, infringed manner, permitted License, making, using, selling contributor version, include claims infringed consequence modification contributor version. purposes definition, “control” includes right grant patent sublicenses manner consistent requirements License. contributor grants non-exclusive, worldwide, royalty-free patent license contributor’s essential patent claims, make, use, sell, offer sale, import otherwise run, modify propagate contents contributor version. following three paragraphs, “patent license” express agreement commitment, however denominated, enforce patent (express permission practice patent covenant sue patent infringement). “grant” patent license party means make agreement commitment enforce patent party. convey covered work, knowingly relying patent license, Corresponding Source work available anyone copy, free charge terms License, publicly available network server readily accessible means, must either (1) cause Corresponding Source available, (2) arrange deprive benefit patent license particular work, (3) arrange, manner consistent requirements License, extend patent license downstream recipients. “Knowingly relying” means actual knowledge , patent license, conveying covered work country, recipient’s use covered work country, infringe one identifiable patents country reason believe valid. , pursuant connection single transaction arrangement, convey, propagate procuring conveyance , covered work, grant patent license parties receiving covered work authorizing use, propagate, modify convey specific copy covered work, patent license grant automatically extended recipients covered work works based . 
patent license “discriminatory” include within scope coverage, prohibits exercise , conditioned non-exercise one rights specifically granted License. may convey covered work party arrangement third party business distributing software, make payment third party based extent activity conveying work, third party grants, parties receive covered work , discriminatory patent license () connection copies covered work conveyed (copies made copies), (b) primarily connection specific products compilations contain covered work, unless entered arrangement, patent license granted, prior 28 March 2007. Nothing License shall construed excluding limiting implied license defenses infringement may otherwise available applicable patent law.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_12-no-surrender-of-others-freedom","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"12. No Surrender of Others’ Freedom","title":"GNU General Public License","text":"conditions imposed (whether court order, agreement otherwise) contradict conditions License, excuse conditions License. convey covered work satisfy simultaneously obligations License pertinent obligations, consequence may convey . example, agree terms obligate collect royalty conveying convey Program, way satisfy terms License refrain entirely conveying Program.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_13-use-with-the-gnu-affero-general-public-license","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"13. Use with the GNU Affero General Public License","title":"GNU General Public License","text":"Notwithstanding provision License, permission link combine covered work work licensed version 3 GNU Affero General Public License single combined work, convey resulting work. terms License continue apply part covered work, special requirements GNU Affero General Public License, section 13, concerning interaction network apply combination .","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_14-revised-versions-of-this-license","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"14. Revised Versions of this License","title":"GNU General Public License","text":"Free Software Foundation may publish revised /new versions GNU General Public License time time. new versions similar spirit present version, may differ detail address new problems concerns. version given distinguishing version number. Program specifies certain numbered version GNU General Public License “later version” applies , option following terms conditions either numbered version later version published Free Software Foundation. Program specify version number GNU General Public License, may choose version ever published Free Software Foundation. Program specifies proxy can decide future versions GNU General Public License can used, proxy’s public statement acceptance version permanently authorizes choose version Program. Later license versions may give additional different permissions. However, additional obligations imposed author copyright holder result choosing follow later version.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_15-disclaimer-of-warranty","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"15. Disclaimer of Warranty","title":"GNU General Public License","text":"WARRANTY PROGRAM, EXTENT PERMITTED APPLICABLE LAW. 
EXCEPT OTHERWISE STATED WRITING COPYRIGHT HOLDERS /PARTIES PROVIDE PROGRAM “” WITHOUT WARRANTY KIND, EITHER EXPRESSED IMPLIED, INCLUDING, LIMITED , IMPLIED WARRANTIES MERCHANTABILITY FITNESS PARTICULAR PURPOSE. ENTIRE RISK QUALITY PERFORMANCE PROGRAM . PROGRAM PROVE DEFECTIVE, ASSUME COST NECESSARY SERVICING, REPAIR CORRECTION.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_16-limitation-of-liability","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"16. Limitation of Liability","title":"GNU General Public License","text":"EVENT UNLESS REQUIRED APPLICABLE LAW AGREED WRITING COPYRIGHT HOLDER, PARTY MODIFIES /CONVEYS PROGRAM PERMITTED , LIABLE DAMAGES, INCLUDING GENERAL, SPECIAL, INCIDENTAL CONSEQUENTIAL DAMAGES ARISING USE INABILITY USE PROGRAM (INCLUDING LIMITED LOSS DATA DATA RENDERED INACCURATE LOSSES SUSTAINED THIRD PARTIES FAILURE PROGRAM OPERATE PROGRAMS), EVEN HOLDER PARTY ADVISED POSSIBILITY DAMAGES.","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"id_17-interpretation-of-sections-15-and-16","dir":"","previous_headings":"TERMS AND CONDITIONS","what":"17. Interpretation of Sections 15 and 16","title":"GNU General Public License","text":"disclaimer warranty limitation liability provided given local legal effect according terms, reviewing courts shall apply local law closely approximates absolute waiver civil liability connection Program, unless warranty assumption liability accompanies copy Program return fee. END TERMS CONDITIONS","code":""},{"path":"https://inceptdk.github.io/adaptr/LICENSE.html","id":"how-to-apply-these-terms-to-your-new-programs","dir":"","previous_headings":"","what":"How to Apply These Terms to Your New Programs","title":"GNU General Public License","text":"develop new program, want greatest possible use public, best way achieve make free software everyone can redistribute change terms. , attach following notices program. safest attach start source file effectively state exclusion warranty; file least “copyright” line pointer full notice found. Also add information contact electronic paper mail. program terminal interaction, make output short notice like starts interactive mode: hypothetical commands show w show c show appropriate parts General Public License. course, program’s commands might different; GUI interface, use “box”. also get employer (work programmer) school, , sign “copyright disclaimer” program, necessary. information , apply follow GNU GPL, see . GNU General Public License permit incorporating program proprietary programs. program subroutine library, may consider useful permit linking proprietary applications library. want , use GNU Lesser General Public License instead License. first, please read .","code":" Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'. 
This is free software, and you are welcome to redistribute it under certain conditions; type 'show c' for details."},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"preamble","dir":"Articles","previous_headings":"","what":"Preamble","title":"Advanced example","text":"example, set trial three arms, one common control, undesirable binary outcome (e.g., mortality). examples creates custom version setup_trial_binom() function using non-flat priors event rates arm (setup_trial_binom() uses flat priors), returning event probabilities percentages (instead fractions), illustrate use custom function summarise raw outcome data. setup_trial() attempts validate custom functions assessing output trial specification, edge cases might elude validation. , therefore, urge users specifying custom functions carefully test complex functions actual use. go trouble writing nice set functions generating outcomes sampling posterior distributions, please consider adding package. way, others can benefit work helps validate . See GitHub page Contributing. Although user-written custom functions depend adaptr package, first thing load package: –set global seed ensure reproducible results vignette: define functions (illustration purposes sanity check) print outputs. , , invoked setup_trial() (final code chunk vignette).","code":"library(adaptr) #> Loading 'adaptr' package v1.4.0. #> For instructions, type 'help(\"adaptr\")' #> or see https://inceptdk.github.io/adaptr/. set.seed(89)"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"functions-for-generating-outcomes","dir":"Articles","previous_headings":"","what":"Functions for generating outcomes","title":"Advanced example","text":"function take single argument (allocs), character vector containing allocations (names trial arms) patients included since last adaptive analysis. function must return numeric vector, regardless actual outcome type (, e.g., categorical outcomes must encoded numeric). returned numeric vector must length, values order allocs. , third element allocs specifies allocation third patient randomised since last adaptive analysis, (correspondingly) third element returned vector patient’s outcome. sounds complicated, becomes clearer actually specify function (essentially re-implementation built-function used setup_trial_binom()): illustrate function works, first generate random allocations 50 patients using equal allocation probabilities, default behaviour sample(). enclosing call parentheses, resulting allocations printed: Next, generate random outcomes patients:","code":"get_ys_binom_custom <- function(allocs) { # Binary outcome coded as 0/1 - prepare returned vector of appropriate length y <- integer(length(allocs)) # Specify trial arms and true event probabilities for each arm # These values should exactly match those supplied to setup_trial # NB! 
This is not validated, so this is the user's responsibility arms <- c(\"Control\", \"Experimental arm A\", \"Experimental arm B\") true_ys <- c(0.25, 0.27, 0.20) # Loop through arms and generate outcomes for (i in seq_along(arms)) { # Indices of patients allocated to the current arm ii <- which(allocs == arms[i]) # Generate outcomes for all patients allocated to current arm y[ii] <- rbinom(length(ii), 1, true_ys[i]) } # Return outcome vector y } (allocs <- sample(c(\"Control\", \"Experimental arm A\", \"Experimental arm B\"), size = 50, replace = TRUE)) #> [1] \"Control\" \"Experimental arm B\" \"Experimental arm B\" #> [4] \"Experimental arm B\" \"Experimental arm B\" \"Experimental arm A\" #> [7] \"Experimental arm A\" \"Experimental arm A\" \"Experimental arm A\" #> [10] \"Experimental arm A\" \"Experimental arm B\" \"Experimental arm B\" #> [13] \"Experimental arm B\" \"Control\" \"Experimental arm B\" #> [16] \"Control\" \"Experimental arm A\" \"Experimental arm A\" #> [19] \"Experimental arm A\" \"Experimental arm B\" \"Control\" #> [22] \"Control\" \"Experimental arm B\" \"Control\" #> [25] \"Experimental arm A\" \"Control\" \"Experimental arm A\" #> [28] \"Control\" \"Experimental arm B\" \"Experimental arm B\" #> [31] \"Control\" \"Experimental arm B\" \"Control\" #> [34] \"Control\" \"Experimental arm A\" \"Experimental arm B\" #> [37] \"Control\" \"Experimental arm A\" \"Experimental arm A\" #> [40] \"Experimental arm A\" \"Experimental arm B\" \"Experimental arm A\" #> [43] \"Control\" \"Experimental arm B\" \"Control\" #> [46] \"Control\" \"Control\" \"Control\" #> [49] \"Experimental arm A\" \"Experimental arm A\" (ys <- get_ys_binom_custom(allocs)) #> [1] 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 #> [39] 0 0 0 1 1 0 0 1 0 1 1 0"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"functions-for-drawing-posterior-samples","dir":"Articles","previous_headings":"","what":"Functions for drawing posterior samples","title":"Advanced example","text":"setup_trial_binom() function uses beta-binomial conjugate prior models arm, beta(1, 1) priors. priors uniform (≈ “non-informative”) probability scale corresponds amount information provided 2 patients (1 1 without event), described greater detail , e.g., Ryan et al, 2019 (10.1136/bmjopen-2018-024256). custom function generating posterior draws also uses beta-binomial conjugate prior models, informative priors. Informative priors may prevent undue influence random, early fluctuations trial pulling posterior estimates closer prior limited data available. seek relatively weakly informative priors centred previous knowledge (beliefs), can actually define function generating posterior draws based informative priors, need derive prior.","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"informative-priors","dir":"Articles","previous_headings":"Functions for drawing posterior samples","what":"Informative priors","title":"Advanced example","text":"assume prior knowledge corresponding belief best estimate true event probability control arm 0.25 (25%), true event probability 0.15 0.35 (15-35%) 95% probability. mean beta distribution simply [number events]/[number patients]. derive beta distribution reflects prior belief, use find_beta_params(), helper function included adaptr (see ?find_beta_params details): thus see prior belief prior roughly corresponds previous randomisation 60 patients 15 (alpha) experienced event 45 (beta) . 
Even though may expect event probabilities differ non-control arms, example consider prior appropriate arms consider event probabilities smaller/larger represented prior unlikely. , illustrate effects prior compared default beta(1, 1) prior used setup_trial_binom() single trial arm 20 patients randomised, 12 events 8 non-events. corresponds estimated event probability 0.6 (60%), far expected 0.25 (25%). come random fluctuations patients randomised, even prior beliefs correct. Next, illustrate effects prior 200 patients randomised arm, 56 events 144 non-events, corresponds estimated event probability 0.28 (28%), similar expected event probability. comparing previous plot, clearly see patients randomised, larger sample observed data starts dominate posterior, prior exerts less influence posterior distribution (posterior distributions alike despite different prior distributions).","code":"find_beta_params( theta = 0.25, # Event probability boundary = \"lower\", boundary_target = 0.15, interval_width = 0.95 ) #> alpha beta p2.5 p50.0 p97.5 #> 1 15 45 0.1498208 0.2472077 0.3659499"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"defining-the-function-to-generate-posterior-draws","dir":"Articles","previous_headings":"Functions for drawing posterior samples","what":"Defining the function to generate posterior draws","title":"Advanced example","text":"number important things aware specifying function. First, must accept following arguments (exact names, even used function): arms: character vector currently active arms trial. allocs: character vector allocations (trial arms) patients randomised trial, including randomised arms longer active. ys: numeric vector outcomes patients randomised trial, including randomised arms longer active. control: single character, current control arm; NULL trials without common control. n_draws: single integer, number posterior draws generate arm. Alternatively, unused arguments can left ellipsis (...) included final argument function. Second, order allocs ys must match: fifth element allocs represents allocation fifth patient, fifth element ys represent outcome patient. Third, allocs ys provided patients, including randomised arms longer active. done users situations may want use data generating posterior draws (currently active) arms. Fourth, adaptr restrict posterior samples drawn. Consequently, Markov chain Monte Carlo- variational inference-based methods may used, packages supplying functionality may called user-provided functions. However, using complex methods simple conjugate models substantially increases simulation run time. Consequently, simpler models well-suited use simulations. Fifth, function must return matrix numeric values length(arms) columns n_draws rows, currently active arms column names. , row must contain one posterior draw arm. NA’s allowed, even patients randomised arm yet, valid numeric values returned (e.g., drawn prior another diffuse posterior distribution). Even outcome truly numeric, vector outcomes provided function (ys) returned matrix posterior draws must encoded numeric. mind, ready specify function: now call function using previously generated allocs ys. 
avoid cluttering, generate 10 posterior draws arm example: Importantly, less 100 posterior draws arm allowed setting trial specification, avoid unstable results (see setup_trial_binom()).","code":"get_draws_binom_custom <- function(arms, allocs, ys, control, n_draws) { # Setup a list to store the posterior draws for each arm draws <- list() # Loop through the ACTIVE arms and generate posterior draws for (a in arms) { # Indices of patients allocated to the current arm ii <- which(allocs == a) # Sum the number of events in the current arm n_events <- sum(ys[ii]) # Compute the number of patients in the current arm n_patients <- length(ii) # Generate draws using the number of events, the number of patients # and the prior specified above: beta(15, 45) # Saved using the current arms' name in the list, ensuring that the # resulting matrix has column names corresponding to the ACTIVE arms draws[[a]] <- rbeta(n_draws, 15 + n_events, 45 + n_patients - n_events) } # Bind all elements of the list column-wise to form a matrix with # 1 named column per ACTIVE arm and 1 row per posterior draw. # Multiply result with 100, as we're using percentages and not proportions # in this example (just to correspond to the illustrated custom function to # generate RAW outcome estimates below) do.call(cbind, draws) * 100 } get_draws_binom_custom( # Only currently ACTIVE arms, but all are considered active at this time arms = c(\"Control\", \"Experimental arm A\", \"Experimental arm B\"), allocs = allocs, # Generated above ys = ys, # Generated above # Input control arm, argument is supplied even if not used in the function control = \"Control\", # Input number of draws (for brevity, only 10 draws here) n_draws = 10 ) #> Control Experimental arm A Experimental arm B #> [1,] 30.96555 29.34973 29.26143 #> [2,] 30.47382 23.22668 25.08249 #> [3,] 31.04807 31.76577 19.81416 #> [4,] 17.00712 24.30809 16.36256 #> [5,] 21.31251 27.74615 22.63147 #> [6,] 25.50944 24.16283 30.29049 #> [7,] 16.60420 29.49526 28.75436 #> [8,] 25.17899 33.29374 30.87149 #> [9,] 23.72043 27.78537 29.89836 #> [10,] 30.50004 28.43694 26.62115"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"specifying-the-function-to-calculate-raw-outcome-estimates","dir":"Articles","previous_headings":"","what":"Specifying the function to calculate raw outcome estimates","title":"Advanced example","text":"Finally, custom function may specified calculate raw summary estimates arm; raw estimates posterior estimates, can considered maximum likelihood point estimates example. function must take numeric vector (outcomes arm) return single numeric value. function called separately arm. express results percentages proportions example, function simply calculates outcome percentage arm: now call function outcomes \"Control\" arm, example:","code":"fun_raw_est_custom <- function(ys) { mean(ys) * 100 } cat(sprintf( \"Raw outcome percentage estimate in the 'Control' group: %.1f%%\", fun_raw_est_custom(ys[allocs == \"Control\"]) )) #> Raw outcome percentage estimate in the 'Control' group: 29.4%"},{"path":"https://inceptdk.github.io/adaptr/articles/Advanced-example.html","id":"setup-the-trial-specification","dir":"Articles","previous_headings":"","what":"Setup the trial specification","title":"Advanced example","text":"functions defined, can now setup trial specification. 
stated , validation custom functions carried trial setup: setup_trial() runs errors warnings, custom trial successfully specified may run run_trial() run_trials() calibrated calibrate_trial(). custom functions provided setup_trial() calls custom functions (uses objects defined user outside functions) functions loaded non-base R packages used, please aware exporting objects/functions prefixing namespace necessary simulations conducted using multiple cores. See setup_cluster() run_trial() additional details export necessary functions objects.","code":"setup_trial( arms = c(\"Control\", \"Experimental arm A\", \"Experimental arm B\"), # true_ys, true outcome percentages (since posterior draws and raw estimates # are returned as percentages, this must be a percentage as well, even if # probabilities are specified as proportions internally in the outcome # generating function specified above true_ys = c(25, 27, 20), # Supply the functions to generate outcomes and posterior draws fun_y_gen = get_ys_binom_custom, fun_draws = get_draws_binom_custom, # Define looks max_n = 2000, look_after_every = 100, # Define control and allocation strategy control = \"Control\", control_prob_fixed = \"sqrt-based\", # Define equivalence assessment - drop non-control arms at > 90% probability # of equivalence, defined as an absolute difference of 10 %-points # (specified on the percentage-point scale as the rest of probabilities in # the example) equivalence_prob = 0.9, equivalence_diff = 10, equivalence_only_first = TRUE, # Input the function used to calculate raw outcome estimates fun_raw_est = fun_raw_est_custom, # Description and additional information description = \"custom trial [binary outcome, weak priors]\", add_info = \"Trial using beta-binomial conjugate prior models and beta(15, 45) priors in each arm.\" ) #> Trial specification: custom trial [binary outcome, weak priors] #> * Undesirable outcome #> * Common control arm: Control #> * Control arm probability fixed at 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: Experimental arm B #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Control 25 0.414 0.414 NA NA #> Experimental arm A 27 0.293 NA NA NA #> Experimental arm B 20 0.293 NA NA NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 20 #> Planned looks after every 100 #> patients have reached follow-up until final look after 2000 patients #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (only checked for first control) #> Absolute equivalence difference: 10 #> No futility threshold #> Soften power for all analyses: 1 (no softening) #> #> Additional info: Trial using beta-binomial conjugate prior models and beta(15, 45) priors in each arm."},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"trial-designs-without-a-common-control-arm","dir":"Articles","previous_headings":"","what":"Trial designs without a common control arm","title":"Basic examples","text":"section, several examples trials without common control arm provided. 
General settings applicable trial designs (including trial specifications without common control arm) covered section.","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-1-general-settings","dir":"Articles","previous_headings":"Trial designs without a common control arm","what":"Example 1: general settings","title":"Basic examples","text":"","code":"setup_trial_binom( # Four arms arms = c(\"A\", \"B\", \"C\", \"D\"), # Set true outcomes (in this example event probabilities) for all arms true_ys = c(0.3, 0.35, 0.31, 0.27), # 30%, 34%, 31% and 27%, respectively # Set starting allocation probabilities # Defaults to equal allocation if not specified start_probs = c(0.3, 0.3, 0.2, 0.2), # Set fixed allocation probability for first arm # NA corresponds to no limits for specific arms # Default (NULL) corresponds to no limits for all arms fixed_probs = c(0.3, NA, NA, NA), # Set minimum and maximum probability limits for some arms # NA corresponds to no limits for specific arms # Default (NULL) corresponds to no limits for all arms # Must be NA for arms with fixed_probs (first arm in this example) # sum(fixed_probs) + sum(min_probs) must not exceed 1 # sum(fixed_probs) + sum(max_probs) may be greater than 1, and must be at least # 1 if specified for all arms min_probs = c(NA, 0.2, NA, NA), max_probs = c(NA, 0.7, NA, NA), # Set looks - alternatively, specify both max_n AND look_after_every data_looks = seq(from = 300, to = 1000, by = 100), # No common control arm (as default, but explicitly specified in this example) control = NULL, # Set inferiority/superiority thresholds (different values than the defaults) # (see also the calibrate_trial() function) inferiority = 0.025, superiority = 0.975, # Define that the outcome is desirable (as opposed to the default setting) highest_is_best = TRUE, # No softening (the default setting, but made explicit here) soften_power = 1, # Use different simulation/summary settings than default cri_width = 0.89, # 89% credible intervals n_draws = 1000, # Only 1000 posterior draws in each arm robust = TRUE, # Summarise posteriors using medians/MAD-SDs (as default) # Trial description (used by print methods) description = \"example trial specification 1\" ) #> Trial specification: example trial specification 1 #> * Desirable outcome #> * No common control arm #> * Best arm: B #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.30 0.3 0.3 NA NA #> B 0.35 0.3 NA 0.2 0.7 #> C 0.31 0.2 NA NA NA #> D 0.27 0.2 NA NA NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 8 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.975 (all analyses) #> Inferiority threshold: 0.025 (all analyses) #> No equivalence threshold #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-2-equivalence-testing-decreasing-softening","dir":"Articles","previous_headings":"Trial designs without a common control arm","what":"Example 2: equivalence testing, decreasing softening","title":"Basic examples","text":"common control arm Equivalence testing Different softening powers (decreasing softening trial progresses) Default 
settings many unspecified arguments","code":"setup_trial_binom( # Specify arms and true outcome probabilities (undesirable outcome as default) arms = c(\"A\", \"B\", \"C\", \"D\"), true_ys = c(0.2, 0.22, 0.24, 0.18), # Specify adaptive analysis looks using max_n and look_after_every # max_n does not need to be a multiple of look_after_every - a final look # will be conducted at max_n regardless max_n = 1250, # Maximum 1250 patients look_after_every = 100, # Look after every 100 patients # Assess equivalence between all arms: stop if >90 % probability that the # absolute difference between the best and worst arms is < 5 %-points # Note: equivalence_only_first must be NULL (default) in designs without a # common control arm (such as this trial) equivalence_prob = 0.9, equivalence_diff = 0.05, # Different softening powers at each look (13 possible looks in total) # Starts at 0 (softens all allocation probabilities to be equal) and ends at # 1 (no softening) for the final possible look in the trial soften_power = seq(from = 0, to = 1, length.out = 13) ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.25 NA NA NA #> B 0.22 0.25 NA NA NA #> C 0.24 0.25 NA NA NA #> D 0.18 0.25 NA NA NA #> #> Maximum sample size: 1250 #> Maximum number of data looks: 13 #> Planned looks after every 100 #> patients have reached follow-up until final look after 1250 patients #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1250 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for each consequtive analysis: 0, 0.083, 0.167, 0.25, 0.333, 0.417, 0.5, 0.583, 0.667, 0.75, 0.833, 0.917, 1"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"trial-designs-with-a-common-control-arm","dir":"Articles","previous_headings":"","what":"Trial designs with a common control arm","title":"Basic examples","text":"section, several examples trials common control arm provided focus mostly options specific trial designs common control arm.","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-3-common-control-and-sqrt-based-fixed-allocation","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 3: common control and sqrt-based fixed allocation","title":"Basic examples","text":"common control arm square-root-transformation-based fixed allocation probabilities (see description setup_trial()) Assessment equivalence futility compared initial control (assessed superior arms become subsequent controls)","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), # Specify control arm control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Fixed, square-root-transformation-based allocation throughout control_prob_fixed = \"sqrt-based fixed\", # Assess equivalence: drop non-control arms if > 90% probability that they # are equivalent to the common control, defined as an absolute difference of # < 3 %-points equivalence_prob = 0.9, 
equivalence_diff = 0.03, # Only assess against the initial control (i.e., not assessed if an arm is # declared superior to the initial control and becomes the new control) equivalence_only_first = TRUE, # Assess futility: drop non-control arms if > 80% probability that they are # < 10 %-points better (in this case lower because outcome is undesirable in # this example) compared to the common control futility_prob = 0.8, futility_diff = 0.1, # Only assessed for the initial control, as described above futility_only_first = TRUE ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability fixed at 0.366 (for 4 arms), 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.366 0.366 NA NA #> B 0.22 0.211 0.211 NA NA #> C 0.24 0.211 0.211 NA NA #> D 0.18 0.211 0.211 NA NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (only checked for first control) #> Absolute equivalence difference: 0.03 #> Futility threshold: 0.8 (all analyses) (only checked for first control) #> Absolute futility difference (in beneficial direction): 0.1 #> Soften power for all analyses: 1 (no softening - all arms fixed)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-4-sqrt-based-initial-allocation-and-restricted-rar","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 4: sqrt-based initial allocation and restricted RAR","title":"Basic examples","text":"Square-root-transformation-based initial allocation probabilities Square-root-transformation-based allocation control arm (including subsequent controls, non-control arm declared superior initial control) Restricted response-adaptive randomisation non-control arms","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Square-root-transformation-based control arm allocation including for # subsequent controls and initial equal allocation to the non-control arms, # followed by response-adaptive randomisation control_prob_fixed = \"sqrt-based\", # Restricted response-adaptive randomisation # Minimum probabilities of 20% for non-control arms, must be NA for the # control arm with fixed allocation probability # Limits are ignored for arms that become subsequent controls # Limits are rescaled (i.e., increased proportionally) when arms are dropped min_probs = c(NA, 0.2, 0.2, 0.2), rescale_probs = \"limits\", # Constant softening of 0.5 (= square-root transformation) soften_power = 0.5 ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability fixed at 0.366 (for 4 arms), 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits (min/max_probs rescaled): #> arms true_ys start_probs 
fixed_probs min_probs max_probs #> A 0.20 0.366 0.366 NA NA #> B 0.22 0.211 NA 0.2 NA #> C 0.24 0.211 NA 0.2 NA #> D 0.18 0.211 NA 0.2 NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.5"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-5-sqrt-based-allocation-only-to-initial-control-arm","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 5: sqrt-based allocation only to initial control arm","title":"Basic examples","text":"example similar (different restriction settings), use square-root-transformation-based allocation probabilities initial control arm. Hence, apply another arm declared superior becomes new control.","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Square-root-transformation-based control arm allocation for the initial # control only and initial equal allocation to the non-control arms, followed # by response-adaptive randomisation control_prob_fixed = \"sqrt-based start\", # Restrict response-adaptive randomisation # Minimum probabilities of 20% for all non-control arms # - must be NA for the initial control arm with fixed allocation probability min_probs = c(NA, 0.2, 0.2, 0.2), # Maximum probabilities of 65% for all non-control arms # - must be NA for the initial control arm with fixed allocation probability max_probs = c(NA, 0.65, 0.65, 0.65), soften_power = 0.75 ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability fixed at 0.366 #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.366 0.366 NA NA #> B 0.22 0.211 NA 0.2 0.65 #> C 0.24 0.211 NA 0.2 0.65 #> D 0.18 0.211 NA 0.2 0.65 #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.75"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-6-restricted-rar-matched-control-arm-allocation","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 6: restricted RAR, matched control-arm allocation","title":"Basic examples","text":"Restricted response-adaptive randomisation Control-arm allocation probability matched highest non-control arm (re-scaling necessary) Applies initial subsequent control arms","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), data_looks = seq(from = 100, to = 1000, by = 100), # Specify starting probabilities # When \"match\" is 
specified below in control_prob_fixed, the initial control # arm's initial allocation probability must match the highest initial # non-control arm allocation probability start_probs = c(0.3, 0.3, 0.2, 0.2), control_prob_fixed = \"match\", # Restrict response-adaptive randomisation # - these are applied AFTER \"matching\" when calculating new allocation # probabilities # - min_probs must be NA for the initial control arm when using matching min_probs = c(NA, 0.2, 0.2, 0.2), soften_power = 0.7 ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> * Control arm probability matched to best non-control arm #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.3 NA NA NA #> B 0.22 0.3 NA 0.2 NA #> C 0.24 0.2 NA 0.2 NA #> D 0.18 0.2 NA 0.2 NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.7"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-7-follow-up-and-data-collection-lag","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 7: follow-up and data collection lag","title":"Basic examples","text":"example uses randomised_at_looks argument specify follow-/data collection lag. 
real use cases, usually considered, may affect relative performance different trial designs extent ‘final’ results patients reached follow-analysed may differ results adaptive analyses randomised patients included due outcome data available yet patients.","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), # Analyses conducted every time 100 patients have follow-up data available data_looks = seq(from = 100, to = 1000, by = 100), # Specify the number of patients randomised at each look - in this case, 200 # more patients are randomised than the number of patients that # have follow-up data available at each look randomised_at_looks = seq(from = 300, to = 1200, by = 100) ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.25 NA NA NA #> B 0.22 0.25 NA NA NA #> C 0.24 0.25 NA NA NA #> D 0.18 0.25 NA NA NA #> #> Maximum sample size: 1200 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-8-different-probability-thresholds-over-time","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 8: different probability thresholds over time","title":"Basic examples","text":"example, specify different probability thresholds superiority inferiority stopping rules different adaptive analyses. Varying probability thresholds may similarly specified stopping rules equivalence futility. Importantly, probability thresholds must specified subsequent threshold never stricter previous threshold. 
Varying thresholds may also used make stopping rules first function later analyses (e.g., long stopping threshold superiority 1 stopping threshold inferiority 0, trials stopped arms dropped due rules).","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.22, 0.24, 0.18), # Analyses conducted every time 100 patients have follow-up data available data_looks = seq(from = 100, to = 1000, by = 100), # Specify varying inferiority/superiority thresholds # When specifying varying thresholds, the number of thresholds must match # the number of analyses, and thresholds may never be stricter than the # threshold used in the previous analysis # Superiority threshold decreasing from 0.99 to 0.95 during the first five # analyses, and remains stationary at 0.95 after that superiority = c(seq(from = 0.99, to = 0.95, by = -0.01), rep(0.95, 5)), # Similarly for inferiority thresholds, but in the opposite direction inferiority = c(seq(from = 0.01, to = 0.05, by = 0.01), rep(0.05, 5)), ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> #> * Best arm: D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.20 0.25 NA NA NA #> B 0.22 0.25 NA NA NA #> C 0.24 0.25 NA NA NA #> D 0.18 0.25 NA NA NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority thresholds: #> 0.99, 0.98, 0.97, 0.96, 0.95, 0.95, 0.95, 0.95, 0.95, 0.95 #> Inferiority thresholds: #> 0.01, 0.02, 0.03, 0.04, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05 #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Basic-examples.html","id":"example-9-minimum-allocation-probabilities-rescaled-when-arms-are-dropped","dir":"Articles","previous_headings":"Trial designs with a common control arm","what":"Example 9: minimum allocation probabilities rescaled when arms are dropped","title":"Basic examples","text":"example, trial design four arms restricted RAR (minimum allocation limits) specified, additional specification minimum allocation limits rescaled proportionally arms dropped (rescaling can similarly applied fixed allocation probabilities):","code":"setup_trial_binom( arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.2, 0.2, 0.2, 0.2), min_probs = rep(0.15, 4), # Specify initial minimum allocation probabilities # Rescale allocation probability limits as arms are dropped rescale_probs = \"limits\", data_looks = seq(from = 100, to = 1000, by = 100) ) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * Common control arm: A #> #> * Best arms: A and B and C and D #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits (min/max_probs rescaled): #> arms true_ys start_probs fixed_probs min_probs max_probs #> A 0.2 0.25 NA 0.15 NA #> B 0.2 0.25 NA 0.15 NA #> C 0.2 0.25 NA 0.15 NA #> D 0.2 0.25 NA 0.15 NA #> #> Maximum sample size: 1000 #> Maximum number of data looks: 10 #> Planned data looks after: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 patients have reached follow-up #> Number of 
patients randomised at each look: 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"usage-and-workflow-overview","dir":"Articles","previous_headings":"","what":"Usage and workflow overview","title":"Overview","text":"central functionality adaptr typical workflow illustrated .","code":""},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"setup","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Setup","title":"Overview","text":"First, package loaded cluster parallel workers initiated setup_cluster() function facilitate parallel computing: Parallelisation supported many adaptr functions, cluster parallel workers can setup entire session using setup_cluster() early script example. Alternatively, parallelisation can controlled global \"mc.cores\" option (set calling options(mc.cores = )) cores argument many functions.","code":"library(adaptr) #> Loading 'adaptr' package v1.4.0. #> For instructions, type 'help(\"adaptr\")' #> or see https://inceptdk.github.io/adaptr/. setup_cluster(2)"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"specify-trial-design","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Specify trial design","title":"Overview","text":"Setup trial specification (defining trial design scenario) using general setup_trial() function, one special case variants using default priors setup_trial_binom() (binary, binomially distributed outcomes; used example) setup_trial_norm() (continuous, normally distributed outcomes). example trial specification following characteristics: binary, binomially distributed, undesirable (default) outcome Three arms designated common control Identical underlying outcome probabilities 25% arm Analyses conducted specific number patients outcome data available, patients randomised last look (lag due follow-data collection/verification) explicitly defined stopping thresholds inferiority superiority (default thresholds < 1% > 99%, respectively, apply) Equivalence stopping rule defined > 90% probability (equivalence_prob) -arm differences remaining arms < 5 %-points Response-adaptive randomisation minimum allocation probabilities 20% softening allocation ratios constant factor (soften_power) See ?setup_trial() details arguments vignette(\"Basic-examples\", \"adaptr\") basic example trial specifications thorough review general trial specification settings, vignette(\"Advanced-example\", \"adaptr\") advanced example including details specify user-written functions generating outcomes posterior draws. , trial specification setup human-readable overview printed: default, () probabilities shown 3 decimals. 
can changed explicitly print()ing specification prob_digits arguments, example:","code":"binom_trial <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.25, 0.25), min_probs = rep(0.20, 3), data_looks = seq(from = 300, to = 2000, by = 100), randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), equivalence_prob = 0.9, equivalence_diff = 0.05, soften_power = 0.5 ) print(binom_trial, prob_digits = 3) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arms: Arm A and Arm B and Arm C #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.333 NA 0.2 NA #> Arm B 0.25 0.333 NA 0.2 NA #> Arm C 0.25 0.333 NA 0.2 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 0.5 print(binom_trial, prob_digits = 2) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arms: Arm A and Arm B and Arm C #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.33 NA 0.2 NA #> Arm B 0.25 0.33 NA 0.2 NA #> Arm C 0.25 0.33 NA 0.2 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 0.5"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"calibration","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Calibration","title":"Overview","text":"example trial specification, true -arm differences, stopping rules inferiority superiority explicitly defined. intentional, stopping rules calibrated obtain desired probability stopping superiority scenario -arm differences (corresponding Bayesian type 1 error rate). Trial specifications necessarily calibrated. Instead,simulations can run directly using run_trials() function covered (run_trial() single simulation). can followed assessment performance metrics, manually changing specification (including stopping rules) performance metrics considered acceptable. example, full calibration procedure performed. 
Calibration trial specification done using calibrate_trial() function, defaults calibrate constant, symmetrical stopping rules inferiority superiority (expecting trial specification identical outcomes arm), can used calibrate parameter trial specification towards performance metric user-defined calibration function (fun) specified. perform calibration, target value, search_range, tolerance value (tol), allowed direction tolerance value (dir) must specified (alternatively, defaults can used). note, number simulations calibration step lower generally recommended (reduce time required build vignette): calibration successful (, results used, calibration settings changed calibration repeated). calibrated, constant stopping threshold superiority printed results (0.9830921) can extracted using calibrated_binom_trial$best_x. Using default calibration functionality, calibrated, constant stopping threshold inferiority symmetrical, .e., 1 - stopping threshold superiority (0.0169079). calibrated trial specification may extracted using calibrated_binom_trial$best_trial_spec , printed, also include calibrated stopping thresholds. Calibration results may saved reloaded using path argument, avoid unnecessary repeated simulations.","code":"# Calibrate the trial specification calibrated_binom_trial <- calibrate_trial( trial_spec = binom_trial, n_rep = 1000, # 1000 simulations for each step (more generally recommended) base_seed = 4131, # Base random seed (for reproducible results) target = 0.05, # Target value for calibrated metric (default value) search_range = c(0.9, 1), # Search range for superiority stopping threshold tol = 0.01, # Tolerance range dir = -1 # Tolerance range only applies below target ) # Print result (to check if calibration is successful) calibrated_binom_trial #> Trial calibration: #> * Result: calibration successful #> * Best x: 0.9830921 #> * Best y: 0.045 #> #> Central settings: #> * Target: 0.05 #> * Tolerance: 0.01 (at or below target, range: 0.04 to 0.05) #> * Search range: 0.9 to 1 #> * Gaussian process controls: #> * - resolution: 5000 #> * - kappa: 0.5 #> * - pow: 1.95 #> * - lengthscale: 1 (constant) #> * - x scaled: yes #> * Noisy: no #> * Narrowing: yes #> #> Calibration/simulation details: #> * Total evaluations: 4 (previous + grid + iterations) #> * Repetitions: 1000 #> * Calibration time: 53.9 secs #> * Base random seed: 4131 #> #> See 'help(\"calibrate_trial\")' for details."},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"summarising-results","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Summarising results","title":"Overview","text":"results simulations using calibrated trial specification conducted calibration procedure may extracted using calibrated_binom_trial$best_sims. results can summarised several functions. functions support different ‘selection strategies’ simulations ending superiority, .e., performance metrics can calculated assuming different arms used clinical practice arm ultimately superior. check_performance() function summarises performance metrics tidy data.frame, uncertainty measures (bootstrapped confidence intervals) requested. , performance metrics calculated considering ‘best’ arm (.e., one highest probability overall best) selected simulations ending superiority: Similar results list format (without uncertainty measures) can obtained using summary() method (known , e.g., regression models inR), comes print() method providing formatted results. 
simulation results printed directly, function called default arguments (arguments, e.g., selection strategies may also directly supplied print() method). Individual simulation results can extracted tidy data.frame using extract_results(): Finally, probabilities different remaining arms statuses (uncertainty) last adaptive analysis can summarised using check_remaining_arms() function (dropped arms shown empty text string):","code":"# Calculate performance metrics with uncertainty measures binom_trial_performance <- check_performance( calibrated_binom_trial$best_sims, select_strategy = \"best\", uncertainty = TRUE, # Calculate uncertainty measures n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, # 95% confidence intervals (default) boot_seed = \"base\" # Use same random seed for bootstrapping as for simulations ) # Print results print(binom_trial_performance, digits = 2) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.00 0.00 0.00 1000.00 1000.00 #> 2 size_mean 1757.20 11.26 11.12 1736.20 1779.10 #> 3 size_sd 370.74 9.31 9.34 353.87 389.70 #> 4 size_median 2000.00 0.00 0.00 2000.00 2000.00 #> 5 size_p25 1500.00 47.25 0.00 1400.00 1500.00 #> 6 size_p75 2000.00 0.00 0.00 2000.00 2000.00 #> 7 size_p0 400.00 NA NA NA NA #> 8 size_p100 2000.00 NA NA NA NA #> 9 sum_ys_mean 440.16 2.90 2.91 434.50 445.89 #> 10 sum_ys_sd 95.56 2.34 2.41 91.15 100.14 #> 11 sum_ys_median 487.00 1.36 0.74 484.00 489.00 #> 12 sum_ys_p25 366.00 9.63 8.90 353.00 387.00 #> 13 sum_ys_p75 506.00 1.09 1.48 504.00 508.00 #> 14 sum_ys_p0 88.00 NA NA NA NA #> 15 sum_ys_p100 572.00 NA NA NA NA #> 16 ratio_ys_mean 0.25 0.00 0.00 0.25 0.25 #> 17 ratio_ys_sd 0.01 0.00 0.00 0.01 0.01 #> 18 ratio_ys_median 0.25 0.00 0.00 0.25 0.25 #> 19 ratio_ys_p25 0.24 0.00 0.00 0.24 0.24 #> 20 ratio_ys_p75 0.26 0.00 0.00 0.26 0.26 #> 21 ratio_ys_p0 0.19 NA NA NA NA #> 22 ratio_ys_p100 0.30 NA NA NA NA #> 23 prob_conclusive 0.42 0.02 0.01 0.39 0.45 #> 24 prob_superior 0.04 0.01 0.01 0.03 0.06 #> 25 prob_equivalence 0.38 0.02 0.01 0.35 0.41 #> 26 prob_futility 0.00 0.00 0.00 0.00 0.00 #> 27 prob_max 0.58 0.02 0.01 0.55 0.61 #> 28 prob_select_arm_Arm A 0.35 0.01 0.01 0.32 0.38 #> 29 prob_select_arm_Arm B 0.33 0.01 0.01 0.30 0.36 #> 30 prob_select_arm_Arm C 0.32 0.01 0.01 0.29 0.35 #> 31 prob_select_none 0.00 0.00 0.00 0.00 0.00 #> 32 rmse 0.02 0.00 0.00 0.02 0.02 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.01 0.00 0.00 0.01 0.01 #> 35 mae_te NA NA NA NA NA #> 36 idp NA NA NA NA NA binom_trial_summary <- summary( calibrated_binom_trial$best_sims, select_strategy = \"best\" ) print(binom_trial_summary, digits = 2) #> Multiple simulation results: generic binomially distributed outcome trial #> * Undesirable outcome #> * Number of simulations: 1000 #> * Number of simulations summarised: 1000 (all trials) #> * No common control arm #> * Selection strategy: best remaining available #> * Treatment effect compared to: no comparison #> #> Performance metrics (using posterior estimates from final analysis [all patients]): #> * Sample sizes: mean 1757.20 (SD: 370.74) | median 2000.00 (IQR: 1500.00 to 2000.00) [range: 400.00 to 2000.00] #> * Total summarised outcomes: mean 440.16 (SD: 95.56) | median 487.00 (IQR: 366.00 to 506.00) [range: 88.00 to 572.00] #> * Total summarised outcome rates: mean 0.2503 (SD: 0.0109) | median 0.2500 (IQR: 0.2435 to 0.2573) [range: 0.1900 to 0.2950] #> * Conclusive: 42.50% #> * Superiority: 4.50% #> * Equivalence: 38.00% #> * Futility: 0.00% [not assessed] #> * Inconclusive at max 
sample size: 57.50% #> * Selection probabilities: Arm A: 35.10% | Arm B: 32.90% | Arm C: 32.00% | None: 0.00% #> * RMSE / MAE: 0.01767 / 0.01164 #> * RMSE / MAE treatment effect: not estimated / not estimated #> * Ideal design percentage: not estimable #> #> Simulation details: #> * Simulation time: 20.1 secs #> * Base random seed: 4131 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Estimation method: posterior medians with MAD-SDs binom_trial_results <- extract_results( calibrated_binom_trial$best_sims, select_strategy = \"best\" ) nrow(binom_trial_results) # Number of rows/simulations #> [1] 1000 head(binom_trial_results) # Print the first rows #> sim final_n sum_ys ratio_ys final_status superior_arm selected_arm #> 1 1 2000 478 0.2390 equivalence Arm A #> 2 2 2000 488 0.2440 max Arm A #> 3 3 2000 521 0.2605 max Arm C #> 4 4 2000 500 0.2500 max Arm C #> 5 5 2000 471 0.2355 max Arm A #> 6 6 2000 503 0.2515 max Arm B #> err sq_err err_te sq_err_te #> 1 -0.0134029565 1.796392e-04 NA NA #> 2 -0.0118977741 1.415570e-04 NA NA #> 3 0.0004940695 2.441046e-07 NA NA #> 4 -0.0127647255 1.629382e-04 NA NA #> 5 -0.0232813002 5.420189e-04 NA NA #> 6 -0.0154278469 2.380185e-04 NA NA check_remaining_arms( calibrated_binom_trial$best_sims, ci_width = 0.95 # 95% confidence intervals (default) ) #> arm_Arm A arm_Arm B arm_Arm C n prop se lo_ci #> 1 active active active 528 0.528 0.02172556 0.48541868 #> 2 equivalence equivalence 121 0.121 0.02964793 0.06289112 #> 3 equivalence equivalence 120 0.120 0.02966479 0.06185807 #> 4 equivalence equivalence 108 0.108 0.02986637 0.04946299 #> 5 equivalence equivalence equivalence 31 0.031 0.03112876 0.00000000 #> 6 superior 22 0.022 0.03127299 0.00000000 #> 7 superior 14 0.014 0.03140064 0.00000000 #> 8 superior 9 0.009 0.03148015 0.00000000 #> hi_ci #> 1 0.57058132 #> 2 0.17910888 #> 3 0.17814193 #> 4 0.16653701 #> 5 0.09201126 #> 6 0.08329394 #> 7 0.07554412 #> 8 0.07069997"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"visualising-results","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Visualising results","title":"Overview","text":"Several visualisation functions included (optional, require ggplot2 package installed). Convergence stability one performance metrics may visually assessed using plot_convergence() function: Plotting metrics possible; see plot_convergence() documentation. simulation results may also split separate, consecutive batches assessing convergence, assess stability: status probabilities overall trial according trial progress can visualised using plot_status() function: Similarly, status probabilities one specific trial arms can visualised: Finally, various metrics may summarised progress one multiple trial simulations using plot_history() function, requires non-sparse results (sparse argument must FALSE calibrate_trials(), run_trials(), run_trial(), leading additional results saved - functions work sparse results). 
illustrated .","code":"plot_convergence( calibrated_binom_trial$best_sims, metrics = c(\"size mean\", \"prob_superior\", \"prob_equivalence\"), # select_strategy can be specified, but does not affect the chosen metrics ) plot_convergence( calibrated_binom_trial$best_sims, metrics = c(\"size mean\", \"prob_superior\", \"prob_equivalence\"), n_split = 4 ) plot_status( calibrated_binom_trial$best_sims, x_value = \"total n\" # Total number of randomised patients at X-axis ) plot_status( calibrated_binom_trial$best_sims, x_value = \"total n\", arm = NA # NA for all arms or character vector for specific arms )"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"use-calibrated-stopping-thresholds-in-another-scenario","dir":"Articles","previous_headings":"Usage and workflow overview","what":"Use calibrated stopping thresholds in another scenario","title":"Overview","text":"calibrated stopping thresholds (calibrated scenario -arm differences) may used run simulations overall trial specification, according different scenario (.e., -arm differences present) assess performance metrics (including Bayesian analogue power). First, new trial specification setup using settings , except -arm differences calibrated stopping thresholds: Simulations using trial specification calibrated stopping thresholds differences present can conducted using run_trials() function. , specify non-sparse results returned (illustrate plot_history() function). , simulations may saved reloaded using path argument. calculate performance metrics : Similarly, overall trial statuses scenario differences visualised: Statuses arm scenario also visualised: can plot median interquartile ranges allocation probabilities arm time using plot_history() function (requiring non-sparse results, leading substantially larger objects files saved): Similarly, median (interquartile range) number patients allocated arm trial progresses can visualised: Plotting metrics possible; see plot_history() documentation.","code":"binom_trial_calib_diff <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.20, 0.30), # Different outcomes in the arms min_probs = rep(0.20, 3), data_looks = seq(from = 300, to = 2000, by = 100), randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), # Stopping rules for inferiority/superiority explicitly defined # using the calibration results inferiority = 1 - calibrated_binom_trial$best_x, superiority = calibrated_binom_trial$best_x, equivalence_prob = 0.9, equivalence_diff = 0.05, soften_power = 0.5 ) binom_trial_diff_sims <- run_trials( binom_trial_calib_diff, n_rep = 1000, # 1000 simulations (more generally recommended) base_seed = 1234, # Reproducible results sparse = FALSE # Return additional results for visualisation ) check_performance( binom_trial_diff_sims, select_strategy = \"best\", uncertainty = TRUE, n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, boot_seed = \"base\" ) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.000 0.000 0.000 1000.000 1000.000 #> 2 size_mean 1245.100 16.618 17.272 1215.185 1277.702 #> 3 size_sd 510.702 7.414 7.436 496.194 525.386 #> 4 size_median 1200.000 46.824 0.000 1200.000 1300.000 #> 5 size_p25 800.000 35.902 0.000 800.000 900.000 #> 6 size_p75 1700.000 46.345 0.000 1600.000 1700.000 #> 7 size_p0 400.000 NA NA NA NA #> 8 size_p100 2000.000 NA NA NA NA #> 9 sum_ys_mean 287.066 3.697 3.827 280.241 294.549 #> 10 sum_ys_sd 113.660 1.697 1.650 110.337 116.954 #> 11 
sum_ys_median 286.000 5.981 7.413 274.500 295.000 #> 12 sum_ys_p25 194.750 7.147 7.413 180.731 207.756 #> 13 sum_ys_p75 382.250 6.961 7.042 370.000 395.250 #> 14 sum_ys_p0 85.000 NA NA NA NA #> 15 sum_ys_p100 518.000 NA NA NA NA #> 16 ratio_ys_mean 0.233 0.000 0.001 0.232 0.234 #> 17 ratio_ys_sd 0.016 0.000 0.000 0.015 0.016 #> 18 ratio_ys_median 0.232 0.001 0.001 0.231 0.233 #> 19 ratio_ys_p25 0.222 0.001 0.001 0.220 0.224 #> 20 ratio_ys_p75 0.243 0.001 0.001 0.241 0.244 #> 21 ratio_ys_p0 0.196 NA NA NA NA #> 22 ratio_ys_p100 0.298 NA NA NA NA #> 23 prob_conclusive 0.882 0.011 0.010 0.862 0.902 #> 24 prob_superior 0.719 0.015 0.015 0.690 0.747 #> 25 prob_equivalence 0.163 0.012 0.012 0.139 0.185 #> 26 prob_futility 0.000 0.000 0.000 0.000 0.000 #> 27 prob_max 0.118 0.011 0.010 0.098 0.138 #> 28 prob_select_arm_Arm A 0.033 0.005 0.004 0.023 0.043 #> 29 prob_select_arm_Arm B 0.967 0.005 0.004 0.957 0.977 #> 30 prob_select_arm_Arm C 0.000 0.000 0.000 0.000 0.000 #> 31 prob_select_none 0.000 0.000 0.000 0.000 0.000 #> 32 rmse 0.020 0.001 0.001 0.019 0.022 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.011 0.000 0.000 0.010 0.012 #> 35 mae_te NA NA NA NA NA #> 36 idp 98.350 0.264 0.222 97.849 98.850 plot_status(binom_trial_diff_sims, x_value = \"total n\") plot_status(binom_trial_diff_sims, x_value = \"total n\", arm = NA) plot_history( binom_trial_diff_sims, x_value = \"total n\", y_value = \"prob\" ) plot_history( binom_trial_diff_sims, x_value = \"total n\", y_value = \"n all\" )"},{"path":"https://inceptdk.github.io/adaptr/articles/Overview.html","id":"citation","dir":"Articles","previous_headings":"","what":"Citation","title":"Overview","text":"use package, please consider citing :","code":"citation(package = \"adaptr\") #> To cite package 'adaptr' in publications use: #> #> Granholm A, Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: an R #> package for simulating and comparing adaptive clinical trials. #> Journal of Open Source Software, 7(72), 4284. URL #> https://doi.org/10.21105/joss.04284. #> #> A BibTeX entry for LaTeX users is #> #> @Article{, #> title = {{adaptr}: an R package for simulating and comparing adaptive clinical trials}, #> author = {Anders Granholm and Aksel Karl Georg Jensen and Theis Lange and Benjamin Skov Kaas-Hansen}, #> journal = {Journal of Open Source Software}, #> year = {2022}, #> volume = {7}, #> number = {72}, #> pages = {4284}, #> url = {https://doi.org/10.21105/joss.04284}, #> doi = {10.21105/joss.04284}, #> }"},{"path":"https://inceptdk.github.io/adaptr/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Anders Granholm. Author, maintainer. Benjamin Skov Kaas-Hansen. Author. Aksel Karl Georg Jensen. Contributor. Theis Lange. Contributor.","code":""},{"path":"https://inceptdk.github.io/adaptr/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Granholm , Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: R package simulating comparing adaptive clinical trials. Journal Open Source Software, 7(72), 4284. 
URL https://doi.org/10.21105/joss.04284.","code":"@Article{, title = {{adaptr}: an R package for simulating and comparing adaptive clinical trials}, author = {Anders Granholm and Aksel Karl Georg Jensen and Theis Lange and Benjamin Skov Kaas-Hansen}, journal = {Journal of Open Source Software}, year = {2022}, volume = {7}, number = {72}, pages = {4284}, url = {https://doi.org/10.21105/joss.04284}, doi = {10.21105/joss.04284}, }"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"adaptr-","dir":"","previous_headings":"","what":"Adaptive Trial Simulator","title":"Adaptive Trial Simulator","text":"adaptr package simulates adaptive (multi-arm, multi-stage) clinical trials using adaptive stopping, adaptive arm dropping /response-adaptive randomisation. package developed part INCEPT (Intensive Care Platform Trial) project, primarily supported grant Sygeforsikringen “danmark”.","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"resources","dir":"","previous_headings":"","what":"Resources","title":"Adaptive Trial Simulator","text":"Website - stand-alone website full package documentation adaptr: R package simulating comparing adaptive clinical trials - article Journal Open Source Software describing package overview methodological considerations regarding adaptive stopping, arm dropping randomisation clinical trials - article Journal Clinical Epidemiology describing key methodological considerations adaptive trials description workflow simulation-based example using package Examples: Effects duration follow-lag data collection performance adaptive clinical trials - article Pharmaceutical Statistics describing simulation study (code) using adaptr assess performance adaptive clinical trials according different follow-/data collection lags. Effects sceptical priors performance adaptive clinical trials binary outcomes - article Pharmaceutical Statistics describing simulation study (code) using adaptr assess performance adaptive clinical trials according different sceptical priors.","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Adaptive Trial Simulator","text":"easiest way install CRAN directly: Alternatively, can install development version GitHub - requires remotes-package installed. development version may contain additional features yet available CRAN version, may stable fully documented:","code":"install.packages(\"adaptr\") # install.packages(\"remotes\") remotes::install_github(\"INCEPTdk/adaptr@dev\")"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"usage-and-workflow-overview","dir":"","previous_headings":"","what":"Usage and workflow overview","title":"Adaptive Trial Simulator","text":"central functionality adaptr typical workflow illustrated .","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"setup","dir":"","previous_headings":"Usage and workflow overview","what":"Setup","title":"Adaptive Trial Simulator","text":"First, package loaded cluster parallel workers initiated setup_cluster() function facilitate parallel computing:","code":"library(adaptr) #> Loading 'adaptr' package v1.4.0. #> For instructions, type 'help(\"adaptr\")' #> or see https://inceptdk.github.io/adaptr/. 
setup_cluster(2)"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"specify-trial-design","dir":"","previous_headings":"Usage and workflow overview","what":"Specify trial design","title":"Adaptive Trial Simulator","text":"Setup trial specification (defining trial design scenario) using general setup_trial() function, one special case variants using default priors setup_trial_binom() (binary, binomially distributed outcomes; used example) setup_trial_norm() (continuous, normally distributed outcomes).","code":"# Setup a trial using a binary, binomially distributed, undesirable outcome binom_trial <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), # Scenario with identical outcomes in all arms true_ys = c(0.25, 0.25, 0.25), # Response-adaptive randomisation with minimum 20% allocation in all arms min_probs = rep(0.20, 3), # Number of patients with data available at each analysis data_looks = seq(from = 300, to = 2000, by = 100), # Number of patients randomised at each analysis (higher than the numbers # with data, except at last look, due to follow-up/data collection lag) randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), # Stopping rules for inferiority/superiority not explicitly defined # Stop for equivalence at > 90% probability of differences < 5 %-points equivalence_prob = 0.9, equivalence_diff = 0.05 ) # Print trial specification print(binom_trial, prob_digits = 3) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arms: Arm A and Arm B and Arm C #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.333 NA 0.2 NA #> Arm B 0.25 0.333 NA 0.2 NA #> Arm C 0.25 0.333 NA 0.2 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 1 (no softening)"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"calibration","dir":"","previous_headings":"Usage and workflow overview","what":"Calibration","title":"Adaptive Trial Simulator","text":"example trial specification, true -arm differences, stopping rules inferiority superiority explicitly defined. intentional, stopping rules calibrated obtain desired probability stopping superiority scenario -arm differences (corresponding Bayesian type 1 error rate). Trial specifications necessarily calibrated, simulations can run directly using run_trials() function covered (run_trial() single simulation). Calibration trial specification done using calibrate_trial() function, defaults calibrate constant, symmetrical stopping rules inferiority superiority (expecting trial specification identical outcomes arm), can used calibrate parameter trial specification towards performance metric. 
calibration successful - calibrated, constant stopping threshold superiority printed results (0.9814318) can extracted using calibrated_binom_trial$best_x. Using default calibration functionality, calibrated, constant stopping threshold inferiority symmetrical, .e., 1 - stopping threshold superiority (0.0185682). calibrated trial specification may extracted using calibrated_binom_trial$best_trial_spec , printed, also include calibrated stopping thresholds. Calibration results may saved (reloaded) using path argument, avoid unnecessary repeated simulations.","code":"# Calibrate the trial specification calibrated_binom_trial <- calibrate_trial( trial_spec = binom_trial, n_rep = 1000, # 1000 simulations for each step (more generally recommended) base_seed = 4131, # Base random seed (for reproducible results) target = 0.05, # Target value for calibrated metric (default value) search_range = c(0.9, 1), # Search range for superiority stopping threshold tol = 0.01, # Tolerance range dir = -1 # Tolerance range only applies below target ) # Print result (to check if calibration is successful) calibrated_binom_trial #> Trial calibration: #> * Result: calibration successful #> * Best x: 0.9814318 #> * Best y: 0.048 #> #> Central settings: #> * Target: 0.05 #> * Tolerance: 0.01 (at or below target, range: 0.04 to 0.05) #> * Search range: 0.9 to 1 #> * Gaussian process controls: #> * - resolution: 5000 #> * - kappa: 0.5 #> * - pow: 1.95 #> * - lengthscale: 1 (constant) #> * - x scaled: yes #> * Noisy: no #> * Narrowing: yes #> #> Calibration/simulation details: #> * Total evaluations: 7 (previous + grid + iterations) #> * Repetitions: 1000 #> * Calibration time: 3.66 mins #> * Base random seed: 4131 #> #> See 'help(\"calibrate_trial\")' for details."},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"summarising-results","dir":"","previous_headings":"Usage and workflow overview","what":"Summarising results","title":"Adaptive Trial Simulator","text":"results simulations using calibrated trial specification conducted calibration procedure may extracted using calibrated_binom_trial$best_sims. results can summarised several functions. functions support different ‘selection strategies’ simulations ending superiority, .e., performance metrics can calculated assuming different arms used clinical practice arm ultimately superior. check_performance() function summarises performance metrics tidy data.frame, uncertainty measures (bootstrapped confidence intervals) requested. , performance metrics calculated considering ‘best’ arm (.e., one highest probability overall best) selected simulations ending superiority: Similar results list format (without uncertainty measures) can obtained using summary() method, comes print() method providing formatted results: Individual simulation results may extracted tidy data.frame using extract_results(). 
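For instance, the per-simulation results can be pulled into a tidy data.frame and inspected directly (this mirrors the `extract_results()` call shown in the Overview vignette above):

```r
# Extract one row per simulation from the calibrated simulations, selecting the
# best remaining arm in simulations not ending in superiority
binom_trial_results <- extract_results(
  calibrated_binom_trial$best_sims,
  select_strategy = "best"
)
nrow(binom_trial_results)  # number of simulations
head(binom_trial_results)  # sim, final_n, sum_ys, final_status, selected_arm, errors, ...
```
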
Finally, probabilities different remaining arms statuses (uncertainty) last adaptive analysis can summarised using check_remaining_arms() function.","code":"# Calculate performance metrics with uncertainty measures binom_trial_performance <- check_performance( calibrated_binom_trial$best_sims, select_strategy = \"best\", uncertainty = TRUE, # Calculate uncertainty measures n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, # 95% confidence intervals (default) boot_seed = \"base\" # Use same random seed for bootstrapping as for simulations ) # Print results print(binom_trial_performance, digits = 2) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.00 0.00 0.00 1000.00 1000.00 #> 2 size_mean 1749.60 11.36 10.97 1727.20 1772.10 #> 3 size_sd 373.74 9.64 9.74 355.15 392.58 #> 4 size_median 2000.00 0.00 0.00 2000.00 2000.00 #> 5 size_p25 1400.00 52.43 0.00 1400.00 1500.00 #> 6 size_p75 2000.00 0.00 0.00 2000.00 2000.00 #> 7 size_p0 400.00 NA NA NA NA #> 8 size_p100 2000.00 NA NA NA NA #> 9 sum_ys_mean 438.69 2.95 2.85 432.74 444.66 #> 10 sum_ys_sd 96.20 2.42 2.37 91.28 100.79 #> 11 sum_ys_median 486.00 1.98 2.97 483.00 490.00 #> 12 sum_ys_p25 364.75 10.95 9.64 352.00 395.00 #> 13 sum_ys_p75 506.00 1.15 1.48 504.00 508.00 #> 14 sum_ys_p0 88.00 NA NA NA NA #> 15 sum_ys_p100 565.00 NA NA NA NA #> 16 ratio_ys_mean 0.25 0.00 0.00 0.25 0.25 #> 17 ratio_ys_sd 0.01 0.00 0.00 0.01 0.01 #> 18 ratio_ys_median 0.25 0.00 0.00 0.25 0.25 #> 19 ratio_ys_p25 0.24 0.00 0.00 0.24 0.24 #> 20 ratio_ys_p75 0.26 0.00 0.00 0.26 0.26 #> 21 ratio_ys_p0 0.20 NA NA NA NA #> 22 ratio_ys_p100 0.30 NA NA NA NA #> 23 prob_conclusive 0.43 0.02 0.01 0.40 0.46 #> 24 prob_superior 0.05 0.01 0.01 0.04 0.06 #> 25 prob_equivalence 0.38 0.02 0.01 0.35 0.41 #> 26 prob_futility 0.00 0.00 0.00 0.00 0.00 #> 27 prob_max 0.57 0.02 0.01 0.54 0.60 #> 28 prob_select_arm_Arm A 0.32 0.02 0.01 0.29 0.35 #> 29 prob_select_arm_Arm B 0.31 0.01 0.01 0.28 0.34 #> 30 prob_select_arm_Arm C 0.37 0.02 0.02 0.34 0.40 #> 31 prob_select_none 0.00 0.00 0.00 0.00 0.00 #> 32 rmse 0.02 0.00 0.00 0.02 0.02 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.01 0.00 0.00 0.01 0.01 #> 35 mae_te NA NA NA NA NA #> 36 idp NA NA NA NA NA binom_trial_summary <- summary( calibrated_binom_trial$best_sims, select_strategy = \"best\" ) print(binom_trial_summary) #> Multiple simulation results: generic binomially distributed outcome trial #> * Undesirable outcome #> * Number of simulations: 1000 #> * Number of simulations summarised: 1000 (all trials) #> * No common control arm #> * Selection strategy: best remaining available #> * Treatment effect compared to: no comparison #> #> Performance metrics (using posterior estimates from final analysis [all patients]): #> * Sample sizes: mean 1749.6 (SD: 373.7) | median 2000.0 (IQR: 1400.0 to 2000.0) [range: 400.0 to 2000.0] #> * Total summarised outcomes: mean 438.7 (SD: 96.2) | median 486.0 (IQR: 364.8 to 506.0) [range: 88.0 to 565.0] #> * Total summarised outcome rates: mean 0.251 (SD: 0.011) | median 0.250 (IQR: 0.244 to 0.258) [range: 0.198 to 0.295] #> * Conclusive: 42.9% #> * Superiority: 4.8% #> * Equivalence: 38.1% #> * Futility: 0.0% [not assessed] #> * Inconclusive at max sample size: 57.1% #> * Selection probabilities: Arm A: 31.8% | Arm B: 31.0% | Arm C: 37.2% | None: 0.0% #> * RMSE / MAE: 0.01730 / 0.01102 #> * RMSE / MAE treatment effect: not estimated / not estimated #> * Ideal design percentage: not estimable #> #> Simulation details: #> * Simulation time: 33.1 secs #> * Base 
random seed: 4131 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Estimation method: posterior medians with MAD-SDs"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"visualising-results","dir":"","previous_headings":"Usage and workflow overview","what":"Visualising results","title":"Adaptive Trial Simulator","text":"Several visualisation functions included (optional, require ggplot2 package installed). Convergence stability one performance metrics may visually assessed using plot_convergence() function: empirical cumulative distribution functions continuous performance metrics may also visualised: status probabilities overall trial (specific arms) according trial progress can visualised using plot_status() function: Finally, various metrics may summarised progress one multiple trial simulations using plot_history() function, requires non-sparse results (sparse argument must FALSE calibrate_trials(), run_trials(), run_trial(), leading additional results saved).","code":"plot_convergence( calibrated_binom_trial$best_sims, metrics = c(\"size mean\", \"prob_superior\", \"prob_equivalence\"), # select_strategy can be specified, but does not affect the chosen metrics ) plot_metrics_ecdf( calibrated_binom_trial$best_sims, metrics = \"size\" ) # Overall trial status probabilities plot_status( calibrated_binom_trial$best_sims, x_value = \"total n\" # Total number of randomised patients at X-axis )"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"use-calibrated-stopping-thresholds-in-another-scenario","dir":"","previous_headings":"Usage and workflow overview","what":"Use calibrated stopping thresholds in another scenario","title":"Adaptive Trial Simulator","text":"calibrated stopping thresholds (calibrated scenario -arm differences) may used run simulations overall trial specification, according different scenario (.e., -arm differences present) assess performance metrics (including Bayesian analogue power). First, new trial specification setup using settings , except -arm differences calibrated stopping thresholds: Simulations using trial specification calibrated stopping thresholds differences present can conducted using run_trials() function performance metrics calculated : , simulations may saved reloaded using path argument. 
Similarly, overall trial statuses scenario differences can visualised:","code":"binom_trial_calib_diff <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.20, 0.30), # Different outcomes in the arms min_probs = rep(0.20, 3), data_looks = seq(from = 300, to = 2000, by = 100), randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000), # Stopping rules for inferiority/superiority explicitly defined # using the calibration results inferiority = 1 - calibrated_binom_trial$best_x, superiority = calibrated_binom_trial$best_x, equivalence_prob = 0.9, equivalence_diff = 0.05 ) binom_trial_diff_sims <- run_trials( binom_trial_calib_diff, n_rep = 1000, # 1000 simulations (more generally recommended) base_seed = 1234 # Reproducible results ) check_performance( binom_trial_diff_sims, select_strategy = \"best\", uncertainty = TRUE, n_boot = 1000, # 1000 bootstrap samples (more typically recommended) ci_width = 0.95, boot_seed = \"base\" ) #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 1000.000 0.000 0.000 1000.000 1000.000 #> 2 size_mean 1242.300 16.620 16.976 1209.895 1273.025 #> 3 size_sd 531.190 7.251 7.604 516.617 544.091 #> 4 size_median 1200.000 22.220 0.000 1200.000 1300.000 #> 5 size_p25 800.000 36.095 0.000 700.000 800.000 #> 6 size_p75 1700.000 42.453 0.000 1700.000 1800.000 #> 7 size_p0 400.000 NA NA NA NA #> 8 size_p100 2000.000 NA NA NA NA #> 9 sum_ys_mean 284.999 3.695 3.726 277.724 291.991 #> 10 sum_ys_sd 117.265 1.701 1.732 113.765 120.311 #> 11 sum_ys_median 279.000 5.268 4.448 269.500 289.512 #> 12 sum_ys_p25 186.000 6.682 7.413 174.000 197.019 #> 13 sum_ys_p75 390.000 7.633 7.413 374.000 402.250 #> 14 sum_ys_p0 81.000 NA NA NA NA #> 15 sum_ys_p100 519.000 NA NA NA NA #> 16 ratio_ys_mean 0.232 0.000 0.001 0.231 0.233 #> 17 ratio_ys_sd 0.016 0.000 0.000 0.015 0.017 #> 18 ratio_ys_median 0.230 0.001 0.000 0.230 0.232 #> 19 ratio_ys_p25 0.221 0.000 0.000 0.220 0.222 #> 20 ratio_ys_p75 0.242 0.001 0.001 0.240 0.243 #> 21 ratio_ys_p0 0.195 NA NA NA NA #> 22 ratio_ys_p100 0.298 NA NA NA NA #> 23 prob_conclusive 0.877 0.011 0.010 0.857 0.898 #> 24 prob_superior 0.731 0.014 0.015 0.706 0.759 #> 25 prob_equivalence 0.146 0.011 0.011 0.125 0.167 #> 26 prob_futility 0.000 0.000 0.000 0.000 0.000 #> 27 prob_max 0.123 0.011 0.010 0.102 0.143 #> 28 prob_select_arm_Arm A 0.038 0.006 0.006 0.026 0.049 #> 29 prob_select_arm_Arm B 0.962 0.006 0.006 0.951 0.974 #> 30 prob_select_arm_Arm C 0.000 0.000 0.000 0.000 0.000 #> 31 prob_select_none 0.000 0.000 0.000 0.000 0.000 #> 32 rmse 0.020 0.001 0.001 0.019 0.022 #> 33 rmse_te NA NA NA NA NA #> 34 mae 0.011 0.000 0.000 0.010 0.012 #> 35 mae_te NA NA NA NA NA #> 36 idp 98.100 0.306 0.297 97.549 98.700 plot_status(binom_trial_diff_sims, x_value = \"total n\")"},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"issues-and-enhancements","dir":"","previous_headings":"","what":"Issues and enhancements","title":"Adaptive Trial Simulator","text":"use GitHub issue tracker bug/issue reports proposals enhancements.","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"contributing","dir":"","previous_headings":"","what":"Contributing","title":"Adaptive Trial Simulator","text":"welcome contributions directly code improve performance well new functionality. latter, please first explain motivate issue. 
Changes code base follow steps: Fork repository Make branch appropriate name fork Implement changes fork, make sure passes R CMD check (neither errors, warnings, notes) add bullet top NEWS.md short description change, GitHub handle id pull request implementing change (check NEWS.md file see formatting) Create pull request dev branch adaptr","code":""},{"path":"https://inceptdk.github.io/adaptr/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Adaptive Trial Simulator","text":"use package, please consider citing :","code":"citation(package = \"adaptr\") #> #> To cite package 'adaptr' in publications use: #> #> Granholm A, Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: an R #> package for simulating and comparing adaptive clinical trials. #> Journal of Open Source Software, 7(72), 4284. URL #> https://doi.org/10.21105/joss.04284. #> #> A BibTeX entry for LaTeX users is #> #> @Article{, #> title = {{adaptr}: an R package for simulating and comparing adaptive clinical trials}, #> author = {Anders Granholm and Aksel Karl Georg Jensen and Theis Lange and Benjamin Skov Kaas-Hansen}, #> journal = {Journal of Open Source Software}, #> year = {2022}, #> volume = {7}, #> number = {72}, #> pages = {4284}, #> url = {https://doi.org/10.21105/joss.04284}, #> doi = {10.21105/joss.04284}, #> }"},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":null,"dir":"Reference","previous_headings":"","what":"adaptr: Adaptive Trial Simulator — adaptr-package","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"Adaptive Trial Simulator adaptr package simulates adaptive (multi-arm, multi-stage) randomised clinical trials using adaptive stopping, adaptive arm dropping /response-adaptive randomisation. package developed part INCEPT (Intensive Care Platform Trial) project, funded primarily grant Sygeforsikringen \"danmark\".","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"adaptr package contains following primary functions (order typical use): setup_cluster() initiates parallel computation cluster can used run simulations post-processing parallel, increasing speed. Details parallelisation options running adaptr functions parallel described setup_cluster() documentation. setup_trial() function general function sets trial specification. simpler, special-case functions setup_trial_binom() setup_trial_norm() may used easier specification trial designs using binary, binomially distributed continuous, normally distributed outcomes, respectively, limitations flexibility. calibrate_trial() function calibrates trial specification obtain certain value performance metric (typically used calibrate Bayesian type 1 error rate scenario -arm differences), using functions . run_trial() run_trials() functions used conduct single multiple simulations, respectively, according trial specification setup described #2. extract_results(), check_performance() summary() functions used extract results multiple trial simulations, calculate performance metrics, summarise results. plot_convergence() function assesses stability performance metrics according number simulations conducted. plot_metrics_ecdf() function plots empirical cumulative distribution functions numerical performance metrics. 
check_remaining_arms() function summarises combinations remaining arms across multiple trials simulations. plot_status() plot_history() functions used plot overall trial/arm statuses multiple simulated trials history trial metrics time single/multiple simulated trials, respectively. information see documentation function Overview vignette (vignette(\"Overview\", package = \"adaptr\")) example functions work combination. examples guidance setting trial specifications, see setup_trial() documentation, Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")). using package, please consider citing using citation(package = \"adaptr\").","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"Granholm , Jensen AKG, Lange T, Kaas-Hansen BS (2022). adaptr: R package simulating comparing adaptive clinical trials. Journal Open Source Software, 7(72), 4284. doi:10.21105/joss.04284 Granholm , Kaas-Hansen BS, Lange T, Schjørring OL, Andersen LW, Perner , Jensen AKG, Møller MH (2022). overview methodological considerations regarding adaptive stopping, arm dropping randomisation clinical trials. J Clin Epidemiol. doi:10.1016/j.jclinepi.2022.11.002 Website/manual GitHub repository Examples studies using adaptr: Granholm , Lange T, Harhay MO, Jensen AKG, Perner , Møller MH, Kaas-Hansen BS (2023). Effects duration follow-lag data collection performance adaptive clinical trials. Pharm Stat. doi:10.1002/pst.2342 Granholm , Lange T, Harhay MO, Perner , Møller MH, Kaas-Hansen BS (2024). Effects sceptical priors performance adaptive clinical trials binary outcomes. Pharm Stat. doi:10.1002/pst.2387","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/adaptr-package.html","id":"author","dir":"Reference","previous_headings":"","what":"Author","title":"adaptr: Adaptive Trial Simulator — adaptr-package","text":"Maintainer: Anders Granholm andersgran@gmail.com (ORCID) Authors: Benjamin Skov Kaas-Hansen epiben@hey.com (ORCID) contributors: Aksel Karl Georg Jensen akje@sund.ku.dk (ORCID) [contributor] Theis Lange thlan@sund.ku.dk (ORCID) [contributor]","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":null,"dir":"Reference","previous_headings":"","what":"Check availability of required packages — assert_pkgs","title":"Check availability of required packages — assert_pkgs","text":"Used internally, helper function check SUGGESTED packages available. 
halt execution queried packages available provide installation instructions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check availability of required packages — assert_pkgs","text":"","code":"assert_pkgs(pkgs = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check availability of required packages — assert_pkgs","text":"pkgs, character vector name(s) package(s) check.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/assert_pkgs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check availability of required packages — assert_pkgs","text":"TRUE packages available, otherwise execution halted error.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the ideal design percentage — calculate_idp","title":"Calculate the ideal design percentage — calculate_idp","text":"Used internally check_performance(), calculates ideal design percentage described function's documentation.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the ideal design percentage — calculate_idp","text":"","code":"calculate_idp(sels, arms, true_ys, highest_is_best)"},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the ideal design percentage — calculate_idp","text":"sels character vector specifying selected arms (according selection strategies described extract_results()). arms character vector unique names trial arms. true_ys numeric vector specifying true outcomes (e.g., event probabilities, mean values, etc.) trial arms. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calculate_idp.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the ideal design percentage — calculate_idp","text":"single numeric value 0 100 corresponding ideal design percentage.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Calibrate trial specification — calibrate_trial","title":"Calibrate trial specification — calibrate_trial","text":"function calibrates trial specification using Gaussian process-based Bayesian optimisation algorithm. function calibrates input trial specification object (using repeated calls run_trials() adjusting trial specification) target value within search_range single input dimension (x) order find optimal value (y). default (expectedly common use case) calibrate trial specification adjust superiority inferiority thresholds obtain certain probability superiority; used trial specification identical underlying outcomes (-arm differences), probability estimate Bayesian analogue total type-1 error rate outcome driving adaptations, -arm differences present, corresponds estimate Bayesian analogue power. 
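A compact sketch of that two-step use (arms, outcome rates, looks, and seed reuse the illustrative values from the `calibrate_trial()` example elsewhere on this site; the between-arm difference in step 2 is hypothetical):

```r
# Step 1: calibrate in a scenario with no between-arm differences; the achieved
# probability of superiority is the Bayesian analogue of the type 1 error rate
null_spec <- setup_trial_binom(arms = c("A", "B"), true_ys = c(0.25, 0.25),
                               data_looks = 1:5 * 200)
calib <- calibrate_trial(null_spec, n_rep = 1000, base_seed = 23)
calib$best_y  # probability of superiority at the calibrated threshold

# Step 2: reuse the calibrated thresholds in a scenario with a (hypothetical)
# between-arm difference; prob_superior is then the Bayesian analogue of power
alt_spec <- setup_trial_binom(arms = c("A", "B"), true_ys = c(0.25, 0.20),
                              data_looks = 1:5 * 200,
                              superiority = calib$best_x,
                              inferiority = 1 - calib$best_x)
alt_sims <- run_trials(alt_spec, n_rep = 1000, base_seed = 23)
check_performance(alt_sims)  # see the prob_superior row
```
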
default perform calibration varying single, constant, symmetric thresholds superiority / inferiority throughout trial design, described Details, default values chosen function well case. Advanced users may use function calibrate trial specifications according metrics - see Details specify custom function used modify (recreate) trial specification object calibration process. underlying Gaussian process model control hyperparameters described Details, model partially based code Gramacy 2020 (permission; see References).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calibrate trial specification — calibrate_trial","text":"","code":"calibrate_trial( trial_spec, n_rep = 1000, cores = NULL, base_seed = NULL, fun = NULL, target = 0.05, search_range = c(0.9, 1), tol = target/10, dir = 0, init_n = 2, iter_max = 25, resolution = 5000, kappa = 0.5, pow = 1.95, lengthscale = 1, scale_x = TRUE, noisy = is.null(base_seed), narrow = !noisy & !is.null(base_seed), prev_x = NULL, prev_y = NULL, path = NULL, overwrite = FALSE, version = NULL, compress = TRUE, sparse = TRUE, progress = NULL, export = NULL, export_envir = parent.frame(), verbose = FALSE, plot = FALSE )"},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calibrate trial specification — calibrate_trial","text":"trial_spec trial_spec object, generated validated setup_trial(), setup_trial_binom() setup_trial_norm() function. n_rep single integer, number simulations run evaluation. Values < 100 permitted; values < 1000 permitted recommended . cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details. base_seed single integer NULL (default); random seed used basis simulation runs (see run_trials()) random number generation within rest calibration process; used, global random seed restored function run.Note: providing base_seed highly recommended, generally lead faster stable calibration. fun NULL (default), case trial specification calibrated using default process described Details; otherwise user-supplied function used calibration process, structure described Details. target single finite numeric value (defaults 0.05); target value y calibrate trial_spec object . search_range finite numeric vector length 2; lower upper boundaries search best x. Defaults c(0.9, 1.0). tol single finite numeric value (defaults target / 10); accepted tolerance (direction(s) specified dir) accepted; y-value within accepted tolerance target obtained, calibration stops.Note: tol specified sensible considering n_rep; e.g., probability superiority targeted n_rep == 1000, tol 0.01 correspond 10 simulated trials. 
low tol relative n_rep may lead slow calibration calibration succeed regardless number iterations.Important: even large number simulations conducted, using low tol may lead calibration succeeding may also affected factors, e.g., total number simulated patients, possible maximum differences simulated outcomes, number posterior draws (n_draws setup_trial() family functions), affects minimum differences posterior probabilities simulating trials thus can affect calibration, including using default calibration function. Increasing number posterior draws number repetitions attempted desired tolerance achieved lower numbers. dir single numeric value; specifies direction(s) tolerance range. 0 (default) tolerance range target - tol target + tol. < 0, range target - tol target, > 0, range target target + tol. init_n single integer >= 2. number initial evaluations evenly spread search_range, one evaluation boundary (thus, default value 2 minimum permitted value; calibrating according different target default, higher value may sensible). iter_max single integer > 0 (default 25). maximum number new evaluations initial grid (size specified init_n) set . calibration unsuccessful maximum number iterations, prev_x prev_y arguments (described ) may used start new calibration process re-using previous evaluations. resolution single integer (defaults 5000), size grid predictions used select next value evaluate made.Note: memory use substantially increase higher values. See also narrow argument . kappa single numeric value > 0 (default 0.5); corresponding width uncertainty bounds used find next target evaluate. See Details. pow single numerical value [1, 2] range (default 1.95), controlling smoothness Gaussian process. See Details. lengthscale single numerical value (defaults 1) numerical vector length 2; values must finite non-negative. single value provided, used lengthscale hyperparameter; numerical vector length 2 provided, second value must higher first optimal lengthscale range found using optimisation algorithm. value 0, small amount noise added lengthscales must > 0. Controls smoothness combination pow. See Details. scale_x single logical value; TRUE (default) x-values scaled [0, 1] range according minimum/maximum values provided. FALSE, model use original scale. distances original scale small, scaling may preferred. returned values always original scale. See Details. noisy single logical value; FALSE, noiseless process assumed, interpolation values performed (.e., uncertainty x-values assumed). TRUE, y-values assumed come noisy process, regression performed (.e., uncertainty evaluated x-values assumed included predictions). Specifying FALSE requires base_seed supplied, generally recommended, usually lead faster stable calibration. low n_rep used (trials calibrated metrics default), specifying TRUE may necessary even using valid base_seed. Defaults TRUE base_seed supplied FALSE . narrow single logical value. FALSE, predictions evenly spread full x-range. TRUE, prediction grid spread evenly interval consisting two x-values corresponding y-values closest target opposite directions. Can TRUE base_seed provided noisy FALSE (default value TRUE case, otherwise FALSE), function can safely assumed monotonically increasing decreasing (generally reasonable default used fun), case lead faster search smoother prediction grid relevant region without increasing memory use. prev_x, prev_y numeric vectors equal lengths, corresponding previous evaluations. 
provided, used calibration process (added initial grid setup, values grid matching values prev_x leading evaluations skipped). path single character string NULL (default); valid file path provided, calibration results either saved path (file exist overwrite TRUE, see ) previous results loaded returned (file exists, overwrite FALSE, input trial_spec central control settings identical previous run, otherwise error produced). Results saved/loaded using saveRDS() / readRDS() functions. overwrite single logical, defaults FALSE, case previous results loaded valid file path provided path object path contains input trial_spec previous calibration used central control settings (otherwise, function errors). TRUE valid file path provided path, complete calibration function run results saved using saveRDS(), regardless whether previous result saved path. version passed saveRDS() saving calibration results, defaults NULL (saveRDS()), means current default version used. Ignored calibration results saved. compress passed saveRDS() saving calibration results, defaults TRUE (saveRDS()), see saveRDS() options. Ignored calibration results saved. sparse, progress, export, export_envir passed run_trials(), see description . verbose single logical, defaults FALSE. TRUE, function print details calibration progress. plot single logical, defaults FALSE. TRUE, function print plots Gaussian process model predictions return part final object; requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calibrate trial specification — calibrate_trial","text":"list special class \"trial_calibration\", contains following elements can extracted using $ [[: success: single logical, TRUE calibration succeeded best result within tolerance range, FALSE calibration process ended allowed iterations without obtaining result within tolerance range. best_x: single numerical value, x-value (original, input scale) best y-value found, regardless success. best_y: single numerical value, best y-value obtained, regardless success. best_trial_spec: best calibrated version original trial_spec object supplied, regardless success (.e., returned trial specification object adequately calibrated success TRUE). best_sims: trial simulation results (run_trials()) leading best y-value, regardless success. new simulations conducted (e.g., best y-value one prev_y-values), NULL. evaluations: two-column data.frame containing variables x y, corresponding x-values y-values (including values supplied prev_x/prev_y). input_trial_spec: unaltered, uncalibrated, original trial_spec-object provided function. elapsed_time: total run time calibration process. control: list central settings provided function. fun: function used calibration; NULL supplied starting calibration, default function (described Details) returned used function. adaptr_version: version adaptr package used run calibration process. plots: list containing ggplot2 plot objects Gaussian process suggestion step, included plot TRUE.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Calibrate trial specification — calibrate_trial","text":"Default calibration fun NULL (default), default calibration strategy employed. 
, target y probability superiority (described check_performance() summary()), function calibrate constant stopping thresholds superiority inferiority (described setup_trial(), setup_trial_binom(), setup_trial_norm()), corresponds Bayesian analogues type 1 error rate differences arms trial specification, expect common use case, power, differences arms trial specification. stopping calibration process , default case, use input x stopping threshold superiority 1 - x stopping threshold inferiority, respectively, .e., stopping thresholds constant symmetric. underlying default function calibrated typically essentially noiseless high enough number simulations used appropriate random base_seed, generally monotonically decreasing. default values control hyperparameters set normally work well case (including init_n, kappa, pow, lengthscale, narrow, scale_x, etc.). Thus, initial grid evaluations used case, base_seed provided, noiseless process assumed narrowing search range iteration performed, uncertainty bounds used acquisition function (corresponding quantiles posterior predictive distribution) relatively narrow. Specifying calibration functions user-specified calibration function following structure: Note: changes trial specification validated; users define calibration function need ensure changes calibrated trial specifications lead invalid values; otherwise, procedure prone error simulations run. Especially, users aware changing true_ys trial specification generated using simplified setup_trial_binom() setup_trial_norm() functions requires changes multiple places object, including functions used generate random outcomes, cases (otherwise doubt) re-generating trial_spec instead modifying preferred safer leads proper validation. Note: y values corresponding certain x values known, user may directly return values without running simulations (e.g., default case x 1 require >100% <0% probabilities stopping rules, impossible, hence y value case definition 1). Gaussian process optimisation function control hyperparameters calibration function uses relatively simple Gaussian optimisation function settings work well default calibration function, can changed required, considered calibrating according targets (effects using settings may evaluated greater detail setting verbose plot TRUE). function may perform interpolation (.e., assuming noiseless, deterministic process uncertainty values already evaluated) regression (.e., assuming noisy, stochastic process), controlled noisy argument. covariance matrix (kernel) defined : exp(-||x - x'||^pow / lengthscale) ||x -x'|| corresponding matrix containing absolute Euclidean distances values x (values prediction grid), scaled [0, 1] range scale_x TRUE original scale FALSE. Scaling generally recommended (leads comparable predictable effects pow lengthscale, regardless true scale), also recommended range values smaller range. absolute distances raised power pow, must value [1, 2] range. Together lengthscale, pow controls smoothness Gaussian process model, 1 corresponding less smoothing (.e., piecewise straight lines evaluations lengthscale 1) values > 1 corresponding smoothing. raising absolute distances chosen power pow, resulting matrix divided lengthscale. default 1 (change), values < 1 leads faster decay correlations thus less smoothing (wiggly fits), values > 1 leads smoothing (less wiggly fits). single specific value supplied lengthscale used; range values provided, secondary optimisation process determines value use within range. 
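The covariance structure described above can be written out directly; a minimal sketch (an illustration of the stated formula, not adaptr's internal code), assuming x values already scaled to [0, 1]:

```r
# Covariance (kernel) between two sets of 1-dimensional x values:
# exp(-||x - x'||^pow / lengthscale)
gp_covariance <- function(x1, x2, pow = 1.95, lengthscale = 1) {
  d <- abs(outer(x1, x2, "-"))  # absolute distances ||x - x'||
  exp(-d^pow / lengthscale)     # pow nearer 2 / larger lengthscale => smoother fits
}

round(gp_covariance(c(0, 0.5, 1), c(0, 0.5, 1)), 3)
```
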
minimal noise (\"jitter\") always added diagonals matrices relevant ensure numerical stability; noisy TRUE, \"nugget\" value determined using secondary optimisation process Predictions made equally spaced grid x values size resolution; narrow TRUE, grid spread x values corresponding y values closest closes target, respectively, leading finer grid range relevance (described , used processes assumed noiseless used process can safely assumed monotonically increasing decreasing within search_range). suggest next x value evaluations, function uses acquisition function based bi-directional uncertainty bounds (posterior predictive distributions) widths controlled kappa hyperparameter. Higher kappa/wider uncertainty bounds leads increased exploration (.e., algorithm prone select values high uncertainty, relatively far existing evaluations), lower kappa/narrower uncertainty bounds leads increased exploitation (.e., algorithm prone select values less uncertainty, closer best predicted mean values). value x grid leading one boundaries smallest absolute distance target chosen (within narrowed range, narrow TRUE). See Greenhill et al, 2020 References general description acquisition functions. IMPORTANT: recommend control hyperparameters explicitly specified, even default calibration function. Although default values sensible default calibration function, may change future. , generally recommend users perform small-scale comparisons (.e., fewer simulations final calibration) calibration process different hyperparameters specific use cases beyond default (possibly guided setting verbose plot options TRUE) running substantial number calibrations simulations, exact choices may important influence speed likelihood success calibration process. responsibility user specify sensible values settings hyperparameters.","code":"# The function must take the arguments x and trial_spec # trial_spec is the original trial_spec object which should be modified # (alternatively, it may be re-specified, but the argument should still # be included, even if ignored) function(x, trial_spec) { # Calibrate trial_spec, here as in the default function trial_spec$superiority <- x trial_spec$inferiority <- 1 - x # If relevant, known y values corresponding to specific x values may be # returned without running simulations (here done as in the default # function). In that case, a code block line the one below can be included, # with changed x/y values - of note, the other return values should not be # changed if (x == 1) { return(list(sims = NULL, trial_spec = trial_spec, y = 0)) } # Run simulations - this block should be included unchanged sims <- run_trials(trial_spec, n_rep = n_rep, cores = cores, base_seed = base_seed, sparse = sparse, progress = progress, export = export, export_envir = export_envir) # Return results - only the y value here should be changed # summary() or check_performance() will often be used here list(sims = sims, trial_spec = trial_spec, y = summary(sims)$prob_superior) }"},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Calibrate trial specification — calibrate_trial","text":"Gramacy RB (2020). Chapter 5: Gaussian Process Regression. : Surrogates: Gaussian Process Modeling, Design Optimization Applied Sciences. Chapman Hall/CRC, Boca Raton, Florida, USA. Available online. Greenhill S, Rana S, Gupta S, Vellanki P, Venkatesh S (2020). Bayesian Optimization Adaptive Experimental Design: Review. 
IEEE Access, 8, 13937-13948. doi:10.1109/ACCESS.2020.2966228","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/calibrate_trial.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Calibrate trial specification — calibrate_trial","text":"","code":"if (FALSE) { # Setup a trial specification to calibrate # This trial specification has similar event rates in all arms # and as the default calibration settings are used, this corresponds to # assessing the Bayesian type 1 error rate for this design and scenario binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\"), true_ys = c(0.25, 0.25), data_looks = 1:5 * 200) # Run calibration using default settings for most parameters res <- calibrate_trial(binom_trial, n_rep = 1000, base_seed = 23) # Print calibration summary result res }"},{"path":"https://inceptdk.github.io/adaptr/reference/cat0.html","id":null,"dir":"Reference","previous_headings":"","what":"cat() with sep = ","title":"cat() with sep = ","text":"Used internally. Passes everything cat() enforces sep = \"\". Relates cat() paste0() relates paste().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/cat0.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"cat() with sep = ","text":"","code":"cat0(...)"},{"path":"https://inceptdk.github.io/adaptr/reference/cat0.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"cat() with sep = ","text":"... strings concatenated printed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":null,"dir":"Reference","previous_headings":"","what":"Check performance metrics for trial simulations — check_performance","title":"Check performance metrics for trial simulations — check_performance","text":"Calculates performance metrics trial specification based simulation results run_trials() function, bootstrapped uncertainty measures requested. Uses extract_results(), may used directly extract key trial results without summarising. function also used summary() calculate performance metrics presented function.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check performance metrics for trial simulations — check_performance","text":"","code":"check_performance( object, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, uncertainty = FALSE, n_boot = 5000, ci_width = 0.95, boot_seed = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check performance metrics for trial simulations — check_performance","text":"object trial_results object, output run_trials() function. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. 
trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. restrict single character string NULL. NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. uncertainty single logical; FALSE (default) uncertainty measures calculated, TRUE, non-parametric bootstrapping used calculate uncertainty measures. n_boot single integer (default 5000); number bootstrap samples use uncertainty = TRUE. Values < 100 allowed values < 1000 lead warning, results likely unstable cases. 
ci_width single numeric >= 0 < 1, width percentile-based bootstrapped confidence intervals. Defaults 0.95, corresponding 95% confidence intervals. boot_seed single integer, NULL (default), \"base\". value provided, value used initiate random seeds bootstrapping global random seed restored function run. \"base\" specified, base_seed specified run_trials() used. Regardless whether simulations run sequentially parallel, bootstrapped results identical boot_seed specified. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check performance metrics for trial simulations — check_performance","text":"tidy data.frame added class trial_performance (control number digits printed, see print()), columns \"metric\" (described ), \"est\" (estimate metric), following four columns uncertainty = TRUE: \"err_sd\"(bootstrapped SDs), \"err_mad\" (bootstrapped MAD-SDs, described setup_trial() stats::mad()), \"lo_ci\", \"hi_ci\", latter two corresponding lower/upper limits percentile-based bootstrapped confidence intervals. Bootstrap estimates calculated minimum (_p0) maximum values (_p100) size, sum_ys, ratio_ys, non-parametric bootstrapping minimum/maximum values sensible - bootstrap estimates values NA. following performance metrics calculated: n_summarised: number simulations summarised. size_mean, size_sd, size_median, size_p25, size_p75, size_p0, size_p100: mean, standard deviation, median well 25-, 75-, 0- (min), 100- (max) percentiles sample sizes (number patients randomised simulated trial) summarised trial simulations. sum_ys_mean, sum_ys_sd, sum_ys_median, sum_ys_p25, sum_ys_p75, sum_ys_p0, sum_ys_p100: mean, standard deviation, median well 25-, 75-, 0- (min), 100- (max) percentiles total sum_ys across arms summarised trial simulations (e.g., total number events trials binary outcome, sums continuous values patients across arms trials continuous outcome). Always uses outcomes randomised patients regardless whether patients outcome data available time trial stopping (corresponding sum_ys_all results run_trial()). ratio_ys_mean, ratio_ys_sd, ratio_ys_median, ratio_ys_p25, ratio_ys_p75, ratio_ys_p0, ratio_ys_p100: mean, standard deviation, median well 25-, 75-, 0- (min), 100- (max) percentiles final ratio_ys (sum_ys described divided total number patients randomised) across arms summarised trial simulations. prob_conclusive: proportion (0 1) conclusive trial simulations, .e., simulations stopped maximum sample size without superiority, equivalence futility decision. prob_superior, prob_equivalence, prob_futility, prob_max: proportion (0 1) trial simulations stopped superiority, equivalence, futility inconclusive maximum allowed sample size, respectively.Note: metrics may make sense summarised simulation results restricted. prob_select_*: selection probabilities arm selection, according specified selection strategy. 
Contains one element per arm, named prob_select_arm_ prob_select_none probability selecting arm. rmse, rmse_te: root mean squared errors estimates selected arm treatment effect, described extract_results(). mae, mae_te: median absolute errors estimates selected arm treatment effect, described extract_results(). idp: ideal design percentage (IDP; 0-100%), see Details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Check performance metrics for trial simulations — check_performance","text":"ideal design percentage (IDP) returned based Viele et al, 2020 doi:10.1177/1740774519877836 (also described Granholm et al, 2022 doi:10.1016/j.jclinepi.2022.11.002 , also describes performance measures) adapted work trials desirable/undesirable outcomes non-binary outcomes. Briefly, expected outcome calculated sum true outcomes arm multiplied corresponding selection probabilities (ignoring simulations selected arm). IDP calculated : desirable outcomes (highest_is_best TRUE):100 * (expected outcome - lowest true outcome) / (highest true outcome - lowest true outcome) undesirable outcomes (highest_is_best FALSE):100 - IDP calculated desirable outcomes","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/check_performance.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check performance metrics for trial simulations — check_performance","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # Check performance measures, without assuming that any arm is selected in # the inconclusive simulations, with bootstrapped uncertainty measures # (unstable in this example due to the very low number of simulations # summarised): check_performance(res, select_strategy = \"none\", uncertainty = TRUE, n_boot = 1000, boot_seed = \"base\") #> metric est err_sd err_mad lo_ci hi_ci #> 1 n_summarised 10.000 0.000 0.000 10.000 10.000 #> 2 size_mean 1840.000 162.458 237.216 1520.000 2000.000 #> 3 size_sd 505.964 297.470 250.048 0.000 772.873 #> 4 size_median 2000.000 66.847 0.000 2000.000 2000.000 #> 5 size_p25 2000.000 362.022 0.000 800.000 2000.000 #> 6 size_p75 2000.000 0.000 0.000 2000.000 2000.000 #> 7 size_p0 400.000 NA NA NA NA #> 8 size_p100 2000.000 NA NA NA NA #> 9 sum_ys_mean 369.900 33.912 36.324 293.050 419.500 #> 10 sum_ys_sd 105.352 46.692 56.287 19.191 162.759 #> 11 sum_ys_median 390.000 16.984 4.448 373.000 418.500 #> 12 sum_ys_p25 376.500 67.721 16.309 152.750 392.000 #> 13 sum_ys_p75 408.500 21.318 25.945 388.500 460.000 #> 14 sum_ys_p0 84.000 NA NA NA NA #> 15 sum_ys_p100 466.000 NA NA NA NA #> 16 ratio_ys_mean 0.202 0.005 0.005 0.193 0.212 #> 17 ratio_ys_sd 0.016 0.003 0.003 0.008 0.021 #> 18 ratio_ys_median 0.196 0.006 0.003 0.190 0.210 #> 19 ratio_ys_p25 0.194 0.005 0.003 0.181 0.200 #> 20 ratio_ys_p75 0.209 0.009 0.009 0.195 0.230 #> 21 ratio_ys_p0 0.180 NA NA NA NA #> 22 ratio_ys_p100 0.233 NA NA NA NA #> 23 prob_conclusive 0.100 0.102 0.148 0.000 0.300 #> 24 prob_superior 0.100 0.102 0.148 0.000 0.300 #> 25 prob_equivalence 0.000 0.000 0.000 0.000 0.000 #> 26 prob_futility 0.000 0.000 0.000 0.000 0.000 #> 27 prob_max 0.900 0.102 0.148 0.700 1.000 #> 28 
prob_select_arm_A 0.000 0.000 0.000 0.000 0.000 #> 29 prob_select_arm_B 0.100 0.102 0.148 0.000 0.300 #> 30 prob_select_arm_C 0.000 0.000 0.000 0.000 0.000 #> 31 prob_select_arm_D 0.000 0.000 0.000 0.000 0.000 #> 32 prob_select_none 0.900 0.102 0.148 0.700 1.000 #> 33 rmse 0.023 0.000 0.000 0.023 0.023 #> 34 rmse_te 0.182 0.000 0.000 0.182 0.182 #> 35 mae 0.023 0.000 0.000 0.023 0.023 #> 36 mae_te 0.182 0.000 0.000 0.182 0.182 #> 37 idp 100.000 0.000 0.000 100.000 100.000"},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":null,"dir":"Reference","previous_headings":"","what":"Check remaining arm combinations — check_remaining_arms","title":"Check remaining arm combinations — check_remaining_arms","text":"function summarises numbers proportions combinations remaining arms (.e., excluding arms dropped inferiority futility analysis, arms dropped equivalence earlier analyses trials common control) across multiple simulated trial results. function supplements extract_results(), check_performance(), summary() functions, especially useful designs > 2 arms, provides details functions mentioned .","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check remaining arm combinations — check_remaining_arms","text":"","code":"check_remaining_arms(object, ci_width = 0.95)"},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check remaining arm combinations — check_remaining_arms","text":"object trial_results object, output run_trials() function. ci_width single numeric >= 0 < 1, width approximate confidence intervals proportions combinations (calculated analytically). Defaults 0.95, corresponding 95% confidence intervals.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check remaining arm combinations — check_remaining_arms","text":"data.frame containing combinations remaining arms, sorted descending order , following columns: arm_*, one column per arm, named arm_. columns contain empty character string \"\" dropped arms (including arms dropped final analysis), otherwise \"superior\", \"control\", \"equivalence\" (equivalent final analysis), \"active\", described run_trial(). n integer vector, number trial simulations ending combination remaining arms specified preceding columns. prop numeric vector, proportion trial simulations ending combination remaining arms specified preceding columns. 
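As a standalone illustration of the IDP formula given under the check_performance() details above, the sketch below computes the expected outcome and the IDP from hypothetical selection probabilities and true event probabilities for an undesirable binary outcome. The object names and values are made up for illustration (they are not adaptr output), and rescaling the selection probabilities to sum to 1 is one reading of "ignoring simulations with no selected arm".

# Hypothetical true event probabilities (undesirable outcome) and
# selection probabilities among simulations where an arm was selected
true_ys <- c(A = 0.20, B = 0.18, C = 0.22, D = 0.24)
prob_select <- c(A = 0.10, B = 0.70, C = 0.15, D = 0.05)

# Expected outcome: selection-probability-weighted mean of the true outcomes
# (probabilities rescaled to sum to 1, i.e., ignoring simulations with no
# selected arm)
expected_y <- sum(true_ys * prob_select / sum(prob_select))

# IDP for a desirable outcome (highest_is_best = TRUE) ...
idp_desirable <- 100 * (expected_y - min(true_ys)) / (max(true_ys) - min(true_ys))

# ... and for an undesirable outcome (highest_is_best = FALSE), as here
idp <- 100 - idp_desirable
idp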
se,lo_ci,hi_ci: standard error prop confidence intervals width specified ci_width.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/check_remaining_arms.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check remaining arm combinations — check_remaining_arms","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 200, equivalence_prob = 0.7, equivalence_diff = 0.03, equivalence_only_first = FALSE) # Run 35 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 25, base_seed = 12345) # Check remaining arms (printed with fewer digits) print(check_remaining_arms(res), digits = 3) #> arm_A arm_B arm_C arm_D n prop se lo_ci hi_ci #> 1 superior 5 0.20 0.179 0 0.551 #> 2 superior 5 0.20 0.179 0 0.551 #> 3 control active active active 5 0.20 0.179 0 0.551 #> 4 control equivalence 1 0.04 0.196 0 0.424"},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":null,"dir":"Reference","previous_headings":"","what":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"Used internally, estimates covariance matrices used Gaussian process optimisation function. Calculates pairwise absolute distances raised power (defaults 2) using pow_abs_dist() function, divides result lengthscale hyperparameter (defaults 1, .e., changes due division), subsequently returns inverse exponentiation resulting matrix.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"","code":"cov_mat(x1, x2 = x1, g = NULL, pow = 2, lengthscale = 1)"},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"x1 numeric vector, length corresponding number rows returned matrix. x2 numeric vector, length corresponding number columns returned matrix. specified, x1 used x2. g single numerical value; jitter/nugget value added diagonal NULL (default); supplied x1 x2, avoid potentially negative values matrix diagonal due numerical instability. pow single numeric value, power distances raised . Defaults 2, corresponding pairwise, squared, Euclidean distances. 
lengthscale single numerical value; lengthscale hyperparameter matrix returned pow_abs_dist() divided inverse exponentiation done.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/cov_mat.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Estimates covariance matrices used by Gaussian process optimisation — cov_mat","text":"Covariance matrix length(x1) rows length(x2) columns used Gaussian process optimiser.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate single trial after setting seed — dispatch_trial_runs","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"Helper function dispatch running several trials lapply() parallel::parLapply(), setting seeds correctly base_seed used calling run_trials(). Used internally calls run_trials() function.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"","code":"dispatch_trial_runs(is, trial_spec, seeds, sparse, cores, cl = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"vector integers, simulation numbers/indices. trial_spec trial specification provided setup_trial(), setup_trial_binom() setup_trial_norm(). sparse single logical, described run_trial(); defaults TRUE running multiple simulations, case data necessary summarise simulations saved simulation. FALSE, detailed data simulation saved, allowing detailed printing individual trial results plotting using plot_history() (plot_status() require non-sparse results). cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details. cl NULL (default) running sequentially, otherwise parallel cluster parallel computation cores > 1.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/dispatch_trial_runs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate single trial after setting seed — dispatch_trial_runs","text":"Single trial simulation object, described run_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":null,"dir":"Reference","previous_headings":"","what":"Assert equivalent functions — equivalent_funs","title":"Assert equivalent functions — equivalent_funs","text":"Used internally. 
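The covariance structure described for cov_mat() above can be re-stated compactly. The sketch below is a plain R restatement of that description (pairwise absolute distances raised to pow, divided by the lengthscale, then inverse-exponentiated, with an optional jitter g added to the diagonal); it is not the package's internal code, only an illustration of the formula.

# Plain restatement of the covariance described above; x1/x2/g/pow/lengthscale
# mirror the documented arguments
cov_mat_sketch <- function(x1, x2 = x1, g = NULL, pow = 2, lengthscale = 1) {
  # Pairwise absolute distances raised to pow, divided by the lengthscale
  d <- abs(outer(x1, x2, "-"))^pow / lengthscale
  # Inverse exponentiation of the resulting matrix
  k <- exp(-d)
  # Optional jitter/nugget on the diagonal (only sensible when x1 and x2 are
  # the same), to avoid numerical instability
  if (!is.null(g)) diag(k) <- diag(k) + g
  k
}

cov_mat_sketch(seq(0, 1, 0.25), g = 1e-8)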
Compares definitions two functions (ignoring environments, bytecodes, etc., comparing function arguments bodies, using deparse()).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Assert equivalent functions — equivalent_funs","text":"","code":"equivalent_funs(fun1, fun2)"},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Assert equivalent functions — equivalent_funs","text":"fun1, fun2 functions compare.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/equivalent_funs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Assert equivalent functions — equivalent_funs","text":"Single logical.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract history — extract_history","title":"Extract history — extract_history","text":"Used internally. Extracts relevant parameters conducted adaptive analysis single trial.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract history — extract_history","text":"","code":"extract_history(object, metric = \"prob\")"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract history — extract_history","text":"object single trial_result run_trial(), works run argument sparse = FALSE. metric either \"prob\" (default), case allocation probabilities adaptive analysis returned; \"n\"/\"n \", case total number patients available follow-data (\"n\") allocated (\"n \") arm adaptive analysis returned; \"pct\"/\"pct \" case proportions patients allocated available follow-data (\"pct\") allocated total (\"pct \") arm total number patients returned; \"sum ys\"/\"sum ys \", case total summed available outcome data (\"sum ys\") total summed outcome data including outcomes patients randomised necessarily reached follow-yet (\"sum ys \") arm adaptive analysis returned; \"ratio ys\"/\"ratio ys \", case total summed outcomes specified \"sum ys\"/\"sum ys \" divided number patients analysis adaptive returned.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_history.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract history — extract_history","text":"tidy data.frame (one row per arm per look) containing following columns: look: consecutive numbers (integers) interim look. look_ns: total number patients (integers) outcome data available current adaptive analysis look arms trial. look_ns_all: total number patients (integers) randomised current adaptive analysis look arms trial. arm: current arm trial. value: described metric.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract simulation results — extract_results","title":"Extract simulation results — extract_results","text":"function extracts relevant information multiple simulations trial specification tidy data.frame (1 simulation per row). 
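One plausible way to compare two functions on their arguments and bodies via deparse(), as described for equivalent_funs() above; this is a sketch of the idea under those stated assumptions, not necessarily the package's exact implementation.

# Compare formals and bodies as deparsed text, ignoring environments/bytecode
funs_look_equivalent <- function(fun1, fun2) {
  identical(deparse(formals(fun1)), deparse(formals(fun2))) &&
    identical(deparse(body(fun1)), deparse(body(fun2)))
}

f <- function(x) x + 1
g <- local(function(x) x + 1)  # same definition, different environment
funs_look_equivalent(f, g)     # TRUE
identical(f, g)                # FALSE, because the environments differ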
See also check_performance() summary() functions, uses output function summarise simulation results.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract simulation results — extract_results","text":"","code":"extract_results( object, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract simulation results — extract_results","text":"object trial_results object, output run_trials() function. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. 
TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract simulation results — extract_results","text":"data.frame containing following columns: sim: simulation number (1 total number simulations). final_n: final sample size simulation. sum_ys: sum total counts arms, e.g., total number events trials binary outcome (setup_trial_binom()) sum arm totals trials continuous outcome (setup_trial_norm()). Always uses outcome data randomised patients regardless whether patients outcome data available time trial stopping (corresponding sum_ys_all results run_trial()). ratio_ys: calculated sum_ys/final_n (described ). final_status: final trial status simulation, either \"superiority\", \"equivalence\", \"futility\", \"max\", described run_trial(). superior_arm: final superior arm simulations stopped superiority. NA simulations stopped superiority. selected_arm: final selected arm (described ). correspond superior_arm simulations stopped superiority NA arm selected. See select_strategy . err: squared error estimate selected arm, calculated estimated effect - true effect selected arm. sq_err: squared error estimate selected arm, calculated err^2 selected arm, err defined . err_te: error treatment effect comparing selected arm comparator arm (specified te_comp). Calculated :(estimated effect selected arm - estimated effect comparator arm) - (true effect selected arm - true effect comparator arm) NA simulations without selected arm, comparator specified (see te_comp ), selected arm comparator arm. 
sq_err_te: squared error treatment effect comparing selected arm comparator arm (specified te_comp), calculated err_te^2, err_te defined .","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Extract simulation results — extract_results","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # Extract results and Select the control arm if available # in simulations not ending with superiority extract_results(res, select_strategy = \"control\") #> sim final_n sum_ys ratio_ys final_status superior_arm selected_arm #> 1 1 2000 387 0.1935 max A #> 2 2 2000 391 0.1955 max A #> 3 3 2000 359 0.1795 max #> 4 4 2000 389 0.1945 max A #> 5 5 2000 373 0.1865 max A #> 6 6 400 84 0.2100 superiority B B #> 7 7 2000 395 0.1975 max A #> 8 8 2000 442 0.2210 max A #> 9 9 2000 413 0.2065 max A #> 10 10 2000 466 0.2330 max A #> err sq_err err_te sq_err_te #> 1 0.027072360 7.329127e-04 NA NA #> 2 0.027225919 7.412507e-04 NA NA #> 3 NA NA NA NA #> 4 0.028619492 8.190753e-04 NA NA #> 5 -0.014477338 2.095933e-04 NA NA #> 6 -0.022699865 5.152839e-04 -0.1820624 0.0331467 #> 7 0.009098866 8.278937e-05 NA NA #> 8 0.010663973 1.137203e-04 NA NA #> 9 0.015544164 2.416210e-04 NA NA #> 10 0.019152691 3.668256e-04 NA NA"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"Used internally extract_results(). Extracts results batch simulations simulation object multiple simulation results returned run_trials(), used facilitate parallelisation.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"","code":"extract_results_batch( trial_results, control = control, select_strategy = select_strategy, select_last_arm = select_last_arm, select_preferences = select_preferences, te_comp = te_comp, which_ests = which_ests, te_comp_index = te_comp_index, te_comp_true_y = te_comp_true_y )"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"trial_results list trial results summarise, current batch. control single character string, common control arm trial specification (NULL none). select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. 
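As a complement to the extract_results() example above, the call below (an illustrative sketch reusing the res object from that example) additionally requests treatment-effect errors against a named comparator arm and uses final estimates; the output is omitted here.

# Select the best remaining arm in simulations not ending in superiority,
# compare treatment effects against arm "A", and use final estimates
extract_results(res, select_strategy = "best", te_comp = "A",
                final_ests = TRUE)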
following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). which_ests single character string, combination raw_ests final_ests arguments extract_results(). te_comp_index single integer, index treatment effect comparator arm (NULL none). te_comp_true_y single numeric value, true y value treatment effect comparator arm (NULL none).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_results_batch.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract results from a batch of trials from an object with multiple trials — extract_results_batch","text":"data.frame containing columns returned extract_results() described function (sim start 1, changed relevant extract_results()).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":null,"dir":"Reference","previous_headings":"","what":"Extract statuses — extract_statuses","title":"Extract statuses — extract_statuses","text":"Used internally. Extracts overall trial statuses statuses single arm multiple trial simulations. Works sparse results.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Extract statuses — extract_statuses","text":"","code":"extract_statuses(object, x_value, arm = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Extract statuses — extract_statuses","text":"object trial_results object run_trials(). 
x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis. arm character vector containing one unique, valid arm names, NA, NULL (default). NULL, overall trial statuses plotted, otherwise specified arms arms (NA specified) plotted.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/extract_statuses.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Extract statuses — extract_statuses","text":"tidy data.frame (one row possible status per look) containing following columns: x: look numbers total number patients look, specified x_value. status: possible status (\"Recruiting\", \"Inferiority\" (relevant individual arms), \"Futility\", \"Equivalence\", \"Superiority\", relevant). p: proportion (0-1) patients status value x. value: described metric.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":null,"dir":"Reference","previous_headings":"","what":"Find beta distribution parameters from thresholds — find_beta_params","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"Helper function find beta distribution parameters corresponding fewest possible patients events/non-events specified event proportion. Used Advanced example vignette (vignette(\"Advanced-example\", \"adaptr\")) derive beta prior distributions use beta-binomial conjugate models, based belief true event probability lies within specified percentile-based interval (defaults 95%). May similarly used users derive beta priors.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"","code":"find_beta_params( theta = NULL, boundary_target = NULL, boundary = \"lower\", interval_width = 0.95, n_dec = 0, max_n = 10000 )"},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"theta single numeric > 0 < 1, expected true event probability. boundary_target single numeric > 0 < 1, target lower upper boundary interval. boundary single character string, either \"lower\" (default) \"upper\", used select boundary use finding appropriate parameters beta distribution. interval_width width credible interval whose lower/upper boundary used (see boundary_target); must > 0 < 1; defaults 0.95. n_dec single non-negative integer; returned parameters rounded number decimals. Defaults 0, case parameters correspond whole number patients. max_n single integer > 0 (default 10000), maximum total sum parameters, corresponding maximum total number patients considered function finding optimal parameter values. 
Corresponds maximum number patients contributing information beta prior; default number patients unlikely used beta prior.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/find_beta_params.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Find beta distribution parameters from thresholds — find_beta_params","text":"single-row data.frame five columns: two shape parameters beta distribution (alpha, beta), rounded according n_dec, actual lower upper boundaries interval median (appropriate names, e.g. p2.5, p50, p97.5 95% interval), using rounded values.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":null,"dir":"Reference","previous_headings":"","what":"Format digits before printing — fmt_dig","title":"Format digits before printing — fmt_dig","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Format digits before printing — fmt_dig","text":"","code":"fmt_dig(x, dig)"},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Format digits before printing — fmt_dig","text":"x numeric, numeric value(s) format. dig single integer, number digits.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_dig.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Format digits before printing — fmt_dig","text":"Formatted character string.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":null,"dir":"Reference","previous_headings":"","what":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"","code":"fmt_pct(e, n, dec = 1)"},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"e integer, numerator (e.g., number events). n integer, denominator (e.g., total number patients). dec integer, number decimals percentage.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/fmt_pct.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create formatted label with absolute and relative frequencies (percentages) — fmt_pct","text":"Formatted character string.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate draws from posterior beta-binomial distributions — get_draws_binom","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"Used internally. 
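A brief illustration of find_beta_params() as described above: the inputs below (an anticipated event probability of 0.25 with a lower 95% interval boundary of approximately 0.15) are arbitrary example values chosen for this sketch, not recommendations.

# Find the beta prior with the fewest 'prior patients' whose 95% interval
# has its lower boundary at approximately 0.15 when the expected event
# probability is 0.25
find_beta_params(theta = 0.25, boundary_target = 0.15, boundary = "lower",
                 interval_width = 0.95)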
function generates draws posterior distributions using separate beta-binomial models (binomial outcome, conjugate beta prior) arm, flat (beta(1, 1)) priors.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"","code":"get_draws_binom(arms, allocs, ys, control, n_draws)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"arms character vector, currently active arms specified setup_trial() / setup_trial_binom() / setup_trial_norm(). allocs character vector, allocations patients (including allocations currently inactive arms). ys numeric vector, outcomes patients order alloc (including outcomes patients currently inactive arms). control unused argument built-functions setup_trial_binom() setup_trial_norm, required argument supplied run_trial() function, may used user-defined functions used generate posterior draws. n_draws single integer, number posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_binom.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate draws from posterior beta-binomial distributions — get_draws_binom","text":"matrix (numeric values) length(arms) columns n_draws rows, arms column names.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_generic.html","id":null,"dir":"Reference","previous_headings":"","what":"Generic documentation for get_draws_* functions — get_draws_generic","title":"Generic documentation for get_draws_* functions — get_draws_generic","text":"Used internally. See setup_trial() function documentation additional details specify functions generate posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_generic.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generic documentation for get_draws_* functions — get_draws_generic","text":"arms character vector, currently active arms specified setup_trial() / setup_trial_binom() / setup_trial_norm(). allocs character vector, allocations patients (including allocations currently inactive arms). ys numeric vector, outcomes patients order alloc (including outcomes patients currently inactive arms). control unused argument built-functions setup_trial_binom() setup_trial_norm, required argument supplied run_trial() function, may used user-defined functions used generate posterior draws. n_draws single integer, number posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_generic.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generic documentation for get_draws_* functions — get_draws_generic","text":"matrix (numeric values) length(arms) columns n_draws rows, arms column names.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate draws from posterior normal distributions — get_draws_norm","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"Used internally. function generates draws posterior, normal distributions continuous outcomes. 
Technically, posteriors use priors (simulation speed), corresponding use improper flat priors. posteriors correspond (give similar results) using normal-normal models (normally distributed outcome, conjugate normal prior) arm, assuming non-informative, flat prior used. Thus, posteriors directly correspond normal distributions groups' mean mean groups' standard error standard deviation. necessary always return valid draws, cases < 2 patients randomised arm, posterior draws come extremely wide normal distribution mean corresponding mean included patients outcome data standard deviation corresponding difference highest lowest recorded outcomes patients available outcome data multiplied 1000.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"","code":"get_draws_norm(arms, allocs, ys, control, n_draws)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"arms character vector, currently active arms specified setup_trial() / setup_trial_binom() / setup_trial_norm(). allocs character vector, allocations patients (including allocations currently inactive arms). ys numeric vector, outcomes patients order alloc (including outcomes patients currently inactive arms). control unused argument built-functions setup_trial_binom() setup_trial_norm, required argument supplied run_trial() function, may used user-defined functions used generate posterior draws. n_draws single integer, number posterior draws.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_draws_norm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate draws from posterior normal distributions — get_draws_norm","text":"matrix (numeric values) length(arms) columns n_draws rows, arms column names.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate binary outcomes from binomial distributions — get_ys_binom","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"Used internally. Function factory used generate function generates binary outcomes binomial distributions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"","code":"get_ys_binom(arms, event_probs)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"arms character vector arms specified setup_trial_binom(). 
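The beta-binomial model described for get_draws_binom() above can be illustrated standalone: with flat beta(1, 1) priors, the posterior for each arm is beta(1 + events, 1 + non-events). The sketch below is not the package's internal function, just a minimal restatement of that conjugate update with made-up data; the return shape mirrors the documented one (n_draws rows, one column per arm).

# Minimal sketch of per-arm beta-binomial posterior draws with beta(1, 1) priors
draw_beta_binom <- function(arms, allocs, ys, n_draws = 5000) {
  draws <- sapply(arms, function(a) {
    y_arm <- ys[allocs == a]  # binary outcomes (0/1) in this arm
    rbeta(n_draws, 1 + sum(y_arm), 1 + sum(1 - y_arm))
  })
  colnames(draws) <- arms     # n_draws rows, one column per arm
  draws
}

set.seed(1)
allocs <- sample(c("A", "B"), 200, replace = TRUE)
ys <- rbinom(200, 1, ifelse(allocs == "A", 0.20, 0.25))
head(draw_beta_binom(c("A", "B"), allocs, ys))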
event_probs numeric vector true event probabilities arms specified setup_trial_binom().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_binom.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate binary outcomes from binomial distributions — get_ys_binom","text":"function takes argument allocs (character vector allocations) returns numeric vector similar length corresponding, randomly generated outcomes (0 1, binomial distribution).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate normally distributed continuous outcomes — get_ys_norm","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"Used internally. Function factory used generate function generates outcomes normal distributions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"","code":"get_ys_norm(arms, means, sds)"},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"arms character vector, arms specified setup_trial_norm(). means numeric vector, true means arms specified setup_trial_norm(). sds numeric vector, true standard deviations (sds) arms specified setup_trial_norm().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/get_ys_norm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate normally distributed continuous outcomes — get_ys_norm","text":"function takes argument allocs (character vector allocations) returns numeric vector length corresponding, randomly generated outcomes (normal distributions).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":null,"dir":"Reference","previous_headings":"","what":"Gaussian process-based optimisation — gp_opt","title":"Gaussian process-based optimisation — gp_opt","text":"Used internally. Simple Gaussian process-based Bayesian optimisation function, used find next value evaluate (x) calibrate_trial() function. Uses single input dimension, may rescaled [0, 1] range function, covariance structure based absolute distances values, raised power (pow) subsequently divided lengthscale inverse exponentiation resulting matrix used. pow lengthscale hyperparameters consequently control smoothness controlling rate decay correlations distance. optimisation algorithm uses bi-directional uncertainty bounds acquisition function suggests next target evaluate, wider uncertainty bounds (higher kappa) leading increased 'exploration' (.e., function prone suggest new target values uncertainty high often best evaluation far) narrower uncertainty bounds leading increased 'exploitation' (.e., function prone suggest new target values relatively close mean predictions model). dir argument controls whether suggested value (based uncertainty bounds) value closest target either direction (dir = 0), target (dir > 0), target (dir < 0), , preferred. 
function evaluated noise-free monotonically increasing decreasing, optimisation function can narrow range predictions based input evaluations (narrow = TRUE), leading finer grid potential new targets suggest compared predictions spaced full range. new value evaluate function suggested already evaluated, random noise added ensure evaluation new value (narrow FALSE, noise based random draw normal distribution current suggested value mean standard deviation x values SD, truncated range x-values; narrow TRUE, new value drawn uniform distribution within current narrowed range suggested. strategies, process repeated suggested value 'new'). Gaussian process model used partially based code Gramacy 2020 (permission), see References.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Gaussian process-based optimisation — gp_opt","text":"","code":"gp_opt( x, y, target, dir = 0, resolution = 5000, kappa = 1.96, pow = 1.95, lengthscale = 1, scale_x = TRUE, noisy = FALSE, narrow = FALSE )"},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Gaussian process-based optimisation — gp_opt","text":"x numeric vector, previous values function calibrated evaluated. y numeric vector, corresponding results previous evaluations x values (must length x). target single numeric value, desired target value calibration process. dir single numeric value (default 0), used selecting next value evaluate . See which_nearest() description. resolution single integer (default 5000), size grid predictions used select next value evaluate made.Note: memory use time substantially increase higher values. kappa single numeric value > 0 (default 1.96), used width uncertainty bounds (based Gaussian process posterior predictive distribution), used select next value evaluate . pow single numerical value, passed cov_mat() controls smoothness Gaussian process. 1 (smoothness, piecewise straight lines subsequent x/y-coordinate lengthscale described 1) 2; defaults 1.95, leads slightly faster decay correlations x values internally scaled [0, 1]-range compared 2. lengthscale single numerical value (default 1) numerical vector length 2; values must finite non-negative. single value provided, used lengthscale hyperparameter passed directly cov_mat(). numerical vector length 2 provided, second value must higher first optimal lengthscale range found using optimisation algorithm. value 0, minimum amount noise added lengthscales must > 0. Controls smoothness/decay combination pow. scale_x single logical value; TRUE (default) x-values scaled [0, 1] range according minimum/maximum values provided. FALSE, model use original scale. distances original scale small, scaling may preferred. returned values always original scale. noisy single logical value. FALSE (default), noiseless process assumed, interpolation values performed (.e., uncertainty evaluated x-values); TRUE, y-values assumed come noisy process, regression performed (.e., uncertainty evaluated x-values included predictions, amount estimated using optimisation algorithm). narrow single logical value. FALSE (default), predictions evenly spread full x-range. TRUE, prediction grid spread evenly interval consisting two x-values corresponding y-values closest target opposite directions. 
setting used noisy FALSE function can safely assumed monotonically increasing decreasing, case lead faster search smoother prediction grid relevant region without increasing memory use.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Gaussian process-based optimisation — gp_opt","text":"List containing two elements, next_x, single numerical value, suggested next x value evaluate function, predictions, data.frame resolution rows four columns: x, x grid values predictions made; y_hat, predicted means, lub uub, lower upper uncertainty bounds predictions according kappa.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/gp_opt.html","id":"references","dir":"Reference","previous_headings":"","what":"References","title":"Gaussian process-based optimisation — gp_opt","text":"Gramacy RB (2020). Chapter 5: Gaussian Process Regression. : Surrogates: Gaussian Process Modeling, Design Optimization Applied Sciences. Chapman Hall/CRC, Boca Raton, Florida, USA. Available online. Greenhill S, Rana S, Gupta S, Vellanki P, Venkatesh S (2020). Bayesian Optimization Adaptive Experimental Design: Review. IEEE Access, 8, 13937-13948. doi:10.1109/ACCESS.2020.2966228","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":null,"dir":"Reference","previous_headings":"","what":"Make x-axis scale for history/status plots — make_x_scale","title":"Make x-axis scale for history/status plots — make_x_scale","text":"Used internally. Prepares x-axis scale history/status plots. Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Make x-axis scale for history/status plots — make_x_scale","text":"","code":"make_x_scale(x_value)"},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Make x-axis scale for history/status plots — make_x_scale","text":"x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_x_scale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Make x-axis scale for history/status plots — make_x_scale","text":"appropriate scale ggplot2 plot x-axis according value specified x_value.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":null,"dir":"Reference","previous_headings":"","what":"Make y-axis scale for history/status plots — make_y_scale","title":"Make y-axis scale for history/status plots — make_y_scale","text":"Used internally. Prepares y-axis scale history/status plots. 
Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Make y-axis scale for history/status plots — make_y_scale","text":"","code":"make_y_scale(y_value)"},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Make y-axis scale for history/status plots — make_y_scale","text":"y_value single character string, determining values plotted y-axis. following options available: allocation probabilities (\"prob\", default), total number patients outcome data available (\"n\") randomised (\"n \") arm, percentage patients outcome data available (\"pct\") randomised (\"pct \") arm current total, sum available (\"sum ys\") outcome data outcome data randomised patients including outcome data available time current adaptive analysis (\"sum ys \"), ratio outcomes defined \"sum ys\"/\"sum ys \" divided corresponding number patients arm.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/make_y_scale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Make y-axis scale for history/status plots — make_y_scale","text":"appropriate scale ggplot2 plot y-axis according value specified y_value.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot convergence of performance metrics — plot_convergence","title":"Plot convergence of performance metrics — plot_convergence","text":"Plots performance metrics according number simulations conducted multiple simulated trials. simulated trial results may split number batches illustrate stability performance metrics across different simulations. Calculations done according specified selection restriction strategies described extract_results() check_performance(). Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot convergence of performance metrics — plot_convergence","text":"","code":"plot_convergence( object, metrics = \"size mean\", resolution = 100, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, n_split = 1, nrow = NULL, ncol = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot convergence of performance metrics — plot_convergence","text":"object trial_results object, output run_trials() function. metrics performance metrics plot, described check_performance(). Multiple metrics may plotted time. Valid metrics include: size_mean, size_sd, size_median, size_p25, size_p75, size_p0, size_p100, sum_ys_mean, sum_ys_sd, sum_ys_median, sum_ys_p25, sum_ys_p75, sum_ys_p0, sum_ys_p100, ratio_ys_mean, ratio_ys_sd, ratio_ys_median, ratio_ys_p25, ratio_ys_p75, ratio_ys_p0, ratio_ys_p100, prob_conclusive, prob_superior, prob_equivalence, prob_futility, prob_max, prob_select_* (* either \"arm_ arm names none), rmse, rmse_te, mae, mae_te, idp. may specified , case sensitive, either spaces underlines. Defaults \"size mean\". 
resolution single positive integer, number points calculated plotted, defaults 100 must >= 10. Higher numbers lead smoother plots, increases computation time. value specified higher number simulations (simulations per split), maximum possible value used instead. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. 
See setup_trial() details estimates calculated. restrict single character string NULL. NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. n_split single positive integer, number consecutive batches simulation results split , plotted separately. Default 1 (splitting); maximum value number simulations summarised (restrictions) divided 10. nrow, ncol number rows columns plotting multiple metrics plot (using faceting ggplot2). Defaults NULL, case determined automatically. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot convergence of performance metrics — plot_convergence","text":"ggplot2 plot object.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_convergence.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot convergence of performance metrics — plot_convergence","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run multiple simulation with a fixed random base seed res_mult <- run_trials(binom_trial, n_rep = 25, base_seed = 678) # NOTE: the number of simulations in this example is smaller than # recommended - the plots reflect that, and show that performance metrics # are not stable and have likely not converged yet # Convergence plot of mean sample sizes plot_convergence(res_mult, metrics = \"size mean\") } if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Convergence plot of mean sample sizes and ideal design percentages, # with simulations split in 2 batches plot_convergence(res_mult, metrics = c(\"size mean\", \"idp\"), n_split = 2) }"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot trial metric history — plot_history","title":"Plot trial metric history — plot_history","text":"Plots history relevant metrics progress single multiple trial simulations. Simulated trials contribute time stopped, .e., trials stopped earlier others, contribute summary statistics later adaptive looks. Data individual arms trial contribute complete trial stopped. 
history plots require non-sparse results (sparse set FALSE; see run_trial() run_trials()) ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot trial metric history — plot_history","text":"","code":"plot_history(object, x_value = \"look\", y_value = \"prob\", line = NULL, ...) # S3 method for trial_result plot_history(object, x_value = \"look\", y_value = \"prob\", line = NULL, ...) # S3 method for trial_results plot_history( object, x_value = \"look\", y_value = \"prob\", line = NULL, ribbon = list(width = 0.5, alpha = 0.2), cores = NULL, ... )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot trial metric history — plot_history","text":"object trial_results object, output run_trials() function. x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis. y_value single character string, determining values plotted y-axis. following options available: allocation probabilities (\"prob\", default), total number patients outcome data available (\"n\") randomised (\"n \") arm, percentage patients outcome data available (\"pct\") randomised (\"pct \") arm current total, sum available (\"sum ys\") outcome data outcome data randomised patients including outcome data available time current adaptive analysis (\"sum ys \"), ratio outcomes defined \"sum ys\"/\"sum ys \" divided corresponding number patients arm. line list styling lines per ggplot2 conventions (e.g., linetype, linewidth). ... additional arguments, used. ribbon list, line appropriate trial_results objects (.e., multiple simulations run). Also allows specify width interval: must 0 1, 0.5 (default) showing inter-quartile ranges. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. 
See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot trial metric history — plot_history","text":"ggplot2 plot object.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_history.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot trial metric history — plot_history","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run a single simulation with a fixed random seed res <- run_trial(binom_trial, seed = 12345) # Plot total allocations to each arm according to overall total allocations plot_history(res, x_value = \"total n\", y_value = \"n\") } if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Run multiple simulation with a fixed random base seed # Notice that sparse = FALSE is required res_mult <- run_trials(binom_trial, n_rep = 15, base_seed = 12345, sparse = FALSE) # Plot allocation probabilities at each look plot_history(res_mult, x_value = \"look\", y_value = \"prob\") # Other y_value options are available but not shown in these examples }"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"Plots empirical cumulative distribution functions (ECDFs) numerical performance metrics across multiple simulations \"trial_results\" object returned run_trials(). Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"","code":"plot_metrics_ecdf( object, metrics = c(\"size\", \"sum_ys\", \"ratio_ys\"), select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, nrow = NULL, ncol = NULL, cores = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"object trial_results object, output run_trials() function. metrics performance metrics plot, described extract_results(). Multiple metrics may plotted time. Valid metrics include: size, sum_ys, ratio_ys_mean, sq_err, sq_err_te, err, err_te, abs_err, abs_err_te, (described extract_results(), addition abs_err abs_err_te, absolute errors, .e., abs(err) abs(err_te)). may specified using either spaces underlines (case sensitive). Defaults plotting size, sum_ys, ratio_ys_mean. select_strategy single character string. 
trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. restrict single character string NULL. 
NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. nrow, ncol number rows columns plotting multiple metrics plot (using faceting ggplot2). Defaults NULL, case determined automatically. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"ggplot2 plot object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"Note arguments related arm selection error calculation relevant errors visualised.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_metrics_ecdf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot empirical cumulative distribution functions of performance metrics — plot_metrics_ecdf","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run multiple simulation with a fixed random base seed res_mult <- run_trials(binom_trial, n_rep = 25, base_seed = 678) # NOTE: the number of simulations in this example is smaller than # recommended - the plots reflect that, and would likely be smoother if # a larger number of trials had been simulated # Plot ECDFs of continuous performance metrics plot_metrics_ecdf(res_mult) }"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":null,"dir":"Reference","previous_headings":"","what":"Plot statuses — plot_status","title":"Plot statuses — plot_status","text":"Plots statuses time multiple simulated trials (overall one specific arms). 
Requires ggplot2 package installed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Plot statuses — plot_status","text":"","code":"plot_status( object, x_value = \"look\", arm = NULL, area = list(alpha = 0.5), nrow = NULL, ncol = NULL ) # S3 method for trial_results plot_status( object, x_value = \"look\", arm = NULL, area = list(alpha = 0.5), nrow = NULL, ncol = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Plot statuses — plot_status","text":"object trial_results object, output run_trials() function. x_value single character string, determining whether number adaptive analysis looks (\"look\", default), total cumulated number patients randomised (\"total n\") total cumulated number patients outcome data available adaptive analysis (\"followed n\") plotted x-axis. arm character vector containing one unique, valid arm names, NA, NULL (default). NULL, overall trial statuses plotted, otherwise specified arms arms (NA specified) plotted. area list styling settings area per ggplot2 conventions (e.g., alpha, linewidth). default (list(alpha = 0.5)) sets transparency 50% overlain shaded areas visible. nrow, ncol number rows columns plotting statuses multiple arms plot (using faceting ggplot2). Defaults NULL, case determined automatically relevant.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Plot statuses — plot_status","text":"ggplot2 plot object.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/plot_status.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Plot statuses — plot_status","text":"","code":"#### Only run examples if ggplot2 is installed #### if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run multiple simulation with a fixed random base seed res_mult <- run_trials(binom_trial, n_rep = 25, base_seed = 12345) # Plot trial statuses at each look according to total allocations plot_status(res_mult, x_value = \"total n\") } if (requireNamespace(\"ggplot2\", quietly = TRUE)){ # Plot trial statuses for all arms plot_status(res_mult, arm = NA) }"},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"Used internally, calculates absolute distances values matrix possibly unequal dimensions, raises power.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"","code":"pow_abs_dist(x1, x2 = x1, pow = 2)"},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"x1 numeric vector, length 
corresponding number rows returned matrix. x2 numeric vector, length corresponding number columns returned matrix. specified, x1 used x2. pow single numeric value, power distances raised . Defaults 2, corresponding pairwise, squared, Euclidean distances.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/pow_abs_dist.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculates matrix of absolute distances raised to a power — pow_abs_dist","text":"Matrix length(x1) rows length(x2) columns including calculated absolute pairwise distances raised pow.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":null,"dir":"Reference","previous_headings":"","what":"Print methods for adaptive trial objects — print","title":"Print methods for adaptive trial objects — print","text":"Prints contents first input x human-friendly way, see Details information.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Print methods for adaptive trial objects — print","text":"","code":"# S3 method for trial_spec print(x, prob_digits = 3, ...) # S3 method for trial_result print(x, prob_digits = 3, ...) # S3 method for trial_performance print(x, digits = 3, ...) # S3 method for trial_results print( x, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, digits = 1, cores = NULL, ... ) # S3 method for trial_results_summary print(x, digits = 1, ...) # S3 method for trial_calibration print(x, ...)"},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Print methods for adaptive trial objects — print","text":"x object print, see Details. prob_digits single integer (default 3), number digits used printing probabilities, allocation probabilities softening powers (2 extra digits added stopping rule probability thresholds trial specifications outcome rates summarised results multiple simulations). ... additional arguments, used. digits single integer, number digits used printing numeric results. Default 3 outputs check_performance() 1 outputs run_trials() accompanying summary() method. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. 
\"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. restrict single character string NULL. NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. cores NULL single integer. NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. 
See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Print methods for adaptive trial objects — print","text":"Invisibly returns x.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Print methods for adaptive trial objects — print","text":"behaviour depends class x: trial_spec: prints trial specification setup setup_trial(), setup_trial_binom() setup_trial_norm(). trial_result: prints results single trial simulated run_trial(). details saved trial_result object thus printed sparse argument run_trial() run_trials() set FALSE; TRUE, fewer details printed, omitted details available printing trial_spec object created setup_trial(), setup_trial_binom() setup_trial_norm(). trial_results: prints results multiple simulations generated using run_trials(). documentation multiple trials summarised printing can found summary() function documentation. trial_results_summary: print method summary multiple simulations trial specification, generated using summary() function object generated run_trials().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/print.html","id":"methods-by-class-","dir":"Reference","previous_headings":"","what":"Methods (by class)","title":"Print methods for adaptive trial objects — print","text":"print(trial_spec): Trial specification print(trial_result): Single trial result print(trial_performance): Trial performance metrics print(trial_results): Multiple trial results print(trial_results_summary): Summary multiple trial results print(trial_calibration): Trial calibration","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the probability that all arms are practically equivalent — prob_all_equi","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"Used internally. function takes matrix calculated get_draws_binom(), get_draws_norm() corresponding custom function (specified using fun_draws argument setup_trial(); see get_draws_generic()), equivalence difference, calculates probability arms equivalent (absolute differences highest lowest value set posterior draws less difference considered practically equivalent).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"","code":"prob_all_equi(m, equivalence_diff = NULL)"},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"m matrix one column per trial arm (named arms) one row draw posterior distributions. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. 
designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_all_equi.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the probability that all arms are practically equivalent — prob_all_equi","text":"single numeric value corresponding probability arms practically equivalent.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate the probabilities of each arm being the best — prob_best","title":"Calculate the probabilities of each arm being the best — prob_best","text":"Used internally. function takes matrix calculated get_draws_binom(), get_draws_norm() corresponding custom function (specified using fun_draws argument setup_trial(); see get_draws_generic()) calculates probabilities arm best (defined either highest lowest value, specified highest_is_best argument setup_trial(), setup_trial_binom() setup_trial_norm()).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate the probabilities of each arm being the best — prob_best","text":"","code":"prob_best(m, highest_is_best = FALSE)"},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate the probabilities of each arm being the best — prob_best","text":"m matrix one column per trial arm (named arms) one row draw posterior distributions. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_best.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate the probabilities of each arm being the best — prob_best","text":"named numeric vector probabilities (names corresponding arms).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate probabilities of comparisons of arms against with common control — prob_better","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"Used internally. function takes matrix calculated get_draws_binom(), get_draws_norm() corresponding custom function (specified using fun_draws argument setup_trial(); see get_draws_generic()) single character specifying control arm, calculates probabilities arm better common control (defined either higher lower control, specified highest_is_best argument setup_trial(), setup_trial_binom() setup_trial_norm()). 
function also calculates equivalence futility probabilities compared common control arm, specified setup_trial(), setup_trial_binom() setup_trial_norm(), unless equivalence_diff futility_diff, respectively, set NULL (default).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"","code":"prob_better( m, control = NULL, highest_is_best = FALSE, equivalence_diff = NULL, futility_diff = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"m matrix one column per trial arm (named arms) one row draw posterior distributions. control single character string specifying common control arm. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prob_better.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Calculate probabilities of comparisons of arms against with common control — prob_better","text":"named (row names corresponding trial arms) matrix containing 1-3 columns: probs_better, probs_equivalence (equivalence_diff specified), probs_futile (futility_diff specified). columns contain NA control arm.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":null,"dir":"Reference","previous_headings":"","what":"Generate breakpoints and other values for printing progress — prog_breaks","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"Used internally. Generates breakpoints, messages, 'batches' trial numbers simulate using run_trials() progress argument use. Breaks multiples number cores, repeated use values breaks avoided (, e.g., number breaks times number cores possible new trials run). 
Inputs validated run_trials().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"","code":"prog_breaks(progress, prev_n_rep, n_rep_new, cores)"},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"progress single numeric > 0 <= 1 NULL. NULL (default), progress printed console. Otherwise, progress messages printed control intervals proportional value specified progress.Note: printing possible within clusters multiple cores, function conducts batches simulations multiple cores (specified), intermittent printing statuses. Thus, cores finish running current assigned batches cores may proceed next batch. substantial differences simulation speeds across cores, using progress may thus increase total run time (especially small values). prev_n_rep single integer, previous number simulations run (add indices generated used). n_rep_new single integers, number new simulations run (.e., n_rep supplied run_trials() minus number previously run simulations grow used run_trials()). cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/prog_breaks.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Generate breakpoints and other values for printing progress — prog_breaks","text":"List containing breaks (number patients break), start_mess prog_mess (basis first subsequent progress messages), batches (list entry corresponding simulation numbers batch).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":null,"dir":"Reference","previous_headings":"","what":"Update allocation probabilities — reallocate_probs","title":"Update allocation probabilities — reallocate_probs","text":"Used internally. function calculates new allocation probabilities arm, based information specified setup_trial(), setup_trial_binom() setup_trial_norm() calculated probabilities arm best prob_best().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Update allocation probabilities — reallocate_probs","text":"","code":"reallocate_probs( probs_best, fixed_probs, min_probs, max_probs, soften_power = 1, match_arm = NULL, rescale_fixed = FALSE, rescale_limits = FALSE, rescale_factor = 1, rescale_ignore = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Update allocation probabilities — reallocate_probs","text":"probs_best resulting named vector prob_best() function. fixed_probs numeric vector, fixed allocation probabilities arm. 
Must either numeric vector NA arms without fixed probabilities values 0 1 arms NULL (default), adaptive randomisation used arms one special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", \"match\") specified control_prob_fixed (described ). min_probs numeric vector, lower threshold adaptive allocation probabilities; lower probabilities rounded values. Must NA (default arms) lower threshold wanted arms using fixed allocation probabilities. max_probs numeric vector, upper threshold adaptive allocation probabilities; higher probabilities rounded values. Must NA (default arms) threshold wanted arms using fixed allocation probabilities. soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. match_arm index control arm. NULL (default), control arm allocation probability similar best non-control arm. Must NULL designs without common control arm. rescale_fixed logical indicating whether fixed_probs rescaled following arm dropping. rescale_limits logical indicating whether min/max_probs rescaled following arm dropping. rescale_factor numerical, rescale factor defined initial number arms/number active arms. rescale_ignore NULL index arm ignored rescale_fixed rescale_limits arguments.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/reallocate_probs.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Update allocation probabilities — reallocate_probs","text":"named (according arms) numeric vector updated allocation probabilities.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":null,"dir":"Reference","previous_headings":"","what":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"Used internally, helper function replaces non-finite (.e., NA, NaN, Inf, -Inf) values according .finite(), primarily used replace NaN/Inf/-Inf NA.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"","code":"a %f|% b"},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"atomic vector type. 
b single value replace non-finite values .","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_nonfinite.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Replace non-finite values with other value (finite-OR-operator) — replace_nonfinite","text":"values non-finite, replaced b, otherwise left unchanged.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":null,"dir":"Reference","previous_headings":"","what":"Replace NULL with other value (NULL-OR-operator) — replace_null","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":"Used internally, primarily working list arguments, , e.g., list_name$element_name yields NULL unspecified.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":"","code":"a %||% b"},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":", b atomic values type.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/replace_null.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Replace NULL with other value (NULL-OR-operator) — replace_null","text":"NULL, b returned. Otherwise returned.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":null,"dir":"Reference","previous_headings":"","what":"Rescale numeric vector to sum to 1 — rescale","title":"Rescale numeric vector to sum to 1 — rescale","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Rescale numeric vector to sum to 1 — rescale","text":"","code":"rescale(x)"},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Rescale numeric vector to sum to 1 — rescale","text":"x numeric vector.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/rescale.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Rescale numeric vector to sum to 1 — rescale","text":"Numeric vector, x rescaled sum total 1.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate a single trial — run_trial","title":"Simulate a single trial — run_trial","text":"function conducts single trial simulation using trial specification specified setup_trial(), setup_trial_binom() setup_trial_norm(). simulation, function randomises \"patients\", randomly generates outcomes, calculates probabilities arm best (better control, ). followed checking inferiority, superiority, equivalence /futility desired; dropping arms, re-adjusting allocation probabilities according criteria specified trial specification. common control arm, trial simulation stopped final specified adaptive analysis, 1 arm superior others, arms considered equivalent (equivalence assessed). common control arm specified, arms compared , 1 pairwise comparisons crosses applicable superiority threshold adaptive analysis, arm become new control old control considered inferior dropped. 
multiple non-control arms cross applicable superiority threshold adaptive analysis, one highest probability overall best become new control. Equivalence/futility also checked specified, equivalent futile arms dropped designs common control arm entire trial stopped remaining arms equivalent designs without common control arm. trial simulation stopped 1 arm left, final arms equivalent, final specified adaptive analysis. stopping (regardless reason), final analysis including outcome data patients randomised arms conducted (final control arm, , used control analysis). Results analysis saved, used regards adaptive stopping rules. particularly relevant less patients available outcome data last adaptive analyses total number patients randomised (specified setup_trial(), setup_trial_binom(), setup_trial_norm()), final analysis include patients randomised, may last adaptive analysis conducted.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate a single trial — run_trial","text":"","code":"run_trial(trial_spec, seed = NULL, sparse = FALSE)"},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate a single trial — run_trial","text":"trial_spec trial_spec object, generated validated setup_trial(), setup_trial_binom() setup_trial_norm() function. seed single integer NULL (default). value provided, value used random seed running global random seed restored function run, affected. sparse single logical; FALSE (default) everything listed included returned object. TRUE, limited amount data included returned object. can practical running many simulations saving results using run_trials() function (relies function), output file thus substantially smaller. However, printing individual trial results substantially less detailed sparse results non-sparse results required plot_history().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate a single trial — run_trial","text":"trial_result object containing everything listed sparse (described ) FALSE. Otherwise final_status, final_n, followed_n, trial_res, seed, sparse included. final_status: either \"superiority\", \"equivalence\", \"futility\", \"max\" (stopped last possible adaptive analysis), calculated adaptive analyses. final_n: total number patients randomised. followed_n: total number patients available outcome data last adaptive analysis conducted. max_n: pre-specified maximum number patients outcome data available last possible adaptive analysis. max_randomised: pre-specified maximum number patients randomised last possible adaptive analysis. looks: numeric vector, total number patients outcome data available conducted adaptive analysis. planned_looks: numeric vector, cumulated number patients planned outcome data available adaptive analysis, even conducted simulation stopped final possible analysis. randomised_at_looks: numeric vector, cumulated number patients randomised conducted adaptive analysis (including relevant numbers analyses actually conducted). start_control: character, initial common control arm (specified). final_control: character, final common control arm (relevant). control_prob_fixed: fixed common control arm probabilities (specified; see setup_trial()). 
inferiority, superiority, equivalence_prob, equivalence_diff, equivalence_only_first, futility_prob, futility_diff, futility_only_first, highest_is_best, soften_power: specified setup_trial(). best_arm: best arm(s), described setup_trial(). trial_res: data.frame containing information specified arm setup_trial() including true_ys (true outcomes specified setup_trial()) arm sum outcomes (sum_ys/sum_ys_all; .e., total number events binary outcomes totals continuous outcomes) sum patients (ns/ns_all), summary statistics raw outcome data (raw_ests/raw_ests_all, calculated specified setup_trial(), defaults mean values, .e., event rates binary outcomes means continuous outcomes) posterior estimates (post_ests/post_ests_all, post_errs/post_errs_all, lo_cri/lo_cri_all, hi_cri/hi_cri_all, calculated specified setup_trial()), final_status arm (\"inferior\", \"superior\", \"equivalence\", \"futile\", \"active\", \"control\" (currently active control arm, including current control stopped equivalence)), status_look (specifying cumulated number patients outcome data available adaptive analysis changed final_status \"superior\", \"inferior\", \"equivalence\", \"futile\"), status_probs, probability (last adaptive analysis arm) arm best/better common control arm ()/equivalent common control arm (stopped equivalence; NA control arm stopped due last remaining arm(s) stopped equivalence)/futile stopped futility last analysis included , final_alloc, final allocation probability arm last time patients randomised , including arms stopped maximum sample size, probs_best_last, probabilities remaining arm overall best last conducted adaptive analysis (NA previously dropped arms).Note: variables data.frame version including _all-suffix included, versions WITHOUT suffix calculated using patients available outcome data time analysis, versions _all-suffixes calculated using outcome data patients randomised time analysis, even reached time follow-yet (see setup_trial()). all_looks: list lists containing one list per conducted trial look (adaptive analysis). lists contain variables arms, old_status (status analysis current round conducted), new_status (specified , status current analysis conducted), sum_ys/sum_ys_all (described ), ns/ns_all (described ), old_alloc (allocation probability used look), probs_best (probabilities arm best current adaptive analysis), new_alloc (allocation probabilities updating current adaptive analysis; NA arms trial stopped adaptive analyses conducted), probs_better_first (common control provided, specifying probabilities arm better control first analysis conducted look), probs_better (probs_better_first, updated another arm becomes new control), probs_equivalence_first probs_equivalence (probs_better/probs_better_first, equivalence equivalence assessed). last variables NA arm active applicable adaptive analysis included next adaptive analysis. allocs: character vector containing allocations patients order randomization. ys: numeric vector containing outcomes patients order randomization (0 1 binary outcomes). seed: random seed used, specified. description, add_info, cri_width, n_draws, robust: specified setup_trial(), setup_trial_binom() setup_trial_norm(). 
sparse: single logical, corresponding sparse input.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trial.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simulate a single trial — run_trial","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run trial with a specified random seed res <- run_trial(binom_trial, seed = 12345) # Print results with 3 decimals print(res, digits = 3) #> Single simulation result: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> #> Final status: inconclusive, stopped at final allowed adaptive analysis #> Final/maximum allowed sample sizes: 2000/2000 (100.0%) #> Available outcome data at last adaptive analysis: 2000/2000 (100.0%) #> #> Trial results overview: #> arms true_ys final_status status_look status_probs final_alloc #> A 0.20 active NA NA 0.0232 #> B 0.18 active NA NA 0.8868 #> C 0.22 active NA NA 0.0900 #> D 0.24 inferior 300 0.0092 0.1078 #> #> Estimates from final analysis (all patients): #> arms sum_ys_all ns_all raw_ests_all post_ests_all post_errs_all lo_cri_all #> A 39 161 0.242 0.244 0.03355 0.184 #> B 297 1613 0.184 0.184 0.00987 0.165 #> C 39 180 0.217 0.218 0.03032 0.162 #> D 16 46 0.348 0.351 0.06806 0.224 #> hi_cri_all #> 0.316 #> 0.204 #> 0.279 #> 0.495 #> #> Estimates from last adaptive analysis including each arm: #> arms sum_ys ns raw_ests post_ests post_errs lo_cri hi_cri #> A 39 161 0.242 0.244 0.03461 0.180 0.315 #> B 297 1613 0.184 0.184 0.00938 0.166 0.204 #> C 39 180 0.217 0.219 0.03105 0.164 0.283 #> D 16 46 0.348 0.353 0.06967 0.226 0.492 #> #> Simulation details: #> * Random seed: 12345 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Posterior estimation method: medians with MAD-SDs"},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":null,"dir":"Reference","previous_headings":"","what":"Simulate multiple trials — run_trials","title":"Simulate multiple trials — run_trials","text":"function conducts multiple simulations using trial specification specified setup_trial(), setup_trial_binom() setup_trial_norm(). function essentially manages random seeds runs multiple simulation using run_trial() - additional details individual simulations provided function's description. function allows simulating trials parallel using multiple cores, automatically saving re-loading saved objects, \"growing\" already saved simulation files (.e., appending additional simulations file).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Simulate multiple trials — run_trials","text":"","code":"run_trials( trial_spec, n_rep, path = NULL, overwrite = FALSE, grow = FALSE, cores = NULL, base_seed = NULL, sparse = TRUE, progress = NULL, version = NULL, compress = TRUE, export = NULL, export_envir = parent.frame() )"},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Simulate multiple trials — run_trials","text":"trial_spec trial_spec object, generated validated setup_trial(), setup_trial_binom() setup_trial_norm() function. n_rep single integer; number simulations run. 
path single character string; specified (defaults NULL), files written loaded path using saveRDS() / readRDS() functions. overwrite single logical; defaults FALSE, case previous simulations saved path re-loaded (trial specification used). TRUE, previous file overwritten (even trial specification used). grow TRUE, argument must set FALSE. grow single logical; defaults FALSE. TRUE valid path valid previous file containing less simulations n_rep, additional number simulations run (appropriately re-using base_seed, specified) appended file. cores NULL single integer. NULL, default value/cluster set setup_cluster() used control whether simulations run parallel default cluster sequentially main process; cluster/value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. resulting number cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details. base_seed single integer NULL (default); random seed used basis simulations. Regardless whether simulations run sequentially parallel, random number streams identical appropriate (see setup_cluster() details). sparse single logical, described run_trial(); defaults TRUE running multiple simulations, case data necessary summarise simulations saved simulation. FALSE, detailed data simulation saved, allowing detailed printing individual trial results plotting using plot_history() (plot_status() require non-sparse results). progress single numeric > 0 <= 1 NULL. NULL (default), progress printed console. Otherwise, progress messages printed control intervals proportional value specified progress.Note: printing possible within clusters multiple cores, function conducts batches simulations multiple cores (specified), intermittent printing statuses. Thus, cores finish running current assigned batches cores may proceed next batch. substantial differences simulation speeds across cores, using progress may thus increase total run time (especially small values). version passed saveRDS() saving simulations, defaults NULL (saveRDS()), means current default version used. Ignored simulations saved. compress passed saveRDS() saving simulations, defaults TRUE (saveRDS()), see saveRDS() options. Ignored simulations saved. export character vector names objects export parallel core running parallel; passed varlist argument parallel::clusterExport(). Defaults NULL (objects exported), ignored cores == 1. See Details . export_envir environment look objects defined export running parallel export NULL. Defaults environment function called.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Simulate multiple trials — run_trials","text":"list special class \"trial_results\", contains trial_results (results simulations; note seed NULL individual simulations), trial_spec (trial specification), n_rep, base_seed, elapsed_time (total simulation run time), sparse (described ) adaptr_version (version adaptr package used run simulations). results may extracted, summarised, plotted using extract_results(), check_performance(), summary(), print.trial_results(), plot_convergence(), check_remaining_arms(), plot_status(), plot_history() functions. 
See definitions functions additional details details additional arguments used select arms simulations ending superiority summary choices.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Simulate multiple trials — run_trials","text":"Exporting objects using multiple cores setup_trial() used define trial specification custom functions (fun_y_gen, fun_draws, fun_raw_est arguments setup_trial()) run_trials() run cores > 1, necessary export additional functions objects used functions defined user outside function definitions provided. Similarly, functions external packages loaded using library() require() must exported called prefixed namespace, .e., package::function. export export_envir arguments used export objects calling parallel::clusterExport()-function. See also setup_cluster(), may used setup cluster export required objects per session.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/run_trials.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Simulate multiple trials — run_trials","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # See ?extract_results, ?check_performance, ?summary and ?print for details # on extracting resutls, summarising and printing"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"function setups (removes) default cluster use parallelised functions adaptr using parallel package. function also exports objects available cluster sets random number generator appropriately. See Details info adaptr handles sequential/parallel computation.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"","code":"setup_cluster(cores, export = NULL, export_envir = parent.frame())"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"cores can either unspecified, NULL, single integer > 0. NULL 1, existing default cluster removed (), default subsequently run functions sequentially main process cores = 1, according getOption(\"mc.cores\") NULL (unless otherwise specified individual functions calls). parallel::detectCores() function may used see number available cores, although comes caveats (described function documentation), including number cores may always returned may match number cores available use. general, using less cores available may preferable processes run machine time. export character vector names objects export parallel core running parallel; passed varlist argument parallel::clusterExport(). Defaults NULL (objects exported), ignored cores == 1. See Details . 
export_envir environment look objects defined export running parallel export NULL. Defaults environment function called.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"Invisibly returns default parallel cluster NULL, appropriate. may used functions parallel package advanced users, example load certain libraries cluster prior calling run_trials().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"Using sequential parallel computing adaptr parallelised adaptr functions cores argument defaults NULL. non-NULL integer > 0 provided cores argument (except setup_cluster()), package run calculations sequentially main process cores = 1, otherwise initiate new cluster size cores removed function completes, regardless whether default cluster global \"mc.cores\" option specified. cores NULL adaptr function (except setup_cluster()), package use default cluster one exists run computations sequentially setup_cluster() last called cores = 1. setup_cluster() called last called cores = NULL, package check global \"mc.cores\" option specified (using options(mc.cores = )). option set value > 1, new, temporary cluster size setup, used, removed function completes. option set set 1, computations run sequentially main process. Generally, recommend using setup_cluster() function avoids overhead re-initiating new clusters every call one parallelised adaptr functions. especially important exporting many large objects parallel cluster, can done (option export objects cluster calling run_trials()). Type clusters used random number generation adaptr package solely uses parallel socket clusters (using parallel::makePSOCKcluster()) thus use forking (available operating systems may cause crashes situations). , user-defined objects used adaptr functions run parallel need exported using either setup_cluster() run_trials(), included generated trial_spec object. adaptr package uses \"L'Ecuyer-CMRG\" kind (see RNGkind()) safe random number generation parallelised functions. also case running adaptr functions sequentially seed provided, ensure results obtained regardless whether sequential parallel computation used. 
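A brief sketch of the export mechanism discussed above; my_prior_sd, get_ys_custom and custom_trial are hypothetical user-defined objects used only to illustrate exporting to the default cluster before running simulations:
# Hypothetical objects exported once to the default cluster
setup_cluster(cores = 2, export = c(\"my_prior_sd\", \"get_ys_custom\"))
res <- run_trials(custom_trial, n_rep = 100, base_seed = 4131) # reuses the default cluster
setup_cluster(cores = NULL) # remove the default cluster again when done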
functions restore random number generator kind global random seed use called seed.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_cluster.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup default cluster for use in parallelised adaptr functions — setup_cluster","text":"","code":"# Setup a cluster using 2 cores setup_cluster(cores = 2) # Get existing default cluster (printed here as invisibly returned) print(setup_cluster()) #> socket cluster with 2 nodes on host ‘localhost’ # Remove existing default cluster setup_cluster(cores = NULL) # Specify preference for running computations sequentially setup_cluster(cores = 1) # Remove default cluster preference setup_cluster(cores = NULL) # Set global option to default to using 2 new clusters each time # (only used if no default cluster preference is specified) options(mc.cores = 2)"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup a generic trial specification — setup_trial","title":"Setup a generic trial specification — setup_trial","text":"Specifies design adaptive trial type outcome validates inputs. Use calibrate_trial() calibrate trial specification obtain specific value certain performance metric (e.g., Bayesian type 1 error rate). Use run_trial() run_trials() conduct single/multiple simulations specified trial, respectively. See setup_trial_binom() setup_trial_norm() simplified setup trial designs common outcome types. additional trial specification examples, see Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup a generic trial specification — setup_trial","text":"","code":"setup_trial( arms, true_ys, fun_y_gen = NULL, fun_draws = NULL, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, fun_raw_est = mean, cri_width = 0.95, n_draws = 5000, robust = TRUE, description = NULL, add_info = NULL )"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup a generic trial specification — setup_trial","text":"arms character vector unique names trial arms. true_ys numeric vector specifying true outcomes (e.g., event probabilities, mean values, etc.) trial arms. fun_y_gen function, generates outcomes. See setup_trial() Details information specify function.Note: function called setup validate output (global random seed restored afterwards). fun_draws function, generates posterior draws. See setup_trial() Details information specify function.Note: function called three times setup validate output (global random seed restored afterwards). start_probs numeric vector, allocation probabilities arm beginning trial. 
default (NULL) automatically generates equal randomisation probabilities arm. fixed_probs numeric vector, fixed allocation probabilities arm. Must either numeric vector NA arms without fixed probabilities values 0 1 arms NULL (default), adaptive randomisation used arms one special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", \"match\") specified control_prob_fixed (described ). min_probs numeric vector, lower threshold adaptive allocation probabilities; lower probabilities rounded values. Must NA (default arms) lower threshold wanted arms using fixed allocation probabilities. max_probs numeric vector, upper threshold adaptive allocation probabilities; higher probabilities rounded values. Must NA (default arms) threshold wanted arms using fixed allocation probabilities. rescale_probs NULL (default) one either \"fixed\", \"limits\", \"\". Rescales fixed_probs (\"fixed\" \"\") min_probs/max_probs (\"limits\" \"\") arm dropping trial specifications >2 arms using rescale_factor defined initial number arms/number active arms. \"fixed_probs min_probs rescaled initial value * rescale factor, except fixed_probs controlled control_prob_fixed argument, never rescaled. max_probs rescaled 1 - ( (1 - initial value) * rescale_factor). Must NULL 2 arms control_prob_fixed \"sqrt-based fixed\". NULL, one valid non-NA values must specified either min_probs/max_probs fixed_probs (counting fixed value original control control_prob_fixed \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\").Note: using argument specific combinations values arguments may lead invalid combined (total) allocation probabilities arm dropping, case probabilities ultimately rescaled sum 1. responsibility user ensure rescaling fixed allocation probabilities minimum/maximum allocation probability limits lead invalid unexpected allocation probabilities arm dropping. Finally, initial values overwritten control_prob_fixed argument arm dropping rescaled. data_looks vector increasing integers, specifies conduct adaptive analyses (= total number patients available outcome data adaptive analysis). last number vector represents final adaptive analysis, .e., final analysis superiority, inferiority, practical equivalence, futility can claimed. Instead specifying data_looks, max_n look_after_every arguments can used combination (case data_looks must NULL, default value). max_n single integer, number patients available outcome data last possible adaptive analysis (defaults NULL). Must specified data_looks NULL. Requires specification look_after_every argument. look_after_every single integer, specified together max_n. Adaptive analyses conducted every look_after_every patients available outcome data, total sample size specified max_n (max_n need multiple look_after_every). specified, data_looks must NULL (default). randomised_at_looks vector increasing integers NULL, specifying number patients randomised time adaptive analysis, new patients randomised using current allocation probabilities said analysis. NULL (default), number patients randomised analysis match number patients available outcome data said analysis, specified data_looks max_n look_after_every, .e., outcome data available immediately randomisation patients. NULL, vector must length number adaptive analyses specified data_looks max_n look_after_every, values must larger equal number patients available outcome data analysis. control single character string, name one arms NULL (default). 
specified, arm serve common control arm, arms compared inferiority/superiority/equivalence thresholds (see ) comparisons. See setup_trial() Details information behaviour respect comparisons. control_prob_fixed common control arm specified, can set NULL (default), case control arm allocation probability fixed control arms change (allocation probability first control arm may still fixed using fixed_probs, 'reused' new control arm). NULL, vector probabilities either length 1 number arms - 1 can provided, one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\". See setup_trial() Details details affects trial behaviour. inferiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) inferiority (default 0.01). values must >= 0 <= 1, multiple values supplied, values may lower preceding value. common controlis used, values must < 1 / number arms. arm considered inferior dropped probability best (comparing arms) better control arm (common control used) drops inferiority threshold adaptive analysis. superiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) superiority (default 0.99). values must >= 0 <= 1, multiple values supplied, values may higher preceding value. probability arm best (comparing arms) better control arm (common control used) exceeds superiority threshold adaptive analysis, said arm declared winner trial stopped (common control used last comparator dropped design common control) become new control trial continue (common control specified). equivalence_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding equivalence assessment), specifying probability threshold(s) equivalence. NULL, values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped equivalence probability either () equivalence compared common control (b) equivalence arms remaining (designs without common control) exceeds equivalence threshold adaptive analysis. Requires specification equivalence_diff equivalence_only_first. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. equivalence_only_first single logical trial specifications equivalence_prob equivalence_diff specified common control arm included, otherwise NULL (default). common control arm used, specifies whether equivalence assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. futility_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding futility assessment), specifying probability threshold(s) futility. values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped futility probability futility compared common control exceeds futility threshold adaptive analysis. Requires common control arm (otherwise argument must NULL), specification futility_diff, futility_only_first. 
futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior. futility_only_first single logical trial specifications designs futility_prob futility_diff specified, otherwise NULL (default required designs without common control arm). Specifies whether futility assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. fun_raw_est function takes numeric vector returns single numeric value, used calculate raw summary estimate outcomes arm. Defaults mean(), always used setup_trial_binom() setup_trial_norm() functions.Note: function called one time per arm setup validate output structure. cri_width single numeric >= 0 < 1, width percentile-based credible intervals used summarising individual trial results. Defaults 0.95, corresponding 95% credible intervals. n_draws single integer, number draws posterior distributions arm used running trial. Defaults 5000; can reduced speed gain (potential loss stability results low) increased increased precision (increasing simulation time). Values < 100 allowed values < 1000 recommended warned . robust single logical, TRUE (default) medians median absolute deviations (scaled comparable standard deviation normal distributions; MAD_SDs, see stats::mad()) used summarise posterior distributions; FALSE, means standard deviations (SDs) used instead (slightly faster, may less appropriate posteriors skewed natural scale). description optional single character string describing trial design, used print functions NULL (default). add_info optional single string containing additional information regarding trial design specifications, used print functions NULL (default).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup a generic trial specification — setup_trial","text":"trial_spec object used run simulations run_trial() run_trials(). output essentially list containing input values (combined data.frame called trial_arms), class signals inputs validated inappropriate combinations settings ruled . Also contains best_arm, holding arm(s) best value(s) true_ys. 
Use str() peruse actual content returned object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Setup a generic trial specification — setup_trial","text":"specify fun_y_gen function function must take following arguments: allocs: character vector, trial arms new patients allocated since last adaptive analysis randomised . function must return single numeric vector, corresponding outcomes patients allocated since last adaptive analysis, order allocs. See Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")) example details. specify fun_draws function function must take following arguments: arms: character vector, unique trial arms, order , currently active arms included function called. allocs: vector allocations patients, corresponding trial arms, including patients allocated currently active inactive arms called. ys: vector outcomes patients order allocs, including outcomes patients allocated currently active inactive arms called. control: single character, current control arm, NULL designs without common control arm, required regardless argument supplied run_trial()/run_trials(). n_draws: single integer, number posterior draws arm. function must return matrix (containing numeric values) arms named columns n_draws rows. matrix must columns currently active arms (called). row contain single posterior draw arm original outcome scale: estimated , e.g., log(odds), estimates must transformed probabilities similarly measures. Important: matrix contain NAs, even patients randomised arm yet. See provided example one way alleviate . See Advanced examples vignette (vignette(\"Advanced-example\", package = \"adaptr\")) example details. Notes Different estimation methods prior distributions may used; complex functions lead slower simulations compared simpler methods obtaining posterior draws, including specified using setup_trial_binom() setup_trial_norm() functions. Technically, using log relative effect measures — e.g. log(odds ratios) log(risk ratios) - differences compared reference arm (e.g., mean differences absolute risk differences) instead absolute values arm work extent (cautious!): Stopping superiority/inferiority/max sample sizes work. Stopping equivalence/futility may used relative effect measures log scale, thresholds adjusted accordingly. Several summary statistics run_trial() (sum_ys posterior estimates) may nonsensical relative effect measures used (depending calculation method; see raw_ests argument relevant functions). vein, extract_results() (sum_ys, sq_err, sq_err_te), summary() (sum_ys_mean/sd/median/q25/q75/q0/q100, rmse, rmse_te) may equally nonsensical calculated relative scale (see raw_ests argument relevant functions. Using additional custom functions loaded packages custom functions fun_y_gen, fun_draws, fun_raw_est functions calls user-specified functions (uses objects defined user outside functions setup_trial()-call) functions external packages simulations conducted multiple cores, objects functions must prefixed namespaces (.e., package::function()) exported, described setup_cluster() run_trials(). 
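A small, hypothetical sketch of the namespace-prefix convention mentioned above for custom functions used when simulations run on multiple cores; get_ys_gamma is not part of the package and the shape values are illustrative only:
# Hypothetical outcome generator for fun_y_gen; the external function is called
# with an explicit namespace prefix, stats::rgamma()
get_ys_gamma <- function(allocs) {
  shapes <- c(\"A\" = 2, \"B\" = 1.8) # arm names must match arms in setup_trial()
  stats::rgamma(length(allocs), shape = shapes[allocs], rate = 1)
}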
information arguments control: one treatment arms superior control arm (.e., passes superiority threshold defined ), arm become new control (multiple arms superior, one highest probability overall best become new control), previous control dropped inferiority, remaining arms immediately compared new control adaptive analysis dropped inferior (possibly equivalent/futile, see ) compared new control arm. applies trials common control. control_prob_fixed: length 1, allocation probability used control group (including new arm becomes control original control dropped). multiple values specified first value used arms active, second one arm dropped, forth. 1 values specified, previously set fixed_probs, min_probs max_probs new control arms ignored. allocation probabilities sum 1 (e.g, due multiple limits) rescaled . Can also set one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\" (written exactly one , case sensitive). requires start_probs NULL relevant fixed_probs NULL (NA control arm). one \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\" options used, function set square-root-transformation-based starting allocation probabilities. defined :square root number non-control arms 1-ratio arms scaled sum 1, generally increase power comparisons common control, discussed , e.g., Park et al, 2020 doi:10.1016/j.jclinepi.2020.04.025 . \"sqrt-based\" \"sqrt-based fixed\", square-root-transformation-based allocation probabilities used initially also new controls arms dropped (probabilities always calculated based number active non-control arms). \"sqrt-based\", response-adaptive randomisation used non-control arms, non-control arms use fixed, square-root based allocation probabilities times (probabilities always calculated based number active non-control arms). \"sqrt-based start\", control arm allocation probability fixed square-root based probability times calculated according initial number arms (probability also used new control(s) original control dropped). \"match\" specified, control group allocation probability always matched similar highest non-control arm allocation probability. Superiority inferiority trial designs without common control arm, superiority inferiority assessed comparing currently active groups. means \"final\" analysis trial without common control > 2 arms conducted including arms (often done practice) adaptive trial stopped, final probabilities best arm superior may differ slightly. example, trial three arms common control arm, one arm may dropped early inferiority defined < 1% probability overall best arm. trial may continue two remaining arms, stopped one declared superior defined > 99% probability overall best arm. final analysis conducted including arms, final probability best arm overall superior generally slightly lower probability first dropped arm best often > 0%, even low inferiority threshold. less relevant trial designs common control, pairwise assessments superiority/inferiority compared common control influenced similarly previously dropped arms (previously dropped arms may included analyses, even posterior distributions returned ). Similarly, actual clinical trials randomised_at_looks specified numbers higher number patients available outcome data analysis, final probabilities may change somewhat patients completed follow-included final analysis. Equivalence Equivalence assessed inferiority superiority assessed (case superiority, assessed new control arm designs common control, specified - see ). 
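A quick arithmetic check of the square-root-based control allocation described above; with k non-control arms the control arm is allocated sqrt(k) / (sqrt(k) + k), matching the fixed control probabilities shown in the printed trial specifications below:
sqrt(2) / (sqrt(2) + 2) # ~0.414 control allocation with 3 arms in total
sqrt(3) / (sqrt(3) + 3) # ~0.366 control allocation with 4 arms in total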
Futility Futility assessed inferiority, superiority, equivalence assessed (case superiority, assessed new control arm designs common control, specified - see ). Arms thus dropped equivalence futility. Varying probability thresholds Different probability thresholds (superiority, inferiority, equivalence, futility) may specified different adaptive analyses. may used, e.g., apply strict probability thresholds earlier analyses (make one stopping rules apply earlier analyses), similar use monitoring boundaries different thresholds used interim analyses conventional, frequentist group sequential trial designs. See Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) example.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup a generic trial specification — setup_trial","text":"","code":"# Setup a custom trial specification with right-skewed, log-normally # distributed continuous outcomes (higher values are worse) # Define the function that will generate the outcomes in each arm # Notice: contents should match arms/true_ys in the setup_trial() call below get_ys_lognorm <- function(allocs) { y <- numeric(length(allocs)) # arms (names and order) and values (except for exponentiation) should match # those used in setup_trial (below) means <- c(\"Control\" = 2.2, \"Experimental A\" = 2.1, \"Experimental B\" = 2.3) for (arm in names(means)) { ii <- which(allocs == arm) y[ii] <- rlnorm(length(ii), means[arm], 1.5) } y } # Define the function that will generate posterior draws # In this example, the function uses no priors (corresponding to improper # flat priors) and calculates results on the log-scale, before exponentiating # back to the natural scale, which is required for assessments of # equivalence, futility and general interpretation get_draws_lognorm <- function(arms, allocs, ys, control, n_draws) { draws <- list() logys <- log(ys) for (arm in arms){ ii <- which(allocs == arm) n <- length(ii) if (n > 1) { # Necessary to avoid errors if too few patients randomised to this arm draws[[arm]] <- exp(rnorm(n_draws, mean = mean(logys[ii]), sd = sd(logys[ii])/sqrt(n - 1))) } else { # Too few patients randomised to this arm - extreme uncertainty draws[[arm]] <- exp(rnorm(n_draws, mean = mean(logys), sd = 1000 * (max(logys) - min(logys)))) } } do.call(cbind, draws) } # The actual trial specification is then defined lognorm_trial <- setup_trial( # arms should match those above arms = c(\"Control\", \"Experimental A\", \"Experimental B\"), # true_ys should match those above true_ys = exp(c(2.2, 2.1, 2.3)), fun_y_gen = get_ys_lognorm, # as specified above fun_draws = get_draws_lognorm, # as specified above max_n = 5000, look_after_every = 200, control = \"Control\", # Square-root-based, fixed control group allocation ratio # and response-adaptive randomisation for other arms control_prob_fixed = \"sqrt-based\", # Equivalence assessment equivalence_prob = 0.9, equivalence_diff = 0.5, equivalence_only_first = TRUE, highest_is_best = FALSE, # Summarise raw results by taking the mean on the # log scale and back-transforming fun_raw_est = function(x) exp(mean(log(x))) , # Summarise posteriors using medians with MAD-SDs, # as distributions will not be normal on the actual scale robust = TRUE, # Description/additional info used when printing description = \"continuous, log-normally distributed outcome\", add_info = \"SD on the log scale for all arms: 1.5\" ) # Print 
trial specification with 3 digits for all probabilities print(lognorm_trial, prob_digits = 3) #> Trial specification: continuous, log-normally distributed outcome #> * Undesirable outcome #> * Common control arm: Control #> * Control arm probability fixed at 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: Experimental A #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Control 9.03 0.414 0.414 NA NA #> Experimental A 8.17 0.293 NA NA NA #> Experimental B 9.97 0.293 NA NA NA #> #> Maximum sample size: 5000 #> Maximum number of data looks: 25 #> Planned looks after every 200 #> patients have reached follow-up until final look after 5000 patients #> Number of patients randomised at each look: 200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400, 2600, 2800, 3000, 3200, 3400, 3600, 3800, 4000, 4200, 4400, 4600, 4800, 5000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (only checked for first control) #> Absolute equivalence difference: 0.5 #> No futility threshold #> Soften power for all analyses: 1 (no softening) #> #> Additional info: SD on the log scale for all arms: 1.5"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"Specifies design adaptive trial binary, binomially distributed outcome validates inputs. Uses beta-binomial conjugate models beta(1, 1) prior distributions, corresponding uniform prior (addition 2 patients, 1 event 1 without, arm) trial. Use calibrate_trial() calibrate trial specification obtain specific value certain performance metric (e.g., Bayesian type 1 error rate). Use run_trial() run_trials() conduct single/multiple simulations specified trial, respectively. Note: add_info specified setup_trial() set NULL trial specifications setup function.details: please see setup_trial(). See setup_trial_norm() simplified setup trials normally distributed continuous outcome. 
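A conceptual sketch of the beta-binomial model just described (not the package's internal code): with a beta(1, 1) prior, e events among n patients in an arm give a beta(1 + e, 1 + n - e) posterior for that arm's event probability.
# Conceptual sketch only - posterior draws for a single arm
e <- 39; n <- 161                      # counts as for arm A in the run_trial() example above
draws <- rbeta(5000, 1 + e, 1 + n - e) # 5000 draws, as with the default n_draws
median(draws); mad(draws)              # median and MAD-SD, as with robust = TRUE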
additional trial specification examples, see Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"","code":"setup_trial_binom( arms, true_ys, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, cri_width = 0.95, n_draws = 5000, robust = TRUE, description = \"generic binomially distributed outcome trial\" )"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"arms character vector unique names trial arms. true_ys numeric vector, true probabilities (0 1) outcomes trial arms. start_probs numeric vector, allocation probabilities arm beginning trial. default (NULL) automatically generates equal randomisation probabilities arm. fixed_probs numeric vector, fixed allocation probabilities arm. Must either numeric vector NA arms without fixed probabilities values 0 1 arms NULL (default), adaptive randomisation used arms one special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", \"match\") specified control_prob_fixed (described ). min_probs numeric vector, lower threshold adaptive allocation probabilities; lower probabilities rounded values. Must NA (default arms) lower threshold wanted arms using fixed allocation probabilities. max_probs numeric vector, upper threshold adaptive allocation probabilities; higher probabilities rounded values. Must NA (default arms) threshold wanted arms using fixed allocation probabilities. rescale_probs NULL (default) one either \"fixed\", \"limits\", \"\". Rescales fixed_probs (\"fixed\" \"\") min_probs/max_probs (\"limits\" \"\") arm dropping trial specifications >2 arms using rescale_factor defined initial number arms/number active arms. \"fixed_probs min_probs rescaled initial value * rescale factor, except fixed_probs controlled control_prob_fixed argument, never rescaled. max_probs rescaled 1 - ( (1 - initial value) * rescale_factor). Must NULL 2 arms control_prob_fixed \"sqrt-based fixed\". NULL, one valid non-NA values must specified either min_probs/max_probs fixed_probs (counting fixed value original control control_prob_fixed \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\").Note: using argument specific combinations values arguments may lead invalid combined (total) allocation probabilities arm dropping, case probabilities ultimately rescaled sum 1. responsibility user ensure rescaling fixed allocation probabilities minimum/maximum allocation probability limits lead invalid unexpected allocation probabilities arm dropping. 
Finally, initial values overwritten control_prob_fixed argument arm dropping rescaled. data_looks vector increasing integers, specifies conduct adaptive analyses (= total number patients available outcome data adaptive analysis). last number vector represents final adaptive analysis, .e., final analysis superiority, inferiority, practical equivalence, futility can claimed. Instead specifying data_looks, max_n look_after_every arguments can used combination (case data_looks must NULL, default value). max_n single integer, number patients available outcome data last possible adaptive analysis (defaults NULL). Must specified data_looks NULL. Requires specification look_after_every argument. look_after_every single integer, specified together max_n. Adaptive analyses conducted every look_after_every patients available outcome data, total sample size specified max_n (max_n need multiple look_after_every). specified, data_looks must NULL (default). randomised_at_looks vector increasing integers NULL, specifying number patients randomised time adaptive analysis, new patients randomised using current allocation probabilities said analysis. NULL (default), number patients randomised analysis match number patients available outcome data said analysis, specified data_looks max_n look_after_every, .e., outcome data available immediately randomisation patients. NULL, vector must length number adaptive analyses specified data_looks max_n look_after_every, values must larger equal number patients available outcome data analysis. control single character string, name one arms NULL (default). specified, arm serve common control arm, arms compared inferiority/superiority/equivalence thresholds (see ) comparisons. See setup_trial() Details information behaviour respect comparisons. control_prob_fixed common control arm specified, can set NULL (default), case control arm allocation probability fixed control arms change (allocation probability first control arm may still fixed using fixed_probs, 'reused' new control arm). NULL, vector probabilities either length 1 number arms - 1 can provided, one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\". See setup_trial() Details details affects trial behaviour. inferiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) inferiority (default 0.01). values must >= 0 <= 1, multiple values supplied, values may lower preceding value. common controlis used, values must < 1 / number arms. arm considered inferior dropped probability best (comparing arms) better control arm (common control used) drops inferiority threshold adaptive analysis. superiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) superiority (default 0.99). values must >= 0 <= 1, multiple values supplied, values may higher preceding value. probability arm best (comparing arms) better control arm (common control used) exceeds superiority threshold adaptive analysis, said arm declared winner trial stopped (common control used last comparator dropped design common control) become new control trial continue (common control specified). equivalence_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding equivalence assessment), specifying probability threshold(s) equivalence. 
NULL, values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped equivalence probability either () equivalence compared common control (b) equivalence arms remaining (designs without common control) exceeds equivalence threshold adaptive analysis. Requires specification equivalence_diff equivalence_only_first. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. equivalence_only_first single logical trial specifications equivalence_prob equivalence_diff specified common control arm included, otherwise NULL (default). common control arm used, specifies whether equivalence assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. futility_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding futility assessment), specifying probability threshold(s) futility. values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped futility probability futility compared common control exceeds futility threshold adaptive analysis. Requires common control arm (otherwise argument must NULL), specification futility_diff, futility_only_first. futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior. futility_only_first single logical trial specifications designs futility_prob futility_diff specified, otherwise NULL (default required designs without common control arm). Specifies whether futility assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. cri_width single numeric >= 0 < 1, width percentile-based credible intervals used summarising individual trial results. Defaults 0.95, corresponding 95% credible intervals. n_draws single integer, number draws posterior distributions arm used running trial. Defaults 5000; can reduced speed gain (potential loss stability results low) increased increased precision (increasing simulation time). Values < 100 allowed values < 1000 recommended warned . 
robust single logical, TRUE (default) medians median absolute deviations (scaled comparable standard deviation normal distributions; MAD_SDs, see stats::mad()) used summarise posterior distributions; FALSE, means standard deviations (SDs) used instead (slightly faster, may less appropriate posteriors skewed natural scale). description character string, default \"generic binomially distributed outcome trial\". See arguments setup_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"trial_spec object used run simulations run_trial() run_trials(). output essentially list containing input values (combined data.frame called trial_arms), class signals inputs validated inappropriate combinations settings ruled . Also contains best_arm, holding arm(s) best value(s) true_ys. Use str() peruse actual content returned object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_binom.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup a trial specification using a binary, binomially distributed outcome — setup_trial_binom","text":"","code":"# Setup a trial specification using a binary, binomially # distributed, undesirable outcome binom_trial <- setup_trial_binom( arms = c(\"Arm A\", \"Arm B\", \"Arm C\"), true_ys = c(0.25, 0.20, 0.30), # Minimum allocation of 15% in all arms min_probs = rep(0.15, 3), data_looks = seq(from = 300, to = 2000, by = 100), # Stop for equivalence if > 90% probability of # absolute differences < 5 percentage points equivalence_prob = 0.9, equivalence_diff = 0.05, soften_power = 0.5 # Limit extreme allocation ratios ) # Print using 3 digits for probabilities print(binom_trial, prob_digits = 3) #> Trial specification: generic binomially distributed outcome trial #> * Undesirable outcome #> * No common control arm #> * Best arm: Arm B #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Arm A 0.25 0.333 NA 0.15 NA #> Arm B 0.20 0.333 NA 0.15 NA #> Arm C 0.30 0.333 NA 0.15 NA #> #> Maximum sample size: 2000 #> Maximum number of data looks: 18 #> Planned data looks after: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 patients have reached follow-up #> Number of patients randomised at each look: 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> Equivalence threshold: 0.9 (all analyses) (no common control) #> Absolute equivalence difference: 0.05 #> No futility threshold (not relevant - no common control) #> Soften power for all analyses: 0.5"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":null,"dir":"Reference","previous_headings":"","what":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"Specifies design adaptive trial continuous, normally distributed outcome validates inputs. 
Uses normally distributed posterior distributions mean values trial arm; technically, priors used (using normal-normal conjugate prior models extremely wide uniform priors gives similar results simple, unadjusted estimates). corresponds use improper, flat priors, although explicitly specified . Use calibrate_trial() calibrate trial specification obtain specific value certain performance metric (e.g., Bayesian type 1 error rate). Use run_trial() run_trials() conduct single/multiple simulations specified trial, respectively.Note: add_info specified setup_trial() set arms standard deviations used trials specified using function.details: please see setup_trial(). See setup_trial_binom() simplified setup trials binomially distributed binary outcomes. additional trial specification examples, see Basic examples vignette (vignette(\"Basic-examples\", package = \"adaptr\")) Advanced example vignette (vignette(\"Advanced-example\", package = \"adaptr\")).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"","code":"setup_trial_norm( arms, true_ys, sds, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, cri_width = 0.95, n_draws = 5000, robust = FALSE, description = \"generic normally distributed outcome trial\" )"},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"arms character vector unique names trial arms. true_ys numeric vector, simulated means outcome trial arms. sds numeric vector, true standard deviations (must > 0) outcome trial arms. start_probs numeric vector, allocation probabilities arm beginning trial. default (NULL) automatically generates equal randomisation probabilities arm. fixed_probs numeric vector, fixed allocation probabilities arm. Must either numeric vector NA arms without fixed probabilities values 0 1 arms NULL (default), adaptive randomisation used arms one special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", \"match\") specified control_prob_fixed (described ). min_probs numeric vector, lower threshold adaptive allocation probabilities; lower probabilities rounded values. Must NA (default arms) lower threshold wanted arms using fixed allocation probabilities. max_probs numeric vector, upper threshold adaptive allocation probabilities; higher probabilities rounded values. Must NA (default arms) threshold wanted arms using fixed allocation probabilities. rescale_probs NULL (default) one either \"fixed\", \"limits\", \"\". Rescales fixed_probs (\"fixed\" \"\") min_probs/max_probs (\"limits\" \"\") arm dropping trial specifications >2 arms using rescale_factor defined initial number arms/number active arms. 
\"fixed_probs min_probs rescaled initial value * rescale factor, except fixed_probs controlled control_prob_fixed argument, never rescaled. max_probs rescaled 1 - ( (1 - initial value) * rescale_factor). Must NULL 2 arms control_prob_fixed \"sqrt-based fixed\". NULL, one valid non-NA values must specified either min_probs/max_probs fixed_probs (counting fixed value original control control_prob_fixed \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\").Note: using argument specific combinations values arguments may lead invalid combined (total) allocation probabilities arm dropping, case probabilities ultimately rescaled sum 1. responsibility user ensure rescaling fixed allocation probabilities minimum/maximum allocation probability limits lead invalid unexpected allocation probabilities arm dropping. Finally, initial values overwritten control_prob_fixed argument arm dropping rescaled. data_looks vector increasing integers, specifies conduct adaptive analyses (= total number patients available outcome data adaptive analysis). last number vector represents final adaptive analysis, .e., final analysis superiority, inferiority, practical equivalence, futility can claimed. Instead specifying data_looks, max_n look_after_every arguments can used combination (case data_looks must NULL, default value). max_n single integer, number patients available outcome data last possible adaptive analysis (defaults NULL). Must specified data_looks NULL. Requires specification look_after_every argument. look_after_every single integer, specified together max_n. Adaptive analyses conducted every look_after_every patients available outcome data, total sample size specified max_n (max_n need multiple look_after_every). specified, data_looks must NULL (default). randomised_at_looks vector increasing integers NULL, specifying number patients randomised time adaptive analysis, new patients randomised using current allocation probabilities said analysis. NULL (default), number patients randomised analysis match number patients available outcome data said analysis, specified data_looks max_n look_after_every, .e., outcome data available immediately randomisation patients. NULL, vector must length number adaptive analyses specified data_looks max_n look_after_every, values must larger equal number patients available outcome data analysis. control single character string, name one arms NULL (default). specified, arm serve common control arm, arms compared inferiority/superiority/equivalence thresholds (see ) comparisons. See setup_trial() Details information behaviour respect comparisons. control_prob_fixed common control arm specified, can set NULL (default), case control arm allocation probability fixed control arms change (allocation probability first control arm may still fixed using fixed_probs, 'reused' new control arm). NULL, vector probabilities either length 1 number arms - 1 can provided, one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\". See setup_trial() Details details affects trial behaviour. inferiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) inferiority (default 0.01). values must >= 0 <= 1, multiple values supplied, values may lower preceding value. common controlis used, values must < 1 / number arms. arm considered inferior dropped probability best (comparing arms) better control arm (common control used) drops inferiority threshold adaptive analysis. 
superiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) superiority (default 0.99). values must >= 0 <= 1, multiple values supplied, values may higher preceding value. probability arm best (comparing arms) better control arm (common control used) exceeds superiority threshold adaptive analysis, said arm declared winner trial stopped (common control used last comparator dropped design common control) become new control trial continue (common control specified). equivalence_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding equivalence assessment), specifying probability threshold(s) equivalence. NULL, values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped equivalence probability either () equivalence compared common control (b) equivalence arms remaining (designs without common control) exceeds equivalence threshold adaptive analysis. Requires specification equivalence_diff equivalence_only_first. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. equivalence_only_first single logical trial specifications equivalence_prob equivalence_diff specified common control arm included, otherwise NULL (default). common control arm used, specifies whether equivalence assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. futility_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding futility assessment), specifying probability threshold(s) futility. values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped futility probability futility compared common control exceeds futility threshold adaptive analysis. Requires common control arm (otherwise argument must NULL), specification futility_diff, futility_only_first. futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior. futility_only_first single logical trial specifications designs futility_prob futility_diff specified, otherwise NULL (default required designs without common control arm). Specifies whether futility assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. 
Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. cri_width single numeric >= 0 < 1, width percentile-based credible intervals used summarising individual trial results. Defaults 0.95, corresponding 95% credible intervals. n_draws single integer, number draws posterior distributions arm used running trial. Defaults 5000; can reduced speed gain (potential loss stability results low) increased increased precision (increasing simulation time). Values < 100 allowed values < 1000 recommended warned . robust single logical, TRUE (default) medians median absolute deviations (scaled comparable standard deviation normal distributions; MAD_SDs, see stats::mad()) used summarise posterior distributions; FALSE, means standard deviations (SDs) used instead (slightly faster, may less appropriate posteriors skewed natural scale). description character string, default \"generic normally distributed outcome trial\". See arguments setup_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"trial_spec object used run simulations run_trial() run_trials(). output essentially list containing input values (combined data.frame called trial_arms), class signals inputs validated inappropriate combinations settings ruled . Also contains best_arm, holding arm(s) best value(s) true_ys. Use str() peruse actual content returned object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"posteriors used type trial (generic, continuous, normally distributed outcome) definition normally distributed, FALSE used default value robust argument.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/setup_trial_norm.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Setup a trial specification using a continuous, normally distributed outcome — setup_trial_norm","text":"","code":"# Setup a trial specification using a continuous, normally distributed, desirable outcome norm_trial <- setup_trial_norm( arms = c(\"Control\", \"New A\", \"New B\", \"New C\"), true_ys = c(15, 20, 14, 13), sds = c(2, 2.5, 1.9, 1.8), # SDs in each arm max_n = 500, look_after_every = 50, control = \"Control\", # Common control arm # Square-root-based, fixed control group allocation ratios control_prob_fixed = \"sqrt-based fixed\", # Desirable outcome highest_is_best = TRUE, soften_power = 0.5 # Limit extreme allocation ratios ) # Print using 3 digits for probabilities print(norm_trial, prob_digits = 3) #> Trial specification: generic normally distributed outcome trial #> * Desirable outcome #> * Common control arm: Control #> * Control arm probability fixed at 0.366 (for 4 arms), 0.414 (for 3 arms), 0.5 (for 2 arms) #> * Best arm: New A #> #> Arms, true outcomes, starting allocation probabilities #> and allocation probability limits: #> arms true_ys start_probs fixed_probs min_probs max_probs #> Control 15 0.366 0.366 NA NA #> 
New A 20 0.211 0.211 NA NA #> New B 14 0.211 0.211 NA NA #> New C 13 0.211 0.211 NA NA #> #> Maximum sample size: 500 #> Maximum number of data looks: 10 #> Planned looks after every 50 #> patients have reached follow-up until final look after 500 patients #> Number of patients randomised at each look: 50, 100, 150, 200, 250, 300, 350, 400, 450, 500 #> #> Superiority threshold: 0.99 (all analyses) #> Inferiority threshold: 0.01 (all analyses) #> No equivalence threshold #> No futility threshold #> Soften power for all analyses: 0.5 #> #> Additional info: Arm SDs - Control: 2; New A: 2.5; New B: 1.9; New C: 1.8."},{"path":"https://inceptdk.github.io/adaptr/reference/stop0_warning0.html","id":null,"dir":"Reference","previous_headings":"","what":"stop() and warning() with call. = FALSE — stop0_warning0","title":"stop() and warning() with call. = FALSE — stop0_warning0","text":"Used internally. Calls stop0() warning() enforces call. = FALSE, suppress call error/warning.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/stop0_warning0.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"stop() and warning() with call. = FALSE — stop0_warning0","text":"","code":"stop0(...) warning0(...)"},{"path":"https://inceptdk.github.io/adaptr/reference/stop0_warning0.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"stop() and warning() with call. = FALSE — stop0_warning0","text":"... zero objects can coerced character (pasted together separator) single condition object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":null,"dir":"Reference","previous_headings":"","what":"Summarise distribution — summarise_dist","title":"Summarise distribution — summarise_dist","text":"Used internally, summarise posterior distributions, logic apply distribution (thus, name).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summarise distribution — summarise_dist","text":"","code":"summarise_dist(x, robust = TRUE, interval_width = 0.95)"},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summarise distribution — summarise_dist","text":"x numeric vector posterior draws. robust single logical. TRUE (default) median median absolute deviation (MAD-SD; scaled comparable standard deviation normal distributions) used summarise distribution; FALSE, mean standard deviation (SD) used instead (slightly faster, may less appropriate skewed distribution). 
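To make the summary logic concrete, the following is a minimal base-R sketch of an equivalent computation for the robust case with a 95% interval (the draws vector is hypothetical and this is an illustration, not the function's actual implementation; the interval_width argument is described next):

# Hypothetical posterior draws for a single arm (illustration only)
draws <- rnorm(5000, mean = 0.25, sd = 0.03)

# Robust summary: median and MAD-SD (stats::mad() is scaled to be comparable
# to an SD under normality), with a 95% percentile-based interval
c(est = median(draws),
  err = mad(draws),
  lo  = unname(quantile(draws, 0.025)),
  hi  = unname(quantile(draws, 0.975)))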
interval_width single numeric value (> 0 <1); width interval; default 0.95, corresponding 95% percentile-base credible intervals posterior distributions.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summarise distribution — summarise_dist","text":"numeric vector four named elements: est (median/mean), err (MAD-SD/SD), lo hi (lower upper boundaries interval).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_dist.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Summarise distribution — summarise_dist","text":"MAD-SDs scaled correspond SDs distributions normal, similarly stats::mad() function; see details regarding calculation function's description.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":null,"dir":"Reference","previous_headings":"","what":"Summarise numeric vector — summarise_num","title":"Summarise numeric vector — summarise_num","text":"Used internally, summarise numeric vectors.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summarise numeric vector — summarise_num","text":"","code":"summarise_num(x)"},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summarise numeric vector — summarise_num","text":"x numeric vector.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summarise_num.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summarise numeric vector — summarise_num","text":"numeric vector seven named elements: mean, sd, median, p25, p75, p0, p100 corresponding mean, standard deviation, median, 25-/75-/0-/100-percentiles.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":null,"dir":"Reference","previous_headings":"","what":"Summary of simulated trial results — summary","title":"Summary of simulated trial results — summary","text":"Summarises simulation results run_trials() function. Uses extract_results() check_performance(), may used directly extract key trial results without summarising calculate performance metrics (uncertainty measures desired) return tidy data.frame.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Summary of simulated trial results — summary","text":"","code":"# S3 method for trial_results summary( object, select_strategy = \"control if available\", select_last_arm = FALSE, select_preferences = NULL, te_comp = NULL, raw_ests = FALSE, final_ests = NULL, restrict = NULL, cores = NULL, ... )"},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Summary of simulated trial results — summary","text":"object trial_results object, output run_trials() function. select_strategy single character string. trial stopped due superiority (1 arm remaining, select_last_arm set TRUE trial designs common control arm; see ), parameter specifies arm considered selected calculating trial design performance metrics, described ; corresponds consequence inconclusive trial, .e., arm used practice. 
following options available must written exactly (case sensitive, abbreviated): \"control available\" (default): selects first control arm trials common control arm arm active end--trial, otherwise arm selected. trial designs without common control, arm selected. \"none\": selects arm trials ending superiority. \"control\": similar \"control available\", throw error used trial designs without common control arm. \"final control\": selects final control arm regardless whether trial stopped practical equivalence, futility, maximum sample size; strategy can specified trial designs common control arm. \"control best\": selects first control arm still active end--trial, otherwise selects best remaining arm (defined remaining arm highest probability best last adaptive analysis conducted). works trial designs common control arm. \"best\": selects best remaining arm (described \"control best\"). \"list best\": selects first remaining arm specified list (specified using select_preferences, technically character vector). none arms active end--trial, best remaining arm selected (described ). \"list\": specified , arms provided list remain active end--trial, arm selected. select_last_arm single logical, defaults FALSE. TRUE, remaining active arm (last control) selected trials common control arm ending equivalence futility, considering options specified select_strategy. Must FALSE trial designs without common control arm. select_preferences character vector specifying number arms used selection one \"list best\" \"list\" options specified select_strategy. Can contain valid arms available trial. te_comp character string, treatment-effect comparator. Can either NULL (default) case first control arm used trial designs common control arm, string naming single trial arm. used calculating err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described ). raw_ests single logical. FALSE (default), posterior estimates (post_ests post_ests_all, see setup_trial() run_trial()) used calculate err sq_err (error squared error estimated compared specified effect selected arm) err_te sq_err_te (error squared error treatment effect comparing selected arm comparator arm, described te_comp ). TRUE, raw estimates (raw_ests raw_ests_all, see setup_trial() run_trial()) used instead posterior estimates. final_ests single logical. TRUE (recommended) final estimates calculated using outcome data patients randomised trials stopped used (post_ests_all raw_ests_all, see setup_trial() run_trial()); FALSE, estimates calculated arm arm stopped (last adaptive analysis ) using data patients reach followed time point patients randomised used (post_ests raw_ests, see setup_trial() run_trial()). NULL (default), argument set FALSE outcome data available immediate randomisation patients (backwards compatibility, final posterior estimates may vary slightly situation, even using data); otherwise said TRUE. See setup_trial() details estimates calculated. restrict single character string NULL. NULL (default), results summarised simulations; \"superior\", results summarised simulations ending superiority ; \"selected\", results summarised simulations ending selected arm (according specified arm selection strategy simulations ending superiority). summary measures (e.g., prob_conclusive) substantially different interpretations restricted, calculated nonetheless. cores NULL single integer. 
NULL, default value set setup_cluster() used control whether extractions simulation results done parallel default cluster sequentially main process; value specified setup_cluster(), cores set value stored global \"mc.cores\" option (previously set options(mc.cores = ), 1 option specified. cores = 1, computations run sequentially primary process, cores > 1, new parallel cluster setup using parallel library removed function completes. See setup_cluster() details. ... additional arguments, used.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Summary of simulated trial results — summary","text":"\"trial_results_summary\" object containing following values: n_rep: number simulations. n_summarised: described check_performance(). highest_is_best: specified setup_trial(). elapsed_time: total simulation time. size_mean, size_sd, size_median, size_p25, size_p75, size_p0, size_p100, sum_ys_mean, sum_ys_sd, sum_ys_median, sum_ys_p25, sum_ys_p75, sum_ys_p0, sum_ys_p100, ratio_ys_mean, ratio_ys_sd, ratio_ys_median, ratio_ys_p25, ratio_ys_p75, ratio_ys_p0, ratio_ys_p100, prob_conclusive, prob_superior, prob_equivalence, prob_futility, prob_max, prob_select_* (* either \"arm_ arm names none), rmse, rmse_te, mae, mae_te, idp: performance metrics described check_performance(). Note sum_ys_ ratio_ys_ measures use outcome data randomised patients, regardless whether outcome data available last analysis , described extract_results(). select_strategy, select_last_arm, select_preferences, te_comp, raw_ests, final_ests, restrict: specified . control: control arm specified setup_trial(), setup_trial_binom() setup_trial_norm(); NULL control. equivalence_assessed, futility_assessed: single logicals, specifies whether trial design specification includes assessments equivalence /futility. base_seed: specified run_trials(). 
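Most of the performance metrics listed in this entry can also be obtained directly as tidy data.frames without calling summary(); a brief sketch, assuming res is an existing trial_results object returned by run_trials() (as in the example further down in this entry):

# One row per simulation with key results (tidy data.frame)
extract_results(res)

# Aggregated performance metrics (see check_performance() for arguments
# controlling, e.g., bootstrapped uncertainty measures)
check_performance(res)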
cri_width, n_draws, robust, description, add_info: specified setup_trial(), setup_trial_binom() setup_trial_norm().","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/summary.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Summary of simulated trial results — summary","text":"","code":"# Setup a trial specification binom_trial <- setup_trial_binom(arms = c(\"A\", \"B\", \"C\", \"D\"), control = \"A\", true_ys = c(0.20, 0.18, 0.22, 0.24), data_looks = 1:20 * 100) # Run 10 simulations with a specified random base seed res <- run_trials(binom_trial, n_rep = 10, base_seed = 12345) # Summarise simulations - select the control arm if available in trials not # ending with a superiority decision res_sum <- summary(res, select_strategy = \"control\") # Print summary print(res_sum, digits = 1) #> Multiple simulation results: generic binomially distributed outcome trial #> * Undesirable outcome #> * Number of simulations: 10 #> * Number of simulations summarised: 10 (all trials) #> * Common control arm: A #> * Selection strategy: first control if available (otherwise no selection) #> * Treatment effect compared to: no comparison #> #> Performance metrics (using posterior estimates from last adaptive analysis): #> * Sample sizes: mean 1840.0 (SD: 506.0) | median 2000.0 (IQR: 2000.0 to 2000.0) [range: 400.0 to 2000.0] #> * Total summarised outcomes: mean 369.9 (SD: 105.4) | median 390.0 (IQR: 376.5 to 408.5) [range: 84.0 to 466.0] #> * Total summarised outcome rates: mean 0.202 (SD: 0.016) | median 0.196 (IQR: 0.194 to 0.209) [range: 0.180 to 0.233] #> * Conclusive: 10.0% #> * Superiority: 10.0% #> * Equivalence: 0.0% [not assessed] #> * Futility: 0.0% [not assessed] #> * Inconclusive at max sample size: 90.0% #> * Selection probabilities: A: 80.0% | B: 10.0% | C: 0.0% | D: 0.0% | None: 10.0% #> * RMSE / MAE: 0.02061 / 0.01915 #> * RMSE / MAE treatment effect: 0.18206 / 0.18206 #> * Ideal design percentage: 70.4% #> #> Simulation details: #> * Simulation time: 0.695 secs #> * Base random seed: 12345 #> * Credible interval width: 95% #> * Number of posterior draws: 5000 #> * Estimation method: posterior medians with MAD-SDs"},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":null,"dir":"Reference","previous_headings":"","what":"Update previously saved calibration result — update_saved_calibration","title":"Update previously saved calibration result — update_saved_calibration","text":"function updates previously saved \"trial_calibration\"-object created saved calibrate_trial() using previous version adaptr, including embedded trial specification trial results objects (internally using update_saved_trials() function). allows use calibration results, including calibrated trial specification best simulations results calibration process, used without errors version package. function run per saved simulation object issue warning object already date. 
overview changes made according adaptr package version used generate original object provided Details.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Update previously saved calibration result — update_saved_calibration","text":"","code":"update_saved_calibration(path, version = NULL, compress = TRUE)"},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Update previously saved calibration result — update_saved_calibration","text":"path single character; path saved \"trial_calibration\"-object containing calibration result saved calibrate_trial(). version passed saveRDS() saving updated object, defaults NULL (saveRDS()), means current default version used. compress passed saveRDS() saving updated object, defaults TRUE (saveRDS()), see saveRDS() options.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Update previously saved calibration result — update_saved_calibration","text":"Invisibly returns updated \"trial_calibration\"-object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_calibration.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Update previously saved calibration result — update_saved_calibration","text":"following changes made according version adaptr used generate original \"trial_calibration\" object: v1.3.0+: updates version number \"trial_calibration\"-object updates embedded \"trial_results\"-object (saved $best_sims, ) \"trial_spec\"-objects (saved $input_trial_spec $best_trial_spec) described update_saved_trials().","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":null,"dir":"Reference","previous_headings":"","what":"Update previously saved simulation results — update_saved_trials","title":"Update previously saved simulation results — update_saved_trials","text":"function updates previously saved \"trial_results\" object created saved run_trials() using previous version adaptr, allowing results previous simulations post-processed (including performance metric calculation, printing plotting) without errors version package. function run per saved simulation object issue warning object already date. 
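Both updater functions are typically called once per file saved with an earlier adaptr version; a hedged sketch with hypothetical file paths (the updated objects are re-saved via saveRDS() and returned invisibly):

# File paths below are hypothetical examples
update_saved_trials("simulations/binom_trial_sims.rds")
update_saved_calibration("simulations/binom_trial_calibration.rds")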
overview changes made according adaptr package version used generate original object provided Details.NOTE: values updated set NA (posterior estimates 'final' analysis conducted last adaptive analysis including outcome data patients), thus using raw_ests = TRUE final_ests = TRUE extract_results() summary() functions lead missing values values calculated updated simulation objects.NOTE: objects created adaptr package, .e., trial specifications generated setup_trial() / setup_trial_binom() / setup_trial_norm() single simulation results run_trials() included part returned output run_trials() re-created re-running relevant code using updated version adaptr; manually re-loaded previous sessions, may cause errors problems updated version package.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Update previously saved simulation results — update_saved_trials","text":"","code":"update_saved_trials(path, version = NULL, compress = TRUE)"},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Update previously saved simulation results — update_saved_trials","text":"path single character; path saved \"trial_results\"-object containing simulations saved run_trials(). version passed saveRDS() saving updated object, defaults NULL (saveRDS()), means current default version used. compress passed saveRDS() saving updated object, defaults TRUE (saveRDS()), see saveRDS() options.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Update previously saved simulation results — update_saved_trials","text":"Invisibly returns updated \"trial_results\"-object.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/update_saved_trials.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Update previously saved simulation results — update_saved_trials","text":"following changes made according version adaptr used generate original \"trial_results\" object: v1.2.0+: updates version number reallocate_probs argument embedded trial specification. v1.1.1 earlier: updates version number everything related follow-data collection lag (versions, randomised_at_looks argument setup_trial() functions exist, practical purposes identical number patients available data look) reallocate_probs argument embedded trial specification.","code":""},{"path":[]},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":null,"dir":"Reference","previous_headings":"","what":"Validate trial specification — validate_trial","title":"Validate trial specification — validate_trial","text":"Used internally. 
Validates inputs common trial specifications, specified setup_trial(), setup_trial_binom() setup_trial_norm().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Validate trial specification — validate_trial","text":"","code":"validate_trial( arms, true_ys, start_probs = NULL, fixed_probs = NULL, min_probs = rep(NA, length(arms)), max_probs = rep(NA, length(arms)), rescale_probs = NULL, data_looks = NULL, max_n = NULL, look_after_every = NULL, randomised_at_looks = NULL, control = NULL, control_prob_fixed = NULL, inferiority = 0.01, superiority = 0.99, equivalence_prob = NULL, equivalence_diff = NULL, equivalence_only_first = NULL, futility_prob = NULL, futility_diff = NULL, futility_only_first = NULL, highest_is_best = FALSE, soften_power = 1, cri_width = 0.95, n_draws = 5000, robust = FALSE, description = NULL, add_info = NULL, fun_y_gen, fun_draws, fun_raw_est )"},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Validate trial specification — validate_trial","text":"arms character vector unique names trial arms. true_ys numeric vector specifying true outcomes (e.g., event probabilities, mean values, etc.) trial arms. start_probs numeric vector, allocation probabilities arm beginning trial. default (NULL) automatically generates equal randomisation probabilities arm. fixed_probs numeric vector, fixed allocation probabilities arm. Must either numeric vector NA arms without fixed probabilities values 0 1 arms NULL (default), adaptive randomisation used arms one special settings (\"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\", \"match\") specified control_prob_fixed (described ). min_probs numeric vector, lower threshold adaptive allocation probabilities; lower probabilities rounded values. Must NA (default arms) lower threshold wanted arms using fixed allocation probabilities. max_probs numeric vector, upper threshold adaptive allocation probabilities; higher probabilities rounded values. Must NA (default arms) threshold wanted arms using fixed allocation probabilities. rescale_probs NULL (default) one either \"fixed\", \"limits\", \"\". Rescales fixed_probs (\"fixed\" \"\") min_probs/max_probs (\"limits\" \"\") arm dropping trial specifications >2 arms using rescale_factor defined initial number arms/number active arms. \"fixed_probs min_probs rescaled initial value * rescale factor, except fixed_probs controlled control_prob_fixed argument, never rescaled. max_probs rescaled 1 - ( (1 - initial value) * rescale_factor). Must NULL 2 arms control_prob_fixed \"sqrt-based fixed\". NULL, one valid non-NA values must specified either min_probs/max_probs fixed_probs (counting fixed value original control control_prob_fixed \"sqrt-based\"/\"sqrt-based start\"/\"sqrt-based fixed\").Note: using argument specific combinations values arguments may lead invalid combined (total) allocation probabilities arm dropping, case probabilities ultimately rescaled sum 1. responsibility user ensure rescaling fixed allocation probabilities minimum/maximum allocation probability limits lead invalid unexpected allocation probabilities arm dropping. Finally, initial values overwritten control_prob_fixed argument arm dropping rescaled. data_looks vector increasing integers, specifies conduct adaptive analyses (= total number patients available outcome data adaptive analysis). 
last number vector represents final adaptive analysis, .e., final analysis superiority, inferiority, practical equivalence, futility can claimed. Instead specifying data_looks, max_n look_after_every arguments can used combination (case data_looks must NULL, default value). max_n single integer, number patients available outcome data last possible adaptive analysis (defaults NULL). Must specified data_looks NULL. Requires specification look_after_every argument. look_after_every single integer, specified together max_n. Adaptive analyses conducted every look_after_every patients available outcome data, total sample size specified max_n (max_n need multiple look_after_every). specified, data_looks must NULL (default). randomised_at_looks vector increasing integers NULL, specifying number patients randomised time adaptive analysis, new patients randomised using current allocation probabilities said analysis. NULL (default), number patients randomised analysis match number patients available outcome data said analysis, specified data_looks max_n look_after_every, .e., outcome data available immediately randomisation patients. NULL, vector must length number adaptive analyses specified data_looks max_n look_after_every, values must larger equal number patients available outcome data analysis. control single character string, name one arms NULL (default). specified, arm serve common control arm, arms compared inferiority/superiority/equivalence thresholds (see ) comparisons. See setup_trial() Details information behaviour respect comparisons. control_prob_fixed common control arm specified, can set NULL (default), case control arm allocation probability fixed control arms change (allocation probability first control arm may still fixed using fixed_probs, 'reused' new control arm). NULL, vector probabilities either length 1 number arms - 1 can provided, one special arguments \"sqrt-based\", \"sqrt-based start\", \"sqrt-based fixed\" \"match\". See setup_trial() Details details affects trial behaviour. inferiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) inferiority (default 0.01). values must >= 0 <= 1, multiple values supplied, values may lower preceding value. common control used, values must < 1 / number arms. arm considered inferior dropped probability best (comparing arms) better control arm (common control used) drops inferiority threshold adaptive analysis. superiority single numeric value vector numeric values length maximum number possible adaptive analyses, specifying probability threshold(s) superiority (default 0.99). values must >= 0 <= 1, multiple values supplied, values may higher preceding value. probability arm best (comparing arms) better control arm (common control used) exceeds superiority threshold adaptive analysis, said arm declared winner trial stopped (common control used last comparator dropped design common control) become new control trial continue (common control specified). equivalence_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding equivalence assessment), specifying probability threshold(s) equivalence. NULL, values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped equivalence probability either () equivalence compared common control (b) equivalence arms remaining (designs without common control) exceeds equivalence threshold adaptive analysis. 
Requires specification equivalence_diff equivalence_only_first. equivalence_diff single numeric value (> 0) NULL (default, corresponding equivalence assessment). numeric value specified, estimated absolute differences smaller threshold considered equivalent. designs common control arm, differences non-control arm control arm used, trials without common control arm, difference highest lowest estimated outcome rates used trial stopped equivalence remaining arms equivalent. equivalence_only_first single logical trial specifications equivalence_prob equivalence_diff specified common control arm included, otherwise NULL (default). common control arm used, specifies whether equivalence assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. futility_prob single numeric value, vector numeric values length maximum number possible adaptive analyses NULL (default, corresponding futility assessment), specifying probability threshold(s) futility. values must > 0 <= 1, multiple values supplied, value may higher preceding value. NULL, arms dropped futility probability futility compared common control exceeds futility threshold adaptive analysis. Requires common control arm (otherwise argument must NULL), specification futility_diff, futility_only_first. futility_diff single numeric value (> 0) NULL (default, corresponding futility assessment). numeric value specified, estimated differences threshold beneficial direction (specified highest_is_best) considered futile assessing futility designs common control arm. 1 arm remains dropping arms futility, trial stopped without declaring last arm superior. futility_only_first single logical trial specifications designs futility_prob futility_diff specified, otherwise NULL (default required designs without common control arm). Specifies whether futility assessed first control (TRUE) also subsequent control arms (FALSE) one arm superior first control becomes new control. highest_is_best single logical, specifies whether larger estimates outcome favourable ; defaults FALSE, corresponding , e.g., undesirable binary outcomes (e.g., mortality) continuous outcome lower numbers preferred (e.g., hospital length stay). soften_power either single numeric value numeric vector exactly length maximum number looks/adaptive analyses. Values must 0 1 (default); < 1, re-allocated non-fixed allocation probabilities raised power (followed rescaling sum 1) make adaptive allocation probabilities less extreme, turn used redistribute remaining probability respecting limits defined min_probs /max_probs. 1, softening applied. cri_width single numeric >= 0 < 1, width percentile-based credible intervals used summarising individual trial results. Defaults 0.95, corresponding 95% credible intervals. n_draws single integer, number draws posterior distributions arm used running trial. Defaults 5000; can reduced speed gain (potential loss stability results low) increased increased precision (increasing simulation time). Values < 100 allowed values < 1000 recommended warned . robust single logical, TRUE (default) medians median absolute deviations (scaled comparable standard deviation normal distributions; MAD_SDs, see stats::mad()) used summarise posterior distributions; FALSE, means standard deviations (SDs) used instead (slightly faster, may less appropriate posteriors skewed natural scale). description optional single character string describing trial design, used print functions NULL (default). 
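The equivalence- and futility-related arguments validated here are normally supplied through the setup_trial() family; a minimal, hypothetical sketch of a design using the equivalence arguments just described:

# Illustrative two-arm design with equivalence assessment against a common control
eq_trial <- setup_trial_binom(
  arms = c("A", "B"),
  true_ys = c(0.25, 0.25),
  data_looks = 1:5 * 200,
  control = "A",
  equivalence_prob = 0.9,        # probability threshold for equivalence
  equivalence_diff = 0.05,       # absolute difference considered equivalent
  equivalence_only_first = TRUE  # assess equivalence against the first control only
)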
add_info optional single string containing additional information regarding trial design specifications, used print functions NULL (default). fun_y_gen function, generates outcomes. See setup_trial() Details information specify function.Note: function called setup validate output (global random seed restored afterwards). fun_draws function, generates posterior draws. See setup_trial() Details information specify function.Note: function called three times setup validate output (global random seed restored afterwards). fun_raw_est function takes numeric vector returns single numeric value, used calculate raw summary estimate outcomes arm. Defaults mean(), always used setup_trial_binom() setup_trial_norm() functions.Note: function called one time per arm setup validate output structure.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/validate_trial.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Validate trial specification — validate_trial","text":"object class trial_spec containing validated trial specification.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/vapply_helpers.html","id":null,"dir":"Reference","previous_headings":"","what":"vapply helpers — vapply_helpers","title":"vapply helpers — vapply_helpers","text":"Used internally. Helpers simplifying code invoking vapply().","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/vapply_helpers.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"vapply helpers — vapply_helpers","text":"","code":"vapply_num(X, FUN, ...) vapply_int(X, FUN, ...) vapply_str(X, FUN, ...) vapply_lgl(X, FUN, ...)"},{"path":"https://inceptdk.github.io/adaptr/reference/vapply_helpers.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"vapply helpers — vapply_helpers","text":"X vector (atomic list) expression object. objects (including classed objects) coerced base::.list. FUN function applied element X: see ‘Details’. case functions like +, %*%, function name must backquoted quoted. ... optional arguments FUN.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":null,"dir":"Reference","previous_headings":"","what":"Verify input is single integer (potentially within range) — verify_int","title":"Verify input is single integer (potentially within range) — verify_int","text":"Used internally.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Verify input is single integer (potentially within range) — verify_int","text":"","code":"verify_int(x, min_value = -Inf, max_value = Inf, open = \"no\")"},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Verify input is single integer (potentially within range) — verify_int","text":"x value check. min_value, max_value single integers (), lower upper bounds x lie. open single character, determines whether min_value max_value excluded . 
Valid values: \"\" (= closed interval; min_value max_value included; default value), \"right\", \"left\", \"yes\" (= open interval, min_value max_value excluded).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/verify_int.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Verify input is single integer (potentially within range) — verify_int","text":"Single logical.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":null,"dir":"Reference","previous_headings":"","what":"Find the index of value nearest to a target value — which_nearest","title":"Find the index of value nearest to a target value — which_nearest","text":"Used internally, find index value vector nearest target value, possibly specific preferred direction.","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Find the index of value nearest to a target value — which_nearest","text":"","code":"which_nearest(values, target, dir)"},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Find the index of value nearest to a target value — which_nearest","text":"values numeric vector, values considered. target single numeric value, target find value closest . dir single numeric value. 0 (default), finds index value closest target, regardless direction. < 0 > 0, finds index value closest target, considers values /target, respectively, (otherwise returns closest value regardless direction).","code":""},{"path":"https://inceptdk.github.io/adaptr/reference/which_nearest.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Find the index of value nearest to a target value — which_nearest","text":"Single integer, index value closest target according dir.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-140","dir":"Changelog","previous_headings":"","what":"adaptr 1.4.0","title":"adaptr 1.4.0","text":"minor release implementing new functionality, including bug fixes, updates documentation, argument checking test coverage.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"new-features-and-major-changes-1-4-0","dir":"Changelog","previous_headings":"","what":"New features and major changes:","title":"adaptr 1.4.0","text":"Added rescale_probs argument setup_trial() family functions, allowing automatic rescaling fixed allocation probabilities minimum/maximum allocation probability limits arms dropped simulations trial designs >2 arms. extract_results() function now also returns errors simulation (addition squared errors) check_performance(), plot_convergence(), summary() functions (including print() methods) now calculate present median absolute errors (addition root mean squared errors). plot_metrics_ecdf() function now supports plotting errors (raw, squared, absolute), now takes necessary additional arguments passed extract_results() used arm selection simulated trials stopped superiority. Added update_saved_calibration() function update calibrated trial objects (including embedded trial specifications results) saved calibrate_trial() function using previous versions package. 
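As a rough base-R illustration of the behaviour documented above for the internal verify_int() and which_nearest() helpers (this mimics the described behaviour with dir = 0 and a simple closed-interval check; it is not the actual implementation):

# Closest-value lookup, ignoring direction (as with dir = 0)
values <- c(0.021, 0.034, 0.048)
target <- 0.05
which.min(abs(values - target))  # index of the value nearest the target

# A single-integer check in the spirit of verify_int(x, min_value = 1)
x <- 5L
is.numeric(x) && length(x) == 1 && x == round(x) && x >= 1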
Rewritten README ‘Overview’ vignette better reflect typical usage workflow.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"minor-changes-and-bug-fixes-1-4-0","dir":"Changelog","previous_headings":"","what":"Minor changes and bug fixes:","title":"adaptr 1.4.0","text":"setup_trial() family functions now stops error less two arms provided. setup_trial() family functions now stops error control_prob_fixed \"match\" fixed_probs provided common control arm. Improved error message true_ys-argument missing setup_trial_binom() true_ys- sds-argument missing setup_trial_norm(). Changed number rows used plot_convergence() plot_status() total number plots <= 3 nrow ncol NULL. Fixed bug extract_results() (thus functions relying ), causing arm selection inconclusive trial simulations error stopped practical equivalence simulated patients randomised included last analysis. Improved test coverage. Minor edits clarification package documentation. Added references two open access articles (code) simulation studies using adaptr assess performance adaptive clinical trials according different follow-/data collection lags (https://doi.org/10.1002/pst.2342) different sceptical priors (https://doi.org/10.1002/pst.2387)","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-132","dir":"Changelog","previous_headings":"","what":"adaptr 1.3.2","title":"adaptr 1.3.2","text":"CRAN release: 2023-08-21 patch release bug fixes documentation updates. Fixed bug check_performance() caused proportion conclusive trial simulations (prob_conclusive) calculated incorrectly restricted simulations ending superiority selected arm according selection strategy used restrict. bug also affected summary() method multiple simulations (relies check_performance()). Fixed bug plot_convergence() caused arm selection probabilities incorrectly calculated plotted (bug affect functions calculating summarising simulation results). Corrections plot_convergence() summary() method documentation arm selection probability extraction. Fixed inconsistency argument names documentation internal %f|% function (renamed arguments consistency internal %||% function).","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-131","dir":"Changelog","previous_headings":"","what":"adaptr 1.3.1","title":"adaptr 1.3.1","text":"CRAN release: 2023-05-02 patch release triggered CRAN request fix failing test also includes minor documentation updates. Fixed single test failed CRAN due update testthat dependency waldo. Fixed erroneous duplicated text README thus also GitHub package website. Minor edits/clarifications documentation including function documentation, README, vignettes.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-130","dir":"Changelog","previous_headings":"","what":"adaptr 1.3.0","title":"adaptr 1.3.0","text":"CRAN release: 2023-03-31 release implements new functionality (importantly trial calibration), improved parallelism, single important bug fix, multiple minor fixes, changes, improvements.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"new-features-and-major-changes-1-3-0","dir":"Changelog","previous_headings":"","what":"New features and major changes:","title":"adaptr 1.3.0","text":"Added calibrate_trial() function, can used calibrate trial specification obtain (approximately) desired value certain performance characteristic. 
Typically, used calibrate trial specifications control overall Bayesian type-1 error rates scenario -arm differences, function extensible may used calibrate trial specifications performance metrics. function uses quite efficient Gaussian process-based Bayesian optimisation algorithm, based part code Robert Gramacy (Surrogates chapter 5, see: https://bookdown.org/rbg/surrogates/chap5.html), permission. better parallelism. functions extract_results(), check_performance(), plot_convergence(), plot_history(), summary() print() methods trial_results objects may now run parallel via cores argument described . Please note, functions parallelised, already fast time took copy data clusters meant parallel versions functions actually slower original ones, even run results 10-100K simulations. setup_cluster() function added can now used setup use parallel cluster throughout session, avoiding overhead setting stopping new clusters time. default value cores argument functions now NULL; actual value supplied, always used initiate new, temporary cluster size, left NULL defaults defined setup_cluster() used (), otherwise \"mc.cores\" global option used (new, temporary clusters size) specified options(mc.cores = ), otherwise 1. Finally, adaptr now always uses parallel (forked) clusters default parallel works operating systems. Better (safer, correct) random number generation. Previously, random number generation managed ad-hoc fashion produce similar results sequentially parallel; influence minimal, package now uses \"L'Ecuyer-CMRG\" random number generator (see base::RNGkind()) appropriately manages random number streams across parallel workers, also run sequentially, ensuring identical results regardless use parallelism . Important: Due change, simulation results run_trials() bootstrapped uncertainty measures check_performance() identical generated previous versions package. addition, individual trial_result objects trial_results object returned run_trials() longer contain individual seed values, instead NULL. Added plot_metrics_ecdf() function, plots empirical cumulative distribution functions numerical performance metrics across multiple trial simulations. Added check_remaining_arms() function, summarises combinations remaining arms across multiple simulations.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"bug-fixes-1-3-0","dir":"Changelog","previous_headings":"","what":"Bug fixes:","title":"adaptr 1.3.0","text":"Fixed bug extract_results() (thus also functionality relying : check_performance(), plot_convergence(), summary() method multiple simulated trials) caused incorrect total event counts event rates calculated trial specification follow-/outcome-data lag (total event count last adaptive analysis incorrectly used, ratios divided total number patients randomised). fixed documentation relevant functions updated clarify behaviour . bug affect results simulations without follow-/outcome- data lag. Values inferiority must now less 1 / number arms common control group used, setup_trial() family functions now throws error case. Larger values invalid lead simultaneous dropping arms, caused run_trial() crash. 
print() method results check_performance() respect digits argument; fixed.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"minor-changes-1-3-0","dir":"Changelog","previous_headings":"","what":"Minor changes:","title":"adaptr 1.3.0","text":"Now includes min/max values summarising numerical performance metrics check_performance() summary(), may plotted using plot_convergence() well. setup_trial() functions now accepts equivalence_prob futility_prob thresholds 1. run_trial() stops drops arms equivalence/futility probabilities exceed current threshold, values 1 makes stopping impossible. values, however, may used sequence thresholds effectively prevent early stopping equivalence/futility allowing later. overwrite TRUE run_trials(), previous object overwritten, even previous object used different trial specification. Various minor updates, corrections, clarifications, structural changes package documentation (including package description website). Changed size linewidth examples plot_status() plot_history() describing arguments may passed ggplot2 due deprecation/change aesthetic names ggplot2 3.4.0. Documentation plot_convergence(), plot_status(), plot_history() now prints plots rendering documentation ggplot2 installed (include example plots website). setup_trial() functions longer prints message informing single best arm. Various minor changes print() methods (including changed number digits stopping rule probability thresholds). setup_trial() family functions now restores global random seed run outcome generator/draws generator functions called validation, involving random number generation. always documented, seems preferable restore global random seed trial setup functions validated. Always explicitly uses inherits = FALSE calls base::get(), base::exists(), base::assign() ensure .Random.seed checked/used/assigned global environment. unlikely ever cause errors done, serves extra safety.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-120","dir":"Changelog","previous_headings":"","what":"adaptr 1.2.0","title":"adaptr 1.2.0","text":"CRAN release: 2022-12-13 minor release implementing new functionality, updating documentation, fixing multiple minor issues, mostly validation supplied arguments.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"new-features-1-2-0","dir":"Changelog","previous_headings":"","what":"New features:","title":"adaptr 1.2.0","text":"Simulate follow-(data collection) lag: added option different numbers simulated patients outcome data available compared total number simulated patients randomised adaptive analysis (randomised_at_looks argument setup_trial() family functions). Defaults behaviour previously (.e., assuming outcome data immediately available following randomisation). consequence, run_trial() now always conducts final analysis last adaptive analysis (including final posterior ‘raw’ estimates), including outcome data patients randomised arms, regardless many outcome data available last conducted adaptive analysis. sets results saved printed individual simulations; extract_results(), summary() print() methods multiple simulations gained additional argument final_ests controls whether results final analysis last relevant adaptive analysis including arm used calculating performance metrics (defaults set ensure backwards compatibility otherwise use final estimates situations patients included final adaptive analysis). 
example added Basic examples vignette illustrating use argument. Updated plot_history() plot_status() add possibility plot different metrics according number patients randomised specified new randomised_at_looks argument setup_trial() functions described . Added update_saved_trials() function, ‘updates’ multiple trial simulation objects saved run_trials() using previous versions adaptr. reformats objects work updated functions. values can added previously saved simulation results without re-running; values replaced NAs, - used - may lead printing plotting missing values. However, function allows re-use data previous simulations without re-run (mostly relevant time-consuming simulations). Important: please notice objects (.e., objects returned setup_trial() family functions single simulations returned run_trial()) may create problems errors functions created previous versions package manually reloaded; objects updated re-running code using newest version package. Similarly, manually reloaded results run_trials() updated using function may cause errors/problems used. Added check_performance() function (corresponding print() method) calculates performance metrics can used calculate uncertainty measures using non-parametric bootstrapping. function now used internally summary() method multiple trial objects. Added plot_convergence() function plots performance metrics according number simulations conducted multiple simulated trials (possibly splitting simulations batches), used assess stability performance metrics. Added possibility define different probability thresholds different adaptive analyses setup_trials() family functions (inferiority, superiority, equivalence, futility probability thresholds), according updates run_trial() print() method trial specifications. Updated plot_status(); multiple arms may now simultaneously plotted specifying one valid arm NA (lead statuses arms plotted) arm argument. addition, arm name(s) now always included plots.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"documentation-bug-fixes-and-other-changes-1-2-0","dir":"Changelog","previous_headings":"","what":"Documentation, bug fixes, and other changes:","title":"adaptr 1.2.0","text":"Added reference open access article describing key methodological considerations adaptive clinical trials using adaptive stopping, arm dropping, randomisation package documentation (https://doi.org/10.1016/j.jclinepi.2022.11.002). proportion conclusive trials restricting trials summarised (extract_results()) may now calculated summary() method multiple trial simulations new check_performance() function, even measure may difficult interpret trials summarised restricted. Minor fixes, updates, added clarification documentation multiple places, including vignettes, also updated illustrate new functions added. Minor fix print() method individual trial results, correctly print additional information trials. Fixed bug number patients included used subsequent data_looks setup_trial() family functions; now produces error. Added internal vapply_lgl() helper function; internal vapply() helper functions now used consistently simplify code. Added multiple internal (non-exported) helper functions simplify code throughout package: stop0(), warning0(), %f|%, summarise_num(). Added names = FALSE argument quantile() calls summary() method trial_results objects avoid unnecessary naming components subsequently extracted returned object. 
Ideal design percentages may calculated NaN, Inf -Inf scenarios differences; now converted NA returned various functions. Minor edits/clarifications several errors/warnings/messages. Minor fix internal verify_int() function; supplied , e.g., character vector, execution stopped error instead returning FALSE, needed print proper error messages checks. Minor fix plot_status(), upper area (representing trials/arms still recruiting) sometimes erroneously plotted due floating point issue summed proportions sometimes slightly exceed 1. Added additional tests test increase coverage existing new functions. Minor fix internal reallocate_probs() function, \"match\"-ing control arm allocation highest probability non-control arm probabilities initially 0, returned vector lacked names, now added. Minor fixes internal validate_trial() function order : give error multiple values supplied control_prob_fixed argument; give correct error multiple values provided equivalence_diff futility_diff; give error NA supplied futility_only_first; add tolerance checks data_looks randomised_at_looks avoid errors due floating point imprecision specified using multiplication similar; correct errors decimal numbers patient count arguments supplied; additional minor updates errors/messages.","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-111","dir":"Changelog","previous_headings":"","what":"adaptr 1.1.1","title":"adaptr 1.1.1","text":"CRAN release: 2022-08-16 patch release triggered CRAN request updates. Minor formatting changes adaptr-package help page comply CRAN request use HTML5 (used R >=4.2.0). Minor bug fixes print() methods trial specifications summaries multiple trial results. Minor updates messages setup_trial().","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-110","dir":"Changelog","previous_headings":"","what":"adaptr 1.1.0","title":"adaptr 1.1.0","text":"CRAN release: 2022-06-17 Minor release: Updates run_trials() function allow exporting objects clusters running simulations multiple cores. Updates internal function verify_int() due updates R >= 4.2.0, avoid incorrect error messages future versions due changed behaviour && function used arguments length > 1 (https://stat.ethz.ch/pipermail/r-announce/2022/000683.html). Minor documentation edits updated citation info (reference software paper published Journal Open Source Software, https://doi.org/10.21105/joss.04284).","code":""},{"path":"https://inceptdk.github.io/adaptr/news/index.html","id":"adaptr-100","dir":"Changelog","previous_headings":"","what":"adaptr 1.0.0","title":"adaptr 1.0.0","text":"CRAN release: 2022-03-15 First release.","code":""}]