misc updates
spiromar committed Feb 18, 2025
1 parent c2c2263 commit 0718081
Showing 5 changed files with 39 additions and 28 deletions.
5 changes: 3 additions & 2 deletions create-wwc-dbs-13Sep2023.Rmd
@@ -3,7 +3,7 @@ title: "create-wwc-dbs-13Sep2023"
output:
html_document:
number_sections: true
date: "2024-07-16"
date: "2025-01-20"
---

```{r, include=FALSE}
@@ -190,6 +190,7 @@ summary(wwc$SE)

# Use pkonfound to calculate sensitivity measures
These calculations will be the ones used as the WWC benchmark distribution by R Shiny App.

## Create additional needed variables

```{r}
@@ -208,7 +209,7 @@ wwc <- wwc %>% mutate(se.logodds = SE * 1.65)
```

## Pkonfound Calculations (All Findings)
Loop through all findings, ise pkonfound to calculate sensitivity measures for each finding.
Loop through all findings, using pkonfound to calculate sensitivity measures for each finding.
Note: Ignore specialized dichotomous settings for now, and just treat as if every effect size is continuous. Will come back to dichotomous only below.

```{r}
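The hunk above truncates the loop body in the Rmd. As a hedged sketch (the argument values below are illustrative, not taken from the repository), a single continuous-outcome call to pkonfound from the konfound package looks like this; the diff's own comment notes that pkonfound derives degrees of freedom as df = n - ncovariates - 2:

```r
library(konfound)

# One continuous finding: effect size, its standard error, sample size,
# and number of covariates. Values here are made up for illustration.
pkonfound(est_eff = 0.30, std_err = 0.10, n_obs = 200, n_covariates = 3)
```

Looping a call like this over the rows of wwc and binding the per-finding results row-wise would yield the results data frame glimpsed later in the diff.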
43 changes: 25 additions & 18 deletions create-wwc-dbs-13Sep2023.html
@@ -10,7 +10,7 @@



<meta name="date" content="2024-07-16" />
<meta name="date" content="2025-01-20" />

<title>create-wwc-dbs-13Sep2023</title>

@@ -350,7 +350,7 @@


<h1 class="title toc-ignore">create-wwc-dbs-13Sep2023</h1>
<h4 class="date">2024-07-16</h4>
<h4 class="date">2025-01-20</h4>

</div>

@@ -397,6 +397,7 @@ <h1><span class="header-section-number">1</span> Overview and
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (&lt;http://conflicted.r-lib.org/&gt;) to force all conflicts to become errors</code></pre>
<pre class="r"><code>library(konfound)</code></pre>
<pre><code>## Warning: package &#39;konfound&#39; was built under R version 4.3.3</code></pre>
<pre><code>## Warning in check_dep_version(): ABI version mismatch:
## lme4 was built with Matrix ABI version 1
## Current Matrix ABI version is 0
@@ -411,7 +412,7 @@ <h1><span class="header-section-number">1</span> Overview and
#setwd(&quot;/Users/smarouli/Dropbox (ASU)/causal inference/IES/What Works Clearinghouse Project/benchmarks-db&quot;)

packageVersion(&quot;konfound&quot;)</code></pre>
<pre><code>## [1] &#39;1.0.0&#39;</code></pre>
<pre><code>## [1] &#39;1.0.2&#39;</code></pre>
</div>
<div id="import-and-clean-up-wwc-data" class="section level1" number="2">
<h1><span class="header-section-number">2</span> Import and clean up WWC
@@ -691,7 +692,10 @@ <h1><span class="header-section-number">4</span> Calculate values needed
<h1><span class="header-section-number">5</span> Use pkonfound to
calculate sensitivity measures</h1>
<p>These calculations will be the ones used as the WWC benchmark
distribution by R Shiny App. ## Create additional needed variables</p>
distribution by R Shiny App.</p>
<div id="create-additional-needed-variables" class="section level2" number="5.1">
<h2><span class="header-section-number">5.1</span> Create additional
needed variables</h2>
<pre class="r"><code># calculate the number of covariates value to enter into pkonfound.
# pkonfound uses that to calculate degrees of freedom as follows:
# df = n - ncovariates - 2
@@ -704,10 +708,11 @@ <h1><span class="header-section-number">5</span> Use pkonfound to
# calculate for all rows, but will ignore for continuous outcomes
wwc &lt;- wwc %&gt;% mutate(g.logodds = f_Effect_Size_WWC * 1.65)
wwc &lt;- wwc %&gt;% mutate(se.logodds = SE * 1.65) </code></pre>
<div id="pkonfound-calculations-all-findings" class="section level2" number="5.1">
<h2><span class="header-section-number">5.1</span> Pkonfound
</div>
<div id="pkonfound-calculations-all-findings" class="section level2" number="5.2">
<h2><span class="header-section-number">5.2</span> Pkonfound
Calculations (All Findings)</h2>
<p>Loop through all findings, ise pkonfound to calculate sensitivity
<p>Loop through all findings, using pkonfound to calculate sensitivity
measures for each finding. Note: Ignore specialized dichotomous settings
for now, and just treat as if every effect size is continuous. Will come
back to dichotomous only below.</p>
@@ -771,8 +776,8 @@ <h2><span class="header-section-number">5.1</span> Pkonfound
## $ RIR_perc.g.ols &lt;dbl&gt; 38.873676, 80.670530, 92.971200, 96.485606, 42.5…
## $ f_FindingID &lt;dbl&gt; 1650, 1651, 1652, 1653, 1654, 1655, 1656, 1657, …</code></pre>
</div>
<div id="pkonfound-calculations-dichotomous-only" class="section level2" number="5.2">
<h2><span class="header-section-number">5.2</span> Pkonfound
<div id="pkonfound-calculations-dichotomous-only" class="section level2" number="5.3">
<h2><span class="header-section-number">5.3</span> Pkonfound
Calculations (Dichotomous Only)</h2>
<p>Using pkonfound again, except this time separating out the
dichotomous outcomes and using the log odds transformation of hedges g
@@ -870,29 +875,30 @@ <h2><span class="header-section-number">5.2</span> Pkonfound
)
glimpse(b_lo)</code></pre>
<pre><code>## Rows: 1,105
## Columns: 8
## Columns: 9
## $ RIR_primary.lo &lt;dbl&gt; 7, 10, 7, 5, 41, 22, 47, 47, 49, 32, 51, 51,…
## $ RIR_perc.lo &lt;dbl&gt; 25.000000, 66.666667, 33.333333, 20.833333, …
## $ fragility_primary.lo &lt;dbl&gt; 4, 4, 3, 3, 20, 10, 23, 23, 24, 15, 25, 25, …
## $ user_SE.lo &lt;dbl&gt; 0.46312738, 0.46616881, 0.34918993, 0.403632…
## $ analysis_SE.lo &lt;dbl&gt; 0.6006234, 0.5685273, 0.4535775, 0.5167852, …
## $ needtworows &lt;lgl&gt; FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FA…
## $ f_FindingID &lt;dbl&gt; 1292, 1293, 1324, 1334, 42, 43, 44, 45, 46, …
## $ RIR_supplemental.lo &lt;dbl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ fragility_supplemental.lo &lt;dbl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …</code></pre>
</div>
<div id="merge-pkonfound-results-from-both-loops" class="section level2" number="5.3">
<h2><span class="header-section-number">5.3</span> Merge pkonfound
<div id="merge-pkonfound-results-from-both-loops" class="section level2" number="5.4">
<h2><span class="header-section-number">5.4</span> Merge pkonfound
results from both loops</h2>
<pre class="r"><code>ncol(b); ncol(b_lo)</code></pre>
<pre><code>## [1] 16</code></pre>
<pre><code>## [1] 8</code></pre>
<pre><code>## [1] 9</code></pre>
<pre class="r"><code>nrow(b); nrow(b_lo)</code></pre>
<pre><code>## [1] 8642</code></pre>
<pre><code>## [1] 1105</code></pre>
<pre class="r"><code>b_merged &lt;- left_join(b, b_lo, by = &quot;f_FindingID&quot;)
glimpse(b_merged)</code></pre>
<pre><code>## Rows: 8,642
## Columns: 23
## Columns: 24
## $ obs_r &lt;dbl&gt; 0.428888105, -0.056008028, -0.020394047, 0.0…
## $ act_r &lt;dbl&gt; 0.428888105, -0.056008028, -0.020394047, 0.0…
## $ critical_r &lt;dbl&gt; 0.27871059, -0.27871059, -0.27871059, 0.2787…
@@ -914,6 +920,7 @@ <h2><span class="header-section-number">5.3</span> Merge pkonfound
## $ fragility_primary.lo &lt;dbl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ user_SE.lo &lt;dbl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ analysis_SE.lo &lt;dbl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ needtworows &lt;lgl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ RIR_supplemental.lo &lt;dbl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ fragility_supplemental.lo &lt;dbl&gt; NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …</code></pre>
</div>
@@ -1009,21 +1016,21 @@ <h2><span class="header-section-number">7.1</span> Check for duplicates
## # i_Delivery_Method_School &lt;dbl&gt;, i_Delivery_Method_Small_Group &lt;dbl&gt;,
## # i_Delivery_Method_Whole_Class &lt;dbl&gt;, i_Demographics_Sample_ELL &lt;dbl&gt;, …</code></pre>
<pre class="r"><code>b_merged %&gt;% group_by(f_FindingID) %&gt;% filter(n() &gt; 1)</code></pre>
<pre><code>## # A tibble: 0 × 23
<pre><code>## # A tibble: 0 × 24
## # Groups: f_FindingID [0]
## # ℹ 23 variables: obs_r &lt;dbl&gt;, act_r &lt;dbl&gt;, critical_r &lt;dbl&gt;, r_final &lt;dbl&gt;,
## # ℹ 24 variables: obs_r &lt;dbl&gt;, act_r &lt;dbl&gt;, critical_r &lt;dbl&gt;, r_final &lt;dbl&gt;,
## # rxcv &lt;dbl&gt;, rycv &lt;dbl&gt;, rxcvGz &lt;dbl&gt;, rycvGz &lt;dbl&gt;, itcvGz.g.ols &lt;dbl&gt;,
## # itcv.g.ols &lt;dbl&gt;, beta_threshold &lt;dbl&gt;, beta_threshold_verify &lt;dbl&gt;,
## # pctbias.g.ols &lt;dbl&gt;, RIR_primary.g.ols &lt;dbl&gt;, RIR_perc.g.ols &lt;dbl&gt;,
## # f_FindingID &lt;dbl&gt;, RIR_primary.lo &lt;dbl&gt;, RIR_perc.lo &lt;dbl&gt;,
## # fragility_primary.lo &lt;dbl&gt;, user_SE.lo &lt;dbl&gt;, analysis_SE.lo &lt;dbl&gt;,
## # RIR_supplemental.lo &lt;dbl&gt;, fragility_supplemental.lo &lt;dbl&gt;</code></pre>
## # needtworows &lt;lgl&gt;, RIR_supplemental.lo &lt;dbl&gt;, …</code></pre>
</div>
<div id="merge-datasets" class="section level2" number="7.2">
<h2><span class="header-section-number">7.2</span> Merge datasets</h2>
<pre class="r"><code>ncol(wwc); ncol(b_merged)</code></pre>
<pre><code>## [1] 307</code></pre>
<pre><code>## [1] 23</code></pre>
<pre><code>## [1] 24</code></pre>
<pre class="r"><code>nrow(wwc); nrow(b_merged)</code></pre>
<pre><code>## [1] 8642</code></pre>
<pre><code>## [1] 8642</code></pre>
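The merge step shown above pairs each of the 8,642 findings in b with its dichotomous-only counterpart in b_lo, leaving NA in the .lo columns for findings that have no dichotomous row. A minimal hedged sketch of that left_join behavior on toy data (the toy values are invented; only the key name mirrors the diff):

```r
library(dplyr)

b_toy    <- tibble(f_FindingID = c(1, 2, 3), RIR_primary = c(10, 20, 30))
b_lo_toy <- tibble(f_FindingID = 2, RIR_primary.lo = 5)

# Findings absent from b_lo_toy get NA in the .lo column, matching the
# runs of NA visible in the glimpse(b_merged) output in the diff.
left_join(b_toy, b_lo_toy, by = "f_FindingID")
```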
19 changes: 11 additions & 8 deletions ui.R
@@ -284,17 +284,20 @@ shinyUI(
alt = "Konfound R package logo"),
"Sensitivity Analysis Benchmarks")),

h3("Data from What Works Clearinghouse"),
tags$p("Explore sensitivity analyses calculated for the",
h3("Robustness of Findings in What Works Clearinghouse"),
tags$p("This page contains sensitivity analysis measures calculated for findings the ",
tags$a(href="https://ies.ed.gov/ncee/wwc/", "What Works Clearinghouse"),
"through an interactive web app."),
# tags$p(tags$a(href="", "Read more here for details about compiling the database.")),
" has rated as meeting its standards for a strong and well-executed research design.
These values can be used to create tailored reference distributions that help locate
the robustness of your finding in a distribution of other similar and well-designed educational studies."),
tags$p("For more information on using these benchmarks, please see ", tags$a(href="", "Practice Guide.")),
tags$p("For details on the calculations of the benchmark values, please see ", tags$a(href="", "here.")),
tags$p(actionButton("visit_website_button",
icon = icon("globe", lib = "font-awesome"),
label = "KonFound-It website",
onclick = "window.open('https://konfound-it.org/', '_blank')")
),
tags$p(tags$i(paste("Powered by version", packageVersion('konfound'), "of the konfound R package."))),
# tags$p(tags$i(paste("Powered by version", packageVersion('konfound'), "of the konfound R package."))),



@@ -348,11 +351,11 @@ shinyUI(
selected = "All"
),
selectInput("selectedVariable", "Choose a Sensitivity Measure:",
choices = c("Robustness of Inferences to Replacement (RIR)" = "RIR_primary",
"RIR as a percentage of Sample Size" = "RIR_percent",
choices = c("RIR as a percentage of Sample Size" = "RIR_percent",
"Robustness of Inferences to Replacement (RIR)" = "RIR_primary",
"Fragility (dichotomous only)" = "fragility_primary.lo",
"Unselected"),
selected = "Unselected"
selected = "RIR_percent"
)
),

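Besides reordering the choices, the ui.R hunk changes the default from "Unselected" to "RIR_percent", so the benchmark distribution presumably renders as soon as the page loads rather than waiting for a user selection. A standalone sketch of the reordered control (labels and values copied from the diff):

```r
library(shiny)

# Sensitivity-measure selector with RIR-as-percentage preselected.
selectInput("selectedVariable", "Choose a Sensitivity Measure:",
            choices = c("RIR as a percentage of Sample Size" = "RIR_percent",
                        "Robustness of Inferences to Replacement (RIR)" = "RIR_primary",
                        "Fragility (dichotomous only)" = "fragility_primary.lo"),
            selected = "RIR_percent")
```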
Binary file modified wwc-merged-13Sep2023.RDS
Binary file not shown.
Binary file modified wwc-shiny-13Sep2023.RDS
Binary file not shown.
