Commit 9997f6f: zenodo link

ontogenerator committed Mar 25, 2020
1 parent 8732520 commit 9997f6f
Showing 2 changed files with 2 additions and 3 deletions.
analysis/TDRE.Rmd: 5 changes (2 additions & 3 deletions)
@@ -842,12 +842,11 @@ mean_perc_rel <- mean_perc_pokes %>%
```

On average (mean $\pm$ SD), mice made `r paste(mean_tot_pokes$mean, "±", mean_tot_pokes$sd)` nose pokes per drinking session (Fig. \@ref(fig:npokes)), with an average proportion of `r paste(mean_perc_rel$mean, "±", mean_perc_rel$sd)` nose pokes at the rewarding dispensers. In order to focus on post-acquisition performance [@rivalan_principles_2017], we excluded the first 150 nose pokes at the rewarding dispensers. We then calculated the *discrimination performance* for each mouse and each condition of each experiment. Since each condition was repeated twice (first exposure and reversal), the discrimination performance was calculated as the total number of nose pokes at the high-profitability dispenser divided by the total number of nose pokes at the high- and low-profitability dispensers combined. Nose pokes at the non-rewarding dispensers were ignored. In the conditions in which the profitability was equal, the dispenser with the higher reward volume was treated as the "high-profitability" dispenser.
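
A minimal sketch of this calculation, assuming a long-format data frame with one nose-poke count per mouse, condition, and dispenser (the object `pokes` and its column names are hypothetical stand-ins, not the actual variables in TDRE.Rmd):

```r
library(dplyr)

# Hypothetical example data: counts pooled over first exposure and
# reversal, after the first 150 rewarded nose pokes have been dropped
pokes <- tibble::tribble(
  ~mouse, ~condition, ~dispenser, ~n_pokes,
  "m1",   "cond1",    "high",     320,
  "m1",   "cond1",    "low",      180,
  "m1",   "cond1",    "nonrew",    40
)

discrimination <- pokes %>%
  filter(dispenser %in% c("high", "low")) %>%   # non-rewarding pokes ignored
  group_by(mouse, condition) %>%
  summarise(disc_perf = sum(n_pokes[dispenser == "high"]) / sum(n_pokes),
            .groups = "drop")                   # high / (high + low)
```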
When comparing discrimination performances, we used the two one-sided tests (TOST) procedure for equivalence testing [@lauzon_easy_2009; @lakens_equivalence_2017]. First, we picked the smallest effect size of interest (sesoi) *a priori* as a difference in discrimination performance of 0.1 units in either direction. (The sesoi can be represented graphically as the [-0.1, 0.1] interval around the difference of zero, or as [0.4, 0.6] around the chance performance of 0.5.) Then, we estimated the mean differences and their confidence intervals (CIs) from 1000 non-parametric bootstraps using the `smean.cl.boot` function in the package `Hmisc` [@harrell_r_2019]. For a single equivalence test the 90% CI is usually constructed, i.e. $1 - 2\alpha$ with $\alpha = 0.05$, because both the upper and the lower confidence bounds are tested against the sesoi [@lauzon_easy_2009; @lakens_equivalence_2017]. Thus, equivalence was statistically supported if the 90% CI was completely bounded by the sesoi interval around the effect size of zero (the null hypothesis). A difference was considered statistically supported if the 95% CI did not contain zero and the 90% CI was not completely bounded by the sesoi interval. If the 95% CI contained zero, but the 90% CI was not completely bounded by the sesoi, the results were inconclusive. Researchers have shown that, in order to correct for multiple comparisons in equivalence tests, it suffices to apply a familywise correction of $\alpha$ only in the problematic cases where a type I error is most likely [@davidson_more_2019], i.e. when equivalence is supported but the mean difference is close to the sesoi bound. The families of tests for which multiple comparisons occur in our study are the four contrasts in each of experiments 1, 2, and 4 (three families), the tests on the two slopes in experiment 3, and the six before-after contrasts between experiments 1 and 4. For each of these five families, $\alpha$ was divided by $k^2/4$, where $k$ was the number of problematic cases in that family [@caffo_correction_2013]. However, the number of problematic cases did not exceed two in any test family, which left the corrected $\alpha$ equal to the original value of 0.05. Furthermore, even with $k$ equal to four, two, and six (the total number of tests in each test family), only a single result changed from non-equivalent to inconclusive. We therefore report the uncorrected 90% and 95% CIs.
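
This decision rule can be sketched as follows (an illustration, not the code used in the analysis; `smean.cl.boot` is the `Hmisc` function cited above, and the per-mouse differences passed in at the end are simulated):

```r
library(Hmisc)

classify_tost <- function(diffs, sesoi = 0.1, B = 1000) {
  ci90 <- smean.cl.boot(diffs, conf.int = 0.90, B = B)  # bounds tested against the sesoi
  ci95 <- smean.cl.boot(diffs, conf.int = 0.95, B = B)  # bounds tested against zero
  if (ci90[["Lower"]] > -sesoi && ci90[["Upper"]] < sesoi) {
    "equivalence supported"   # 90% CI completely inside [-sesoi, sesoi]
  } else if (ci95[["Lower"]] > 0 || ci95[["Upper"]] < 0) {
    "difference supported"    # 95% CI excludes zero, 90% CI not inside the sesoi
  } else {
    "inconclusive"            # 95% CI contains zero, 90% CI not inside the sesoi
  }
}

set.seed(1)
classify_tost(rnorm(24, mean = 0.02, sd = 0.08))  # simulated differences, illustration only
```

For instance, with $k = 2$ the divisor is $k^2/4 = 1$, which is why the corrected $\alpha$ is unchanged at 0.05.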
Data analysis and simulations were done using R [@r_development_core_team_r:_2019]. All data and code are available in the Zenodo repository: https://doi.org/10.5281/zenodo.3726686.

## Simulations

### Environment
Each of the experimental conditions was recreated in the simulations as a binary choice task between the high-profitability and the low-profitability options. We did not simulate the two non-rewarding options. Upon a visit by a virtual mouse, a choice option would deliver a reward with its corresponding volume and probability (Table \@ref(tab:conds)). The virtual environment was neither spatially nor temporally explicit. Thus, no reversal conditions were simulated, and the test of each experimental condition consisted of a sequence of 100 choices. All experimental conditions in all four experiments were tested.
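
A minimal sketch of one such simulated condition, with placeholder volumes and probabilities standing in for the per-condition values of Table \@ref(tab:conds):

```r
# Placeholder reward volumes and probabilities for one condition;
# the real per-condition values are those in Table \@ref(tab:conds)
vol  <- c(high = 40, low = 20)
prob <- c(high = 1.0, low = 0.5)

# An option pays its volume with its probability, otherwise nothing;
# the two non-rewarding options are not simulated
deliver <- function(choice) {
  if (runif(1) < prob[[choice]]) vol[[choice]] else 0
}

# The test of one condition: a sequence of 100 choices
# (random choices here, purely for illustration)
set.seed(1)
choices <- sample(c("high", "low"), 100, replace = TRUE)
rewards <- vapply(choices, deliver, numeric(1))
```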

Binary file modified analysis/TDRE.pdf
Binary file not shown.
