Variability of responses affects the power of a design, and capturing the many sources of random variation in a mixed-effects model is not an easy task. Mixed-effects models have no reasonable analytic solution for estimating the probability that a test correctly rejects the null hypothesis: the analytical formulas used in classical power procedures are not flexible enough to account for all sources of variation. Simulation-based power analyses are a good alternative, and power can be calculated as the proportion of significant simulations out of all simulations (Johnson et al., 2015).

The power of Study 1, which we aim to replicate, was 67.20%. To estimate power a priori for the current project, we used the effect size from Study 1 (IJzerman et al., 2012) decreased by 15%. This allows us to avoid two common problems related to observed power: (1) using pilot studies that are designed to be underpowered (we use a regular study instead), and (2) using effect sizes from published studies, which risks capturing only significant effects and thus inflating the estimate of the population effect size (due to publication bias).

Since the original model in Study 1 was built with the *lme4* package, we used the *simr* package to explore trade-offs between sample size and power (Green & MacLeod, 2016). To estimate power, we relied on the original data but set the effect size 15% lower, as recommended by Kumle et al. (2021). We used *simr*'s `powerSim` function to recalculate power for increasing numbers of participants and its `powerCurve` function to create the power curve graph (a minimal sketch of this workflow appears at the end of this section). Finally, from the obtained calculations, we chose an adequate sample size for the desired power (over 95%). Using this lower effect size and entering sample sizes of 40, 70, 90, 95, 100, and 110, we found that N = 95 provides ~96% power, which will thus be the required sample size per "cluster".

By "cluster" we mean climatically comparable countries, i.e., countries at comparable distances from the equator:

- Cluster 1: Croatia (5,022.31 km from the equator), Serbia (4,983.64 km), Bosnia (4,920.38 km), France (5,114.97 km), and Turkey (4,336.61 km)
- Cluster 2: Singapore (151.97 km) and Nigeria (1,111.95 km)
- Cluster 3: Poland (5,782.14 km)

Our clustering may change after In Principle Acceptance, but before data collection, if we can recruit more labs. As the original, single-lab study had 41 participants in a between-groups design (20 and 21 participants in the inclusion and exclusion conditions, respectively), we will ask labs to overshoot that number per site (N = 50 per site). Since 95 participants with the 15% lower effect size yield 96% power, we aim for 95 participants per "cluster".

- [Simulation script][1]
- [Observed power under different scenarios][2]
- [Power curve][3]
- Recall that we used [the original data][4] for this simulation.

Kumle, L., Võ, M. L.-H., & Draschkow, D. (2021). Estimating power in (generalized) linear mixed models: An open introduction and tutorial in R. *Behavior Research Methods, 53*, 2528–2543. https://doi.org/10.3758/s13428-021-01546-0

[1]: https://osf.io/ehw79
[2]: https://osf.io/exnq5
[3]: https://osf.io/m596b
[4]: https://osf.io/7s3tb
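For readers who want to adapt this procedure, the following is a minimal sketch of the *simr* workflow described above. It is not our actual simulation script (see the [simulation script][1] linked above): the file name, model formula, and variable names (`temperature`, `condition`, `participant`) are hypothetical placeholders, and the coefficient name passed to `fixef()` depends on the fitted model. Only the 15% effect-size reduction, the candidate sample sizes, and the use of `powerSim` and `powerCurve` follow the procedure described here.

```r
library(lme4)   # mixed-effects model fitting (as in the original Study 1 model)
library(simr)   # simulation-based power analysis

set.seed(42)  # make the Monte Carlo simulations reproducible

# Placeholder: refit a model of the original Study 1 data; assumes
# repeated temperature estimates nested within participants.
dat <- read.csv("study1_data.csv")
model <- lmer(temperature ~ condition + (1 | participant), data = dat)

# Reduce the fixed effect of interest by 15% (Kumle et al., 2021) to
# protect against publication-bias-inflated effect sizes.
# "conditionexclusion" stands in for whatever name the coefficient has.
fixef(model)["conditionexclusion"] <- 0.85 * fixef(model)["conditionexclusion"]

# Extend the design to the largest candidate sample size.
model_ext <- extend(model, along = "participant", n = 110)

# Power at a single sample size: the proportion of nsim simulated data
# sets in which a likelihood-ratio test of the condition term is significant.
powerSim(model_ext, test = fixed("condition", "lr"), nsim = 1000)

# Power across all candidate sample sizes, then plot the power curve.
pc <- powerCurve(model_ext, test = fixed("condition", "lr"),
                 along = "participant",
                 breaks = c(40, 70, 90, 95, 100, 110), nsim = 1000)
plot(pc)
```

Note that the breaks passed to `powerCurve` must not exceed the `n` given to `extend`, and that a larger `nsim` narrows the confidence interval around each power estimate at the cost of longer run time.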