As this is an exploratory analysis, we did not have a previous estimate of the effect size of the correlation between the BAT and the questionnaires, especially for individual differences in attachment style. The method we use here is a Bayesian Sequential Analysis (BSA). It relies on an optional stopping rule: the required sample size depends on the results obtained so far, so there is no need to fix a sample size beforehand, because sampling stops once the Bayes factor crosses an upper or lower threshold (for more, see [Analyses][1]). However, [Schönbrodt and Perugini (2013)][2] found that correlation estimates only stabilize at around 161 participants, so testing at least 161 participants is a reasonable lower bound.

Finally, to assess whether 161 participants would be enough for the BSA to provide evidence for the null or the alternative hypothesis, we conducted an a priori Bayesian power analysis (cf. [script by Daniel Lakens][3]). A Bayesian power analysis approximates the required sample size given a true effect size, the prior effect size under the alternative hypothesis, and the Bayes factor (BF) threshold we want to use. Here we use a prior effect size of 0.5 because it is the default value of the regressionBF function used for the BSA. We hypothesize that the true effect size is 0.2 (meaning that the alternative hypothesis is true). This is a small effect size, which yields a conservative approximation of the sample we need. We want to use a BF threshold of 10; if possible, 20 would support more robust conclusions, while a threshold of 5 would be less conservative but would yield more false negatives. The analysis is based on 100,000 simulations of the experiment. For a given sample size, the program counts how many simulated experiments yield a BF in favour of the null hypothesis and how many yield a BF in favour of the alternative hypothesis (see the program [here][4]).
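The simulation logic can be sketched as follows. This is not the linked R script: it is a minimal Python analogue in which the BIC approximation to the Bayes factor (Wagenmakers, 2007) stands in for the JZS Bayes factor computed by regressionBF, and the function names, reduced simulation count, and seed are illustrative assumptions.

```python
import math
import random

def bf10_bic(r, n):
    """Approximate BF10 for a correlation r at sample size n via the BIC
    approximation (Wagenmakers, 2007) -- a stand-in for the JZS Bayes
    factor used by regressionBF, not the same prior."""
    delta_bic = n * math.log(1 - r * r) + math.log(n)  # BIC(H1) - BIC(H0)
    return math.exp(-delta_bic / 2)

def simulate_r(rho, n, rng):
    """Draw n bivariate-normal pairs with true correlation rho and return
    the sample Pearson correlation."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        xs.append(x)
        ys.append(rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1))
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def bayesian_power(rho, n, threshold=10, n_sims=2000, seed=1):
    """Proportion of simulated experiments whose BF10 exceeds `threshold`
    (evidence for H1) when the true correlation is rho.  The wiki's
    analysis used 100,000 simulations; 2,000 keeps this sketch fast."""
    rng = random.Random(seed)
    hits = sum(bf10_bic(simulate_r(rho, n, rng), n) > threshold
               for _ in range(n_sims))
    return hits / n_sims

# True effect 0.2, n = 161, BF threshold 10 -- the scenario in the text.
print(bayesian_power(rho=0.2, n=161))
```

Because the prior differs from the JZS default, the proportion printed here will not exactly match the 12% reported below; the point is the counting logic, not the number.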
This Bayesian power analysis rests on many assumptions, but it provides an estimate of whether 161 participants will be sufficient to support the alternative hypothesis (given that it is true). It indicates that with a sample size of 161, the BF analysis provides evidence for the alternative hypothesis (when it is true) in only 12% of cases. To increase this percentage, we can lower the BF threshold: with a threshold of 5 and a sample size of 161, the analysis supports the alternative hypothesis in 18% of cases, but in 10% of cases it supports the null hypothesis despite the alternative hypothesis being true. Therefore, to avoid false negatives, we will use 10 as the BF threshold. We can also increase the chance of finding evidence for the alternative hypothesis by increasing the sample size: 250 participants would support the alternative hypothesis in 22% of cases. Overall, with a target sample size of 250 participants, the BF will exceed 10 in almost one fifth of cases. This probability is small, but the power analysis assumed that H1 was true, that its effect size was 0.2, and that the BF threshold was 10 (a demanding threshold). Considering that 250 students is already a large sample for one university, we will attempt to recruit up to 250 participants at the Université de Grenoble Alpes, keeping in mind that with each new participant the BSA can provide evidence for one hypothesis or the other as soon as one of the two BF thresholds is crossed (see [the BSA][5]). We only consider the data of participants who completed the whole procedure. This is a convenience sample, meaning that the external validity of our experiment is limited.

[1]: https://osf.io
[2]: https://www.sciencedirect.com/science/article/abs/pii/S0092656613000858
[3]: http://daniellakens.blogspot.com/2016/01/power-analysis-for-default-bayesian-t.html
[4]: https://osf.io/7bpm9/
[5]: https://osf.io/n7ytq/wiki/home/
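The optional stopping rule described above can be sketched in the same spirit. This is an illustrative Python sketch, not our analysis code: the BIC approximation (Wagenmakers, 2007) again stands in for the JZS Bayes factor of regressionBF, and the minimum of 20 participants before the first BF check is an assumption made for the example.

```python
import math
import random

def bf10_bic(r, n):
    """BIC approximation to BF10 for a correlation (Wagenmakers, 2007);
    a stand-in for the JZS Bayes factor, used here for illustration."""
    return math.exp(-(n * math.log(1 - r * r) + math.log(n)) / 2)

def pearson_r(xs, ys):
    """Sample Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def sequential_bsa(rho, min_n=20, max_n=250, threshold=10, seed=4):
    """Optional-stopping run: add one simulated participant at a time,
    recompute BF10, and stop when BF10 >= threshold (evidence for H1),
    BF10 <= 1/threshold (evidence for H0), or max_n is reached.
    min_n = 20 is an illustrative warm-up before the first BF check."""
    rng = random.Random(seed)
    xs, ys = [], []
    for n in range(1, max_n + 1):
        x = rng.gauss(0, 1)
        xs.append(x)
        ys.append(rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1))
        if n < min_n:
            continue
        bf = bf10_bic(pearson_r(xs, ys), n)
        if bf >= threshold:
            return n, bf, "H1"
        if bf <= 1 / threshold:
            return n, bf, "H0"
    return max_n, bf, "inconclusive"

n, bf, verdict = sequential_bsa(rho=0.2)
print(n, round(bf, 2), verdict)
```

The cap at `max_n=250` mirrors the recruitment ceiling: if neither threshold is crossed by then, the run ends inconclusively, which is exactly the risk the power analysis above quantifies.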