**Analyses**

**Analysis Strategy**

Following the workflow of IJzerman et al. (2020), we used a multilevel random-effects meta-analytic model estimated with restricted maximum likelihood in the R package metafor, version 2.0 (Viechtbauer, 2010). To deal with dependencies in the data, we applied robust sandwich-type variance estimation (RVE) to the estimated variance-covariance matrix of within-study effect sizes (Hedges, Tipton, & Johnson, 2010; see the code sketch at the end of this subsection). We did this to address the violation of the assumption of independent residual errors within clusters, which arises because in some studies several effects were estimated on the same sample. Further, we included all the important dependent variables in our models (even if some of them came from a single study) and statistically accounted for clustering effects in the data. We coded whether the contrasts were "focal" from the perspective of the authors (if the coded effect was mentioned in the abstract, we coded it as focal). Only one focal outcome per study was chosen at random for the *p*-curve analysis and the 4-parameter selection model, and this procedure was repeated iteratively. Heterogeneity was estimated for all effects using τ (the SD of the distribution of true effects) and I² (the proportion of total variation in study estimates due to heterogeneity). We did not interpret estimates based on fewer than 10 effects (*k* < 10), because of the large expected sampling variability of estimates based on such small *k*.

We first excluded studies with a high risk of bias and studies with mathematically inconsistent means or SDs. If the outcome is a discrete variable (e.g., Likert scale items), means and SDs follow a fixed granular pattern for each combination of N and the number of items (Anaya, 2016; Brown & Heathers, 2016), so reported values that cannot arise from any combination of integer responses indicate an inconsistency. We then carried out an in-depth diagnosis of the random-effects meta-analytic model, focusing in particular on the presence of influential outliers. If there were excessively influential outliers, we examined their effect on the overall result in a sensitivity analysis.

With this subset of studies, we first tested the overall effect sizes of the two strategies (self-administered mindfulness meditation and biofeedback) on the components of stress and on the affective consequences of stress. Subsequently, we accounted for publication bias to assess the relative level of empirical support for each intervention. With bias-corrected estimates of the effect size, we proceeded to conduct subgroup analyses to determine whether intervention efficacy varied as a function of study characteristics or of characteristics of the self-administered mindfulness intervention. For example, we examined, as a moderator, the source from which the participants received their treatment (e.g., app, audio, or video). For both strategies, five moderator variables were taken into account: number of sessions (of self-administered mindfulness or biofeedback), intervention duration, number of female versus male participants, type of comparison group (active or passive control), and timing of the effect (after the intervention or after the last follow-up). Lastly, we included a conditional subgroup analysis related to the type of population (non-clinical student, non-clinical non-student, clinical): if we found considerable heterogeneity, we conducted subgroup analyses on these groups to check whether population type was its source. Again, for all subgroup analyses, effect sets with fewer than 10 effects were not analyzed. In a further sensitivity analysis, we omitted observational studies. A figure of the subgroup analyses can be found in Appendix F. If we decided that additional subgroup analyses were necessary in an exploratory vein, we disclosed them on our OSF page (using the template provided by Moreau and Gamble, 2020; see Appendix A in [Materials][1]).
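The core model-fitting workflow described above can be sketched as follows. This is a minimal, illustrative sketch rather than the preregistered script: it assumes a data frame `dat` with one row per effect size and hypothetical column names `yi` (effect size), `vi` (sampling variance), `study` (study identifier), `es_id` (effect identifier), and `source` (delivery-source moderator).

```r
library(metafor)

## Multilevel random-effects model: effect sizes nested within studies (REML)
res <- rma.mv(yi, vi,
              random = ~ 1 | study/es_id,
              data = dat, method = "REML")

## Robust sandwich-type variance estimation (RVE), clustering by study
res_rve <- robust(res, cluster = dat$study)

## Heterogeneity: tau (SD of true effects) and a multilevel analogue of I^2
tau <- sqrt(sum(res$sigma2))
W   <- diag(1 / dat$vi)
X   <- model.matrix(res)
P   <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
I2  <- 100 * sum(res$sigma2) /
       (sum(res$sigma2) + (res$k - res$p) / sum(diag(P)))

## Example moderator analysis (source of treatment delivery: app, audio, video)
res_mod <- rma.mv(yi, vi, mods = ~ factor(source),
                  random = ~ 1 | study/es_id,
                  data = dat, method = "REML")
robust(res_mod, cluster = dat$study)
```

The same structure applies to the other moderator and subgroup analyses, with estimates interpreted only when they are based on at least 10 effects.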
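The granularity-based consistency screening mentioned above can be illustrated with a GRIM-style check (Brown & Heathers, 2016). The helper below is a simplified, hypothetical sketch, assuming the reported mean is the average of `n_items` integer-scored items across `n` participants; it is not the exact screening procedure used.

```r
## GRIM-style granularity check: can the reported (rounded) mean arise from
## integer item responses given the sample size and number of items?
grim_consistent <- function(reported_mean, n, n_items = 1, digits = 2) {
  grain <- n * n_items                         # number of integer responses summed
  total <- reported_mean * grain               # implied (possibly non-integer) sum
  candidates <- c(floor(total), ceiling(total)) / grain
  round(reported_mean, digits) %in% round(candidates, digits)
}

grim_consistent(reported_mean = 3.48, n = 23)  # TRUE: 80/23 rounds to 3.48
grim_consistent(reported_mean = 3.48, n = 10)  # FALSE: flag as mathematically inconsistent
```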
**Correction for publication bias**

Publication bias is a state of affairs in which significant results have a higher probability of being published (Sutton, Duval, Tweedie, Abrams, & Jones, 2000). Under publication bias, meta-analytic effect size estimates tend to have a high false-positive rate (if H0 is true) or end up being overestimated (if H0 is false; Carter, Schönbrodt, Gervais, & Hilgard, 2019; Ioannidis, 2008). A secondary consequence of (or reason for) publication bias is that researchers may conduct data-contingent analyses (Simmons, Nelson, & Simonsohn, 2011) to obtain *p*-values below .05. To mitigate these problems and estimate an unbiased effect size of stress regulation strategies, we accounted for publication bias using a variety of approaches. Using a procedure similar to that of IJzerman et al. (2020), we first estimated evidential value with the *p*-curve method, applied to the set of significant results (Simonsohn, Nelson, & Simmons, 2014). Second, we estimated a bias-corrected average effect using techniques that have only recently become available: the 4-parameter selection model (McShane, Böckenholt, & Hansen, 2016) and a mixed-effects implementation of PET-PEESE (Stanley & Doucouliagos, 2014).

**_P_-curve**

*P*-curve is a technique used to assess the evidential value in a set of significant findings (Simonsohn, Nelson, & Simmons, 2014). According to Simonsohn, Nelson, and Simmons (2014), the presence of bias can be inferred from the shape of the *p*-curve. Under a null effect (d = 0), the distribution of significant *p*-values is uniform. When an effect is present (d ≠ 0), the results are more likely to be associated with small rather than large *p*-values; the greater the statistical power, the steeper the *p*-curve (i.e., the higher the degree of right skew in the *p*-values). A left-skewed distribution of significant *p*-values may instead suggest the presence of questionable research practices. Because *p*-curve assumes independent effects, in the present meta-analysis we iteratively permuted the choice of effect sizes within each dependent set of effects and averaged over the iterations.
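The permutation logic can be sketched as follows, combined with *p*-curve's Stouffer-based test for right skew. This is a simplified, illustrative sketch rather than the full *p*-curve analysis; it assumes `dat` additionally holds a logical column `focal` and a column `pval` with the two-tailed *p*-value of each coded effect (both names hypothetical).

```r
set.seed(1)

## Stouffer Z over "pp-values" (p conditional on significance under H0):
## a strongly negative Z indicates right skew, i.e., evidential value.
pcurve_z <- function(p) {
  p_sig <- p[!is.na(p) & p < .05]        # p-curve uses significant results only
  if (length(p_sig) < 1) return(NA_real_)
  pp <- p_sig / .05
  sum(qnorm(pp)) / sqrt(length(p_sig))
}

z_iter <- replicate(1000, {
  ## draw one randomly chosen focal outcome per study
  pick <- tapply(seq_len(nrow(dat)), dat$study, function(idx) {
    idx <- idx[dat$focal[idx]]
    if (length(idx) == 0) NA_integer_ else idx[sample(length(idx), 1)]
  })
  pcurve_z(dat$pval[na.omit(unlist(pick))])
})

summary_z <- mean(z_iter, na.rm = TRUE)  # averaged over iterations
pnorm(summary_z)                         # one-sided p for the right-skew test
```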
**4-parameter selection model (4PSM)**

The 4PSM is an implementation of selection methods, which are techniques that estimate and correct for publication bias related to the size, direction, and statistical significance of study results (McShane et al., 2016). The 4PSM has two components: 1) a data model describing how the data are generated when there is no publication bias and 2) a selection model that emulates the publication process. Each in turn consists of two parameters. The two data-model parameters are a) an effect size parameter, which models the population average effect size, and b) a heterogeneity parameter. The selection model is represented by c) a weight parameter, which models the likelihood that a study with non-significant results is published compared to a study with a *p*-value below .05, and d) a parameter for the likelihood of a result in the opposite direction being published. These parameters allow for an estimate of the effect size and of the degree of heterogeneity that accounts for publication bias (McShane et al., 2016). If a given set of results yielded fewer than four *p*-values per interval, we dropped the fourth parameter to obtain a more stable estimation. Like the *p*-curve, the 4PSM assumes the included effect sizes to be independent; we therefore implemented the same permutation-based procedure, iteratively selecting only independent effect sizes and averaging over the iterations.

**PET-PEESE**

PET-PEESE is a conditional regression-based meta-analytic model that aims to correct for publication bias. The difference between PET and PEESE is that in the former the effect size is regressed on the standard error, whereas in the latter it is regressed on the squared standard error (the sampling variance). A non-zero slope of the regression line indicates a relationship between the standard error and the effect size, a pattern that hints at the presence of publication bias or small-study effects. Because the PET model is more accurate when the effect is absent and the PEESE model is more accurate when the effect is present, Stanley and Doucouliagos (2014) suggested using the PET model first: if the PET estimate is significantly different from zero, PEESE is used and its result is taken as the bias-corrected effect size of interest; if PET does not detect an effect, the PET estimate is retained instead. However, in many realistic situations PET has an unfavorable combination of false-positive rate and statistical power. Here, we therefore used the 4PSM as the conditional estimator for PET-PEESE. Simulations indicate that the 4PSM has a relatively adequate false-positive rate (Carter et al., 2019; McShane et al., 2016), and according to Carter et al. (2019) these characteristics are relatively preserved even when heterogeneity is moderate to high and in the presence of questionable research practices. A chart of the techniques employed to account for publication bias can be found in Appendix G in [Materials][2].
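As an illustration of this conditional logic, the sketch below fits the 4PSM and the PET/PEESE regressions for a single permutation draw. It assumes vectors `yi` (effect sizes) and `vi` (sampling variances) containing one independent, randomly selected focal effect per study; note that `selmodel()` is available only in more recent metafor releases than the version cited above, and the step cutpoints shown follow the convention used by Carter et al. (2019), so treat the exact call as an assumption rather than the preregistered implementation.

```r
library(metafor)

## 4PSM: population mean effect, tau^2, and step-function selection weights with
## cutpoints at one-tailed p = .025 (significance) and .50 (direction). With too
## few p-values in an interval, drop the .50 cutpoint (i.e., fall back to a 3PSM).
fit_ml   <- rma(yi, vi, method = "ML")
fit_4psm <- selmodel(fit_ml, type = "stepfun", steps = c(.025, .50))

## PET: effect size regressed on the standard error
pet   <- rma(yi, vi, mods = ~ sqrt(vi), method = "REML")
## PEESE: effect size regressed on the sampling variance (squared standard error)
peese <- rma(yi, vi, mods = ~ vi, method = "REML")

## Conditional estimator: if the 4PSM-adjusted mean effect differs from zero,
## report the PEESE intercept; otherwise retain the PET intercept.
if (fit_4psm$pval[1] < .05) {
  coef(peese)[["intrcpt"]]
} else {
  coef(pet)[["intrcpt"]]
}
```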
**Quality assessment**

To assess study quality, we used the Risk of Bias 2 (RoB 2) tool by the Cochrane Foundation (2020). Using the RoB 2, we assessed the risk of bias in five predetermined domains: 1) bias arising from the randomization process, 2) bias due to deviations from the intended intervention (effect of assignment to intervention), 3) bias due to missing outcome data, 4) bias in the measurement of the outcome, and 5) bias in the selection of the reported result. Within each domain, a series of questions was answered to derive the risk of bias for that domain (1 = "low risk of bias", 2 = "some concerns", 3 = "high risk of bias"). Then, based on the judgment for each individual domain, an overall risk-of-bias rating was calculated. A study was categorized as Level 3 overall ("high risk of bias") if either of two conditions was met: A) the study was at high risk of bias in at least one of the five domains, or B) the study raised some concerns in at least three of the five domains. A study was categorized as Level 2 overall ("some concerns") if it was rated as having "some concerns" in at least one domain but did not meet the conditions for Level 3. Finally, a study was scored as Level 1 overall ("low risk of bias") if it was at low risk of bias in all five domains.

**Each script is available in our [GitHub repository][3] or, alternatively, directly at the top of the files section of the Data Analytic Plan.**

[1]: https://osf.io/dpj4r/
[2]: https://osf.io/dpj4r/
[3]: https://github.com/co-relab/Stress-regulation-meta-analysis
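The overall rating rule described above can be expressed compactly in code. The following is a minimal, illustrative helper (the function name is hypothetical), not part of the RoB 2 tool itself; it assumes the five domain judgments are coded 1 = low risk, 2 = some concerns, 3 = high risk.

```r
## Overall RoB 2 rating from five domain-level judgments (1/2/3 coding as above)
rob2_overall <- function(domains) {
  stopifnot(length(domains) == 5, all(domains %in% 1:3))
  if (any(domains == 3) || sum(domains == 2) >= 3) {
    3  # Level 3: high risk of bias
  } else if (any(domains == 2)) {
    2  # Level 2: some concerns
  } else {
    1  # Level 1: low risk of bias
  }
}

rob2_overall(c(1, 2, 1, 1, 1))  # 2: some concerns in one domain
rob2_overall(c(1, 2, 2, 2, 1))  # 3: some concerns in three domains
```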