Efficacy of interventions to reduce coercive practices in mental health care: umbrella review of randomised trials

Corrado Barbui, Marianna Purgato, Giovanni Ostuzzi, Michela Nosè, Federico Tedeschi

Review question
The objective of this review is to evaluate the strength and credibility of the evidence on the efficacy of interventions to reduce coercive practices in mental health care.

Searches
We will search MEDLINE, PubMed, Cochrane CENTRAL, PsycINFO, CINAHL, Epistemonikos and the Campbell Collaboration database to identify systematic reviews (SRs) and meta-analyses of clinical trials examining the efficacy of interventions to reduce coercive practices in mental health care. No language restrictions will be applied. Electronic database searches will be supplemented by a manual search of the reference lists of relevant studies. Only SRs published from 2010 onwards will be considered for inclusion. We will document included and excluded studies following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting standards (Moher et al., 2009).

Types of study to be included
Only SRs with a quantitative synthesis of trial results (meta-analysis) will be included. SRs without study-level effect sizes (ESs) and 95% confidence intervals (CIs) will be excluded. When two SRs present overlapping datasets on the same comparison, the SR with the largest number of component studies providing study-level ESs will be retained for the main analysis, in agreement with umbrella review methodology (Ioannidis, 2009; Aromataris et al., 2015; Fusar-Poli and Radua, 2018; Barbui et al., 2020).

Participants/population
We will consider SRs including participants of any age, gender, ethnicity, or religion. Study participants with any mental health condition will be included, irrespective of the inclusion criteria employed by the primary studies.

Intervention(s), exposure(s)
SRs of studies assessing the efficacy of any type of non-pharmacological intervention will be included.
Interventions with one or multiple components will be included.

Comparator(s)/control
Comparison groups will include no treatment, wait-list controls, treatment as usual, or any other type of inactive control.

Context
SRs of studies conducted in any country and setting will be included.

Main outcome(s)
• Compulsory psychiatric admissions
• Physical restraint use

Additional outcome(s)
None.

Data extraction
Potentially relevant SRs will be selected by carefully inspecting titles and abstracts. This will be done by two reviewers independently. In case of discrepancies, a third review author will be involved and consensus will be reached. When titles and abstracts do not provide enough information on the inclusion and exclusion criteria, the full articles will be obtained to verify eligibility. The full text of potentially eligible SRs will be obtained and carefully appraised by at least two reviewers. The reference lists of included articles will be screened for additional items not retrieved by the database searches.

Risk of bias (quality) assessment
SR methodological quality and risk of bias will be appraised using the AMSTAR 2 tool (Shea et al., 2017). The instrument will be administered independently by two reviewers, and discrepant scores will be resolved by discussion and consensus.

Strategy for data synthesis

Estimation of summary effect
For each population/intervention/outcome, a summary ES and its 95% confidence interval will be calculated using random-effects methods (DerSimonian and Laird, 1986).

Assessment of between-study heterogeneity
Heterogeneity will be assessed by Cochran's Q test and the I² statistic (Cochran, 1954; Higgins et al., 2003). I² ranges between 0% and 100% and is considered low, moderate, large, and very large for values <25%, 25–49%, 50–74%, and ≥75%, respectively.
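To make the synthesis plan concrete, the quantities described in this and the following subsections (DerSimonian–Laird summary effect, Cochran's Q, I², 95% prediction interval, Egger's asymmetry test, and the excess-significance comparison) could be computed along the following lines. This is an illustrative sketch, not the protocol's actual analysis code: function names and inputs are assumptions, and the excess-significance power calculation uses a normal approximation rather than the non-central t algorithm cited in the protocol.

```python
import numpy as np
from scipy import stats

def dersimonian_laird(es, se):
    """Random-effects pooling (DerSimonian & Laird, 1986) of study-level
    effect sizes `es` with standard errors `se`; also returns Cochran's Q,
    I², and the 95% prediction interval (IntHout et al., 2016)."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    k = len(es)
    w = 1.0 / se**2                                 # fixed-effect weights
    fixed = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed)**2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # method-of-moments tau^2
    w_re = 1.0 / (se**2 + tau2)                     # random-effects weights
    mu = np.sum(w_re * es) / np.sum(w_re)           # summary effect
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, q - (k - 1)) / q if q > 0 else 0.0
    if k >= 3:                                      # PI uses k - 2 df
        half = stats.t.ppf(0.975, k - 2) * np.sqrt(tau2 + se_mu**2)
        pi = (mu - half, mu + half)
    else:
        pi = (float("nan"), float("nan"))
    return {"mu": mu, "se": se_mu,
            "ci": (mu - 1.96 * se_mu, mu + 1.96 * se_mu),
            "tau2": tau2, "Q": q, "Q_p": stats.chi2.sf(q, k - 1),
            "I2": i2, "PI": pi}

def egger_test(es, se):
    """Egger's regression asymmetry test: regress the standard normal
    deviate (es/se) on precision (1/se) and test the intercept."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    x, y, n = 1.0 / se, es / se, len(es)
    sxx = np.sum((x - x.mean())**2)
    slope = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    intercept = y.mean() - slope * x.mean()
    resid = y - (intercept + slope * x)
    s2 = np.sum(resid**2) / (n - 2)
    se_int = np.sqrt(s2 * (1.0 / n + x.mean()**2 / sxx))
    p = 2 * stats.t.sf(abs(intercept / se_int), n - 2)
    return intercept, p

def excess_significance(es, se, alpha=0.05):
    """Observed vs expected number of nominally significant studies
    (Ioannidis & Trikalinos, 2007). The plausible effect is taken from the
    largest study (smallest SE); power uses a normal approximation here."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    theta = es[np.argmin(se)]                       # largest-study effect
    z = stats.norm.ppf(1 - alpha / 2)
    power = stats.norm.sf(z - theta / se) + stats.norm.cdf(-z - theta / se)
    observed = int(np.sum(np.abs(es / se) > z))
    p = stats.binomtest(observed, len(es), power.sum() / len(es)).pvalue
    return observed, power.sum(), p
```

For example, three hypothetical studies with identical effects, `dersimonian_laird([0.5, 0.5, 0.5], [0.1, 0.1, 0.1])`, yield a summary effect of 0.5 with no detectable heterogeneity (I² = 0).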
Estimation of prediction intervals
To further account for between-study heterogeneity, we will calculate 95% prediction intervals (PIs) for the summary random-effects estimates, which represent the range in which the effect estimate of a future study is expected to fall (IntHout et al., 2016).

Assessment of small-study effects
We will examine whether smaller studies provide higher efficacy estimates than larger studies, which may indicate publication bias, true heterogeneity, or chance (Sterne et al., 2011). Small-study effects will be evaluated with Egger's regression asymmetry test (P≤0.10) (Egger et al., 1997) and by checking whether the random-effects summary estimate is larger than the point estimate of the largest study.

Evaluation of excess significance
Excess significance bias will be evaluated by calculating whether the observed number of studies with nominally statistically significant results ("positive" studies, P<0.05) differs from the expected number of studies with statistically significant results (Ioannidis and Trikalinos, 2007). The expected number of statistically significant studies will be calculated from the sum of the statistical power estimates of each component study, using an algorithm based on a non-central t distribution (Lubin and Gail, 1990). The power estimate of each component study depends on the plausible effect size of the tested association, which will be assumed to be the effect of the largest study (that is, the study with the smallest standard error) in each association. Excess significance for individual meta-analyses will be determined at P≤0.10 (Ioannidis and Trikalinos, 2007).

Assessment of strength of associations
Strength of associations will be assessed using established umbrella review criteria (Solmi et al., 2018; Barbui et al., 2020).
Briefly, associations presenting nominally significant random-effects summary estimates (P≤0.05) will be classified as "convincing", "highly suggestive", "suggestive", or "weak" evidence. Convincing evidence (Class I) requires that a meta-analysis meets all of the following criteria: P<10⁻⁶ by random-effects meta-analysis; >1000 participants; low or moderate between-study heterogeneity (I²<50%); a 95% PI that excludes the null value; and no evidence of small-study effects or excess significance. Highly suggestive evidence (Class II) requires >1000 participants, a highly significant summary association (P<10⁻⁶ by random effects), and a 95% PI not including the null value. Suggestive evidence (Class III) requires only >1000 participants and P≤0.001 by random-effects meta-analysis. Weak evidence (Class IV) requires only P≤0.05. Associations are considered non-significant if P>0.05.

Credibility of evidence
As an additional step, we will use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to evaluate the credibility of the evidence for each outcome measure (Guyatt et al., 2008). The GRADE rating will determine whether the credibility for a given outcome is high (further research is very unlikely to change our confidence in the estimate of effect), moderate (further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate), low (further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate), or very low (we are very uncertain about the estimate of effect).

Analysis of subgroups or subsets
None.
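The Class I–IV rules above amount to a small decision procedure, which could be sketched as follows. The function name and arguments are hypothetical stand-ins for the pre-computed quantities described in the protocol (e.g. `pi_excludes_null` for the prediction-interval criterion), not part of the protocol itself.

```python
def classify_evidence(p_value, n_participants, i2, pi_excludes_null,
                      small_study_effects, excess_significance):
    """Map an association's meta-analytic metrics to the umbrella-review
    evidence classes (Class I-IV); argument names are illustrative."""
    if p_value > 0.05:
        return "non-significant"
    # Class I: P < 10^-6, >1000 participants, I² < 50%, PI excludes the
    # null, and no small-study effects or excess significance.
    if (p_value < 1e-6 and n_participants > 1000 and i2 < 50
            and pi_excludes_null
            and not small_study_effects and not excess_significance):
        return "convincing (Class I)"
    # Class II: P < 10^-6, >1000 participants, PI excludes the null.
    if p_value < 1e-6 and n_participants > 1000 and pi_excludes_null:
        return "highly suggestive (Class II)"
    # Class III: P <= 0.001 and >1000 participants.
    if p_value <= 0.001 and n_participants > 1000:
        return "suggestive (Class III)"
    # Class IV: nominally significant only (P <= 0.05).
    return "weak (Class IV)"
```

For instance, an association with P = 10⁻⁸, 5000 participants, I² = 20%, a PI excluding the null, and no bias signals would be rated Class I; the same association with I² = 80% would drop to Class II.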
Contact details for further information
Corrado Barbui, corrado.barbui@univr.it

Organisational affiliation of the review
WHO Collaborating Centre for Research and Training in Mental Health and Service Evaluation, University of Verona, Verona, Italy

Review team members and their organisational affiliations
Professor Corrado Barbui, Cochrane Global Mental Health and WHO Collaborating Centre for Research and Training in Mental Health and Service Evaluation, Department of Neuroscience, Biomedicine and Movement, Section of Psychiatry, University of Verona, Verona, Italy
Doctor Marianna Purgato, Cochrane Global Mental Health and WHO Collaborating Centre for Research and Training in Mental Health and Service Evaluation, Department of Neuroscience, Biomedicine and Movement, Section of Psychiatry, University of Verona, Verona, Italy
Doctor Michela Nosè, Cochrane Global Mental Health and WHO Collaborating Centre for Research and Training in Mental Health and Service Evaluation, Department of Neuroscience, Biomedicine and Movement, Section of Psychiatry, University of Verona, Verona, Italy
Doctor Giovanni Ostuzzi, Cochrane Global Mental Health and WHO Collaborating Centre for Research and Training in Mental Health and Service Evaluation, Department of Neuroscience, Biomedicine and Movement, Section of Psychiatry, University of Verona, Verona, Italy
Doctor Federico Tedeschi, Cochrane Global Mental Health and WHO Collaborating Centre for Research and Training in Mental Health and Service Evaluation, Department of Neuroscience, Biomedicine and Movement, Section of Psychiatry, University of Verona, Verona, Italy

References
Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc 2015;13:132-40.
Barbui C, Purgato M, Abdulmalik J, Acarturk C, Eaton J, Gastaldon C, Gureje O, Hanlon C, Jordans M, Lund C, Nosè M, Ostuzzi G, Papola D, Tedeschi F, Tol W, Turrini G, Patel V, Thornicroft G. Efficacy of psychosocial interventions for mental health outcomes in low-income and middle-income countries: an umbrella review. Lancet Psychiatry 2020;7(2):162-72.
Cochran WG. The combination of estimates from different experiments. Biometrics 1954;10:101-29.
DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials 1986;7:177-88.
Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997;315:629-34.
Fusar-Poli P, Radua J. Ten simple rules for conducting umbrella reviews. Evid Based Ment Health 2018;21:95-100.
Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924-6.
Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ 2003;327:557-60.
IntHout J, Ioannidis JP, Rovers MM, Goeman JJ. Plea for routinely presenting prediction intervals in meta-analysis. BMJ Open 2016;6(7):e010247.
Ioannidis JP. Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses. CMAJ 2009;181:488-93.
Ioannidis JP, Trikalinos TA. An exploratory test for an excess of significant findings. Clin Trials 2007;4:245-53.
Lubin JH, Gail MH. On power and sample size for studying features of the relative odds of disease. Am J Epidemiol 1990;131:552-66.
Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009;339:b2535.
Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017;358:j4008.
Solmi M, Correll CU, Carvalho AF, Ioannidis JPA. The role of meta-analyses and umbrella reviews in assessing the harms of psychotropic medications: beyond qualitative synthesis. Epidemiol Psychiatr Sci 2018;27:537-42.
Sterne JA, Sutton AJ, Ioannidis JP, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;343:d4002.