## Purpose

The purpose of this study is to assess the use and short-term impact of systematic reviews, network meta-analyses, and 'overviews of reviews' on clinical practice guidelines. We will apply principles of open science throughout, using the Open Science Framework: registering our study protocol online as a preprint, sharing our data sets publicly at the end of the study, and publishing our final manuscript in an open access journal.

## Objectives

- Examine the prevalence and use of systematic reviews, overviews of systematic reviews, and network meta-analyses in clinical practice guideline recommendations
- Examine the levels-of-evidence and strength-of-evidence systems used in our sample and compare them to the GRADE approach
- Report on the benefits and challenges of using the Open Science Framework as a platform to manage research workflow and promote open science principles

## Background

Systematic reviews (SRs), overviews of reviews ('overviews'), and network meta-analyses are important study designs in the practice of evidence-based medicine. They are often described as being performed to inform the development of clinical guidelines; however, the extent of this practice is unknown. One of the ways in which SRs, overviews, and network meta-analyses can inform and influence practice is through their contribution to the evidence base supporting clinical practice guidelines (CPGs) (Heffner, 1998; Aldrich et al., 2003). Guidelines can therefore be seen as an outcome indicator of the clinical impact of the research they cite.

### Frameworks or models of research impact

Methodological reviews have analysed models and frameworks to assess the impact of health research (e.g. Boaz 2009, Greenhalgh 2016, Raftery 2016, Rivera 2017). A recent meta-synthesis of studies of research impact by the UK Health Technology Assessment Programme (HTA review), based on a systematic search of eight databases, identified over 20 different impact frameworks and 110 empirical studies assessing the effectiveness of these frameworks (as single or multiple case studies) (Raftery 2016). Outcomes from the frameworks were grouped as short, medium, and long term (Rivera 2017). Short-term outcomes were measured by the number of publications, citations, and peer-reviewed articles. Mid-term outcomes were defined as 'influencing policy making', specifically: (i) changes to clinical or healthcare training, practice, or guidelines; (ii) influence and involvement in decision-making processes; and (iii) changes to legislation, regulations, and government policy. Long-term impact outcomes were changes in the health care system in terms of 'quality of care and service delivery'.

### Methods used to assess short- and medium-term impact

Short-term outcomes of research are measured by assessing bibliometric indicators, which include the number of publications; forward citation rates in peer-reviewed journals; journal impact factor; requests for reprints; article download rate and number of journal webpage visits; citation rates in non-journal media such as newspapers and mass and social media (i.e., Twitter and blogs); the number of reviews or guidelines including and synthesising the research; and new (or changes to) interventions or technology, patents, and research (Rivera 2017).
Dissemination and knowledge-transfer indicators of short- to mid-term outcomes can be measured through the number of conferences, seminars, workshops, and presentations disseminating the research, and through teaching and training output (i.e., the number of lectures given to disseminate the research findings). Other short-term outcomes of research impact include academic collaborations, research networks, and data sharing. Impact can also be assessed through consultation with stakeholders via quantitative surveys, qualitative face-to-face or phone interviews, and focus groups. Raftery and colleagues (2016) also recommend undertaking case studies, which might contrast forward- and backward-tracing methods of linking particular research projects to policy changes.

### Evidence assessing the short-term impact on guideline development

A few identified studies have assessed the impact of reviews on guidelines and policy (Bunn 2014, Grant 2000a, Grant 2000b, Lewison 2003), and two case studies have identified overviews as influencing policy guidelines (refs). Grant and colleagues (2000a) evaluated the impact of studies cited in clinical guidelines and found that only 3% of cited references were systematic reviews (Grant 2000b). However, their approach did not assess whether the evidence was cited in specific recommendations within a guideline document (McDonald 2000). Silagy and colleagues (2001) examined the use of systematic reviews by developers of national guidelines to provide evidence for recommendations, and the proportion of recommendations that could have used evidence from Cochrane reviews. They found that systematic reviews supported the recommendations in 68% of UK, 89% of New Zealand, 98% of US, and 100% of Canadian guidelines.

## Broad steps in the research process

- screening of the guidelines against eligibility criteria by two people independently;
- retrieval of the full text of each guideline;
- data extraction by two people independently;
- analysis of results and presentation in tables and figures (see the illustrative sketch below).
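To make the dual independent screening and analysis steps concrete, the following is a minimal Python sketch using toy data and hypothetical field names (e.g. `cites_sr`); it is not the study's actual extraction form or analysis code. It shows how chance-corrected agreement between two screeners (Cohen's kappa) and the proportion of recommendations supported by a systematic review, overview, or network meta-analysis could be computed before presenting results in tables and figures.

```python
# Illustrative sketch only: toy data and hypothetical field names,
# not the study's extraction form or analysis pipeline.
from collections import Counter

# Hypothetical dual-screening decisions (True = include) keyed by guideline ID.
screener_a = {"g01": True, "g02": False, "g03": True, "g04": True}
screener_b = {"g01": True, "g02": True, "g03": True, "g04": False}

def cohens_kappa(a, b):
    """Chance-corrected agreement between two independent screeners."""
    ids = sorted(set(a) & set(b))
    n = len(ids)
    observed = sum(a[i] == b[i] for i in ids) / n
    p_a_yes = sum(a[i] for i in ids) / n
    p_b_yes = sum(b[i] for i in ids) / n
    expected = p_a_yes * p_b_yes + (1 - p_a_yes) * (1 - p_b_yes)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical extracted records: one row per guideline recommendation,
# flagging whether it cites an SR, overview, or network meta-analysis.
records = [
    {"country": "UK", "cites_sr": True},
    {"country": "UK", "cites_sr": False},
    {"country": "Canada", "cites_sr": True},
    {"country": "Canada", "cites_sr": True},
]

def proportion_supported(rows):
    """Proportion of recommendations supported by an SR/overview/NMA, by country."""
    totals, supported = Counter(), Counter()
    for row in rows:
        totals[row["country"]] += 1
        supported[row["country"]] += row["cites_sr"]
    return {c: supported[c] / totals[c] for c in totals}

if __name__ == "__main__":
    print(f"Screening kappa: {cohens_kappa(screener_a, screener_b):.2f}")
    print(proportion_supported(records))
```

Disagreements flagged by the screening step would, as is conventional, be resolved by discussion or a third reviewer; the kappa statistic simply documents how consistent the two independent judgements were.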