# A Systematic Review on the Implementation of AI-aided Systematic Reviews in Clinical Guideline Development

## Goal

This systematic review focused on synthesizing information on the use of machine learning and/or artificial intelligence (AI) to speed up the screening of abstracts within clinical guideline development. Specifically, the goal was to create an overview of the limitations within the literature and of the suggestions for improving AI-aided systematic reviews for clinical guideline development.

## Methods

To retrieve the articles for this systematic review, two search databases were queried:

- PubMed
- Web of Science

The specific search strings can be found in the `methods\search_strings.txt` file. After deduplication, this search yielded a total of 567 articles.

Title and abstract screening of these articles was done using ASReview with the default settings<sup>1</sup>. A list of articles that were used as prior knowledge can be found in `methods\prior_knowledge.txt`. Articles were marked as relevant when they contained information on using "AI", "machine learning", or some other kind of automation for systematic reviews in medical guideline development.

The stopping rule for screening was the following: after screening 150 papers (26.5% of the 567 deduplicated records), stop once 30 irrelevant papers in a row have been found (an illustrative sketch of this rule is included below, after the references). In total, 180 records were screened (31.75%). An overview of the screening progress can be seen in `Methods\tiab_screening_statistics.png`. The .asreview file containing all decision information can be found in the methods folder as well.

A total of 25 articles were identified as relevant. The specific results of title and abstract screening can be found in `Methods\asreview_result_general_overview_of_the_existing_literature_on_AI-aided_systematic_reviews_for_clinical_guideline_development.xlsx`. These articles were then assessed based on the full text, resulting in 7 final inclusions. At this stage it was decided to exclude any articles published before 2005, because the first proposed application of text mining to screening in systematic reviews only dates from 2005<sup>2</sup>. A short overview of the articles excluded at full-text assessment, with the reasons for exclusion, can be found in `Methods\fulltext_exclusions.docx`.

Finally, from the reference lists of the final inclusions another 49 references were identified. After full-text assessment, 40 of these were included as well, resulting in a total of 47 final inclusions. These final inclusions can be found in `Results\final_inclusions_overview_ai-aided_SR_CGD.ris`.

## References

1. Van de Schoot, R., De Bruin, J., Schram, R., Zahedi, P., De Boer, J., Weijdema, F., Kramer, B., Huijts, M., Hoogerwerf, M., Ferdinands, G., Harkema, A., Willemsen, J., Ma, Y., Fang, Q., Tummers, L., & Oberski, D. (2021). ASReview: Active learning for systematic reviews (v0.17.1). Zenodo. https://doi.org/10.5281/zenodo.5126631
2. O’Mara-Eves, A., Thomas, J., McNaught, J., et al. (2015). Using text mining for study identification in systematic reviews: a systematic review of current approaches. Systematic Reviews, 4, 5. https://doi.org/10.1186/2046-4053-4-5
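
The stopping rule described in the Methods was applied manually during screening; it is not enforced by ASReview itself. As a minimal sketch of how the rule operates, assuming the screening decisions are replayed in order as a simple relevant/irrelevant list, the check could look like the following (the function name `should_stop`, its parameter names, and the placeholder `decisions` list are hypothetical illustrations, not taken from the project files):

```python
# Illustrative sketch only (not part of ASReview): the stopping rule used in this
# review, applied to an ordered list of screening decisions where
# True = relevant and False = irrelevant.

def should_stop(labels, min_screened=150, run_of_irrelevant=30):
    """Return True once at least `min_screened` records have been screened and
    the most recent `run_of_irrelevant` decisions were all irrelevant."""
    if len(labels) < max(min_screened, run_of_irrelevant):
        return False
    return not any(labels[-run_of_irrelevant:])

# Replay hypothetical decisions in screening order and report the stop point.
decisions = [True] * 25 + [False] * 542  # placeholder; real labels come from the .asreview export
total = 567
for n in range(1, len(decisions) + 1):
    if should_stop(decisions[:n]):
        print(f"Stopping rule met after {n} records ({n / total:.1%} of {total}).")
        break
```

With the real decision sequence from the .asreview file, this check is first satisfied after 180 screened records, matching the 31.75% reported above.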