<h1>A Systematic Review on Studies Evaluating the Performance of Active Learning Compared to Human Reading for Systematic Review Data</h1> <h2>Goal</h2> <p>This systematic review synthesizes information from studies that evaluated the performance of Active Learning compared to human reading.</p> <p>Specifically, the goal was to create an overview of studies and/or reports on simulation studies that investigate the benefit of Active Learning for accelerating the title and abstract screening phase of systematic reviews, compared to human screening.</p> <h2>Methods</h2> <p>To retrieve the articles for this systematic review, three search databases were queried:</p> <ul> <li>Web of Science</li> <li>Scopus</li> <li>Embase</li> </ul> <p>At this stage it was decided to exclude all articles published before 2006, because a review by O'Mara-Eves et al.<sup>1</sup> found that the first publication actually applying some form of automation to title and abstract screening dated from 2006. The specific search strings can be found in the <code>Methods\search_strings.txt</code> file. After deduplication, this search yielded a total of 1290 articles. Title and abstract screening of these articles was done using ASReview with the default settings<sup>2</sup>. A list of the articles that were used as prior knowledge can be found in the <code>Methods\prior_knowledge.txt</code> file.</p> <p>Articles were marked as relevant when they contained information on using some type of Active Learning within an experiment or simulation study to accelerate the screening phase of systematic reviews.</p> <p>The stopping rule for screening was the following: after screening at least 323 papers (25% of the total), stop once 30 irrelevant papers in a row have been found. In total, 353 records were screened (27.36%). An overview of the progress in screening can be seen in <code>Methods\tiab_screening_statistics.png</code>. The .asreview file containing all decision information can be found in the Methods folder as well.</p>
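<p>For illustration, a minimal Python sketch of this stopping rule is shown below. The function name and the label encoding (screening decisions in order, 1 = relevant, 0 = irrelevant) are assumptions made for the example, not part of the original protocol.</p> <pre><code># Minimal sketch of the stopping rule, assuming `labels` holds the screening
# decisions in the order the records were screened (1 = relevant, 0 = irrelevant).
def should_stop(labels, total_records=1290, min_fraction=0.25, n_consecutive=30):
    """Stop once at least 25% of all records have been screened and the
    last 30 screening decisions were all irrelevant."""
    screened_enough = len(labels) >= min_fraction * total_records
    recent = labels[-n_consecutive:]
    run_of_irrelevant = len(recent) == n_consecutive and not any(recent)
    return screened_enough and run_of_irrelevant

# Example: 323 decisions screened, the last 263 of them irrelevant.
decisions = [1] * 60 + [0] * 263
print(should_stop(decisions))  # True: the 25% floor and the 30-in-a-row run are both met
</code></pre>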
<p>A total of 66 articles were identified as relevant. The specific results of the title and abstract screening can be found in <code>Methods\asreview_result_current-efforts-of-testing-active-learning.xlsx</code>. These articles were then assessed based on the full text, resulting in 38 full-text inclusions. A short overview of the articles excluded at this stage, with their reasons for exclusion, can be found in <code>Methods\fulltext_exclusions.docx</code>.</p> <p>Finally, from the reference lists of the final inclusions, another 4 references were identified as potential inclusions. After full-text assessment, 2 of these were included as well, resulting in a total of 40 final inclusions.</p> <p>All steps above are also systematically depicted in the PRISMA flowchart: <code>Methods\PRISMA_2020_Current_efforts_of_Using_Active_Learning_to_Accelerate_Systematic_reviews.docx</code></p> <p>The final inclusions can be found in <code>Results\final_inclusions_current_efforts_AL_SR.ris</code>. In the Results folder, one can also find a frequency histogram depicting how many (non-unique) datasets were used in each study; a dataset used in one study may also have been used in another. In total, 255 unique datasets were identified within 37 articles (a sketch of how such a count could be produced is given at the end of this section).</p> <p>Three articles<sup>3,4,5</sup> contained only databases or datasets that were excluded from the analysis. The following databases were excluded:</p> <ul> <li>Epistemonikos: 2/3 of the datasets contained fewer than 200 documents, and the annotation of the datasets was conducted for testing purposes only, by senior medical students.</li> <li>Part of a combination of the Limsi-Cochrane and CLEF eHealth databases: part of this database consisted of meta-analyses, which were excluded from the analysis; the systematic reviews from the dataset were included.</li> <li>RCV1-v2: a database consisting of news articles, and therefore excluded.</li> <li>TREC databases: multiple TREC databases were encountered, but upon inspection it became clear that these datasets were created for testing purposes and did not contain systematic reviews.</li> </ul> <p>More information on both the included and excluded datasets can be found in the <code>Results\Datasets overview.docx</code> file.</p>
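<p>As a purely illustrative aside, the Python sketch below shows one way such a datasets-per-study frequency histogram could be produced. The input structure (a mapping from study ID to the datasets it used) and all names are hypothetical; this is not the script used to generate the actual figure.</p> <pre><code># Hypothetical sketch: tally how many datasets each included study used and
# plot the distribution as a histogram. Input data and names are made up.
import matplotlib.pyplot as plt

# Mapping from study ID to the datasets it evaluated on (illustrative only).
datasets_per_study = {
    "study_01": ["dataset_A", "dataset_B"],
    "study_02": ["dataset_B", "dataset_C"],
    "study_03": ["dataset_B"],
}

# Number of (non-unique) datasets used in each study.
counts = [len(names) for names in datasets_per_study.values()]

# One histogram bin per possible count value.
plt.hist(counts, bins=range(1, max(counts) + 2))
plt.xlabel("Number of datasets used in a study")
plt.ylabel("Number of studies")
plt.title("Datasets used per included study")
plt.show()
</code></pre>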
<h2>References</h2> <ol> <li>O'Mara-Eves, A., Thomas, J., McNaught, J., et al. (2015). Using text mining for study identification in systematic reviews: a systematic review of current approaches. Systematic Reviews, 4, 5. <a href="https://doi.org/10.1186/2046-4053-4-5" rel="nofollow">https://doi.org/10.1186/2046-4053-4-5</a></li> <li>Van de Schoot, R., De Bruin, J., Schram, R., Zahedi, P., De Boer, J., Weijdema, F., Kramer, B., Huijts, M., Hoogerwerf, M., Ferdinands, G., Harkema, A., Willemsen, J., Ma, Y., Fang, Q., Tummers, L., &amp; Oberski, D. (2021). ASReview: Active learning for systematic reviews (v0.17.1). Zenodo. <a href="https://doi.org/10.5281/zenodo.5126631" rel="nofollow">https://doi.org/10.5281/zenodo.5126631</a></li> <li>Cormack, G. V., &amp; Grossman, M. R. (2016). Scalability of continuous active learning for reliable high-recall text classification. In CIKM '16: ACM Conference on Information and Knowledge Management. ACM.</li> <li>Zhang, H., et al. (2020). Evaluating sentence-level relevance feedback for high-recall information retrieval. Information Retrieval Journal, 23(1).</li> <li>Cormack, G. V., &amp; Grossman, M. R. (2015). Autonomy and reliability of continuous active learning for technology-assisted review. arXiv:1504.06868 [cs].</li> </ol>