Identifying the preferred reporting items for a concept mapping study: Protocol for a systematic review

Introduction

Concept mapping is widely used to understand complex phenomena in nursing and public health research. In a concept mapping study, participants express their ideas on an identified problem and later rank and cluster the ideas generated from the brainstorming exercise. Trochim and Kane (2005) have detailed the steps for conducting a concept mapping study. Donnelly (2017) reviewed 104 concept mapping dissertations published between 1974 and 2014 and reported that more than half of these dissertations did not report the total sample size across all phases of the study (Donnelly, 2017). One limitation of that review is that it included only dissertations supervised by a principal supervisor; we anticipate that the number of concept mapping studies conducted for academic and research purposes is far larger. This study highlights the importance of having a specific reporting guideline for concept mapping studies. Having such a guideline in place could enhance the quality and accuracy of research, as studies would be reported against specific parameters considered to be of paramount importance (More, 2010). It may also help readers evaluate the methodological strength of a study by comparing it against those parameters. Our study aims to address this gap in knowledge. We propose a systematic review to identify the items used to report a concept mapping study.

Review question

What are the most important items included in the report of a concept mapping study?

Methods

This protocol complies with the guidelines of the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement.

Eligibility Criteria

We will include studies based on the following criteria:
• A primary concept mapping study.
• A concept mapping study in clinical or public health research.
Information sources

Three online databases will be searched: the Medical Literature Analysis and Retrieval System Online (MEDLINE), PsycInfo, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL). MEDLINE and PsycInfo will be accessed via Ovid. The EBSCOhost platform will be used to access CINAHL. There will be no restriction on date or language. We will not search the grey literature because those studies may not have been through an exhaustive peer review process (Adams et al., 2017). We will not search the references of included studies because this can introduce considerable bias into the review methodology (Vassar et al., 2016).

Search strategy

We followed the Peer Review of Electronic Search Strategies (PRESS) guidance in developing the search strategy. The PRESS guidelines suggest developing a search strategy using text words (full and various truncations) and indexing terms, for example medical subject headings (MeSH), which can then be combined using Boolean operators such as OR (McGowan et al., 2016). We adapted the search strategy previously used for a systematic review of concept mapping studies (Donnelly, 2017). Our search strategy included the terms “concept map*”, “structured conceptualization”, “Ariadne”, and “concept systems”. The search strategy was developed for MEDLINE (Appendix 2) and then customized for the other databases.

Data Management

We will export the output from each database to EndNote X9.2 (as a .enl file). We will use Covidence, an online software package, to undertake title-and-abstract and full-text screening. Covidence is an established package for managing systematic reviews (Babineau, 2014; Kellermeyer et al., 2018). References from the EndNote files will be imported into Covidence (as a .xml file). Covidence will identify any duplicates, and the multiple copies of a citation will be removed. We will document the number of papers at each stage of the review and report it in a PRISMA flowchart (Liberati et al., 2009).
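As a minimal illustration of the PRESS-style combination described above, the sketch below joins the protocol's four search terms with the Boolean OR operator. The resulting string is illustrative only; the actual MEDLINE strategy (Appendix 2) will also include truncations and MeSH indexing terms.

```python
# Combine the protocol's text-word search terms with Boolean OR.
# This is a simplified sketch, not the full Appendix 2 strategy.
terms = [
    '"concept map*"',
    '"structured conceptualization"',
    '"Ariadne"',
    '"concept systems"',
]
query = " OR ".join(terms)
print(query)
```

The same term list could then be customized per database, since field tags and truncation syntax differ between Ovid and EBSCOhost.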
We will report the date on which each database was searched.

Screening and selection of studies

Study selection will follow a two-step process: 1. title and abstract screening, and 2. full-text screening. At each stage, two reviewers will screen studies against predefined inclusion and exclusion criteria. A third member of the review team will resolve any discrepancies between the two reviewers.

The methodological quality (risk of bias) assessment

We will use the Effective Public Health Practice Project (EPHPP) measure to determine the methodological quality of the included studies. The EPHPP measure has good psychometric properties (Thomas et al., 2004a) and can be used across experimental and observational (cohort, case-control) studies (Thomas et al., 2004b). Higher inter-rater agreement for the EPHPP tool compared with version 1.0 of the Cochrane Collaboration risk of bias tool has been reported (Armijo‐Olivo et al., 2012). The EPHPP tool has eight sections, each with between one and four items:
1. selection bias (two items),
2. study design (one item),
3. confounders (two items),
4. blinding (two items),
5. data collection tools (two items),
6. withdrawals and drop-outs (two items),
7. intervention strategy (three items), and
8. analysis (four items) (Thomas et al., 2004a).
Each section is rated “strong”, “moderate” or “weak”. The final overall ranking is determined by the number of strong and weak ratings across the first six sections. Studies are rated:
1. strong: “no weak ratings and at least four strong ratings”,
2. moderate: “fewer than four strong ratings and one weak rating”, and
3. weak: “two or more items rated weak”.
We will report the risk of bias findings against the six core sections in a table. We will check and report whether ethical approval was obtained for the individual studies.
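The overall EPHPP ranking rule stated above can be expressed as a small decision function. This is a sketch of the rule as quoted in the protocol, not an official EPHPP implementation; the protocol's wording leaves some combinations (e.g., no weak ratings but only three strong ratings) unstated, and the sketch assumes those default to "moderate".

```python
def ephpp_overall_rating(section_ratings):
    """Overall EPHPP rating from the six core section ratings.

    Sketch of the rule quoted in the protocol:
      strong   - no weak ratings and at least four strong ratings
      weak     - two or more sections rated weak
      moderate - everything else (assumed default)
    """
    strong = section_ratings.count("strong")
    weak = section_ratings.count("weak")
    if weak == 0 and strong >= 4:
        return "strong"
    if weak >= 2:
        return "weak"
    return "moderate"
```

For example, six "strong" sections yield an overall "strong" rating, while a single "weak" section caps the study at "moderate" regardless of the other ratings.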
Concerns have been raised about the inclusion of retracted articles in systematic reviews (Cosentino & Veríssimo, 2016; King et al., 2018; Moylan & Kowalczuk, 2016). The journal website will be checked to determine whether any of the included papers have been retracted or are subject to expressions of concern.

Data extraction and management

We will create a data extraction template in Microsoft Excel. Two reviewers will independently extract data. We will pilot test the template on two included studies and make any necessary refinements based on reviewer feedback. We will extract the following information from included studies:
1. Citation (surname and initial of first author, title, year of publication).
2. Address for correspondence.
3. Country where fieldwork was undertaken.
4. Period of data collection.
5. Sampling strategy and participant recruitment process.
6. Items reported in each phase of the concept mapping study.
We will collect the address for correspondence because we aim to create a pool of authors undertaking concept mapping studies, to identify potential participants for the next phase of this study: developing a guideline to report a concept mapping study. If multiple studies are produced using the same data set, we will extract the data from the individual studies and then coalesce the information across the studies (Li et al., 2019). We will produce a summary table to report the data from the included studies.

Meta-analysis

We will produce a narrative synthesis of the results of the review. We will report the frequency and percentage of the individual items reported in the included studies. We will also report yearly statistics for concept mapping studies. We will not undertake a meta-analysis, as the aim of our study is to identify which reporting items are most frequently reported in a concept mapping study.

Discussion

The aim of this systematic review is to identify the items used to report a concept mapping study.
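The planned frequency-and-percentage tabulation could be computed as sketched below. The study data and item names here are invented placeholders; the real extraction will come from the Excel template described above.

```python
from collections import Counter

# Hypothetical extracted data: for each included study, the list of
# reporting items it documents (study contents are illustrative only).
studies = [
    ["sample size", "sorting instructions", "stress value"],
    ["sample size", "stress value"],
    ["sample size"],
]

# Count how many studies report each item, then express each count
# as a percentage of the number of included studies.
counts = Counter(item for items in studies for item in items)
n_studies = len(studies)
for item, count in counts.most_common():
    print(f"{item}: {count}/{n_studies} ({100 * count / n_studies:.0f}%)")
```

The same per-study lists, keyed by publication year, would also support the planned yearly statistics.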
The findings from this review will help us to:
1. identify the preferred reporting items in a concept mapping study, and
2. identify authors engaged in concept mapping studies.
We are proposing a concept mapping study to determine the items that are important in reporting a concept mapping study. This study will help us to develop a reporting guideline.

References

Adams, R. J., Smart, P., & Huff, A. S. (2017). Shades of Grey: Guidelines for Working with the Grey Literature in Systematic Reviews for Management and Organizational Studies. International Journal of Management Reviews, 19(4), 432-454. https://doi.org/10.1111/ijmr.12102

Armijo‐Olivo, S., Stiles, C. R., Hagen, N. A., Biondo, P. D., & Cummings, G. G. (2012). Assessment of study quality for systematic reviews: a comparison of the Cochrane Collaboration Risk of Bias Tool and the Effective Public Health Practice Project Quality Assessment Tool: methodological research. Journal of Evaluation in Clinical Practice, 18(1), 12-18.

Babineau, J. (2014). Product review: Covidence (systematic review software). Journal of the Canadian Health Libraries Association, 35(2), 68-71.

Cosentino, A. M., & Veríssimo, D. (2016). Ending the citation of retracted papers. Conservation Biology, 30(3), 676-678.

Donnelly, J. P. (2017). A systematic review of concept mapping dissertations. Evaluation and Program Planning, 60, 186-193.

Kellermeyer, L., Harnke, B., & Knight, S. (2018). Covidence and Rayyan. Journal of the Medical Library Association, 106(4), 580-583. https://doi.org/10.5195/jmla.2018.513

King, E. G., Oransky, I., Sachs, T. E., Farber, A., Flynn, D. B., Abritis, A., Kalish, J. A., & Siracuse, J. J. (2018). Analysis of retracted articles in the surgical literature. The American Journal of Surgery, 216(5), 851-855.

Li, T., Higgins, J. P., & Deeks, J. J. (2019). Chapter 5: Collecting data. In Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, & Welch VA (Eds.), Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated July 2019). Cochrane. https://training.cochrane.org/handbook/current/chapter-05

Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P., Clarke, M., Devereaux, P. J., Kleijnen, J., & Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Journal of Clinical Epidemiology, 62(10), e1-e34.

McGowan, J., Sampson, M., Salzwedel, D. M., Cogo, E., Foerster, V., & Lefebvre, C. (2016). PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. Journal of Clinical Epidemiology, 75, 40-46. https://doi.org/10.1016/j.jclinepi.2016.01.021

More, S. J. (2010). Improving the quality of reporting in veterinary journals: How far do we need to go with reporting guidelines? The Veterinary Journal, 184(3), 249-250.

Moylan, E. C., & Kowalczuk, M. K. (2016). Why articles are retracted: a retrospective cross-sectional study of retraction notices at BioMed Central. BMJ Open, 6(11), e012047. https://doi.org/10.1136/bmjopen-2016-012047

Thomas, B. H., Ciliska, D., Dobbins, M., & Micucci, S. (2004a). A Process for Systematically Reviewing the Literature: Providing the Research Evidence for Public Health Nursing Interventions. Worldviews on Evidence-Based Nursing, 1(3), 176-184. https://doi.org/10.1111/j.1524-475X.2004.04006.x

Thomas, B. H., Ciliska, D., Dobbins, M., & Micucci, S. (2004b). A Process for Systematically Reviewing the Literature: Providing the Research Evidence for Public Health Nursing Interventions. Worldviews on Evidence-Based Nursing, 1(3), 176-184. https://doi.org/10.1111/j.1524-475X.2004.04006.x

Vassar, M., Atakpo, P., & Kash, M. J. (2016). Manual search approaches used by systematic reviewers in dermatology. Journal of the Medical Library Association, 104(4), 302-304. https://doi.org/10.3163/1536-5050.104.4.009