**Research Methodology**

The flowchart diagram of the 3-round Delphi method adapted for this study is illustrated in the Wiki Image. The Delphi technique is a questionnaire-based approach for reaching consensus, based on the fundamental principles of purposive sampling of experts in the field of interest, panelist anonymity, iterative questionnaire presentation, and feedback of statistical analysis (1–3). The study was designed, implemented, and coordinated within the international network of tES-fMRI (INTF), with a steering committee that supervised the process of questionnaire (checklist) development, data analysis, and the determination of the initial criteria for item consensus and survey termination. The development of this checklist using the Delphi technique involved the following four steps: (1) forming the steering committee, (2) selecting the experts, (3) designing the checklist, and (4) collecting and analyzing the data.

**Steering Committee**

The role of the international steering committee, comprising Marom Bikson, Michael Nitsche, Charlotte Stagg, Andrea Antal, Hartwig Siebner, Adam Woods, Axel Thielscher, Marcus Meinzer, Lucia Li, Duke Shereen, Ines Violante, Jorge Almeida, and Hamed Ekhtiari, was to define the problems in this field, elaborate items to address these problems, and select the experts who would rate and improve the Delphi checklist. The steering committee grew out of the INTF collaborative group after a series of webinars (28 March 2019, 27 June 2019, and 26 September 2019; recordings of the webinars are available on YouTube at https://youtube.com/channel/UCKcEYDmyqTipDW7OzuoVSlg) in which the considerable heterogeneity in the technical/methodological aspects required for combining tES with neuroimaging was discussed, along with strategies to help bridge the knowledge gaps. As the first step of the Delphi process, an initial e-mail circulation asked each member of the steering committee to suggest a list of the specific technical/methodological issues in the interaction between fMRI and tES that they considered very likely to influence a concurrent tES-fMRI (CTF) study and its report. An inclusive list of items was produced from the committee members’ responses. The steering committee was asked to suggest items in three domains: (1) technological factors; (2) safety and noise tests; and (3) methodological factors.

**Selection of the Expert Panel**

The research involved the recruitment of a group of experts based on a completed systematic review of 54 CTF studies (up to the end of December 2019). The main criterion for recruiting experts was being the first, last, or corresponding author of at least one published study in the field. In addition, the members of the steering committee were asked to nominate additional experts in the field of CTF to join the expert panel. All steering committee members agreed on the list of experts before the invitation process. The systematic study selection used the PubMed database, searched from inception up to January 1, 2020. The search included the terms (tDCS OR transcranial direct current stimulation OR tACS OR transcranial alternating current stimulation) AND (functional magnetic resonance imaging OR fMRI OR functional MRI OR fcMRI OR functional connectivity MRI OR rsfMRI OR resting-state fMRI).
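For illustration, the sketch below shows one way this PubMed query could be reproduced programmatically using Biopython’s Bio.Entrez interface. The study itself describes a standard PubMed database search, so this automation, the placeholder e-mail address, and the quoting of multi-word terms are assumptions for the example only.

```python
# Hypothetical sketch: reproducing the PubMed query with Biopython's Entrez
# interface. The study used a manual PubMed search; this is illustrative only.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder address required by NCBI

# Multi-word terms quoted here for exact-phrase matching.
QUERY = (
    '(tDCS OR "transcranial direct current stimulation" OR '
    'tACS OR "transcranial alternating current stimulation") AND '
    '("functional magnetic resonance imaging" OR fMRI OR "functional MRI" OR '
    'fcMRI OR "functional connectivity MRI" OR rsfMRI OR "resting-state fMRI")'
)

# Restrict to records published up to January 1, 2020, as in the study.
handle = Entrez.esearch(db="pubmed", term=QUERY,
                        mindate="1900/01/01", maxdate="2020/01/01",
                        datetype="pdat", retmax=500)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PubMed IDs
```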
Potential candidates (n=54) were invited to participate in the Delphi study using the contact information (e-mail address) provided in each publication. Furthermore, the committee invited 21 additional experts to join the expert panel. The final expert panel comprised 75 potential candidates with expertise across a range of backgrounds (general medical practitioner, neuroscientist, biomedical engineer, electrical engineer) and geographical areas (USA, UK, Germany, Denmark, Iran, and Canada).

**Checklist Development**

The checklist’s ultimate purpose was to facilitate in-depth consensus among tES-fMRI experts regarding the technical/methodological aspects necessary to safely and successfully perform tES within the MRI scanner and to perform concurrent fMRI during tES delivery. The checklist was formulated based on currently available evidence in the field, with the aim that researchers could use it in future studies to enhance both study quality and reporting. The scope of the checklist is any study that applies tES in the bore of the magnet, irrespective of the relative timing of fMRI acquisition and tES delivery. Offline or sequential tES-fMRI approaches, used to evaluate the short- and long-term after-effects of brain stimulation, were not included. Upon completion, the checklist was pilot tested by five expert panel members. Using the pilot-testing data, the steering committee reworded and/or combined items that were deemed unclear before their inclusion in round 1.

**Data Collection and Analysis**

The checklist was sent to the 75 experts who had previously agreed to contribute to this study. There were three sequential rounds of anonymous checklists. After the preparation of the initial 16-item checklist, expert panel members were sent the checklist via e-mail in the first round. Two consecutive follow-up reminders were e-mailed to experts who had not responded 7 and 14 days after the initial invitation. Experts who completed the first round before the deadline were recruited into the subsequent round (n=49). In the first round, expert panel members self-reported their demographics and commented on the items developed by the investigators, and were provided with a series of open-ended questions specific to CTF studies. More specifically, round 1 included a definition of the purpose of the consensus study and an operational definition of a prescriptive standard protocol for CTF trials, followed by the presentation of the initial checklist, with the opportunity to modify or remove items, revise the current language of the checklist, merge selected items, and propose new items for each subsection. Data obtained from round 1 were summarized into a modified checklist by the steering committee. In round 2, the expert panel members were sent a feedback document summarizing the checklist modifications, including clarifications and corrections of terminology as well as a summary of experts’ comments. Each expert was given the opportunity to comment on and score each item of the modified checklist. The experts were asked to rate the factors in the technological, safety and noise tests, and methodological domains according to their relative importance on a five-point scale: (1) unimportant, (2) somewhat important, (3) moderately important, (4) very important, and (5) essential.
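To make this rating procedure concrete, the hypothetical Python sketch below classifies a single checklist item from its 1–5 ratings and comment count, using the endorsement/exclusion/re-rate thresholds listed in the next paragraph, and computes a simple percentage-change stability measure. The study’s own analyses were performed in Excel, and the percentage-change definition here is one plausible reading of the text, so this is an illustration rather than the committee’s actual code.

```python
# Hypothetical re-implementation of the consensus rules described in the next
# paragraph (the study's analyses were run in Excel); only the thresholds come
# from the text, everything else is illustrative.
from typing import List


def classify_item(ratings: List[int], n_comments: int) -> str:
    """Classify one checklist item from its expert ratings (1-5 scale).

    Criteria (see the bulleted list below in the text):
      * 70-100% of experts rate 4 or 5      -> item endorsed
      * 70-100% of experts rate 1 or 2      -> item excluded
      * 30-100% rate 3 and/or > 10 comments -> item re-rated next round
    """
    n = len(ratings)
    frac_high = sum(r >= 4 for r in ratings) / n
    frac_low = sum(r <= 2 for r in ratings) / n
    frac_mid = sum(r == 3 for r in ratings) / n

    if frac_high >= 0.70:
        return "endorse"
    if frac_low >= 0.70:
        return "exclude"
    if frac_mid >= 0.30 or n_comments > 10:
        return "re-rate"
    # Assumption: items meeting none of the criteria were also carried forward.
    return "re-rate"


def percent_change(prev_mean: float, curr_mean: float) -> float:
    """One plausible stability measure: % change in mean rating between rounds."""
    return 100.0 * (curr_mean - prev_mean) / prev_mean


# Example: 40 of 49 experts rate an item 4 or 5 -> endorsed.
ratings = [5] * 25 + [4] * 15 + [3] * 6 + [2] * 3
print(classify_item(ratings, n_comments=4))            # endorse
print(round(percent_change(3.8, 4.1), 1), "% change")  # 7.9 % change
```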
A “recommendations” column was added for each item so that experts could make recommendations about experimental parameters and practices, providing guidance on the requirements for optimal simultaneous acquisition of fMRI with tES. Experts were also asked to comment on any ambiguity or wording of the items and to suggest new items for each subsection. Any suggestion judged by the steering committee to be an original idea was included as a new factor in subsequent rounds. Upon completion, data collected from round 2 were summarized into a modified checklist by the steering committee. Once the experts’ ratings were compiled, each factor was assigned in each round to one of three categories:

• If 70-100% of the experts rated a factor as 4 or 5, the factor was endorsed as a checklist item.

• If 70-100% of the experts rated a factor as 1 or 2, the factor was excluded from the checklist.

• If 30-100% of the experts rated a factor as 3 and/or the factor generated more than 10 comments, the factor was entered into the next round to be re-rated.

In round 3, the expert panel members were sent a feedback document summarizing the results of the checklist ratings and modifications. It included a full description of the results for each item, including whether or not it fulfilled the criteria for inclusion or exclusion, the list of factors to be re-rated in round 3, and a summary of experts’ comments. Round 3 comprised a list of endorsed items from rounds 1 and 2, a list of endorsed items from round 2 that required further clarification before inclusion, and a list of items that had been entered as new items in round 2 and subsequently fell into the re-rate category. At the end of the third round, the expert panel reached consensus decisions and conclusions about the appropriate factors for evaluating the methodology and reporting of studies that apply tES in the bore of the magnet while fMRI is recorded concurrently with tES. All quantitative analyses were conducted in Excel (Microsoft Office 2016). To designate an item as finalized, the average rating and the number of responses were calculated. For items passed to the subsequent round, the percentage of change across rounds was calculated to evaluate the stability of each item (4).

**References**

1. Vernon W. The Delphi technique: a review. Int J Ther Rehabil. 2009;16(2):69–76.

2. Hsu C-C, Sandford BA. The Delphi technique: making sense of consensus. Pract Assess Res Eval. 2007;12(10).

3. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud. 2001;38(2):195–200.

4. Scheibe M, Skutsch M, Schofer J. Experiments in Delphi methodology. In: Linstone HA, Turoff M, editors. The Delphi Method: Techniques and Applications. Reading, MA: Addison-Wesley; 1975.