Implementation Details
----------------------

This page describes how our lab implemented the procedures required by the official protocol for the RRR. It also describes and justifies any additions to or departures from that protocol. You can view the official protocol and the main project page for this RRR using these links:

- Official Protocol: [https://osf.io/ypd78/][1]
- Main RRR project page: [https://osf.io/scu2f/][2]

----------

#### Experimenters ####

Tess Neal has conducted and published several experimental studies and holds a Ph.D. in psychology. She is an assistant professor of psychology in the School of Social & Behavioral Sciences at Arizona State University.

Megan Warner is pursuing her master's degree in psychology at Arizona State University and currently serves as lab manager for a large research lab at the ASU Tempe Campus for another faculty member. She has extensive research experience and is eager to conduct this replication study with Dr. Neal.

----------

#### Setting/Lab/Equipment ####

Participants will arrive at a computer lab/classroom in our campus's library (classroom #101 at the Fletcher Library on ASU West Campus), where they will be seated individually at computers and complete the Qualtrics survey. The room is clean and spacious, with comfortable chairs and an individual computer for each participant. The testing space is arranged with two to three computers per table, with ample space between tables. The computer tables are built for this arrangement: the monitors are recessed into the tables so that each screen is angled toward the participant's face when seated. There are about 15 separate tables and a total of 42 computer stations in the room, and all of the tables and chairs face the front of the room.

The computers are angled so that participants seated behind one another cannot see the screens at the table in front of their own, and the tables are large enough that participants cannot see the screens at other tables, or a screen one seat away at their own table (for example, the two end computers, but not the middle computer, of a three-computer table). We will run participants in groups of 20 (a multiple of 4, per the protocol, and a number that ensures each participant is seated at their own table, or at least with the computer next to them empty to create space between people). The experimenter will remain at the front of the room, but all of the participants' screens face the opposite direction (i.e., toward the participants), so the experimenter will not be able to see participants' screens as they complete the study. Please see the attached picture.

----------

#### Sample, subjects, and randomization ####

**Target sample size:** We will schedule a total of 8 sessions with 20 participants in each session, for a target sample of 160 participants.

**Target sample demographics:** We will use the student subject pool at Arizona State University, West Campus. The students in this pool meet the sample requirements of the protocol: the pool is about 60% female, and the average age is about 19, with few students over the age of 35 (we will use "age less than 35" as a screening criterion for participation).
Participants will be compensated 2 research credits in the subject pool (per our university's policy for the amount of time this in-person study will take), and they will also have the opportunity to earn up to $10 as part of their participation in this study.

**Minimum sample size after exclusions:** The minimum sample size is 152 people.

**Stopping rule(s):** If, after completing all of the scheduled testing sessions and applying any exclusions, we have fewer than 76 subjects in either condition, we will schedule additional sessions of 20 participants until we have usable data from at least 76 participants in each condition.

**Randomization to conditions:** Participants will be randomly assigned to conditions by the provided Qualtrics script.

**Blinding to conditions:** We will ensure that participants are unaware that other participants received different instructions about the time constraints by not discussing this aspect of the study beforehand, and by asking participants after they have participated to refrain from discussing the study with other students. We will explain that if participants were to talk about this study with other students, it could ruin the study and make the data unusable.

**Exclusion rules:** We will use the same exclusion rules required by the official protocol, such as failure to complete all tasks or incorrect administration of the tasks by the experimenter. We will retain the data from excluded participants and mark their data for exclusion. Exclusion decisions will be made by a research administrator who is blind to condition assignment.

**Procedures for handling testing sessions for which the number of participants is not a multiple of 4:** We plan to test participants in groups of 20 (a multiple of 4). If not all 20 people show up, we will release the "extra" participants beyond the nearest multiple of 4 (for instance, if only 18 of the 20 show up, the 2 people above 16, the nearest multiple of 4, are "extra"). We will assign those participants the 2 research credits through the system on our campus and release them with an apology and an explanation that we must run this study in multiples of 4. The arithmetic is sketched below.
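For concreteness, here is a minimal sketch of that arithmetic. The numbers are the example from the paragraph above; the code itself is illustrative, not part of the protocol:

```python
# Illustrative only: how many arrivals to release so the session
# stays at a multiple of 4, per the rule described above.
arrived = 18              # e.g., 18 of the 20 scheduled participants show up
extra = arrived % 4       # participants beyond the nearest multiple of 4
tested = arrived - extra  # 16 are tested; the 2 "extra" are credited and released
print(tested, extra)      # -> 16 2
```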
----------

#### Software/Code ####

We will be using the provided materials, including the Qualtrics scripts, and we have verified that they work in our laboratory.

----------

#### Differences from the official protocol ####

We experienced difficulty recruiting participants for this study. We are at a smaller campus with fewer psychology majors (and a smaller subject pool) than I have experienced at other large universities. As such, we have not been able to run sessions of 20 people. Instead, our breakdown across the fall 2015 semester of data collection is below. When we first started data collection, we were turning people away, per the protocol (e.g., we had 15 people for the 9/22/15 and 10/6/15 sessions, but turned 3 away each day to stay in multiples of 4). However, once we realized we might not be able to finish data collection in the time allotted (by May 1, 2016), we decided to let people participate even in odd multiples, and even if we dipped below 8 people at a time (so long as we had at least 4). The justification for this decision, and how we carried it out while still staying within the primary constraints of the protocol, is described in detail below.

| Date       | Participants |
| ---------- | ------------ |
| 9/22/2015  | 12 |
| 10/6/2015  | 12 |
| 10/20/2015 | 13 |
| 11/2/2015  | 16 |
| 11/10/2015 | 7  |
| 11/16/2015 | 9  |
| 11/17/2015 | 10 |
| 11/18/2015 | 8  |
| 11/23/2015 | 6  |
| 11/24/2015 | 6  |
| 11/30/2015 | 4  |
| 12/1/2015  | 8  |
| 12/2/2015  | 13 |
| 1/20/2016  | 8  |
| 2/3/2016   | 11 |
| 2/17/2016  | 7  |
| 3/2/2016   | 10 |
| 3/16/2016  | 9  |

Range of people per session = 4 to 16; mean = 9.39.

When we had non-multiples of 4, we changed the calculation for payment as follows. After participants made their contributions and were progressing with the remainder of the survey, we used the Qualtrics "responses in progress" page to identify each participant's contribution. We logged each contribution into our Excel spreadsheet for that day's data collection, and once all contributions were entered, we sorted the data by a random number associated with each person in the Excel file. We then grouped people into groups of 4 and calculated their payment according to the original equation:

    4 - (their contribution) + (total group contribution) / 2

This equation is a reduction of the full equation, which is:

    4 - (their contribution) + (total group contribution * 2) / 4

We used the original equation for as many groups of 4 as were present that day. For the remaining people, we edited the equation's denominator to match the group size. For example, for a group of 3 people:

    4 - (their contribution) + (total group contribution * 2) / 3

And for a group of 5:

    4 - (their contribution) + (total group contribution * 2) / 5

This slight alteration of the protocol allowed us to collect more data than we otherwise would have been able to, and to finish data collection on time.
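For concreteness, here is a minimal Python sketch of this payment calculation. The $4 endowment, the doubling of the group pot, and the group-size denominator come from the equations above; how leftover participants are grouped when the remainder is 1 or 2 (here, folded into the last group of 4) is an illustrative assumption, since the text above only gives groups of 3 and 5 as examples.

```python
import random

ENDOWMENT = 4.0  # each participant's $4 endowment, per the protocol


def payment(own, group_total, group_size):
    # Full equation: 4 - (own contribution) + (total group contribution * 2) / group size
    return ENDOWMENT - own + (group_total * 2) / group_size


def session_payments(contributions):
    """Return (contribution, payment) pairs for one session's participants."""
    shuffled = list(contributions)
    random.shuffle(shuffled)  # stands in for sorting by a random number in Excel

    # Fill as many groups of 4 as possible; leftovers either form one smaller
    # group (e.g., 3) or are folded into the last group (e.g., making 5).
    cutoff = len(shuffled) - len(shuffled) % 4
    groups = [shuffled[i:i + 4] for i in range(0, cutoff, 4)]
    remainder = shuffled[cutoff:]
    if remainder:
        if groups and len(remainder) < 3:
            groups[-1].extend(remainder)  # assumption: avoid groups smaller than 3
        else:
            groups.append(remainder)

    results = []
    for group in groups:
        total = sum(group)
        results.extend((c, payment(c, total, len(group))) for c in group)
    return results


# Example: a 7-person session splits into one group of 4 and one group of 3.
for contribution, pay in session_payments([0, 1, 2, 4, 3, 2, 0]):
    print(f"contributed ${contribution:.2f}, paid ${pay:.2f}")
```

Note that a participant who contributes $0 while three groupmates contribute $4 earns 4 + (12 × 2) / 4 = $10, consistent with the "earn up to $10" figure stated earlier on this page.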
We made one other alteration to the protocol (although it is condoned in the official protocol) in an effort to recruit more participants. Specifically, we began posting fliers in the dorm and classroom building hallways to encourage participation after our 11/10/2015 collection session. We made it clear on the fliers that students did not have to be psychology majors to participate, and we indicated that we would pay non-psychology majors an extra $5 show-up fee in lieu of the psychology research credits that psychology majors received.

We finished data collection by the May 1 deadline, using the procedures with the slight modifications described above in this "differences from the official protocol" section. I have also uploaded to this implementation page the "Procedure for data collection" document we used in the implementation of this study (see attached).

There was one other difference from the official protocol to note. The room in which we collected data used a networked computer security scheme (an ASU University Technology Office protocol) that made it impossible for us to use the computer IP addresses as specified in the original protocol. The IP addresses were dynamic in this environment, so we were unable to use them to identify individual participants in a given session in order to calculate payments per the official protocol. After consultation with the ASU University Technology Office and Dr. Dan Simons (email dated 10/6/2015), we implemented a slightly different procedure for identifying participants in the Qualtrics "responses in progress" page for a given session. Our solution was to tape post-it notes labeled "computer number 1," "computer number 2," and so on to the desks at which the computers were placed. We altered the Qualtrics survey so that the first question on the opening page asked "What is your computer number?" For each session, we entered the computer number on that first screen and then left the computer open to the informed consent page for participants as they arrived. This solution enabled us to see the computer number for each participant in the active sessions, so we could identify people and pay them specific amounts per the formulas in the official protocol and our slight deviations described above in this "differences" section.

[1]: https://osf.io/ypd78/
[2]: https://osf.io/scu2f/