## Research Questions ##

In educational settings like ours, students and professors alike are keen to facilitate student learning. One common strategy adopted by students, and expected by professors, is to take notes during class. In years past, the only option students had was to buy a notebook and a pen and take their notes longhand. More recently, of course, many students opt instead to use a laptop to take notes. Which promotes better learning?

[Mueller and Oppenheimer (2014)][1] conducted a clever set of experiments to answer this question. In each experiment, participants watched a prerecorded lecture. Prior to watching the lecture, students received either a laptop or pen and paper so they could take notes. They subsequently took a quiz that assessed their factual and conceptual understanding of the lecture material. In two of three experiments, results indicated that longhand note taking led to better conceptual understanding than laptop note taking. In the third experiment, the difference was found only among students given an opportunity to study their notes prior to taking the quiz. In addition, in all three studies, participants in the laptop condition exhibited greater verbatim overlap between their notes and the recorded lecture, and a higher number of words, than participants in the longhand condition.

There are three reasons why this work is a great candidate for replication. First, it generated significant media attention; pieces were published by the [Washington Post][2], [NPR][3], [Scientific American][4], and other outlets. The presence of media attention indicates that the topic was perceived to be of broad interest to the public. Second, the research has obvious relevance to students and thus has high potential to inspire curiosity and motivation in the research team, most of whom are students in an experimental psychology course at Tufts University in the Spring 2017 semester.
Third, a replication that draws on a larger sample size will be useful to the field; it will enable us to estimate the size of the effect with greater precision.

**Hypotheses**

This replication will test the same hypotheses examined by Mueller and Oppenheimer (2014). They reported their hypotheses on page 1160 as follows: to investigate whether taking notes on a laptop versus writing longhand affects academic performance, and to explore the potential mechanism of verbatim overlap as a proxy for depth of processing. If we obtain the same pattern of results the original authors obtained, we will find that longhand note taking leads to better conceptual-application performance than laptop note taking. In addition, we will find greater verbatim overlap between participant notes and the lecture watched, and more words, in the laptop condition than in the longhand condition.

**TIER Protocol**

We thank the members of Project TIER for providing a project template. As they say, it "was constructed to follow the TIER Protocol for conducting and documenting an empirical research project. Information about the [TIER Protocol][5], and in particular about [how to use this template][6], can be found on the [Project TIER website][7]." We have adapted that structure using OSF components in this project.

## Sampling Plan ##

We will register this study information document on the Open Science Framework prior to creation of data.

**Data collection procedures**

Data will be collected in various locations on the Tufts campus. Experimenters will reserve a room (e.g., at the Campus Center, 574 Boston Ave) and instruct participants to meet them at that location and to bring a pair of headphones, if they have them. After providing informed consent, participants will take notes as they watch one of the five prerecorded lectures that Mueller and Oppenheimer (2014) presented to their Study 1 participants (see [][8]).
Participants will be randomly assigned to take notes with a laptop or on paper in a notebook with a pen or pencil (longhand). Per the original authors, participants will be “instructed to use their normal classroom note-taking strategy, because experimenters [are] interested in how information [is] actually recorded in class lectures” (p. 1160). Participants will view the lecture on a computer monitor. If available, participants will wear their own headphones/earbuds to minimize distraction. After watching their assigned video, participants will complete distractor tasks (a typing test, the Need for Cognition scale, and a reading span task) for about 30 minutes (see example survey). They will then answer both factual-recall and conceptual-application questions about the material presented in the video lecture they watched. Following these items, participants will respond to items asking about their expertise on the topic of the lecture, demographic information, their typical use of a laptop versus notebook to take notes, their opinions about each method, and their study habits. At the conclusion of these procedures, participants will be debriefed, compensated, and excused. All procedures will be presented, and the data collected, via a Qualtrics survey without identifying information. At the conclusion of the session, an experimenter will immediately transcribe the notes taken by participants into the same Qualtrics survey for later analysis.

**Sample size**

Up to 250 participants will be recruited for this study during the Spring 2017 semester. Participants will be recruited through online sources, including social media posts via researcher accounts on Twitter, Facebook, etc., and the Psychology Department’s SONA credit and paid sites. In addition, flyers will be posted in the community, including on the Tufts campus. Please see recruitment materials for specific wording of advertisements/flyers. Participants will receive USD $15 as compensation.
Participants will be adults aged 18 years and older. Recruitment efforts will specifically target college undergraduates, reflecting the sample studied in the original experiment by Mueller and Oppenheimer (2014). Our recruitment materials will provide a link to a Qualtrics survey that will anonymously ask for age and whether the respondent is a college student; eligible respondents will then see a page that allows them to schedule a session if desired.

**Sample size rationale**

We will recruit as many participants as we can (up to N = 250) within the time available in the Spring 2017 semester. At a minimum, we seek to recruit N = 67 participants, the number of participants originally studied. Our hope, though, is to recruit at least N = 200 participants. With N = 100 in each of two experimental conditions and one covariate (which video), we will have 80% statistical power to detect a condition effect as small as Cohen’s f = .20, which is equivalent to eta-squared = .04 or d = .40. The original effect was eta-squared = .13, which is equivalent to Cohen’s f = .39 or Cohen’s d = .77.

**Stopping rule**

Data will be collected until we reach our target N = 200 or until the end of the Spring 2017 semester, whichever comes first.

## Variables ##

**Manipulated variables**

We will manipulate note taking condition on a between-subjects basis. The two levels of this categorical variable are 1) longhand and 2) laptop. We will also manipulate which video participants watch on a between-subjects basis. The five levels of this condition (by abbreviated label) are 1) islam, 2) inequality, 3) ideas, 4) indus, and 5) algorithms.
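The power claim in the sample-size rationale above can be sanity-checked with a short script. This is a simplified sketch, not the power analysis software we used: it ignores the video covariate, approximates the two-group comparison with a two-sample z test, and relies on the standard conversions f = sqrt(eta2 / (1 - eta2)) and d = 2f for two equal groups.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def f_from_eta2(eta2):
    """Cohen's f from eta-squared."""
    return math.sqrt(eta2 / (1.0 - eta2))

def power_two_group(d, n_per_group, alpha_z=1.959964):
    """Approximate power of a two-sided, two-sample z test (alpha = .05)."""
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return normal_cdf(noncentrality - alpha_z)

# Smallest effect stated in the rationale: eta2 = .04 <=> f ~ .20 <=> d ~ .40
f_min = f_from_eta2(0.04)
print(round(f_min, 2), round(2 * f_min, 2))

# With N = 100 per condition, power for d = .40 comes out near the stated 80%
print(round(power_two_group(0.40, 100), 2))

# Original effect: eta2 = .13 <=> f ~ .39 <=> d ~ .77
f_orig = f_from_eta2(0.13)
print(round(f_orig, 2), round(2 * f_orig, 2))
```

The normal approximation runs slightly high relative to an exact noncentral-t power calculation, but it confirms that the conversions among f, eta-squared, and d reported above are internally consistent.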
Here is a list of the videos and links to their location on the website from which they were obtained:

- Mustafa Akyol – [Faith versus Tradition in Islam][9]
- Richard Wilkinson – [How Economic Inequality Harms Societies][10]
- Matt Ridley – [When Ideas Have Sex][11]
- Rajesh Rao – [Computing a Rosetta Stone for the Indus Script][12]
- Kevin Slavin – [How Algorithms Shape Our World][13]

**Measured variables**

Participants will complete both factual-recall and conceptual-application items after watching the videos. These will be the primary dependent variables. Please see the survey for specific wording of these items. We will also measure the number of words in the notes that participants take and, if the proper software can be obtained, the level of verbatim overlap with the assigned lecture.

**Indices**

Longhand notes will be transcribed for coding. All notes will be scored by researchers who will be blind to condition assignment. We will calculate two scores for each participant, one capturing factual-recall performance and one capturing conceptual-application performance (averaging across all items of each type).

## Design Plan ##

**Study type**

Experiment.

**Blinding**

No blinding. Experimenters will be aware of the hypotheses and of the levels of each condition (note taking condition and which video) to which a participant is assigned. To avoid experimenter bias, all experimenters will follow a standardized script; they will also remove themselves from participants’ line of sight during data collection. Participants will be aware that they are taking notes longhand or using a laptop and which video they watched. They will not be told that note taking condition and which video are factors of interest to the researchers.

**Study design**

The study uses a 2 × 5 between-subjects factorial design. Mirroring the original authors, our primary interest is in the effect of the experimental manipulation of note taking condition (2 levels).
The manipulation of which video participants watch (5 levels) will be treated as a random factor; it was not described as a factor of primary interest by the original authors.

**Randomization**

Participants will be randomly assigned to one of ten design cells represented by the cross between note taking condition and which video.

## Analysis Plan ##

**Statistical models**

Our confirmatory analysis plan is as follows:

1. To test whether note taking condition influences factual-recall or conceptual-application scores, we will compute two analyses of variance with one fixed factor, note taking condition (longhand, laptop), and one random factor (which video).
2. To test whether note taking condition influences word count, we will compute one independent samples t-test to compare the two note taking conditions (longhand, laptop) across video conditions.
3. If we are able to obtain the relevant software, we will test whether note taking condition influences the level of verbatim overlap between participant notes and the original lectures (one-gram, two-gram, three-gram). To do so, we will compute three independent samples t-tests to compare the two note taking conditions (longhand, laptop) across video conditions.

**Transformations**

Raw values for the factual-recall and conceptual-application scores will be converted to Z scores for analysis. If inspection of variables indicates a substantial violation of analytic assumptions, we will report the results of the originally proposed confirmatory analyses as well as results based on a transformation or an appropriate alternative analysis.

**Follow-up analyses**

We will calculate Cohen’s d effect sizes for differences between note taking conditions for the word count, verbatim overlap, factual-recall, and conceptual-application variables. We will calculate eta-squared to capture the effect of condition on factual-recall and conceptual-application performance, taking into account which video participants watched.
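To illustrate the planned effect-size calculations, the sketch below computes Cohen's d from a pooled standard deviation and eta-squared as the ratio of between-group to total sum of squares. The group scores are made-up example values, not real data, and the functions are a minimal sketch rather than our full analysis code.

```python
from statistics import mean

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation of two groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    ss1 = sum((x - m1) ** 2 for x in group1)
    ss2 = sum((x - m2) ** 2 for x in group2)
    sd_pooled = ((ss1 + ss2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / sd_pooled

def eta_squared(groups):
    """Eta-squared: between-group sum of squares over total sum of squares."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_total = sum((x - grand) ** 2 for g in groups for x in g)
    return ss_between / ss_total

# Hypothetical Z-scored conceptual-application scores for two small groups
longhand = [0.5, 0.6, 0.7, 0.8]
laptop = [0.3, 0.4, 0.5, 0.6]
print(round(cohens_d(longhand, laptop), 2))        # positive: longhand > laptop
print(round(eta_squared([longhand, laptop]), 2))
```

Because `eta_squared` takes a list of groups, the same function can later absorb the five video conditions when partitioning variance by which video participants watched.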
We will use a Z test to compare the replication effect sizes to the original effect sizes. If effects are potentially leveraged by outlying values, we will repeat the analysis without the outlying values and report results with and without such outlier exclusions.

**Inference criteria**

We will use null hypothesis significance testing with alpha = .05. We do not plan to adjust our alpha for multiple comparisons. A successful replication will be defined in these ways:

1) Finding a statistically significant effect of note taking condition on the following:
   - conceptual-application performance (longhand > laptop)
   - number of words (laptop > longhand)
   - verbatim overlap (laptop > longhand)
2) Finding a replication effect size that is:
   - not significantly different from the original effect size
   - not statistically equivalent to 0 (lower and upper equivalence bounds: +/- d = .40, or whatever effect size is detectable with 80% power given the final sample size)

**Data exclusion**

Following the strategy reported by Mueller and Oppenheimer (2014), data collected from participants for whom one or more of the following are true will be excluded from our analyses:

- the participant reports having previously seen the lecture to which they were assigned
- there was a data recording error

**Missing data**

Our analyses will automatically exclude subjects missing one or more observations. If we have too many missing observations due to listwise deletion, we will run a parallel analysis in Mplus using full information maximum likelihood estimation with robust standard errors, such that all available data will be included.

**Exploratory analysis**

In exploratory analyses, we may do the following:

1. Repeat our confirmatory analyses after excluding participants who do the following:
   - correctly infer that the purpose of the study is to determine the effect of note taking condition on performance
   - provide responses to fewer than half of the factual-recall or conceptual-application items
2. Conduct a linear regression analysis to determine whether the word count and verbatim overlap variables predict factual-recall or conceptual-application performance.
3. Conduct a mediation analysis to determine whether the word count and verbatim overlap variables transmit the effect of note taking condition on performance.

**Known differences between this replication study and the original study**

1. We will not collect GPA or SAT score information from participants because this sensitive information is not critical to replicating the key findings of the original study. The omission of these variables will not affect the key results because these variables were collected after the manipulations and measures of interest in confirmatory hypothesis testing in the original experiment by Mueller and Oppenheimer (2014).
2. We will administer all manipulations and measures via a Qualtrics survey. Doing so will facilitate our ability to collect the data in a standardized way for every participant and will minimize the risk of data loss given the number of experimenters who will be involved in collecting these data. It is possible that this change could affect the key results.
3. We will add a question asking participants to tell us in an open-ended fashion what they think the study is about. The addition of this variable will not affect the key results because it will be collected after the manipulations and measures of interest in confirmatory hypothesis testing.
4. The study will recruit college undergraduates, primarily from Tufts University, rather than Princeton University.
These are both selective private institutions; thus, the populations of interest are similar. Nevertheless, it is possible that drawing from a different population may affect the key results.

5. In the original Study 1 experiment, the authors indicated that participants completed the study in a classroom, generally in groups of two, with the video lecture presented via a projector on a screen at the front of the room. We cannot ensure that a classroom setting will always be accessible to our experimenters. As such, participants will view the lecture on a computer monitor. If available, participants will wear headphones/earbuds to minimize distraction. This is the same procedure the original authors adopted in their Study 2. As such, we do not think this change will affect the key results of interest for our replication of Study 1.

6. The original authors did not indicate in the published paper what the two 5-minute distractor tasks entailed. The first author kindly provided them on request, but these materials were not amenable to administration via Qualtrics. Thus, we will administer a 5-minute typing test and the Need for Cognition Scale, which are the same distractors used by the original authors in Study 2. As such, we do not think this change will affect the key results of interest for our replication of Study 1.

[1]: 
[2]: 
[3]: 
[4]: 
[5]: 
[6]: 
[7]: 
[8]: 
[9]: 
[10]: 
[11]: 
[12]: 
[13]: 
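If the overlap-scoring software used by the original authors cannot be obtained, the verbatim-overlap measure described in the analysis plan could be approximated with a simple script. The sketch below is our own assumption about how the measure might be operationalized (proportion of the notes' n-grams that also appear in the lecture transcript), not the original authors' software, and the texts shown are toy examples.

```python
def ngrams(text, n):
    """All consecutive n-word sequences in a whitespace-tokenized text."""
    tokens = text.lower().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def verbatim_overlap(notes, transcript, n):
    """Proportion of the notes' n-grams that occur anywhere in the transcript."""
    note_grams = ngrams(notes, n)
    if not note_grams:
        return 0.0
    transcript_grams = set(ngrams(transcript, n))
    return sum(g in transcript_grams for g in note_grams) / len(note_grams)

# Toy example for illustration only
lecture = "the quick brown fox jumps over the lazy dog"
notes = "the quick fox jumps"
for n in (1, 2, 3):
    print(n, verbatim_overlap(notes, lecture, n))
```

Computed at n = 1, 2, and 3, this yields the one-gram, two-gram, and three-gram overlap scores named in the analysis plan; longer n-grams are stricter, so overlap falls as n rises.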