This registration is a frozen, non-editable version of this project

**Note: the text below will be used for the introduction of the to-be-submitted paper. The text in that paper will be largely identical to the text below, with the only exception that it may include additional references and minor textual adaptations.**

According to Steel (2007), procrastination can be defined as irrationally delaying the start or completion of an intended action. In higher education, procrastination is extremely prevalent, with estimates ranging from about 30% to 95% depending on the academic task and the severity of procrastination (e.g., Day, Mensink & O’Sullivan, 2000; Ellis & Knaus, 1977; Solomon & Rothblum, 1985). Furthermore, Steel’s meta-analysis (see Table 6) demonstrated that procrastination has been consistently associated with poor academic performance, such as a lower Grade Point Average (GPA), course GPA, final exam grade and assignment grade. In addition, Semb, Glick and Spencer (1979) found that, compared to their non-procrastinating peers, students who procrastinate are more likely to drop out or spend more years in college. Moreover, there are reasons to assume that academic procrastination may be causally related to poorer performance. First, when students postpone their study efforts until a few weeks before the end-of-course exam, this leads to cramming, which is far less conducive to various forms of learning than spacing study efforts (e.g., Delaney, Verkoeijen & Spirgel, 2009; Hintzman, 1974; Toppino & Gerbier, 2014). In addition, classes and lectures feature tasks and exercises designed to promote learning strategies that produce meaningful knowledge structures (e.g., Mayer & Fiorella, 2015; 2016). However, these tasks and exercises will be considerably less effective when students are unprepared due to procrastination.
Considering the prevalence of procrastination and the associated problems, researchers and teachers have come up with several approaches to prevent or reduce procrastination in higher-education students. One of these approaches is to intersperse summative assessments throughout a course (e.g., Brook & Ruthven, 1984; Roberts & Semb, 1980; Tuckman, 1998; Wesp, 1986). For example, Tuckman (1998) compared two matched classes enrolled in a 6-week educational psychology course required for teacher certification. In one class, Tuckman administered weekly quizzes on course content of the preceding week. The other class received weekly homework assignments that required students to make an outline of the content covered in the preceding week. Students in both classes received feedback on their performance. Furthermore, both the quizzes and outlines were graded, and their average contributed as much to the final course grade as achievement on the final course exam. The results showed that the weekly quizzes enhanced final course exam achievement relative to making outlines, and the positive effect was strongest for students who procrastinated most. Wesp (1986) obtained a similar effect, demonstrating that achievement on the final course test in an introductory psychology class was higher for students who were required to take daily quizzes on each of the 12 course units than for students who were allowed to schedule the unit quizzes on their own (but see Morris, Surber & Bijou, 1978). Probably the most recent attempt to prevent academic procrastination through interspersed testing was proposed by Kerdijk and colleagues (Kerdijk, Cohen-Schotanus, Mulder, Muntinghe, & Tio, 2015; Kerdijk, Tio, Mulder, & Cohen-Schotanus, 2013) in the form of cumulative compensatory assessment (CA). In this approach, several summative assessments are administered throughout a course.
Most importantly, however, and contrary to the approaches described in the previous paragraph, each assessment covers all content that has been offered in the course up to the point of its administration. Due to the cumulative nature of the assessments, students are encouraged to test themselves repeatedly in a spaced manner, thereby promoting retrieval practice and spaced repetition, both of which are known to enhance learning (e.g., Karpicke & Roediger, 2006b; Carpenter, 2012; Rowland, 2014; Delaney et al., 2009). Furthermore, in cumulative assessment, the scores of multiple tests are combined into a single score that counts toward the final course grade. As a result, students can compensate for poor performance on one test with better performance on others. This in turn is assumed to help students maintain their study efforts at a high level throughout the course, because compensation provides them with the opportunity to repair a poor test result. However, the (frequent) use of summative assessment during the learning process has been criticized, as it may (1) stimulate students to engage in activities that maximize their chances of passing the test rather than activities that help them achieve meaningful learning goals, (2) promote a teaching style directed at knowledge transmission rather than knowledge construction and creativity, (3) lower the self-esteem of poorly performing students, and (4) lead to tests becoming the rationale for the things that are done in the classroom (e.g., Harlen & Deakin Crick, 2002; McLachlan, 2006). To prevent these negative consequences of summative assessment from occurring, researchers have proposed to use assessment in a formative manner.
Whereas summative assessment takes place at the end of an instructional unit to categorize students or to inform certification, formative assessment takes place during learning with the purpose of generating feedback that helps students to monitor, improve and accelerate their learning (e.g., Harlen & James, 1997; Sadler, 1989; 1998). Formative assessment is assumed to foster meaningful learning, self-efficacy and self-regulated learning because the provided feedback informs students about the learning goals and criteria that will be used on a summative end-of-unit assessment, informs them about their current level of performance, and helps them to progress towards the learning goals (Harlen & James, 1997).

**The present study**

Returning to the innovative summative cumulative assessment approach developed by Kerdijk and colleagues (2015), and considering the drawbacks associated with employing summative assessment during learning as well as the presumed positive effects of formative assessment, the question emerges whether a formative variant of cumulative assessment may produce different outcomes on motivation and learning than a summative variant. In the present study, we address this question in a field experiment conducted in an engineering course for first-year higher education students. Apart from being theoretically relevant, our study is practically relevant, because the administration of summative assessment puts more pressure on the teaching staff and examination bureaucracy than formative assessment, due to exam regulations that apply to all tests that contribute to certification (for example, taking measures to prevent fraud, grade registration, and providing retake opportunities). Hence, if formative cumulative assessment produces at least the same outcomes as summative cumulative assessment, the former variant will be easier to implement in educational practice.
The present study will be conducted in the first-year undergraduate course “materials science 1”, which is part of the mechanical engineering program at a Dutch University of Applied Sciences. All students from the 2017-2018 cohort will take part in the field experiment, and each of them will take three cumulative assessments before the final course exam. For a random half of the students, performance on these cumulative assessments contributes to the final course grade (the summative cumulative assessment condition, in which the average of the three cumulative assessments makes up 30% of the final grade), whereas for the other half performance does not contribute to the final course grade (the formative cumulative assessment condition). All participants will be tested immediately after the course (the final course exam) and after a 10-week delay, prior to the start of “materials science 2”, which follows up on “materials science 1”. We decided to include a delayed test because Kerdijk and colleagues (2015) suggest that summative cumulative assessment may only lead to better achievement in the long term. The treatment effect, defined as the mean performance difference between the summative and the formative cumulative assessment condition, is the primary outcome in the present field experiment, and we will examine it at the two retention intervals mentioned above. Furthermore, the nature of the cumulative assessment (formative vs. summative) may influence study effort, self-efficacy and perceived competence (e.g., Harlen & Deakin Crick, 2002; McLachlan, 2006). Therefore, as a secondary outcome, we will compare the two conditions on these measures.
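The grade weighting in the two conditions can be sketched as follows. This is only an illustration of the design described above: the registration states that the three cumulative assessments make up 30% of the final grade in the summative condition, so the remaining 70% weight for the final exam is an assumption, as is the 1-10 grading scale (the Dutch convention).

```python
def final_grade(ca_grades, exam_grade, condition):
    """Illustrative final-grade computation on a 1-10 scale (assumed).

    In the summative condition, the mean of the three cumulative
    assessment grades counts for 30% of the final grade; the remaining
    70% exam weight is an assumption, not stated in the registration.
    In the formative condition, only the exam counts.
    """
    ca_mean = sum(ca_grades) / len(ca_grades)
    if condition == "summative":
        return 0.3 * ca_mean + 0.7 * exam_grade
    # Formative condition: cumulative assessments generate feedback only.
    return exam_grade
```

For example, a student with cumulative assessment grades of 8, 7 and 9 and an exam grade of 7.0 would receive 7.3 in the summative condition but 7.0 in the formative condition.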
**Hypotheses**

The primary goal of the present experiment is to examine whether using summative versus formative cumulative assessment in a first-year undergraduate engineering course leads to a difference on the final course exam and/or on the delayed test. For each of these measures we will employ a frequentist approach to test the null hypothesis that mean performance is the same in the summative and the formative cumulative assessment condition. In addition, for each of these measures we will employ a Bayesian approach to compare the null hypothesis of no mean difference against the alternative hypothesis of an absolute difference of medium effect size. According to Hattie (2017), a medium effect is required to favor one educational intervention over another. The second goal is to compare the summative and the formative cumulative assessment condition on cumulative assessment grades, self-efficacy and perceived competence. Regarding the cumulative assessment grades, we will use a frequentist approach to test the null hypotheses that (1) the mean cumulative assessment grade does not differ between the two conditions, (2) the mean cumulative assessment grade does not differ per session (see the method section for details) and (3) the mean cumulative assessment grade is not affected by the interaction between session and condition. Finally, we will test the null hypotheses that mean self-efficacy and mean perceived competence do not differ between the summative and the formative cumulative assessment condition.
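The primary analysis described above could be sketched as follows. This is a minimal stand-in, not the registered analysis: it pairs a frequentist independent-samples t-test with a BIC-based (unit-information) Bayes factor approximation (Wagenmakers, 2007), whereas the registered Bayesian test compares the null against an informed alternative of a medium effect, which dedicated software such as JASP supports directly. All variable names are illustrative.

```python
# Sketch of the primary analysis: frequentist t-test plus a rough
# Bayes factor. Assumes scipy is installed; data are hypothetical.
import math
from scipy import stats

def primary_analysis(summative_scores, formative_scores):
    """Compare the two conditions on an exam-score outcome."""
    # Frequentist test of the null hypothesis of equal means.
    t, p = stats.ttest_ind(summative_scores, formative_scores)
    n = len(summative_scores) + len(formative_scores)
    df = n - 2
    # BIC (unit-information) approximation to the Bayes factor for
    # H0 over H1 (Wagenmakers, 2007):
    #   BF01 ~= sqrt(n) * (1 + t^2 / df) ** (-n / 2)
    # Note: this uses a default prior, not the medium-effect
    # alternative specified in the registration.
    bf01 = math.sqrt(n) * (1 + t**2 / df) ** (-n / 2)
    return {"t": float(t), "p": float(p), "BF01": bf01}
```

With clearly separated hypothetical scores, e.g. `primary_analysis([7, 8, 6, 9, 8, 7, 8, 9], [5, 6, 5, 7, 6, 5, 6, 7])`, the function returns a large t, a small p, and a BF01 well below 1, all pointing away from the null.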