## Training Data

*Information from the manuscript about the data used in this paper. Our exact training data and splits are included in [this archive](https://osf.io/vf9jt/files/osfstorage).*

The data used in this study were compiled from several prior studies with available item-level human judgments of AUT responses, as well as recent research with elementary-aged participants from the Measuring Original Thinking in Elementary Students (MOTES) project (Dumas et al., 2023; Acar et al., 2024). The federated data in this study comprise 27,217 responses from 2,039 participants across nine datasets. An overview of the datasets follows, organized by the identifier used for each in reporting.

1. betal18 (Beaty et al., 2018): This dataset used AUT prompts for box and rope, administered to 171 adult participants, resulting in 2,918 total responses. Responses were judged by four raters, with an averaged-random-raters intraclass correlation coefficient (ICC2k; see the computation sketch after this list) of .81.
2. bs12 (Beaty & Silvia, 2012): This dataset used a single prompt, brick, with 133 college-aged adults. Responses were judged by three raters (n = 1,807, ICC2k = .72).
3. dod20 (Dumas et al., 2020): This dataset consists of ten AUT prompts (book, bottle, brick, fork, pants, rope, shoe, shovel, table, tire) administered to 92 participants, scored by three raters (n = 5,435, ICC2k = .85).
4. hmsl (Hofelich Mohr et al., 2016): This dataset comprises 638 participants and two AUT prompts, paperclip and brick. Four judges rated the responses (n = 3,843, ICC2k = .67).
5. motesf: This is a previously unreleased dataset from the MOTES project, a study developing a DT test for elementary-aged students. The data used here are spelling-corrected responses from the AUT portion of the measure, with eight prompts administered to 385 participants and judged by four raters (n = 2,924, ICC2k = .73).
6. motesp: This dataset is a pilot version of the motesf data, with 35 participants and the same prompts, plus backpack and shoe (n = 339, ICC2k = .81).
7. setal08 (Silvia et al., 2008): This research studied DT through six tasks, including consequences, instances, and the AUT. The present study uses the AUT data, in which 241 participants were asked for creative uses for a brick and a knife. Three judges rated the originality of responses (n = 3,425, ICC2k = .48).
8. snb17 (Silvia et al., 2017): In this dataset, 142 college students were administered two AUT prompts: box and rope. Responses were judged by three raters (n = 2,272, ICC2k = .67).
9. snbmo09 (Silvia et al., 2009): Finally, in this dataset, 202 college-aged students were asked to develop alternate uses for three prompts: brick, knife, and box. In the originating study, 13 participants were removed for low engagement; this study uses all available data. Responses were judged by four raters (n = 4,099, ICC2k = .69).
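The reliability figure reported for each dataset, ICC2k, is a two-way random-effects intraclass correlation for the average of k raters. As a minimal sketch of how such a value can be computed from item-level judgments, the example below uses the pingouin library; the toy data and column names ("response_id", "rater", "originality") are hypothetical and not the schema of the archive above.

```python
# Sketch: computing ICC2k (two-way random effects, average of k raters)
# for one dataset's originality judgments.
import pandas as pd
import pingouin as pg

# Long-format table: one row per (response, rater) judgment.
ratings = pd.DataFrame({
    "response_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater":       ["a", "b", "c"] * 3,
    "originality": [2, 3, 2, 4, 4, 5, 1, 2, 1],
})

icc = pg.intraclass_corr(
    data=ratings,
    targets="response_id",  # the rated AUT responses
    raters="rater",
    ratings="originality",
)

# The reported reliabilities correspond to the ICC2k row:
# "average random raters" in pingouin's output.
icc2k = icc.loc[icc["Type"] == "ICC2k", "ICC"].iloc[0]
print(f"ICC2k = {icc2k:.2f}")
```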
## References

Acar, S., Dumas, D., Organisciak, P., & Berthiaume, K. (2024). Measuring original thinking in elementary school: Development and validation of a computational psychometric approach. Journal of Educational Psychology. https://doi.org/10.1037/edu0000844

Beaty, R. E., Kenett, Y. N., Christensen, A. P., Rosenberg, M. D., Benedek, M., Chen, Q., Fink, A., Qiu, J., Kwapil, T. R., Kane, M. J., & Silvia, P. J. (2018). Robust prediction of individual creative ability from brain functional connectivity. Proceedings of the National Academy of Sciences, 115(5), 1087–1092. https://doi.org/10.1073/pnas.1713532115

Beaty, R. E., & Silvia, P. J. (2012). Why do ideas get more creative across time? An executive interpretation of the serial order effect in divergent thinking tasks. Psychology of Aesthetics, Creativity, and the Arts, 6(4), 309–319. https://doi.org/10.1037/a0029171

Dumas, D., Organisciak, P., & Doherty, M. D. (2020). Measuring divergent thinking originality with human raters and text-mining models: A psychometric comparison of methods. Psychology of Aesthetics, Creativity, and the Arts. https://doi.org/10/ghcsqq

Hofelich Mohr, A., Sell, A., & Lindsay, T. (2016). Thinking inside the box: Visual design of the response box affects creative divergent thinking in an online survey. Social Science Computer Review, 34(3), 347–359. https://doi.org/10.1177/0894439315588736

Silvia, P. J., Nusbaum, E. C., & Beaty, R. E. (2017). Old or new? Evaluating the old/new scoring method for divergent thinking tasks. The Journal of Creative Behavior, 51(3), 216–224. https://doi.org/10.1002/jocb.101

Silvia, P. J., Nusbaum, E. C., Berg, C., Martin, C., & O’Connor, A. (2009). Openness to experience, plasticity, and creativity: Exploring lower-order, high-order, and interactive effects. Journal of Research in Personality, 43(6), 1087–1090. https://doi.org/10.1016/j.jrp.2009.04.015

Silvia, P. J., Winterstein, B. P., Willse, J. T., Barona, C. M., Cram, J. T., Hess, K. I., Martinez, J. L., & Richard, C. A. (2008). Assessing creativity with divergent thinking tasks: Exploring the reliability and validity of new subjective scoring methods. Psychology of Aesthetics, Creativity, and the Arts, 2(2), 68–85. https://doi.org/10.1037/1931-3896.2.2.68