This repository corresponds to the Lancaster University branch (Lab GBR-014) of Project 002 of the Psychological Science Accelerator (PSA), which investigates object orientation effects in language comprehension (general project site: ).

The first file to consider is the log sheet, which contains the randomised task order and the testing date for each participant. This file was assigned and uploaded here by a general lead of the PSA project on 9 October 2019, and it was updated regularly. Raw data were collected and uploaded here from 11 November 2019 onward. A video explaining the experimental procedure is available at: .

## Lab log

### Minor procedural issues and conclusions

This is an internal record of procedural issues, all of which have been minor so far. They arose from the potential for human error, due to lapses in attention, when the experimenter had to switch tasks or enter a participant number. Errors tended to occur when more than two participants were tested at once. As a take-home message for the future, the experimenter finds it advisable to take the aforementioned occasions for error into account and, on the experiment-design side, to reduce the opportunities for human error and to facilitate their resolution. To reduce human error, the experiments could be integrated in software, and the number of times the participant number and file names must be entered in each experiment could be reduced. To facilitate the resolution of errors, experimenter visibility of all recorded data (including Study 003) would help.

### Issue log

These issues are not expected to influence the final data, so they may be regarded as expendable; they may nonetheless be useful for the experimenter. Whenever a participant's data were discarded, as indicated below, all of their data for both Studies 002 and 003 were discarded.

- _Original_ Participant 17: A participant originally tested under this number was not administered the *PP* part of Study 002. Upon realisation, the participant's data were discarded altogether for Study 002 (directly by the experimenter, by recording another participant with the same ID later) and for Study 003 (by recording another participant with the same ID later and reporting the issue to the managers of Study 003);
- Participant 19: Arising from the above issue with Participant 17, one Participant 19 was tested on 19 November, and another was tested on 20 November, again with ID 19. As a result, not having access to fix the error in part of the data (Study 003, which is handled by the heads of that study), the data of the initial Participant 19 were replaced altogether with the data of the participant tested later. Furthermore, in a separate issue with the latter participant, the csv data file for the SP part of Study 002 was unexpectedly saved as defaultlog.csv by OpenSesame, and was then properly renamed;
- Participant 20: The *PP* part of Study 002 was recorded with ID 19 instead of 20, and was then properly renamed;
- _Original_ Participants 26 and 37, Participant 49: Wrong seed number (the seed number is used to assign a stimulus list). New participants were tested. With help from the head of Study 002, Sau-Chin Chen, the data of the *initial* Participant 26 were used for Participant 49;
- _Original_ Participant 44: Not administered the PP task. A new participant was tested.

### Participant feedback

- Some items in the 003 survey seemed extremely similar to each other;
- Regarding the screens that set a pause, especially important ones such as the screen between the 003 survey and the demographic questions, some participants suggested these could be made more noticeable.