This is a dataset of video and annotation files contributed to the automated eye gaze coding project iCatcher+ (in partnership with MIT Quest). The PI for these data is Laura Schulz (lschulz@mit.edu) at Massachusetts Institute of Technology. Junyi Chu is the current administrator of the data and the OSF repository; she can be contacted with questions at junyichu@mit.edu.

## Data access

Because the primary data are videos of children's faces, only those files with explicit parental consent for public sharing are freely available online. To request full access to videos with scientific-use permissions, please contact Junyi Chu (junyichu@mit.edu). We require two documents to grant access:

1. A current research ethics training certificate for conducting human subjects research, for example from [CITI][3], [TCPS 2: CORE][4], or an equivalent program. We need an up-to-date certificate for each researcher who will have access to the raw video data, and the certificate should remain valid for the duration of the access agreement.
2. A completed dataset access agreement ([template here][5]).

## Data Components

To protect participant privacy and reduce the risk of reidentification, we have generated the following sets of IDs:

- `videoID`: A unique identifier for each session, used to identify video and annotation files.
- `childID`: A unique identifier for each participant, used to identify children in video and annotation files. Demographic metadata associated with this identifier include only age rounded to the nearest month and binarized parent race categories (White only or not).
- `childID.demohash`: A unique identifier for each participant, used to identify children in more detailed demographic data. This allows us to share demographic metadata at finer-grained levels (e.g., age in days with some jitter; original parent racial/ethnic categories) without connecting this information to individual video files.
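A two-ID scheme of this kind is commonly built from independently salted hashes, so that the two identifier spaces cannot be linked without both salts. The sketch below is purely illustrative and is not the project's actual pipeline; the function names, salt values, and jitter range are all assumptions:

```python
import hashlib
import secrets


def make_id(raw_id: str, salt: str, prefix: str) -> str:
    """Derive a short, non-reversible identifier from a raw participant ID."""
    digest = hashlib.sha256((salt + raw_id).encode("utf-8")).hexdigest()
    return f"{prefix}{digest[:8]}"


def jitter_age_days(age_days: int, max_jitter: int = 3) -> int:
    """Add small uniform noise to age in days to reduce reidentification risk."""
    return age_days + secrets.randbelow(2 * max_jitter + 1) - max_jitter


# Two independent salts yield two unlinkable ID spaces for the same child.
# (Hypothetical values; real salts would be kept secret and never published.)
SALT_VIDEO = "salt-for-video-ids"
SALT_DEMO = "salt-for-demo-ids"

raw = "participant-0042"
child_id = make_id(raw, SALT_VIDEO, "C")  # used with video/annotation files
demo_id = make_id(raw, SALT_DEMO, "D")    # used with detailed demographics
# Without both salts, child_id and demo_id cannot be connected.
```

Because the demographic table carries only `demo_id` and jittered ages, publishing it alongside the video metadata does not let a reader join the two on a shared key.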
### (1) Annotations

For each video file, we have annotations from up to three coders. For the iCatcher+ project, we used data only from coder1 (primary) and coder2 (reliability).

### (2) Video

This includes video files (.mp4) that are available for either public use or scientific use. Videos with permission for only scientific-level sharing can be accessed by submitting a request to the data administrator (see contact information above).

### (3) Metadata

This includes metadata for each video, reporting both video and participant characteristics. See the codebook to understand what the reported variables mean.

### (4) Other

This includes documentation and other project materials:

- Video annotation manual
- Preregistration, which includes a description of the study design and stimuli shown

[3]: http://citiprogram.org
[4]: https://tcps2core.ca/welcome
[5]: https://osf.io/rzkv6