# README

This is the repository associated with the paper "iCatcher+: Robust and automated annotation of infant gaze from videos collected in laboratory, field, and online studies" by Erel et al., in press at *AMPPS*. For author affiliations and a full and detailed description of author contributions, please see the spreadsheet publicly available at [iCatcher+_Author_Contributions][1].

## Code

To use the live and most current version of iCatcher+, which may differ from the publication version, please visit https://github.com/yoterel/icatcher_plus

All code necessary to reproduce the results and figures in the paper, as of the time of publication, is openly available at: https://doi.org/10.5281/zenodo.7226571

To see all current and past versions of the code, visit https://doi.org/10.5281/zenodo.7226570

## Data

Because the primary data are videos of children's faces, only those files for which there is explicit parental consent for public sharing are freely available online. The public videos from the Lookit dataset, along with human annotations and group-level demographics for all datasets, are available at https://osf.io/ujteb/ (with a [wiki][2] that describes the data and access instructions). In particular, videos from the Lookit dataset with permission granted for scientific use are available at https://osf.io/5u9df/.

To request full access, please contact Junyi Chu (junyichu@mit.edu). We will require two documents to grant access:

1. A current research ethics training certificate for conducting human subjects research, for example from [CITI][3], [TCPS 2: CORE][4], or equivalent. We will need an up-to-date certificate for each researcher who will have access to the raw video data, and this certificate should be valid for the duration of the access agreement.
2. A completed dataset access agreement ([template here][5]).

Note that access to raw video files from the California-BW and Senegal datasets is not openly available due to restricted participant privacy agreements. Also to protect participant privacy, participant identifiers for the video and demographic data are not linked to each other. However, this information is available upon reasonable request to Katherine Adams Shannon (kat.adams@stanford.edu).

## Directory structure

`OSF Storage / CITI_Certificates`: Human subjects ethics training certificates for people currently on the team

`Public Access Data/Meta_data`: Contains metadata (high-level demographic information such as binned age, as well as the assignment of videos to train/validation/test splits) and human annotations for all four datasets (`Cal-BW`, `Senegal`, `Lookit`, `Zoom`)

`Public Access Data/Other`: Contains our data management plan, coding guidelines for all four datasets, and raw data from our qualitative error analysis

`Public Access Data/Videos`: Contains videos cleared for public sharing (for `Lookit` only; the other three directories, for the `Cal-BW`, `Senegal`, and `Zoom` datasets, are intentionally empty)

[1]: https://docs.google.com/spreadsheets/d/1ZwKQWnCZiUg_WYkj2LAYLGZ52xkdKSMLbYres1Vp4gE/edit#gid=2131662387
[2]: https://osf.io/ujteb/wiki/home/
[3]: http://citiprogram.org
[4]: https://tcps2core.ca/welcome
[5]: https://osf.io/rzkv6
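
## Example: fetching the public files programmatically

The public metadata, annotations, and cleared videos listed above can be downloaded from OSF by script rather than through the web interface. The sketch below is not part of this repository; it is a minimal example assuming the third-party [osfclient](https://github.com/osfclient/osfclient) Python package (`pip install osfclient`). The project ID `ujteb` and the `Public Access Data/Meta_data` path come from this page; everything else (variable names, the prefix filter) is illustrative.

```python
# Hypothetical helper (not part of the repository): download the public
# metadata and human annotations from the OSF project described above,
# using the third-party osfclient package (pip install osfclient).
import os

from osfclient import OSF

PROJECT_ID = "ujteb"                      # OSF project ID from this wiki
PREFIX = "/Public Access Data/Meta_data"  # directory described above

osf = OSF()  # anonymous access suffices for the publicly shared files
project = osf.project(PROJECT_ID)
storage = project.storage("osfstorage")

for remote in storage.files:
    # remote.path is the file's full path within the project storage
    if not remote.path.startswith(PREFIX):
        continue
    local = remote.path.lstrip("/")
    os.makedirs(os.path.dirname(local), exist_ok=True)
    with open(local, "wb") as fh:
        remote.write_to(fh)  # stream the remote file's contents to disk
    print("downloaded", local)
```

The same pattern should work for `Public Access Data/Videos` (the publicly cleared `Lookit` videos) by changing the prefix; note that iterating `storage.files` walks the whole project tree, which is why the example filters by path prefix.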