## README

This repository includes the analysis scripts, data, and codebooks required to reproduce the results of the paper "*Stimulus and expectation-driven novelty independently guide infant looking behaviour: a systematic review and meta-analysis*", forthcoming in *Nature Human Behaviour*.

## Pre-registration

Available under `Pre-registration Documents` (https://osf.io/942a6/). If you want to look at the pre-registration for this project, it makes most sense to start with the final revised pre-registration document, `meta-analysis_PN_VOE_prereg_21mar23.pdf` (https://osf.io/zgdju), or the official registry (https://osf.io/jghc3). We have highlighted and justified all changes for full transparency. Below is a description of the other pre-registration documents in `older_versions`.

### Small case study, N = 60

- `voe_initialanalysis_prereg.pdf` pre-registers a small case study (N = 60 infants from Liu & Spelke, 2017), which we pursued first.
- `voe_analysisroun2_prereg.pdf` is an update to the above, again focusing on the dataset of 60 infants.

### Full meta-analysis, N = 1899

- `meta-analysis_PN_VOE_prereg_21july22.pdf` pre-registers the screening procedure and confirmatory analyses of the larger dataset that is the focus of the paper.
- `meta-analysis_PN_VOE_prereg_1sept22.pdf` and `meta-analysis_PN_VOE_prereg_21mar23.pdf` report updates to the pre-registration, both of which were submitted before the data were visualized or analyzed.

## analysis_scripts

This sub-dir contains all the code required to clean, process, and run the analyses reported in the paper. The scripts should be run in the order they are listed, i.e. first `0.1_common_format`, then `0.2_join_data`, and so on (a minimal sketch of rendering them in sequence is given at the end of this README). HTML outputs for each file are also included in this folder.

In `0.1_common_format.Rmd`, we organize the individual infant-level datasets from each paper (found in the `orig_data_(1)` folder) into a rectangular format (saved in `orig_data_cleaned_(2)`), and then modify them so they share the same column names (saved in `outputs_(3)`).

In `0.2_join_data.Rmd`, we combine the infant-level datasets that are in a common format (found in `outputs_(3)`) to create the meta- and mega-analysis data CSVs. To get the meta-analysis data (saved in `processed_data` as `meta-analysis_data.csv`), we compute summary statistics for each experiment and add them to `meta-analysis_only_paper_data.csv` (found in `processed_data`), which holds characteristics of the experimental designs and summary statistics reported within the papers. To get the mega-analysis data (saved in `processed_data` as `mega-analysis_data.csv`), we take the infant-level data and add the experimental design information found in `meta-analysis_only_paper_data.csv`.

In `1_estimating_effects_meta.Rmd`, we take condition-level data (`processed_data/meta-analysis_data.csv`) and estimate the size of the perceptual novelty (PN) and violation of expectation (VOE) effects in experimental conditions to see how the sizes of these effects compare to each other. We also visualize effect sizes, investigate publication bias, test whether the effects are multiplicative or additive, and run a power analysis.

In `2_moderators_meta.Rmd`, we take condition-level data (`processed_data/meta-analysis_data.csv`) and (1) identify the moderators of PN and VOE in experimental conditions and (2) identify the moderators of PN and VOE in experimental and control conditions.
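To give a concrete sense of the kind of condition-level models fit in scripts 1 and 2, here is a minimal, illustrative sketch using the `metafor` package. This is not the pre-registered model specification: the package choice and the column names (`yi`, `vi`, `study`, `stimulus_type`) are placeholders assumed purely for illustration; see `meta_analysis_codebook.csv` for the actual variables in `meta-analysis_data.csv`.

```r
# Illustrative sketch only -- column names below are placeholders, not the
# actual variables in meta-analysis_data.csv (consult the codebook).
library(metafor)

dat <- read.csv("processed_data/meta-analysis_data.csv")

# Random-effects model of condition-level effect sizes (in the spirit of
# script 1), assuming a precomputed effect size `yi`, its sampling
# variance `vi`, and a `study` identifier used as a clustering unit.
m_overall <- rma.mv(yi, vi, random = ~ 1 | study, data = dat)
summary(m_overall)

# Adding a moderator (in the spirit of script 2), here a hypothetical
# `stimulus_type` column.
m_mod <- rma.mv(yi, vi, mods = ~ stimulus_type, random = ~ 1 | study, data = dat)
summary(m_mod)
```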
In `3_mega_analogs.Rmd`, we take individual infant-level data (`processed_data/mega-analysis_data.csv`) and repeat the analyses from scripts 1 and 2 at the infant level. This includes estimating the size of PN and VOE in experimental conditions to see how the sizes of these effects compare to each other, identifying the moderators of PN and VOE in experimental conditions, and estimating the effect of moderators in both experimental and control conditions for both effects. We also explore the differential effects of habituation on PN and VOE in individual infant data. **The .html outputs of this script contain checks of model assumptions; open them in any web browser to view.**

## Data

### orig_data_(1)

This sub-dir contains all the original infant-level data (and codebooks) as provided by the authors.

### orig_data_cleaned_(2)

This sub-dir contains the infant-level data for each paper in a rectangular format.

### outputs_(3)

This sub-dir contains the infant-level data for each paper in a common format (including the same column names and categorical values).

### processed_data

This sub-dir contains the datasets, and their corresponding codebooks, used in the analyses.

`meta-analysis_data.csv` includes summary statistics and information on the experimental design of each paper, used in the analyses in `1_estimating_effects_meta.Rmd` and `2_moderators_meta.Rmd`. Its corresponding codebook is `meta_analysis_codebook.csv`.

`mega-analysis_data.csv` includes individual infant-level data and information on the experimental design of each paper, used in the analyses in `3_mega_analogs.Rmd`. Its corresponding codebook is `mega_analysis_codebook.csv`.

`meta-analysis_only_paper_data.csv` is the same as `meta-analysis_data.csv`, except that it only includes data recovered from the papers, so it is missing some information (e.g. infant age and gender) for some conditions. It is used in `0.2_join_data.Rmd` to combine data recovered from the papers with summary statistics calculated from the raw infant-level data. The dataframe containing all values necessary for the analyses is `meta-analysis_data.csv`. Its codebook is also `meta_analysis_codebook.csv`, as it shares the same column names as `meta-analysis_data.csv`.

`pretty_study_names.csv` includes more readable unique study names, used in `0.2_join_data.Rmd` to add a column to the meta-analysis data with better labels for each study. These labels are used in the forest plots.

## screened_papers

This sub-dir documents our paper screening and data collection process.

`paper_screening_3jul24.xlsx` includes our search criteria and protocols; papers screened by title, abstract, and full text; the availability of data for papers that passed screening; and a log of authors contacted. Last updated 3 July 2024. A dynamic version of this spreadsheet is available at https://docs.google.com/spreadsheets/d/1uWrqDiAvPnEynbNppG3FucAPtzHQ-tLDF-2FnJB4zPI/edit?usp=sharing

`paper_data_3jul24.xlsx` includes details on the methods, experimental designs, and data from the papers, coded independently by two team members. Last updated 3 July 2024. A dynamic version of this spreadsheet is available at https://docs.google.com/spreadsheets/d/150i1yBv8_KHdI-A62zEc4F_TeSpavtjYSseq7xQYtqM/edit?usp=sharing
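## Running the scripts (sketch)

For reference, here is a minimal sketch of one way to re-run the pipeline in order. It assumes R with the `rmarkdown` package installed and the repository root as the working directory; these are assumptions about your local setup, and the .html files already included in `analysis_scripts` are the rendered outputs of our own runs.

```r
# Render the analysis scripts in the order described above.
# Sketch only: adjust the paths if your working directory is not the
# repository root.
library(rmarkdown)

scripts <- c(
  "analysis_scripts/0.1_common_format.Rmd",
  "analysis_scripts/0.2_join_data.Rmd",
  "analysis_scripts/1_estimating_effects_meta.Rmd",
  "analysis_scripts/2_moderators_meta.Rmd",
  "analysis_scripts/3_mega_analogs.Rmd"
)

for (f in scripts) rmarkdown::render(f)
```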