This experiment follows the in-lab pilot experiments replicating Strickland & Keil (2011), exploring the role of category and continuity violations in the "filling-in" effect.

## Design

This experiment has a straightforward 2x2 between-subjects design. The two factors of interest are "category violations", in which the target object changes before and after the cut (e.g., from a dart to a rolled-up piece of paper), and "continuity violations", in which the object appears "too far" along its trajectory following the cut. The presence of each type of violation is manipulated independently, yielding a "no-violation" condition, a "category violation only" condition, a "continuity violation only" condition, and a "both violations" condition.

## Sample

Based on the observed effect sizes in the pilot experiments (Cohen's *d* = .81), we estimated we would need 25 participants per group to reach 80% power on a minimal paired comparison between any two conditions. To be sufficiently powered to detect any possible interactions, we doubled this to 50 participants per group. This gives the design 80% power to detect a Cohen's *f*<sup>2</sup> of .04, corresponding to a small-to-medium effect size.

## Exclusions

Participants with less than 50% accuracy overall will be excluded and replaced until we achieve the target sample size. This requires partially analyzing the data prior to exclusion (as the DV of interest is part of this accuracy measure), but this objective cutoff (used in Experiments 1 and 2) unambiguously identifies participants who either did not understand the experiment or failed to pay attention to the video.

POST-DATA-COLLECTION MODIFICATION: Prior to conducting the analysis or looking at the data (indeed, before the first round of exclusions), we decided to use an exclusion criterion of 50% accuracy on all NON-TARGET items.
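As an illustration, the non-target accuracy cutoff could be applied as in the following minimal sketch. The trial-level layout, column names, and toy data here are hypothetical, since the preregistration does not specify a data format:

```python
import pandas as pd

# Hypothetical trial-level data: one row per participant x test item.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "item_type":   ["target", "seen", "lure", "seen",
                    "target", "seen", "lure", "seen"],
    "correct":     [0, 1, 1, 1,
                    0, 0, 1, 0],
})

# Accuracy on NON-TARGET items only (the modified criterion).
non_target = trials[trials["item_type"] != "target"]
accuracy = non_target.groupby("participant")["correct"].mean()

# Keep participants at or above 50% non-target accuracy;
# excluded participants would be replaced until the target N is reached.
keep = accuracy[accuracy >= 0.5].index
included = trials[trials["participant"].isin(keep)]
```

In this toy example, participant 1 (non-target accuracy 1.0) is retained and participant 2 (non-target accuracy 1/3) is excluded.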
However, post-hoc analyses using the original exclusion criterion, which includes the target item (adding 5 exclusions and making the samples slightly uneven), are qualitatively identical to those using the modified criterion in all of the key results.

## Predictions and analysis

The primary DV will be the rate of "yes" responses to each type of image (target, seen-image, and lure). The initial analysis will be a 2 (Category violation: present vs. absent; between) x 2 (Continuity violation: present vs. absent; between) x 3 (Item type: target vs. seen-image vs. lure; within) mixed-model ANOVA, from which we predict a significant interaction between item type and continuity violation. Regardless of the outcome of the initial analysis, we will follow it with separate 2x2 between-subjects ANOVAs of the effects of the two violations on each item type.

The primary DV of interest is the rate of "yes" responses to the target item, i.e., the false alarm rate to the moment of release. Our primary prediction is a main effect of continuity, but we may also observe a main effect of category violation or an interaction.

We will also examine accuracy on "category check" items in the category violation conditions, that is, both the seen-image and lure items from the second half of the video, which show either the transformed object or the object that was present in the first half of the video. This will determine whether participants accurately detected the category change. We will conduct a single-sample *t*-test against chance (50%) for each condition.