### Demo

A demonstration of the task, exactly as subjects would experience it but without data collection, is available [here.][1]

### Subjects

Based on pilot data, we expect to exclude roughly 50% of the data we collect. We will thus recruit 500 subjects in order to finish with roughly 100 subjects in each of the unexpected object conditions, and 50 subjects in the condition with no unexpected object. We will recruit and screen subjects via Amazon's Mechanical Turk. In order to accept the HIT, subjects must be based in the US, have a HIT approval rating of 95% or higher, and have completed at least 100 HITs. Subjects also must not have completed a previous experiment in the lab; we will screen for this using TurkGate.

### Materials and stimuli

This experiment is programmed in JavaScript and is hosted on an external website. The primary task is a multiple object tracking task. The display window is a 700 by 600 pixel rectangle with a light blue (#58ACFA) background. A dark blue (#0000FF), 10 by 10 pixel fixation square is centered in the display. There are two sets of objects, one white and one black. Each set has a square (40 by 40 pixels), a triangle (50 pixel base, 50 pixel height), a diamond (56 pixels wide by 56 pixels tall), and a circle (46 pixel diameter). These objects move between 66 and 198 pixels per second, and randomly increase or decrease their velocity by 66 pixels per second every 300 to 1000 milliseconds (without exceeding 198 pixels per second or falling below 66 pixels per second). The unexpected object is a 40 by 40 pixel cross with 14 pixel wide arms that moves at 132 pixels per second while it is on screen.

### Procedure

Subjects are taken to the experiment after accepting the HIT on Mechanical Turk. An instruction screen explains how to do the task and indicates which set of objects they should monitor. Subjects are assigned at random to track either the white or the black objects.
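The motion rule for the tracked objects (speed steps of ±66 px/s every 300 to 1000 ms, clamped to 66–198 px/s) can be sketched as follows. This is an illustrative sketch, not the actual experiment code; the function names and structure are assumptions:

```javascript
// Speed parameters stated above.
const MIN_SPEED = 66;   // px/s
const MAX_SPEED = 198;  // px/s
const STEP = 66;        // px/s change per update

// Step the speed up or down so it stays within [MIN_SPEED, MAX_SPEED].
// At a boundary, only one direction is possible.
function updateSpeed(speed) {
  const canIncrease = speed + STEP <= MAX_SPEED;
  const canDecrease = speed - STEP >= MIN_SPEED;
  if (canIncrease && canDecrease) {
    return speed + (Math.random() < 0.5 ? STEP : -STEP);
  }
  return canIncrease ? speed + STEP : speed - STEP;
}

// Delay until the next speed change: 300 to 1000 ms (illustrative).
function nextChangeDelay() {
  return 300 + Math.random() * 700; // ms
}
```

With a 66 px/s step, each object's speed only ever takes the values 66, 132, or 198 px/s.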
The experiment then proceeds to the tracking task. On each trial, all objects remain stationary for 1 second to allow subjects to prepare, then move around randomly for 15 seconds. The objects occlude each other as they pass, but always pass behind the fixation square. After each trial, subjects enter their count of the bounces. Subjects complete two such trials. In the no-unexpected-object condition, subjects then complete a third trial identical to the first two. In the conditions that do feature an unexpected object, it passes through the display on the third trial, traveling horizontally at a fixed vertical position in the middle of the display and passing behind fixation. The unexpected object emerges from behind an invisible occluder set 70 pixels in from the edge in the 80% window size condition, or 210 pixels in from the edge in the 40% window size condition. It disappears behind an invisible occluder positioned the same distance from the opposite edge. Whether the unexpected object travels left-to-right or right-to-left is randomized for each subject. The unexpected object always exits the display 2 seconds before the end of the trial, and so onsets at a different point in the trial in each condition: in the 80% condition it is visible for 5 seconds and onsets 8 seconds into the trial; in the 40% condition it is visible for 2.67 seconds and onsets 10.33 seconds into the trial. Although the distance the object travels is halved between the two conditions, the unexpected object emerges gradually from behind an occluder, so there is a fixed period during which it is partially visible irrespective of the window size; the visible time in the 40% condition is therefore slightly more than half that in the 80% condition. After subjects enter their count for the critical trial, they are probed for noticing (regardless of whether an unexpected object was actually present).
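The onset times above follow directly from the trial length and the fixed exit time (2 seconds before the end of the 15-second motion period). A quick check of that arithmetic, using the stated visible durations (constant names are illustrative):

```javascript
const TRIAL_DURATION = 15;  // seconds of motion per trial
const EXIT_BEFORE_END = 2;  // object exits 2 s before the trial ends

// Onset time = trial end - exit margin - visible duration.
function onsetTime(visibleDuration) {
  return TRIAL_DURATION - EXIT_BEFORE_END - visibleDuration;
}

console.log(onsetTime(5));    // 80% condition → 8 s into the trial
console.log(onsetTime(2.67)); // 40% condition ≈ 10.33 s into the trial
```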
They are asked whether they noticed anything new on the previous trial, and then, regardless of their answer, they are asked to indicate on a scaled-down version of the display where the object was when they first noticed it.

![Location of object at various time points in the 40% condition.][2]

Above is an example of where the unexpected object would be at different time points in the 40% condition.

![Location of object at various time points in the 80% condition.][3]

Above is an example of where the unexpected object would be at different time points in the 80% condition.

Because the velocity, onset point, and trajectory of the unexpected object are known and fixed, there is a perfect correspondence between the unexpected object's position and how long it has been on screen. We can therefore use the reported location data as a proxy for when subjects noticed the unexpected object: if they place it near its onset point, noticing may have occurred rapidly (in less than a second); if they place it further into the display, we can work backward and determine roughly how many seconds that position corresponds to. They are then asked to indicate the unexpected object's shape and color, and to answer a series of demographic questions. At the end of the experiment, they are given a completion code to enter on MTurk to complete the HIT.

[1]: http://simonslab.com/mot/temporal_mot_demo.html
[2]: https://mfr.osf.io/export?url=https://osf.io/npgc9/?action=download&mode=render&direct&public_file=False&initialWidth=684&childId=mfrIframe&parentTitle=OSF%20%7C%20fig_40_pct.png&parentUrl=https://osf.io/npgc9/&format=2400x2400.jpeg
[3]: https://mfr.osf.io/export?url=https://osf.io/gqbr2/?action=download&mode=render&direct&public_file=False&initialWidth=684&childId=mfrIframe&parentTitle=OSF%20%7C%20fig_80_pct.png&parentUrl=https://osf.io/gqbr2/&format=2400x2400.jpeg
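The position-to-time mapping used in this analysis can be sketched as follows. Since the unexpected object moves at a constant 132 px/s from a known emergence point, a reported horizontal position converts directly to time on screen. This is a minimal sketch assuming the 700-pixel display width and the occluder offsets stated above; the function and parameter names are illustrative:

```javascript
const DISPLAY_WIDTH = 700; // px
const SPEED = 132;         // px/s, speed of the unexpected object

// occluderOffset: 70 px (80% condition) or 210 px (40% condition).
// direction: +1 for left-to-right travel, -1 for right-to-left.
function timeOnScreen(reportedX, occluderOffset, direction) {
  const emergenceX = direction === 1
    ? occluderOffset
    : DISPLAY_WIDTH - occluderOffset;
  const distance = Math.abs(reportedX - emergenceX);
  return distance / SPEED; // seconds since emergence
}
```

For example, a left-to-right object in the 80% condition reported at x = 334 would have been on screen for (334 − 70) / 132 = 2 seconds when noticed.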