**Repositories for various stimuli**
------------------------------------------

This [website][1] lists a bunch of different stimulus databases.

**Scenes**

- [VAMOS][2] (Valence, Arousal, and Memorability of Scenes; from [Wakeland-Hart & Aly, 2023][3])
- [Places database][4]
- [SUN database][5]
- [FIGRIM Dataset][6] (images from the SUN database, with memorability scores)
- [FIGRIM Fixation Dataset][7] (images from the FIGRIM dataset, with eye fixation data)
- [Large-scale Image Memorability][8] (60,000 images from diverse sources)
- [UC Davis Visual Cognition Lab Meaning Map Repository][9] (scenes and their 'meaning maps', which show the spatial distribution of semantic information within each scene)
- [Google Open Images][10] (object-oriented scenes)
- [ImageNet][11]
- [Art & Room stimuli][12] (from the Aly & Turk-Browne, 2016, [Cerebral Cortex][13] and [PNAS][14] papers)
- [Scene perception stimuli][15] (from [Aly & Yonelinas, 2012, PLoS One][16] and [Aly et al., 2013, Neuron][17])

**Objects**

- [Object Memorability Image Normed Database Software][18] (from the Duncan Lab; generates custom stimulus sets from a bank of 1,748 normed images)
- [Objects and similar lures][19] (Mnemonic Similarity Task from the Stark Lab)
- [Yet more objects][21] (from Wilma Bainbridge)
- [Virtual reality objects][22]
- [DinoLab Object Database][24] (1,000 object images with visual & semantic norms)
- [Common Objects in Context][25] (COCO)
- [Possible and Impossible Objects][26] (courtesy of Erez Freud; read how to use and cite them [here][27])
- [ImageNet][28]
- [THINGS object concept and object image database][29] (1,800+ object concepts and 26,000 object images!)
- [Line drawings][30] (50 million of them!)

**Faces**

- [AI-generated faces][31] (millions of them!)
- [10k US Adult Faces Database][32] (follows the demographics of the 1990 US census)
- [American Multiracial Face Database][33] (110 faces with mixed-race heritage)
- [Face Place][34] (Tarr Lab)
- [Face Research Lab][35] (faces of diverse ethnicity & age)
- [Face databases][36] (collection by Ryan Stolier)
- [Psychological Image Collection at Stirling][37] (PICS)

**Dynamic Stimuli**

- [Video clips of hand gestures and interactions with objects][39]
- [Moments in Time — 3-second videos with labeled actions][40] (1 million of them!)
- [COIN dataset][41] (instructional videos)

**Colors**

- [Perceptually uniform continuous colormaps][42]
- [Perceptually distinctive colors for categorical data][43]

**Shapes**

- [Validated circular shape space][44]

[1]: https://meta-meta-resources.org/
[2]: https://osf.io/ufg89/wiki/home/
[3]: https://osf.io/preprints/psyarxiv/grxdz/
[4]: http://places.csail.mit.edu/
[5]: https://vision.princeton.edu/projects/2010/SUN/
[6]: http://figrim.mit.edu/
[7]: http://figrim.mit.edu/index_eyetracking.html
[8]: http://memorability.csail.mit.edu/
[9]: https://osf.io/ptsvm/
[10]: https://storage.googleapis.com/openimages/web/index.html
[11]: http://www.image-net.org/
[12]: https://www.dropbox.com/s/0289myye50yhp0z/art-room-stimuli.zip?dl=0
[13]: https://www.ncbi.nlm.nih.gov/pubmed/25766839
[14]: https://www.ncbi.nlm.nih.gov/pubmed/26755611
[15]: https://www.dropbox.com/s/so02gqkenf4g1iz/state-strength-stimuli.zip?dl=0
[16]: https://www.ncbi.nlm.nih.gov/pubmed/22272314
[17]: https://www.ncbi.nlm.nih.gov/pubmed/23791201
[18]: https://github.com/DuncanLab/OMINDS
[19]: http://faculty.sites.uci.edu/starklab/mnemonic-similarity-task-mst/
[21]: http://www.wilmabainbridge.com/interactionenvelope.html
[22]: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238041
[24]: https://mariamh.shinyapps.io/dinolabobjects/
[25]: http://cocodataset.org/#home
[26]: https://www.dropbox.com/s/snwnmjpw5ncz3pk/PosImp_Stimuli.zip?dl=0
[27]: https://www.dropbox.com/s/0scmx48ql819qw5/read-me.rtf?dl=0
[28]: http://www.image-net.org/
[29]: https://osf.io/jum2f/
[30]: https://quickdraw.withgoogle.com/data
[31]: https://generated.photos/faces
[32]: https://www.wilmabainbridge.com/facememorability2.html
[33]: https://osf.io/qsdrp/
[34]: https://sites.google.com/andrew.cmu.edu/tarrlab/stimuli?authuser=1
[35]: https://figshare.com/articles/Face_Research_Lab_London_Set/5047666
[36]: https://rystoli.github.io/FSTC.html#stim
[37]: http://pics.stir.ac.uk/
[39]: https://developer.qualcomm.com/software/ai-datasets/jester
[40]: http://moments.csail.mit.edu/
[41]: https://coin-dataset.github.io/
[42]: https://colorcet.holoviz.org/user_guide/Continuous.html
[43]: https://colorcet.holoviz.org/user_guide/Categorical.html
[44]: https://osf.io/d9gyf/