This website provides resources related to the following manuscripts:

1. **"Tracking the affective states of unseen persons"**, published in the Proceedings of the National Academy of Sciences (PNAS): https://www.pnas.org/content/early/2019/02/26/1812250116
2. **"Inferential affective tracking reveals the remarkable speed of context-based emotion perception"**, accepted for publication in *Cognition*.

We provide on this website the data, the code that we used to perform the analyses, and the (encrypted) video stimuli used in the experiments. If you use the materials on this website, please cite the manuscripts listed above.

**Video stimuli**
-------------

In the first manuscript, published in PNAS, we used silent video clips from a variety of sources, including Hollywood movies, home videos, and documentaries, totaling 5,593 seconds and 47 different clips across the experiments. We removed all auditory and text information in order to focus on the visual content alone. The video clips used in our experiments were gathered from an online video-sharing website (YouTube) based on the following criteria: 1) they show live action, not animation or monologue; 2) the emotions of the characters vary across time.

In the second manuscript, we used the same dataset as in the first manuscript. Specifically, we used the baseline (fully informed) and the context-only stimuli from Experiment 2 and Experiment 3 of the PNAS manuscript. We also ran another validation experiment by adding a time lag to the fully informed videos. These stimuli can be found in the folder "Stimuli_ExpContextSpeed_VideosWithLag.zip". The time at which the lag was inserted is encoded in the stimulus filename. For example, "Exp3_000_1cbr_9sec_5frames.mp4" means that a lag of 5 frames (100 ms) was inserted at the 9th second of the video.

To get a qualitative view of our raw video stimuli from their original sources, please see this [YouTube playlist][1].

The video stimuli on this website have been encrypted for copyright protection. To obtain the password, users need to sign an agreement (see "End_user_agreement.pdf" in Files) and send it to the author by filling out this short [survey form][2]. The agreement simply confirms that the dataset will be used solely for non-profit scientific research and that the database will not be reproduced or broadcast in violation of international copyright laws. If you use our videos, you agree to the following:

- To cite our reference in any of your papers that make any use of our resources.
- To use the videos for research purposes only.
- To not provide the videos to any second parties.

Once verified, the author will send the passwords to the email address specified in the survey form. The author updated the stimuli to a higher resolution on March 1st, 2019.

**Data**
-------------

In our experiments, subjects were required to rate the affect (valence and arousal) of a target character in the video clip continuously and in real time, while they watched the clip for the first time. Subjects were not allowed to view any clip more than once.

![enter image description here][3]

For the first manuscript in PNAS, the uploaded data were subjected to minimal pre-processing: they were only binned to a 10 Hz sampling rate. The continuous ratings of each video clip were saved as a '.csv' file with a file name like "indivil_baseline_arousal_000_100ms.csv" for individual subject data, or "mean_baseline_arousal_000_100ms.csv" for data averaged across subjects. These data can be found in the folder "Data_Exp1-3.zip". The file names contain information about:

1. The experimental condition ("baseline", "context-only", "character-only" or "blur-only");
2. The rating type ("valence" or "arousal");
3. The video clip number (e.g. "001");
4. The data sampling rate (e.g. "100ms" for 10 Hz).
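As a minimal, hypothetical sketch of how these files could be read with Python and pandas, the snippet below parses the naming convention and loads the files. The unzipped folder name, the helper `parse_rating_filename`, and the loop pattern are illustrative assumptions; the released Jupyter notebooks (see the Code section below) remain the reference implementation, and the exact column layout of each CSV should be checked by inspecting the loaded frame.

```python
# Illustrative sketch (not part of the released code): parse the file naming
# convention described above and load the rating CSVs with pandas.
import glob
import os
import pandas as pd

def parse_rating_filename(path):
    """Split e.g. 'mean_baseline_arousal_000_100ms.csv' into its fields."""
    name = os.path.splitext(os.path.basename(path))[0]
    level, condition, rating, clip, rate = name.split("_")
    return {"level": level,          # 'indivil' (individual) or 'mean'
            "condition": condition,  # 'baseline', 'context-only', ...
            "rating": rating,        # 'valence' or 'arousal'
            "clip": clip,            # video clip number, e.g. '000'
            "rate": rate}            # bin size, e.g. '100ms' (10 Hz)

# Assumes Data_Exp1-3.zip has been unpacked into ./Data_Exp1-3 (adjust as needed).
for path in sorted(glob.glob("Data_Exp1-3/mean_baseline_arousal_*_100ms.csv")):
    info = parse_rating_filename(path)
    ratings = pd.read_csv(path)  # inspect ratings.columns for the exact layout
    print(info["clip"], ratings.shape)
```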
For the second manuscript, the uploaded data were preprocessed for convenience: we averaged the ratings across subjects, then applied differencing and z-score standardization (a minimal sketch of this preprocessing is shown at the end of this page). For the ratings in Exp 2 (lagged vs. no-lag), the affect ratings collected during the first few seconds, before the 100 ms lag was introduced, were removed. These files have only two columns: the first for the time in the video and the second for the preprocessed data point. These data can be found in the folder "Data_ExpContextSpeed.zip".

**Code**
----

The code for analyzing the data was written in Python with Jupyter Notebook. With the necessary packages installed and the local file directory specified, the notebooks will run and reproduce all the major figures used in the manuscripts. The documentation for using the code will be updated continuously with more details.

**Additional resources**
--------------------

We have shared on GitHub [the code][4] that we developed to collect real-time emotion ratings on video clips. The code is written in JavaScript, HTML and CSS, and is based on packages including Node.js, npm and MongoDB.

We have also shared [the code][5] we developed to perform automatic video segmentation on any video clip, in order to separate regions belonging to the characters from those belonging to the context. The code is written in Python with Jupyter Notebook and is based on state-of-the-art object detection and segmentation machine learning models made available by the [Google Object Detection API][6].

[1]: https://www.youtube.com/playlist?list=PLm09SE4GxfvWhtSnHkfRt46w1RQ2iUpTF
[2]: https://goo.gl/forms/wOWK2xVR4pOkpRH32
[3]: https://i.imgur.com/beLImBB.png
[4]: https://github.com/MandyZChen/movie_tracking_exp
[5]: https://github.com/MandyZChen/IET-video-autosegmentation
[6]: https://github.com/tensorflow/models/tree/master/research/object_detection
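As noted in the Data section above, the "Data_ExpContextSpeed.zip" ratings were averaged across subjects, differenced, and z-scored. The snippet below is a minimal sketch of that kind of preprocessing, not the released analysis code; the helper name `preprocess_ratings`, the use of first-order differencing, and the simulated input are assumptions for illustration only.

```python
# Illustrative sketch of the preprocessing described in the Data section:
# average across subjects, difference, then z-score standardize.
import numpy as np
import pandas as pd

def preprocess_ratings(ratings):
    """ratings: DataFrame of shape (time, subjects), one column per subject."""
    mean_rating = ratings.mean(axis=1)           # average across subjects
    diffed = mean_rating.diff().dropna()         # first-order differencing (assumed)
    z = (diffed - diffed.mean()) / diffed.std()  # z-score standardization
    return z

# Example with simulated ratings sampled at 10 Hz (100 ms bins): 60 s x 20 subjects.
rng = np.random.default_rng(0)
fake = pd.DataFrame(rng.normal(size=(600, 20)))
print(preprocess_ratings(fake).head())
```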