**This OSF project stores qualitative modeling results from [this project](https://psyarxiv.com/pt6hx). Please refer to the manuscript for quantitative results.**

# Input

The SEM model variants used PCA-ed resampled input vectors from https://osf.io/bc9t5/ (e.g. *1.1.1_kinect_sep_09_all_pcs_input_vector.csv* in the *Resampled Data* directory).

# Output

The models were asked to predict the next timestep; the input vectors and the models' predictions are visualized in the **Output Activities** directory.

**NOTES:** There are "jumps" in the output videos. These are due to a mismatch between cached key frames and downsampled (resampled) frames (caching is necessary for fast video processing). The models did not experience these jumps. TODO: match cached key frames with resampled frames and re-plot the output videos.

## Explanation of the output videos:

### Top left:

An actor performs a sequence of actions. Red boxes mark the three objects nearest to the actor's right hand; cyan boxes mark background objects. Object names were fed into a GloVe model to extract semantic embeddings, and the embeddings of the nearest objects and the background objects were combined (weighted sum) into a single vector. This vector was projected onto a 13-dimensional vector space, and the projected vector served both as input and as ground truth for the model's prediction about semantic features. Yellow-tinted boxes mark the three objects whose embeddings are closest to the model's prediction.

The blue skeleton shows the raw 3D joint positions extracted from a Kinect camera. Joint positions were used to derive each joint's velocity, acceleration, and distance to the trunk. The vector of these derived features was projected onto a 14-dimensional vector space, and the projected vector served both as input and as ground truth for the model's prediction about motion features.

### Bottom left:

The 14-dimensional input vector was projected back to the original vector space of motion features, and the joints' 3D distances to the trunk were used to reconstruct the skeleton, viewed from two angles.
### Bottom right:

The model's predicted vector in the 14-dimensional space was projected back to the original vector space of motion features, and the joints' 3D distances to the trunk were used to reconstruct the skeleton, viewed from two angles.

### Top right:

The model's event boundaries and the human boundary density. The blue line is the distribution of human boundaries. Vertical lines indicate the model's event boundaries: solid lines mean the model creates a new event schema, dashed lines mean the model switches to an old event schema in its library of event schemas, and dotted lines mean the model restarts the currently active event schema. Each color represents an event schema.
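The projection pipeline described above (combine embeddings by weighted sum, project onto a low-dimensional space, and project model predictions back) can be sketched as follows. This is a minimal illustration, not the project's actual code: the embedding dimensionality, the corpus, and the weights `w_near`/`w_bg` are placeholders, and a plain PCA-style linear projection via SVD is assumed; see the manuscript for the method actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for GloVe embeddings (real GloVe vectors are larger, e.g. 300-d).
glove_dim = 50
nearest = rng.normal(size=(3, glove_dim))     # 3 objects nearest to the right hand
background = rng.normal(size=(5, glove_dim))  # background objects

# Weighted sum of nearest-object and background-object embeddings
# (the weights here are illustrative, not the project's actual values).
w_near, w_bg = 0.8, 0.2
combined = w_near * nearest.mean(axis=0) + w_bg * background.mean(axis=0)

# PCA-style projection fitted on a placeholder corpus of combined vectors.
corpus = rng.normal(size=(200, glove_dim))
mean = corpus.mean(axis=0)
U, S, Vt = np.linalg.svd(corpus - mean, full_matrices=False)
components = Vt[:13]  # top 13 principal axes

# Forward projection: the 13-d vector that serves as input and ground truth.
projected = components @ (combined - mean)

# Back-projection: a (13-d or, for motion features, 14-d) prediction is
# mapped back to the original feature space to reconstruct, e.g., a skeleton.
back = components.T @ projected + mean

print(projected.shape, back.shape)
```

The same `components.T @ prediction + mean` step is what "projected back to the original vector space" refers to for the skeleton panels, applied to the 14-dimensional motion-feature space instead.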