<p>Citation: Kline, M., Schulz, L., & Gibson, E. (2017). Partial Truths: Adults Choose to Mention Agents and Patients in Proportion to Informativity, Even If It Doesn’t Fully Disambiguate the Message. Open Mind: Discoveries in Cognitive Science, 1(3), 123–135. <a href="https://doi.org/10.1162/opmi_a_00013" rel="nofollow">https://doi.org/10.1162/opmi_a_00013</a></p>
<h3>This Repository</h3>
<p>This repository contains a post-peer-review, pre-proof version of the Partial Truths manuscript (Open Mind, in press) and, in GitHub storage, all of the data, materials, and scripts needed to reproduce the reported analyses and/or replicate the experiment.</p>
<p>(These materials come with no guarantee that they <em>will</em> work out of the box on your system, but I am happy to help if you run into trouble using them.)</p>
<h3>Abstract</h3>
<p>How do we decide what to say to ensure our meanings will be understood? The Rational Speech Act model (RSA; Frank & Goodman, 2012) asserts that speakers plan what to say by comparing the informativity of words in a particular context. We present the first example of an RSA model of sentence-level (who-did-what-to-whom) meanings. In these contexts, the set of possible messages must be abstracted from entities in common ground (people and objects) to possible events (<em>Jane eats the apple, Marco peels the banana</em>), with each word contributing unique semantic content. How do speakers accomplish the transformation from context to compositional messages? In a communication game, participants described transitive events (e.g., Jane pets the dog) with only two words, in contexts where two words either were or were not enough to uniquely identify an event. Adults chose utterances matching the predictions of the RSA model even when no fully successful choice was possible. 
Thus we show that adults’ communicative behavior can be described by a model that accommodates informativity in context, beyond the set of possible entities in common ground. This study suggests that full-blown natural speech may result from speakers who model and adapt to the listener’s needs.</p>
<h3>Github Repository Contents</h3>
<h4>Data & Analysis/</h4>
<h5>Folders</h5>
<ul> <li>batch/</li> <li>log/</li> </ul>
<p>The raw data: outputs from AMT and the willow script.</p>
<ul> <li>Response coding/</li> </ul>
<p>Files used during the RAs' initial coding of raw responses into categories (e.g. 'dog' -&gt; PATIENT).</p>
<h5>Files</h5>
<ul> <li>MD_turk.R, lab-misc.R</li> </ul>
<p>The main analysis pipeline, which starts from the raw data and produces all analyses reported in Experiment 1. This file generates <em>humandata.csv</em> [this is NOT all of the data; it includes just the 'normal' (choose 2 of agent/verb/patient) responses, used to calculate parameters for the models] and <em>humanPerformance.jpg</em>.</p>
<ul> <li>snazzy potato 11-20.txt</li> </ul>
<p>Workers who participated in a pilot version of the study and so should not be included in the analysis. (We told them not to sign up if they had been in that experiment, but some did anyway; the analysis script uses this file to screen participants.)</p>
<h4>Presentation Script (formerly MD11-20-12/)</h4>
<p>The willow script, which randomizes order/condition assignments, displays trials to participants, and records responses to our server. It's old code. (Note: the csv/txt outputs in the internal log/ folder here are from piloting/script creation.)</p>
<h4>Stimuli (formerly Stims for Multi-distractor/)</h4>
<p>All stimuli used for the experiment reported in this paper. We include the original clip art used while generating the stimuli (<em>objects/</em>), all context scenes (<em>contexts/</em>, e.g. 
1 person and 6 animals for the 'FEED' event, FEED_1_6.jpg) and all <em>actions/</em> (FEED, DRINK, etc.).</p>
<p>The file Multi_Stimlist_full.csv is a summary used during stimulus creation.</p>
<h4>Models/</h4>
<ul> <li>humandata.csv</li> </ul>
<p>Identical to the version in Adult - Multidistractor/, copied here for convenience.</p>
<ul> <li>mainFun.m, modelInformativeUtt.m</li> </ul>
<p>Two MATLAB scripts that instantiate the models described in the paper; outputs are saved to the (many) csv files. (Note that the loop produces outputs for both the A human data and the B human data, but in cases where there are no data-fit parameters these wind up being identical.)</p>
<ul> <li>CompareModelToData.R</li> </ul>
<p>Computes all the correlations and graphs reported in the modeling section of the paper, plus some new exploratory plots of human vs. model predictions by word. This script produces all the jpgs.</p>
<ul> <li>MODEL SUPPLEMENT.docx</li> </ul>
<p>Documents two additional models not reported in the main paper: versions of the 'succeed or fail' and RSA models that also incorporate the cost of each word type (A, V, P) by estimating these costs from the dataset.</p>
<ul> <li>CSVs: dummycost, informative_baserate, informative_nobaserate, succeedorfail, etc.</li> </ul>
<p>Outputs of the models, produced by <em>mainFun.m</em>.</p>
<ul> <li>one word models/</li> </ul>
<p>Contains versions of the models used earlier on; these generated predictions for one-word productions and then sampled two words without replacement from those likelihoods. (This is much more confusing to read about than the two-word version, but equivalent.)</p>
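<p>For readers unfamiliar with the RSA framework, the basic speaker computation summarized in the abstract can be sketched in a few lines. This is a minimal toy illustration only, not the repository's MATLAB implementation: the events, two-word utterances, their semantics, and the rationality parameter alpha below are all invented for the sketch.</p>

```python
import math

# Toy context: three transitive events ("messages") a speaker might intend.
events = ["jane_pets_dog", "jane_feeds_dog", "marco_pets_dog"]

# Hypothetical two-word utterances, each mapped to the events it is literally
# true of (invented semantics for illustration).
semantics = {
    ("jane", "pets"): {"jane_pets_dog"},
    ("jane", "dog"):  {"jane_pets_dog", "jane_feeds_dog"},
    ("pets", "dog"):  {"jane_pets_dog", "marco_pets_dog"},
}

def literal_listener(utterance):
    """L0: uniform over the events the utterance is literally true of."""
    true_of = semantics[utterance]
    return {e: (1 / len(true_of) if e in true_of else 0.0) for e in events}

def speaker(intended_event, alpha=1.0):
    """S1: choose utterances in proportion to exp(alpha * log L0(event | u))."""
    scores = {}
    for u in semantics:
        p = literal_listener(u)[intended_event]
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    total = sum(scores.values())
    return {u: s / total for u, s in scores.items()}

probs = speaker("jane_pets_dog")
# ("jane", "pets") uniquely identifies the event, so it gets probability 0.5;
# the other two utterances are each true of two events and get 0.25 apiece.
```

<p>In the experiment's critical conditions no utterance fully disambiguates; in that case every literal-listener probability drops below 1, and the speaker distribution still ranks utterances by relative informativity, which is the graded behavior the paper tests against.</p>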