<p>This General Information section was created on 27/04/2020. Most recent update: 30/09/2020.</p> <hr> <h2><strong>GENERAL INFORMATION</strong></h2> <p>This wiki is used to log the SPRINT Project's methods and data. It will be updated progressively throughout the project.</p> <hr> <h3><strong>Title of project</strong></h3> <p><em>Speech Prosody in Interaction: The form and function of intonation in human communication.</em><br> <br></p> <h3><strong>Dates of project</strong></h3> <p>1st October 2019 - 30th September 2024<br> <br></p> <h3><strong>Funding</strong></h3> <p>SPRINT is funded by the European Research Council (grant no. <a href="https://cordis.europa.eu/project/id/835263" rel="nofollow">ERC-ADG-835263</a>).<br> <br></p> <h3><strong>Project links</strong></h3> <p>Project website: <a href="http://www.sprintproject.io/" rel="nofollow">www.sprintproject.io/</a><br> <br></p> <h3><strong>Research team</strong></h3> <p><strong>Amalia Arvaniti</strong><br> &nbsp;&nbsp;&nbsp;<strong>Role:</strong> Principal Investigator<br> &nbsp;&nbsp;&nbsp;<strong>Institution:</strong> Radboud University<br> &nbsp;&nbsp;&nbsp;<strong>Email:</strong> amalia.arvaniti@let.ru.nl<br> <br> <strong>Yiya Chen</strong><br> &nbsp;&nbsp;&nbsp;<strong>Role:</strong> Senior Staff Member<br> &nbsp;&nbsp;&nbsp;<strong>Institution:</strong> Universiteit Leiden<br> <br> <strong>Chris Cummings</strong><br> &nbsp;&nbsp;&nbsp;<strong>Role:</strong> Senior Staff Member<br> &nbsp;&nbsp;&nbsp;<strong>Institution:</strong> The University of Edinburgh<br> <br> <strong>Kathleen Jepson</strong><br> &nbsp;&nbsp;&nbsp;<strong>Role:</strong> Post-doctoral Research Associate (2020-2022)<br> &nbsp;&nbsp;&nbsp;<strong>Institution:</strong> Radboud University<br> &nbsp;&nbsp;&nbsp;<strong>Email:</strong> kathleen.jepson@let.ru.nl<br> <br> <strong>Georg Lohfink</strong><br> &nbsp;&nbsp;&nbsp;<strong>Role:</strong> Lab Manager<br> &nbsp;&nbsp;&nbsp;<strong>Institution:</strong> University of Kent<br> <br>
<strong>Cong Zhang</strong><br> &nbsp;&nbsp;&nbsp;<strong>Role:</strong> Post-doctoral Research Associate (2019-2022)<br> &nbsp;&nbsp;&nbsp;<strong>Institution:</strong> Radboud University<br> &nbsp;&nbsp;&nbsp;<strong>Email:</strong> cong.zhang@let.ru.nl<br></p> <h3><strong>Previous team members</strong></h3> <p><strong>Bryony Dutta</strong><br> &nbsp;&nbsp;&nbsp;<strong>Role:</strong> Project Officer<br> &nbsp;&nbsp;&nbsp;<strong>Institution:</strong> University of Kent<br></p> <h3><strong>Other contributors</strong></h3> <p>Other people are also assisting us with aspects of SPRINT. So far, these include: Lazaros Gonidis, University of Kent (frequency acuity test); Lucy Berrington, University of Kent (Greater London Area pilots); Eleni Kapogianni, University of Kent; Angelos Lengeris, Eleftheria Politi and Danae Tsinivits, University of Athens (Athens pilots); and Christopher Spathis, University of Melbourne (Athens pilots video materials).</p> <p>We would also like to thank the following colleagues and friends who have assisted with the translation of documents into Greek: Eleni Agathopoulou (AUTH), Stergios Chatzikyriakidis (Göteborg), Lazaros Gonidis (Kent), Elena Karlos, Afroditi Pina (Kent), Arhonto Terzi (UPatras), and Stavroula Tsiplakou (OUC) for their help with translating a number of questionnaires; and Dafni Bagioka, Stergios Chatzikyriakidis, and Zesses Seglias (AUTH) for their help in translating PROMS.</p> <p><br></p> <hr> <p>This About the Project section was created on 28/04/2020. Most recent update: 05/06/2020.</p> <hr> <h2><strong>ABOUT THE PROJECT</strong></h2> <hr> <h3><strong>Introduction</strong></h3> <p>Intonation, the modulation of voice pitch, is essential for communication as it conveys information that helps listeners make inferences about the pragmatic intent of the speaker. Despite increased understanding of intonation’s importance, there is little agreement even about essential aspects of its structure and meaning.
There are two main reasons for this. First, research has either focused on the <strong>form</strong> of intonation, often taking a reductive approach to meaning, or concentrated on <strong>meaning</strong> without full scrutiny of form. Second, most research has eschewed the study of <strong>intonational variability</strong>, seeing it as a problem rather than a natural facet of speech production that needs to be understood and accounted for. <strong>Examining all three aspects in tandem</strong> is critical for understanding how intonation is structured and functions in communication: taking meaning into consideration when we study intonational form (i.e., when we study the phonetics and phonology of intonation) can help delimit intonational categories, uncover the limits of within-category variability, and disentangle phonological representation from phonetic realization; in turn, a robust understanding of intonational form leads to insights into intonational pragmatics.</p> <p>SPRINT takes exactly this integrative approach to examine intonational phenomena attested in English and Greek that have vexed researchers for some time. These include uptalk, high and rising accents, and question tunes. Two varieties per language are being studied: British English as spoken in the Greater London Area (LonEN) and Bristol English (BriEN), and Standard Athenian Greek (AthGR) and Corfiot Greek (CorGR). Their systematic differences with respect to the phenomena under investigation allow us to examine both cross-linguistic differences and dialectal variation and its role in communication. The investigation involves phonetic and pragmatic analysis and modelling, as well as a series of behavioural and neurophysiological experiments. We intend to use these methods to shed light on the realization, structure and function of intonation so as to develop a robust model of intonational phonology and pragmatics.
</p> <p><br></p> <h3><strong>Challenges</strong></h3> <p>The aim of the project is to address three main challenges associated with the study of intonation. Each challenge is associated with specific research questions and objectives.<br> <br></p> <h4><strong>A. The challenge of representation</strong></h4> <p>At present there is no consensus as to how intonation should be represented, whether it requires some form of abstract representation and, if so, of what type. This is not surprising given that the main phonetic exponent of intonation, f0, does not present linguistically relevant changes in pattern that can help delineate its components; tunes are visually represented as curves, and at a certain level they may be retained as such by speakers of a language. In addition, f0 curves that differ dramatically from each other may be treated as instances of the same tune by a language’s speakers, while curves that are minimally distinct may be treated as entirely different in terms of their meaning and function.<br> <br></p> <p><strong>Research Questions</strong></p> <ul> <li>What are the building blocks of intonation?</li> <li>How can they be determined and phonologically represented?</li> <li>How can we disentangle phonetic realization from phonological representation?</li> </ul> <p><br> <strong>Objective</strong>: To develop criteria for determining phonological representations and deciding between alternatives, using knowledge derived from a systematic understanding of the realization and meaning of intonation (see objectives for challenges B and C), alongside consideration of phonological principles.</p> <p><br></p> <h4><strong>B. The challenge of phonetic variability</strong></h4> <p>Phonetically, the realization of tunes can vary quite significantly, based on a number of linguistic, social, and stylistic, as well as idiosyncratic, variables. For example, the shape of f0 curves depends on utterance length and a number of other linguistic parameters.
Intonation also shows dialectal and social variation, the full gamut of which has not been explored in a sufficient number of languages. Finally, different speakers prioritize different aspects of the realization of a tune in their own production, and are not all equally sensitive to small differences when interpreting tunes.<br> <br></p> <p><strong>Research Questions</strong></p> <ul> <li>What are the sources of variability in intonation?</li> <li>How is variability constrained in speech, and how can we disentangle within-category variability from gradience?</li> <li>How can variability be modelled?</li> <li>Does intonation involve redundant cues, and if so, what are they and what is their role in interpreting a tune’s pragmatic intent? Is there cue-trading between f0 and segmental cues?</li> <li>How is variability handled by listeners, and are there individual differences among them?</li> </ul> <p><br> <strong>Objective</strong>: To document facets of intonational variability in production and study their role in perception in order to arrive at better models of both intonational structure and meaning, as well as disentangle paralinguistic gradience from within-category variability.</p> <p><br></p> <h4><strong>C. The challenge of meaning</strong></h4> <p>What is the nature and structure of intonational meaning? How does intonation affect pragmatic interpretation and processing?<br> <br></p> <p><strong>Research Questions</strong></p> <ul> <li>What is the nature and structure of intonational meaning?</li> <li>How does intonation affect pragmatic interpretation and processing?</li> </ul> <p><br> <strong>Objective</strong>: To analyse spontaneous data and use behavioural and neurophysiological experiments and computational modelling to understand the nature and structure of intonational meaning, and thus shed light on the structure of intonational representations, variability and gradience (objectives A and B respectively).</p> <p><br></p> <h3><strong>Phenomena Studied in SPRINT</strong></h3> <p>In order to address the three challenges and meet the stated objectives, SPRINT focuses on the following intonational phenomena.<br> <br></p> <p><strong>English uptalk</strong>: Uptalk refers to pitch rises at the end of statements. Despite the existing body of research on uptalk, there is no agreement as to what makes a phrase-final pitch rise an instance of uptalk, and whether there is a gradient or categorial difference between uptalked statements, on the one hand, and questions and continuation rises, on the other. SPRINT looks into uptalk in the Greater London Area (LonEN), where uptalk is considered an innovation, and in Bristol English (BriEN), where uptalk is considered the default ending of statements. Our aim is to document the usage, form(s) and functions of uptalk in the two varieties and determine how these are encoded when a variety (here, BriEN) makes extensive use of uptalk.<br> <br></p> <p><strong>English high pitch accents</strong>: Some analyses of English intonation distinguish between an H* and an L+H* accent, with H* encoding new and L+H* contrastive information. Other analyses consider these realizations to be the endpoints of a continuum that relates to emphasis. The former position assumes a categorial distinction related to information structure, while the latter treats the differences as gradient. LonEN is said not to make the distinction between H* and L+H*, raising questions as to how it encodes differences in information structure; e.g.
can such differences be encoded gradiently, and if so, how can such effects be reconciled with a phonological, categorial representation of intonation? The results will be compared to those from BriEN, which uses uptalk for new information, potentially leading to the use of entirely different tunes (and pitch accents) to encode information structure distinctions.<br> <br> </p> <p><strong>Greek high pitch accents</strong>: Standard Athenian Greek (AthGR) has three high pitch accents: H*, which encodes new information; L+H*, which encodes contrastive information; and H*+L, which is used to introduce new information that the speaker believes should have already been in the common ground. Corfiot Greek (CorGR) does not use distinct accents for new and contrastive information, posing similar issues to those pertaining to LonEN with respect to English high pitch accents. Cross-linguistic comparison between LonEN and CorGR will allow us to determine if similar strategies are used in both to encode this essential difference in information structure, and will shed further light on the issue of gradience in intonation.<br> <br> </p> <p><strong>Greek polar question tunes</strong>: Polar (yes-no) questions have an unusual tune in AthGR which ends low and allows speakers to place focus on different words. This tune is not part of the CorGR system; in CorGR the tune used with questions is very similar to that used with statements, except that in questions it does not appear to allow for variation in the location of focus. This raises questions about within-dialect intelligibility of questions (which are not grammatically marked in CorGR) and the possible role of gradience in distinguishing questions from statements.
Since the tune of CorGR questions is very different from that of AthGR, this raises additional questions about cross-dialectal intelligibility and the role of intonation in shaping pragmatic meaning during a communicative interaction.</p> <p><br></p> <hr> <p>This Production Study Pilots section was created on 28/04/2020. Most recent update: 05/06/2020.</p> <hr> <h2><strong>PRODUCTION STUDY PILOTS</strong></h2> <hr> <p><strong><em>We are currently conducting pilot studies for the first phase of SPRINT, which consists of the collection (and subsequent annotation, analysis and modelling) of a large corpus from the four varieties in the project, focusing on the four phenomena studied in SPRINT: rising accents in English and Greek, uptalk in English, and polar question tunes in Greek.</em></strong></p> <p><strong><em>As the UK went into lockdown just as we were starting to recruit for our pilots in Athens and the Greater London Area, we had to develop new ways to collect our pilot data remotely. We mention some of these changes below.</em></strong></p> <hr> <h4><strong>Dates of 1st phase of pilot data collection</strong></h4> <p>Projected: May 2020 (English, Greater London)<br> Projected: June 2020 (Greek, Athenian)</p> <hr> <h3><strong>Methods</strong></h3> <p><br></p> <h4><strong>1.
Questionnaires and tests</strong></h4> <p>All participants are asked to complete the following tests and questionnaires:</p> <p>(a) Demographic questionnaire<br> (b) <a href="https://www.autismresearchcentre.com/project_7_asquotient" rel="nofollow">The Adult Autism Spectrum Quotient</a><br> (c) <a href="https://www.autismresearchcentre.com/project_1_empathy" rel="nofollow">The Empathy Quotient</a><br> (d) <a href="https://ipip.ori.org/" rel="nofollow">The IPIP personality test</a><br> (e) Mini-PROMS test (Profile of Music Perception Skills)<br> (f) Frequency discrimination acuity test (developed by the SPRINT team with help from Lazaros Gonidis)<br> (g) Exit questionnaire</p> <p>Questionnaires are provided as HTML forms in the original language of the participants. We have created Greek translations for the AQ, EQ, IPIP, and PROMS tests (those for AQ, EQ, and IPIP are based on the official translations with amendments to improve clarity).</p> <p>We are developing our own frequency discrimination acuity test to examine acuity at frequencies relevant for intonation. We will be posting details as soon as the test preparations are complete.</p> <p><br></p> <h4><strong>2. Tasks</strong></h4> <p>In the production pilots we are trying out a number of tasks. Our aim is to elicit natural and varied speech in conditions that are ecologically valid, to the extent possible, and involve different styles of speaking. Our tasks include the reading of scripted materials, as well as semi-scripted and spontaneous tasks.</p> <p><br></p> <h4><strong>2.1 Reading scripted materials</strong></h4> <p><br> <strong>(a) Mini-dialogues</strong></p> <p>Mini-dialogues of two to three turns each, using lexical items and constructions suitable for the examination of pitch accents, sag between pitch accents, and uptalk.
In some instances (one-word polar questions and utterances with H*+L in Greek), dialogues are replaced by a modified Discourse Completion Task (DCT): participants are given a pragmatic situation and a sentence they are meant to produce in a way suitable for the context. In both English and Greek, this task is run with a confederate who speaks the target variety natively and reads part of the dialogue (or the instructions in the DCT items).</p> <p><br> In the English dialogues, we manipulated the following factors: </p> <p>(i) Length of utterance (one-word utterances, e.g., <em>Melinda</em>, vs. longer utterances, e.g., <em>No, we went through the meadow</em>). </p> <p>(ii) Information status (new vs. contrastive focus). </p> <p>(iii) Location of focus (e.g., <em>Anna <strong>Morrison</strong></em> vs. <em><strong>Anna</strong> Morrison</em>). </p> <p>(iv) Location of stress (e.g., one-word utterances include <em>Molly</em> vs. <em>maroon</em>). </p> <p>(v) Distance between accents (to examine tonal crowding and distal effects; e.g., in two-word utterances we used both <em>Lou Neil</em>, which exhibits tonal crowding due to the stress clash and short duration of each name, and <em>Malory Neil</em>, in which there is no clash). </p> <p>(vi) For uptalk in particular, we include a variety of sentence types (declaratives with different information structure, and sentences of varied pragmatic intent that could be expressed using uptalk or other rises, e.g., lists, polar questions, sentences expressing uncertainty). </p> <p><br> In the Greek dialogues, we manipulated the following factors: </p> <p>(i) Sentence modality: the materials include polar questions and declaratives. </p> <p>(ii) Length of utterance: one- and two-word utterances (e.g., <em>Μελίνα</em>, <em>Μαρίνα Γαλάνη</em>). </p> <p>(iii) Location of focus in two-word utterances in both statements and questions (e.g., <em>Ζωζώ <strong>Μόραλη</strong></em> vs. <em><strong>Ζωζώ</strong> Μόραλη;</em>).
</p> <p>(iv) Location of stress (e.g., one-word utterances include oxytones, e.g., <em>Δοδώ</em>, paroxytones, e.g., <em>Λήδα</em>, and proparoxytones, e.g., <em>Βύρωνας</em>). </p> <p>(v) Distance between accents in two-word utterances (to examine tonal crowding and distal effects, e.g., <em>Ζωζώ Μόραλη</em>, <em>Ρωμύλο Λαιμό</em>). </p> <p>(vi) Information status in declaratives, so as to elicit the three accents. </p> <p><br> <strong>(b) Narratives</strong></p> <p>Written narratives with some dialogue elements. A folktale and a news article are being tested in the pilot study. To achieve ecological validity, this task is completed with an experimenter who speaks the language and to whom each participant reads the narratives.</p> <p><br></p> <h4><strong>2.2 Semi-spontaneous materials</strong></h4> <p><br> <strong>(a) Story-telling</strong></p> <p>We are piloting two methods of story-telling: retelling of the stories in the read narratives, and a simple picture-prompted storytelling game (<a href="https://www.storycubes.com/en/" rel="nofollow">Rory's Story Cubes</a>, using six cubes at a time for a total of three short narratives). To achieve ecological validity, both Greek and English participants complete the tasks with an experimenter. </p> <p>In English, story-telling is expected to lead to the use of uptalk and information structure distinctions. </p> <p>In Greek, story-telling is expected to lead to the use of information structure distinctions; to elicit responses with the H*+L pitch accent (which expresses an element of surprise), we are asking Greek participants to tell one of the stories with a confederate who asks naïve questions. We do not expect that participants will use many polar questions in their retelling unless they remember those embedded in one of the original texts. </p> <p><strong>Due to COVID-19 restrictions</strong>, we cannot provide our Greek participants with Story Cubes, as the pilots are conducted remotely.
We will instead trial dice story apps for <a href="https://play.google.com/store/apps/details?id=com.developer.cachucha.storydice&hl=en_GB" rel="nofollow">Android</a> and <a href="https://apps.apple.com/us/app/story-dice-creative-storytelling/id525351988" rel="nofollow">iPhone</a>. Participants will use these to tell their stories in the same manner as they would using Rory's Story Cubes.</p> <p><br> <strong>(b) Map task</strong></p> <p>The map task is a cooperative task involving two people, each of whom has a map which the other cannot see. The Instruction Follower has to reproduce a route marked on the Instruction Giver’s map; the two maps are not identical, so Giver and Follower must negotiate the differences. </p> <p>For English, two sets of maps are used. Two participants complete the task, once as the Instruction Giver and once as the Follower. We anticipate that the task will help us elicit uptalk and questions, as well as falling declaratives with information structure distinctions.</p> <p>For Greek, four sets of maps are used. Greek participants will complete the task following the same procedure as English participants using two sets of maps. The other two sets of maps are simplified (they have just six one-word landmarks, and simple paths), and each participant completes just one with a confederate. For these simple maps, participants act as Instruction Givers, and the confederate as a Follower who fails to understand instructions; this is done so as to elicit responses with H*+L from participants. It is expected that participants will use polar questions in both the Instruction Follower and Giver roles.</p> <p><strong>Due to COVID-19 restrictions</strong>, Greek participants will not have access to hard-copy maps, as the pilots are conducted remotely. Instead, this task will be conducted digitally and paths will be drawn using the drawing tool in Adobe Acrobat Reader.
</p> <p>We chose the landmarks in the map tasks so as to control for the following parameters:</p> <p>(i) Length of item: one word and words in phrases of Adj+N structure (e.g., EN: <em>owl</em>, <em>tiny hummingbird</em>; GR: <em>βουνό</em>, <em>ώριμα λεμόνια</em>) </p> <p>(ii) Information status: EN: new information (H*) and contrastive information (L+H*); GR: new information (H*), contrastive information (L+H*), new information which the speaker believes should be in the common ground (H*+L) </p> <p>(iii) Location of focus (juxtaposing e.g., EN: <em>grey heron</em> with <em>grey raven</em>; GR: <em>ορεινή λίμνη</em> with <em>αλμυρή λίμνη</em>) </p> <p>(iv) Location of stress (e.g., EN: <em>baboon</em>, <em>wallaby</em>, <em>magnolia</em>; GR: <em>μουγγρί</em>, <em>ζαργάνα</em>, <em>μένουλα</em>) </p> <p>(v) Distance between accents in phrases (e.g., EN: <em>blue mallard</em>, <em>moveable marquee</em>; GR: <em>εννιά γλάροι</em>, <em>γκρίζα λεβεντόσαυρα</em>) </p> <p><br></p> <h4><strong>2.3 Spontaneous materials</strong></h4> <p><br> <strong>(a) Game playing</strong> </p> <p>We are piloting the following games to determine which ones will lead to spontaneous conversation with a large number of tunes of interest:</p> <ul> <li>English: <a href="https://www.mattelgames.com/en-gb/cards/uno" rel="nofollow">UNO</a> and <a href="https://www.dreimagier.de/spiele/mogel-motte/?lang=en" rel="nofollow">Cheating Moth</a>, played by four participants.</li> <li>Greek: UNO and <a href="https://www.spinmaster.com/product_detail.php?pid=p21396&bid=" rel="nofollow">Hedbanz</a>, and a Greek card game called koltsina played by four people (either four participants, or three participants and the confederate). </li> </ul> <p>UNO and Cheating Moth should lead to the use of information structure distinctions.</p> <p>Hedbanz is used to ensure we elicit a sufficient number of spontaneous polar questions in Greek. 
</p> <p>Koltsina will be used to elicit the H*+L accent (by instructing the confederate to make obvious mistakes). </p> <p><strong>Due to COVID-19 restrictions</strong>, we had to make the following amendments to the Greek games:</p> <p>a. Participants will play <a href="https://play.unofreak.com/" rel="nofollow">online UNO</a>.<br> b. Participants will not have the Hedbanz kit, so instead this game will be played using a guessing game app called <a href="https://play.google.com/store/apps/details?id=app.guesswho&hl=en_GB&showAllReviews=true" rel="nofollow">Μάντεψε Τι Είσαι</a> (“Guess What You Are”).<br> c. Participants will be able to play koltsina if they have their own pack of cards and are recording in person and not virtually.<br> d. Participants will not be able to play Cheating Moth. </p> <p><br> <strong>(b) Conversation box</strong></p> <p>Pairs of participants are given a <a href="http://www.lrec-conf.org/proceedings/lrec2010/pdf/352_Paper.pdf" rel="nofollow">conversation box</a> filled with seven unusual objects, the identity or function of which is unlikely to be immediately obvious. The participants are encouraged to discuss the objects with a view to answering two questions about each object in the end: (1) what is this object? (2) what does it do/what is it used for? At the end of the task, participants are given information about the identity and function of the objects.</p> <p><strong>Due to COVID-19 restrictions</strong>, we cannot give the objects to Greek participants, as the pilots are conducted remotely. Instead, we will provide them with a combined video file with a short video of each object as well as still photos. A confederate will give participants a key to each object at the end.</p> <p><br></p> <h4><strong>3. Data collection protocols</strong></h4> <p>We are experimenting with ways to record our pilot data while it is not possible to proceed with our planned recording protocols due to social distancing rules.
</p> <ul> <li> <p>Locations: <br> Canterbury, Athens.</p> </li> <li> <p>Recording condition: <br> quiet rooms in participants' homes.</p> </li> <li> <p>Recording Apparatus: <br> <strong>Canterbury:</strong> ZOOM H4N Recorder, using only the inbuilt microphones. Recorded in person by University of Kent MA student Lucy Berrington.<br> <strong>Athens:</strong> Pilot recordings may take place in person or remotely. When completing speech tasks in person, each participant will record themselves on their mobile phone using Awesome Voice Recorder (available on <a href="https://apps.apple.com/us/app/awesome-voice-recorder/id892208399" rel="nofollow">iPhone</a> and <a href="https://play.google.com/store/apps/details?id=com.newkline.avrx&hl=en_GB" rel="nofollow">Android</a>) and the experimenter will record the entire conversation. When recorded remotely, participants will complete speech tasks over the meeting software <a href="https://zoom.us/" rel="nofollow">Zoom</a>; participants will record themselves on their mobile phone using Awesome Voice Recorder and will also be recorded by the experimenter using the record function in Zoom (so that there is a recording of the entire conversation).</p> </li> <li>Recording details:<br> <strong>Canterbury:</strong> 24-bit, 44.1kHz, stereo, WAV.<br> <strong>Athens:</strong> mobile phones - 24-bit, 44.1kHz, mono, WAV; Zoom - M4A.</li> </ul>
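<p>As a sanity check on incoming recordings, the header of each WAV file can be verified against the recording details listed above. The following is a minimal sketch (not part of the SPRINT protocol; the function name and expected values are illustrative, set here to the Canterbury specification of 24-bit, 44.1 kHz, stereo WAV) using Python's standard <code>wave</code> module:</p>

```python
import wave

# Expected header values for the Canterbury pilot recordings
# (24-bit, 44.1 kHz, stereo WAV); adjust e.g. nchannels=1 for the
# Athens mobile-phone spec, which is mono. Illustrative only.
EXPECTED = {"sampwidth": 3, "framerate": 44100, "nchannels": 2}

def check_wav_spec(path, expected=EXPECTED):
    """Return {field: (expected, actual)} for any header mismatches."""
    with wave.open(path, "rb") as wf:
        actual = {
            "sampwidth": wf.getsampwidth(),  # bytes per sample: 3 = 24-bit
            "framerate": wf.getframerate(),  # samples per second
            "nchannels": wf.getnchannels(),  # 1 = mono, 2 = stereo
        }
    return {k: (v, actual[k]) for k, v in expected.items() if actual[k] != v}
```

<p>An empty dictionary means the file matches the specification. Note that the Zoom recordings are saved as M4A and would need conversion to WAV before such a check applies.</p>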