Description: Social gaze is an important source of information guiding how we understand and interact with others (Gobel et al., 2015). During the pre-linguistic era of our evolution, the ability to signal and perceive gaze cues from conspecifics is believed to have been critical in supporting collaboration and communication (Tomasello et al., 2007). Gaze is also critical in supporting our ability to achieve joint attention with others, which, in turn, is pivotal during human infancy in supporting the development of both language and social cognition (Charman, 2003; Dawson et al., 2004). Eye contact, compared to averted gaze, is a potent signal that can rapidly capture attention (Senju & Hasegawa, 2005; Vuilleumier et al., 2005). Observing direct gaze also modulates activation in neural substrates broadly associated with making inferences about the perspectives and intentions of others (Cañigueral & Hamilton, 2019). This influence of eye contact has been broadly referred to as the ‘eye contact effect’ (Senju & Johnson, 2009).

According to the ‘fast-track modulator’ account of the eye contact effect, observing direct gaze engages neural mechanisms that evolved to rapidly process and execute human responses to face-bound social cues. This is believed to involve subcortical pathways (e.g., superior colliculus, pulvinar, and amygdala) that trigger downstream social-cognitive processes (e.g., perspective taking). The prioritised access of low-spatial-frequency gaze and face-bound cues via this subcortical pathway is believed to have served an evolutionary function. On the one hand, it supports the rapid detection of threat-related signals from conspecifics (Morris et al., 1999; Öhman et al., 2001). On the other, it provides social information that allows us to readily understand and communicate with others in collaborative contexts (Tomasello et al., 2007).

Eye contact is considered a special social signal for conveying a conspecific’s readiness or intention to communicate (see Cañigueral & Hamilton, 2019 for review). Several studies have shown that direct gaze increases the likelihood of conversation initiation (Cary, 1978) and enhances gaze-following when gaze shifts follow eye contact, whether observed from a second-person perspective (Farroni et al., 2003; Senju & Csibra, 2008) or a third-person perspective (i.e., observing mutual gaze between two other agents; Böckler et al., 2014). This has been presented as evidence that engaging in or observing eye contact increases the social relevance of subsequent eye movements. Such accounts are also indirectly supported by neuroimaging studies showing that observing eye contact, particularly during coordinated interactions, modulates activation in neural substrates associated with the ‘theory-of-mind network’, including the medial prefrontal cortex, superior temporal sulcus, and temporoparietal junction (Caruana et al., 2015; Redcay et al., 2012; Tylén et al., 2012). In a series of behavioural studies, Caruana and colleagues have also argued that eye contact cues are critical during gaze-based joint attention interactions because they help differentiate communicative from non-communicative gaze shifts (Caruana et al., 2017, 2020).
However, no study has directly or systematically investigated the influence that observing eye contact from a second-person perspective has on the explicit evaluation of another agent’s intentions as communicative (e.g., making a request for a gazed-at object) or non-communicative (e.g., privately looking at an object). This line of enquiry is critical for elucidating precisely how eye contact can signal communicative intent during social interactions. By identifying the precise features of eye contact (e.g., frequency, duration, and temporal sequence of eye movement behaviours) that lead to stronger perceptions of communicative intent, specific models of social information processing during face-to-face interactions can be defined, which in turn can inform how to engineer communicative behaviours in artificial agents (e.g., social robots).

Current Study. This study will begin this endeavour by manipulating the frequency and temporal sequence of eye contact in a series of eye movements displayed by another agent during a collaborative, online, semi-interactive task (see the full depiction and description of eye contact conditions below). A secondary aim of this study is to examine whether the influence of eye contact on the perception of communicative intent generalises across human and robot agents. This will allow us to determine: (1) whether the perceptual properties of our social stimulus influence the observed gaze effects; and (2) whether eye contact can be expected to have the same effects on perceived communicative intent when displayed by anthropomorphic human-like or robotic artificial agents.

Additional Task Details. In this study, participants will observe a series of images depicting an agent (in either robot or human form) gazing towards one of three objects (see Figure 1 for a summary of trial sequences by gaze condition and stimulus set). The images are presented in a rapid sequence to depict apparent motion (see Figure 1 for timing information). On each trial, participants will judge whether the agent is signalling social and communicative information. The task will be framed within a ‘collaborative’ context in which participants are instructed to ‘help’ the agent complete the construction of an unseen block model. On each trial, participants are told that the agent must select one of three blocks visible on the screen; sometimes these will be available to the agent, and sometimes the agent will require the participant’s help. The participant must then decide whether to give the agent one of the three blocks via a keyboard response, or do nothing. The collaborative task context is intended to make the task more intuitive. However, given the practical and ethical constraints of running this study online, participants will not be deceived into believing that they are interacting with real humans or robots. Details about the agents’ agency or intelligence will not be specified in any way. [see attached doc for figure] The gaze sequence conditions depicted in Figure 1 differ with respect to whether and/or when eye contact is made. The exact directions of averted gaze will be randomised within each condition. On each trial, participants will indicate, using the key response options depicted in Figure 1C, whether the agent is privately gazing towards a block or requesting assistance from the participant.
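The following is a minimal, runnable Python sketch of how a single trial of this kind might be structured. It is illustrative only: the key assignments, frame labels, and block names other than “cylinder” are hypothetical placeholders, and the actual mappings and timings are specified in Figure 1 and in the task code on the OSF project page.

import random

# Hypothetical key-to-response mapping; the real mapping is shown in Figure 1C.
KEY_MAP = {
    "d": "give cylinder",
    "f": "give block A",   # placeholder block name
    "j": "give block B",   # placeholder block name
    "k": "give nothing",
}

def run_trial(gaze_frames):
    """Present a gaze sequence as a rapid image series, then collect a judgment:
    is the agent requesting a block, or privately looking at one?"""
    for frame in gaze_frames:
        print(f"[display] {frame}")       # stand-in for the task's image-drawing call
    print(f"[prompt] response keys: {KEY_MAP}")
    key = random.choice(list(KEY_MAP))    # stand-in for a real keyboard response
    return KEY_MAP[key]

# Example: a hypothetical gaze sequence ending in eye contact.
response = run_trial(["averted_left", "averted_right", "direct_gaze"])
print(f"[response] {response}")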
This study will implement a 2 (Agent Type: Robot, Human) x 6 (Gaze Sequence: see above) fully within-subjects design. Agent conditions will be completed as blocks in counterbalanced order. We will also counterbalance, across participants, the gender of the human agent in the Human condition, such that half of the female participants in our sample ‘interact’ with a male avatar and half with a female avatar, and likewise for male participants. Participants who identify as non-binary will be randomly allocated to interact with the male or female avatar. Example images of all agents used in the study are depicted in Figure 1. Full task instructions and experimental task code can be found on the corresponding OSF project page.

Before commencing the task, participants will be provided with the opportunity to practise the response-key mapping across 8 trials (2 per response key). These will not take the form of experimental trials, and participants will not be presented with the agent stimulus; rather, they will simply receive a text prompt asking them to “give nothing”, “give cylinder”, etc. Participants will receive feedback on each of these trials. Once the experiment begins, participants will also be presented with a visual prompt reminding them of the response-key mapping at the end of each trial, as depicted in Figure 1C.

Each main Human/Robot block will be divided into 3 mini blocks, allowing participants to take 2 self-paced breaks. Each mini block will be identical in trial composition, with trials internally counterbalanced with respect to the variation and sequence of averted gaze directions within each condition. Trials from each condition will be randomly allocated across the mini blocks, each comprising 48 trials. Thus, participants will complete 144 trials for each agent (3 mini blocks x 48 trials = 144) and 288 trials in total (144 trials x 2 agents = 288).

After each main agent block (i.e., Human, Robot), participants will complete the Godspeed scales (Bartneck et al., 2009) with reference to the agent avatar they just observed. At the very end of the experimental task, participants will complete the Comprehensive Autistic Trait Inventory (CATI; English et al., 2021).
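To make the block structure and trial-count arithmetic above concrete, here is a short Python sketch. It assumes each 48-trial mini block contains equal numbers of trials per gaze-sequence condition (consistent with the identical trial composition described above); the condition indices and the odd/even counterbalancing scheme are illustrative assumptions, not the actual OSF implementation.

import random

AGENTS = ["Human", "Robot"]
N_GAZE_CONDITIONS = 6
MINI_BLOCKS_PER_AGENT = 3
TRIALS_PER_MINI_BLOCK = 48

def build_mini_block(rng):
    """One mini block: 48 trials, 8 per gaze condition (48 // 6), shuffled."""
    per_condition = TRIALS_PER_MINI_BLOCK // N_GAZE_CONDITIONS  # = 8
    trials = [cond for cond in range(N_GAZE_CONDITIONS) for _ in range(per_condition)]
    rng.shuffle(trials)
    return trials

def build_session(participant_id):
    """Counterbalance agent-block order across participants (odd/even split is a
    placeholder scheme) and assemble 3 mini blocks per agent."""
    rng = random.Random(participant_id)
    order = AGENTS if participant_id % 2 == 0 else list(reversed(AGENTS))
    return {agent: [build_mini_block(rng) for _ in range(MINI_BLOCKS_PER_AGENT)]
            for agent in order}

session = build_session(participant_id=7)
per_agent = MINI_BLOCKS_PER_AGENT * TRIALS_PER_MINI_BLOCK  # 3 x 48 = 144
total = per_agent * len(AGENTS)                            # 144 x 2 = 288
print(per_agent, total)                                    # prints: 144 288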