Trust, Acceptance and Social Cues in Human-Robot Interaction - SCRITA

This workshop will be a full-day event on 12 August 2021, held in conjunction with the IEEE RO-MAN 2021 conference, which is organised by the University of British Columbia and the University of Waterloo, Canada.

Statement of Objectives

Trust is a fundamental aspect of effective collaboration between people and robots. It is imperative that people can trust robots not to create hazardous situations, such as starting a fire when trying to make a cup of tea or giving the wrong medicine to a vulnerable person. Likewise, people should be able to trust robots not to create unsafe situations, such as leaving the door open unattended or providing personal information to strangers - and potentially to thieves. Trust, however, is a complex feeling, and it can be affected by several factors that depend on the human, the robot, and the context of the interaction. A lack of trust might hinder a robot's assistance or lead to a loss of interest in robots after the novelty effect fades, while unreasonable over-trust in a robot's capabilities could even have fatal consequences. It is therefore important to design and develop mechanisms that both increase and temper people's trust in service and assistive robots: a positive and balanced trust is fundamental for building a high-quality interaction.

Similarly, socially aware robots are perceived more positively by people in social contexts and situations. Social robotics systems should therefore integrate people's direct and indirect modes of communication. Moreover, robots should be capable of adapting themselves to people's needs (i.e. personality, emotions, preferences, habits) and of incorporating reactive and predictive meta-cognition models to reason about the situational context (e.g. their own erroneous behaviours) and provide socially acceptable behaviours.

This workshop is a continuation of a series of three successful workshops at the RO-MAN conference. This iteration will continue to explore how social cues can foster trust in human-robot interaction (HRI) and lead to better acceptance of robots. Although the previous editions benefited from the participation of leading researchers in the field and several exceptional invited speakers, who identified some of the principal open questions in this research direction, current research still presents several limitations. For this reason, we wish to further explore the role of trust in social robotics in order to effectively design and develop socially acceptable and trustworthy robots.

In this context, we propose a deeper exploration of trust and acceptance in HRI from a multidisciplinary perspective, including robots' capabilities of sensing and perceiving other agents, the environment, and human-robot dynamics. Therefore, this workshop will analyse different aspects of human-robot interaction that can affect, enhance, undermine, or recover humans' trust in robots, such as the use of social cues or behaviour transparency (goals and actions).

Target Audience

We intend to open the workshop to a broad audience from academia and industry researching social robotics, machine learning, robot behavioural control, and user profiling. We will foster the exchange of insights on past and ongoing research and contribute to the discussion of innovative ideas for tackling unresolved issues, providing new and inspirational directions for research.

Topics of interest include, but are not limited to:

  • Impact of Social Cues on Trust in Human-Robot Interaction
  • Measuring Trust in Human-Robot Interaction
  • Trust Violation and Recovery Mechanisms in HRI
  • Effects of Humans' Acceptance on Trust in Robots
  • Humans' Sense of Control and Trust in Robots
  • Trust and Assistive Robotics
  • Overtrust in Robots
  • Antecedents of Trust and Robot Trust
  • Enhancing Humans' Trust in Robots
  • Enhancing Trust in a Robot Companion
  • Privacy Implications on Trust in HRI
  • Mental Models and Trust in HRI
  • Trust and Safety in HRI
  • Ethics Implications on Trust in HRI
  • Trustworthy AI
  • XAI in HRI
  • Legal Frameworks for Trustworthy Robotics

Invited Speakers

  • Dr Alessandra Sciutti, Istituto Italiano di Tecnologia, Italy (confirmed)
  • Dr Hatice Gunes, University of Cambridge, UK (confirmed)
  • Dr Helen Hastie, Heriot-Watt University, UK (confirmed)

News

  • 22-03-07: The fifth iteration of SCRITA Trust, Acceptance and Social Cues in Human-Robot Interaction has been accepted at RO-MAN 2022 in Naples, Italy.
  • 22-01-10: The submission deadline for the joint special issue has been extended to 31 March 2022.
  • 21-09-02: SCRITA 2021 proceedings have been published on arXiv.
  • 21-06-08: Deadlines extended: Initial submission now possible until 11 July 2021.
  • 21-05-24: Invited speakers confirmed: Alessandra Sciutti, Hatice Gunes and Helen Hastie.
  • 21-05-12: A joint special issue with the TRAITS workshop has been confirmed at the International Journal of Social Robotics.