This workshop will be a full-day event on 12 August 2021, held in conjunction with the IEEE RO-MAN 2021 conference, which is organised by the University of British Columbia and the University of Waterloo, Canada.
Trust is fundamental to effective collaboration between people and robots. People must be able to trust that robots will not create hazardous situations, such as starting a fire when trying to make a cup of tea or giving the wrong medicine to a vulnerable person. Likewise, people should be able to trust robots not to create unsafe situations, such as leaving a door open unattended or disclosing personal information to strangers, and potentially to thieves. Trust, however, is a complex feeling, and it can be affected by several factors that depend on the human, the robot, and the context of the interaction. A lack of trust might hinder a robot's assistance or lead to a loss of interest in robots once the novelty effect fades, while unreasonable over-trust in a robot's capabilities could even have fatal consequences. It is therefore important to design and develop mechanisms that foster people's trust in service and assistive robots and that mitigate over-trust. A positive, well-calibrated trust is fundamental for building a high-quality interaction. Similarly, socially aware robots are perceived more positively by people in social contexts and situations. Social robotics systems should therefore integrate people's direct and indirect modes of communication. Moreover, robots should be capable of self-adapting to satisfy people's needs (e.g. personality, emotions, preferences, habits) and of incorporating reactive and predictive meta-cognition models to reason about the situational context (e.g. their own erroneous behaviours) and produce socially acceptable behaviours.
This workshop continues a series of three successful workshops held at the RO-MAN conference. This edition will continue the discussion of how social cues can foster trust in human-robot interaction (HRI) and lead to better acceptance of robots. Although the previous editions benefited from the participation of leading researchers in the field and several exceptional invited speakers, who identified some of the principal questions in this research direction, current research still has several limitations. For this reason, we wish to further explore the role of trust in social robotics in order to effectively design and develop socially acceptable and trustworthy robots.
In this context, we propose a deeper exploration of trust and acceptance in HRI from a multidisciplinary perspective, including robots' capabilities for sensing and perceiving other agents, the environment, and human-robot dynamics. The workshop will therefore analyse the different aspects of human-robot interaction that can affect, enhance, undermine, or repair humans' trust in robots, such as the use of social cues or behavioural transparency (of goals and actions).
We intend to open the workshop to a broad audience from academia and industry working on social robotics, machine learning, robot behavioural control, and user profiling. We will foster the exchange of insights on past and ongoing research and contribute to the discussion of innovative ideas for tackling unresolved issues, providing new and inspiring directions of research.
Topics of interest include, but are not limited to: