The workshop is held in conjunction with RO-MAN 2019 at the Le Méridien, Windsor Place, New Delhi, India. The structure of the workshop includes, but is not limited to, the following:
- Introduction of the Workshop and main topics
- Invited speakers presentations
- Oral presentations of full papers
- Open discussion with the invited speakers
Invited speakers:
- Dr Mary-Anne Williams, Innovation and Enterprise Research Lab, University of Technology Sydney, Australia
- Dr Silvia Rossi, PRISCA Lab, University of Naples Federico II, Italy
The workshop will be held on the morning of 14 October 2019 and will be divided into two sessions with a coffee break in between. The schedule is as follows.
| Time | Speaker | Title |
|------|---------|-------|
| 08:45 | Mary-Anne Williams | Invited talk I: Explainable Social Robots for Human-Robot Interaction |
| 09:30 | Isabel Schwaninger | Qualities of Trust: Capturing Aspects beyond System Reliability |
| 09:50 | Alessandra Rossi | Evaluating social behaviours effects on people's trust of robot |
| 10:10 | Matthew Rueben | Helping Users Develop Accurate Mental Models of Robots' Perceptual Capabilities: A First Approach |
| 11:00 | Silvia Rossi | Invited talk II: Designing Socially Acceptable Non-Interacting Tasks |
| 11:45 | Trenton Schulz | Stuck on You: How a Stuck Robot Affects Participants' Opinions |
| 13:00 | | End of workshop |
Invited talk I: Explainable Social Robots for Human-Robot Interaction
Mary-Anne Williams, 08:45
Qualities of Trust: Capturing Aspects beyond System Reliability
Isabel Schwaninger, 09:30
Trust has gained increasing interest in recent years with regard to the uptake of social robots in homes. However, as trust is a complex and multidimensional phenomenon, a more holistic understanding of trust may benefit from methodological approaches beyond the trust scales commonly used in HRI. Further, taking into account contextual knowledge and the everyday life practices of people may be fruitful for deriving design implications for trustworthy robots. We therefore aim to compare two datasets assessed using qualitative research methods with senior citizens in Vienna, discuss the advantages and disadvantages we find in each approach, and what we can learn from each method for designing trustworthy robots for homes.
Evaluating social behaviours effects on people's trust of robot
Alessandra Rossi, 09:50
As we expect that the presence of autonomous robots in our everyday life will increase, we must consider that people will have to trust robots to reliably and securely engage them in collaborative tasks. Our main research aims to assess whether a certain degree of transparency in the robots' actions, the use of social behaviours, and natural communications can affect humans' sense of trust and companionship towards the robots. In this study, we introduce the research topic and our approach to evaluating the impact of robot social behaviours on people's trust of the robot. Future work will use the results collected during this study to create guidelines for designing a robot that is able to enhance human perceptions of trust and acceptability of robots in safe Human-Robot Interaction.
Helping Users Develop Accurate Mental Models of Robots' Perceptual Capabilities: A First Approach
Matthew Rueben, 10:10
Understanding a robot's perceptual capabilities is important for users to make informed decisions about their privacy. Privacy, in turn, is tied to trust and is also important for user acceptance of robots. We are beginning a research program that is exploring (1) how a user forms a mental model about a robot's perceptual capabilities, (2) how the robot can estimate such models via observation, and (3) how the robot can take actions to correct an inaccurate model, i.e., teach the user about its capabilities. We introduce the research topic, discuss recent relevant work, and then describe our initial approach toward enabling robots to model and influence user beliefs.
Invited talk II: Designing Socially Acceptable Non-Interacting Tasks
Silvia Rossi, 11:00
Social robots are typically pictured as machines able to interact with people in order to properly cooperate and assist. Nevertheless, a robot's tasks may not necessarily require social interaction with people. For example, a home companion robot assisting an older person in her own home will socially interact with the person to provide assistance, but might also have other tasks, such as monitoring her state. Any action and behaviour of a robot, whether in an interactive or non-interactive situation, should always be perceived as socially acceptable by human users and other non-users inhabiting the same environment. In this talk, we will discuss the role of designing socially acceptable behaviours. A possible trade-off between the robot's performance in accomplishing its goals and its consideration of the social environment, in terms of humans' safety but also of acceptability, comfort, and trust, will play a central role in the mature development of service and personal robots and their deployment in various markets.
Stuck on You: How a Stuck Robot Affects Participants' Opinions
Trenton Schulz, 11:45
We examine some of the qualitative aspects of an experiment on people's perception of a robot based on a change in its motion. Specifically, we look at people's qualitative opinions when the robot gets "stuck" while navigating and corrects itself. This extended abstract presents preliminary results and themes that we wish to examine.
End of workshop