Programme

The workshop will consist of three types of sessions. We plan to have three keynote presentations dedicated to discussing the role of trust in social robotics. We will also feature presentations of the accepted papers addressing the workshop topics. Finally, we will host a panel discussion where attendees, keynote speakers, and presenters will be invited to join a dynamic conversation with our invited panelists about the challenges of effectively supporting the design and development of socially acceptable and trustworthy robots.

Break-out sessions will be interspersed with the main sessions to allow attendees to network and further discuss the workshop topics. One of the break-out sessions will be dedicated to a "mentor and mentees" event, where attendees, in particular students and PhD students who have just started working in research, are encouraged to ask questions and seek advice from the keynote speakers. Participants and attendees will be invited to submit questions before the workshop so that the keynote speakers can also reply offline. Questions and comments will be made available in a dedicated section of the workshop website.

The proceedings of SCRITA 2022 can be browsed at arXiv:2208.11090.

Invited Speakers

  • Moojan Ghafurian, SIRRL lab (University of Waterloo, Canada)

    Moojan Ghafurian is a Research Assistant Professor in the Department of Electrical and Computer Engineering at the University of Waterloo. She received her PhD from the Pennsylvania State University and was the inaugural Wes Graham Postdoctoral Fellow (2018-2020) at the David R. Cheriton School of Computer Science at the University of Waterloo. Her research interests and background are in human-computer/robot interaction, social robotics, affective computing, and cognitive science. Dr. Ghafurian’s research explores computational models of how humans interact with computers to inform user-centered design of intelligent assistive technologies in multiple domains, especially in healthcare and for supporting older adults, persons with dementia, and caregivers. Dr. Ghafurian is a member of the Social and Intelligent Robotics Lab (SIRRL). She is also a member of AGE-WELL, Canada's technology and aging network. She has published in multiple prestigious venues, such as IEEE Transactions on Affective Computing, the International Journal of Social Robotics, and ACM Transactions on Computer-Human Interaction. Her work has been covered by various media outlets, such as ACM TechNews, Semiconductor Engineering, and ScienceDaily.

    Invited talk: Using Emotions to Improve Human-Robot Interaction

  • Takayuki Kanda, HRI Lab (Kyoto University, Japan)

    Takayuki Kanda is a professor in Informatics at Kyoto University, Japan. He is also a Visiting Group Leader at ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan. He received his B.Eng., M.Eng., and Ph.D. degrees in computer science from Kyoto University, Kyoto, Japan, in 1998, 2000, and 2003, respectively. He is one of the founding members of the Communication Robots project at ATR. He has developed a communication robot, Robovie, and applied it in daily situations, such as a peer tutor at an elementary school and a museum exhibit guide. His research interests include human-robot interaction, interactive humanoid robots, and field trials.

    Invited talk: Social robots in public space

  • Alan Wagner, REAL lab (Penn State University, USA)

    Dr. Alan Wagner is an assistant professor of Aerospace Engineering and a research associate of the Rock Ethics Institute at Penn State. Previously, Dr. Wagner was a senior research scientist at the Georgia Institute of Technology’s Research Institute and a member of the Institute of Robotics and Intelligent Machines. His research interests include the development of algorithms that allow a robot to create categories of models, or stereotypes, of its interactive partners; creating robots with the capacity to recognize situations that justify the use of deception and to act deceptively; and methods for representing and reasoning about trust. Application areas for these interests range from the military to healthcare. Dr. Wagner’s research has won several awards, including selection for the Air Force Young Investigator Program and the National Science Foundation Faculty Early Career Development Program (CAREER award). His research on deception has gained significant attention in the media, resulting in articles in the Wall Street Journal, New Scientist Magazine, and the journal Science, and being described as the 13th most important invention of 2010 by Time Magazine. His research has also won awards within the human-robot interaction community, such as the best paper award at RO-MAN 2007 and the best journal article of 2018 in ACM Transactions on Interactive Intelligent Systems. Dr. Wagner received his Ph.D. in computer science from the Georgia Institute of Technology. He also holds a master’s degree in computer science from Boston University and a bachelor’s degree in psychology from Northwestern University.

    Invited talk: Exploring Human-Robot Trust during Emergencies

Invited Panelists

  • Guillem Alenyà, Institut de Robòtica i Informàtica Industrial, Spain

    Guillem Alenyà is a Researcher and the Director of the Institut de Robòtica i Informàtica Industrial (IRI), a joint centre of the Spanish Scientific Research Council (CSIC) and the Polytechnic University of Catalonia (UPC). His current research is devoted to facilitating the introduction of robots into human environments, principally in the fields of assistive robotics and garment manipulation. He coordinates several projects developing enabling technologies for assistive robotics: ROB-IN, on personalization and explainability, and CLOE-GRAPH, on high-level task representation and explainability (co-IP J. Borras); he is also principal investigator of the SeCuRoPS project, on privacy and safety in HRI, and of BURG, on benchmarking and repeatability. https://www.iri.upc.edu/people/galenya/

  • Kerstin Sophie Haring, University of Denver, USA

    Dr. Kerstin S. Haring is an Assistant Professor of Computer Science at the University of Denver (DU). She directs the Humane Robot Technology Laboratory (HuRoT), which envisions interdisciplinary research in robotics with the goal of improving human lives through the promotion of better technology. She also co-directs the "DU Want to Build A Bot" Lab, which envisions accessible and validated robot designs. Before her appointment at DU, she researched Human-Machine Teaming at the U.S. Air Force Academy, completed her PhD in Human-Robot Interaction at the University of Tokyo in Japan, and studied Computer Science at the University of Freiburg in Germany.

  • Alessandro Di Nuovo, Sheffield Hallam University, UK

    Alessandro Di Nuovo is Professor of Machine Intelligence at Sheffield Hallam University. He received the Laurea (MSc Eng) and the PhD in Informatics Engineering from the University of Catania, Italy, in 2005 and 2009, respectively. At present, Prof. Di Nuovo leads Technological and Digital Innovation for Promoting Independent Lives at the Advanced Wellbeing Research Centre. He also leads the Smart Interactive Technologies research laboratory of the Department of Computing. He is a member of the Executive Group of Sheffield Robotics, an internationally recognized initiative of the two Sheffield universities to support innovative and responsible research in robotics. Prof. Di Nuovo has a track record of externally funded interdisciplinary research and innovation in AI and robotics; he has led several large collaborative research projects funded by the European Union, UK Research Councils, charities, and large industries. He has published over 120 articles on computational intelligence and its application to cognitive modelling, human-robot interaction, computer-aided assessment of intellectual disabilities, and embedded computer systems. Currently, Prof. Di Nuovo is editor-in-chief (topics: AI in Robotics; Human Robot/Machine Interaction) of the International Journal of Advanced Robotic Systems (SAGE). He is also serving as Associate Editor for the IEEE Journal of Translational Engineering in Health and Medicine.

  • Gerard Canal, King’s College London, UK

    Gerard Canal is a Lecturer (Assistant Professor) in Autonomous Systems and a Royal Academy of Engineering (RAEng) UK IC Postdoctoral Research Fellow at the Department of Informatics of King’s College London. In March 2020 he completed his PhD in Automatic Control, Robotics and Computer Vision at the Institut de Robòtica i Informàtica Industrial under the supervision of Dr. Guillem Alenyà and Prof. Carme Torras. He received his bachelor’s degree in Computer Science from the Facultat d’Informàtica de Barcelona (FIB) at the Universitat Politècnica de Catalunya (UPC) in 2013 and, in 2015, a master’s degree in Artificial Intelligence from the Universitat Politècnica de Catalunya (UPC), Universitat de Barcelona (UB) and Universitat Rovira i Virgili (URV). His interests include Assistive Robotics, Robot Behavior Personalization based on preferences, Human-Robot Interaction, Social Robotics, AI Planning applied to Robotics and HRI, and Explainable Robot Behavior.

Schedule

The workshop will be held on the first workshop day of RO-MAN 2022: 29 August. In general, invited keynote talks are about 45 minutes long, including time for Q&A and discussion. Each accepted submission is allotted a 10-minute talk followed by 5 minutes of Q&A.

Please find the preliminary schedule below. We will update this page as soon as there are any updates to the programme.

Please note that all times are given in Central European Summer Time (CEST) (conversion to other time zones).

Morning session

The morning session will be between 9:00 and 13:00.

Time Speaker Title
9:00 Organisation committee Welcome & introduction
9:15 Takayuki Kanda Invited talk: Social robots in public space
10:00 Wright et al. When Robots Interact with Groups, Where Does the Trust Reside? (#7229)
10:15 Krantz et al. Using Speech to Reduce Loss of Trust in Humanoid Social Robots (#8667)
10:30 Angelopoulos et al. Robot Behaviors for Transparent Interaction (#9775)
10:45 Coffee break
11:15 Schreiter et al. The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction (#3783)
11:30 Mizaridis Cloud-based SLAM for personalized robots (#6317)
11:45 Marcinkiewicz et al. Integrating Humanoid Robots Into Simulation-Software-Generated Animations to Explore Judgments on Self-Driving Car Accidents (#6340)
12:00 All participants Panel discussion with Guillem Alenyà, Kerstin Haring, Alessandro Di Nuovo, and Gerard Canal

Lunch break is from 13:00 to 14:00.

Afternoon session

The afternoon session will be between 14:00 and 17:30.

Time Speaker Title
14:00 Moojan Ghafurian Invited talk: Using Emotions to Improve Human-Robot Interaction
14:45 Krakovski et al. Older adults’ acceptance of SARs: The link between anticipated and actual interaction (#9189)
15:00 Bagozzi et al. Robots, Agents, interactions between Emotions and Trust in the LAAM model for ELA interaction: Focus on their potential effect on loneliness (#0171)
15:15 Casso et al. The Effect of Robot Posture and Idle Motion on Spontaneous Movement Contagion during Robot-Human Interactions (#8601)
15:30 Cocchella et al. "iCub, We Forgive You!" Investigating Trust in a Game Scenario with Kids (#3515)
15:45 Coffee break
16:15 Alan Wagner Invited talk: Exploring Human-Robot Trust during Emergencies
17:00 Krakovski et al. Robotic Exercise Trainer: How Failures and T-HRI Levels Affect User Acceptance and Trust (#8886)
17:15 Organisation committee Conclusion

The workshop will conclude at about 17:30.

Talk outline

Keynotes

Social robots in public space

Takayuki Kanda

Keynote, 9:15 - 10:00

Abstract: Social robots are starting to appear in our daily lives. Yet, it is not as easy as one might imagine. We developed a human-like social robot, Robovie, and studied how to make it serve people in public spaces, such as a shopping mall. On the technical side, we developed a human-tracking sensor network, which enables us to robustly identify the locations of pedestrians. Given that the robot was able to understand pedestrian behaviors, we studied various human-robot interactions in the real world. We faced many difficulties. For instance, the robot failed to initiate interaction with a person, and it failed to coordinate with its environment, for example by causing congestion around it. To address these problems, we have modeled various human interactions. Such models enabled the robot to better serve individuals and to understand people’s crowd behavior, like congestion around the robot; however, this invited another new problem, robot abuse. I plan to talk about a couple of studies in this direction and finally about our recent ongoing project on moral interaction, hoping to provide insight into near-future applications and research problems.


Using Emotions to Improve Human-Robot Interaction

Moojan Ghafurian

Keynote, 14:00 - 14:45

Abstract: The ability to show and communicate through emotions can improve human-robot interaction and increase the effectiveness and adoption of social robots in multiple domains. There are many challenges in understanding how a social robot should show or communicate through emotions, as emotional displays are context-specific. It is also challenging to design the emotions of social robots in a way that they can be understood accurately by humans. This talk presents a summary of our ongoing work on improving the non-verbal and mainly emotional capabilities of intelligent agents, and emphasizes the benefits of using emotionally intelligent agents in different domains, such as healthcare, games, and search and rescue. Results of different experiments will be presented to show how emotions can improve humans’ enjoyment of, trust in, and effectiveness of communication with intelligent agents. Challenges and considerations for designing emotionally intelligent agents will be discussed.


Exploring Human-Robot Trust during Emergencies

Alan Wagner

Keynote, 16:15 - 17:00

Abstract: This talk presents our ongoing effort to develop emergency evacuation robots and to understand the conditions that influence evacuee trust in a robot. We will present our recent attempts to understand and predict how people react to evacuation directions given by a robot during an emergency and how they calibrate their trust in the robot. Results from both in-person and virtual reality experiments provide evidence demonstrating that, in certain conditions, people will trust a robot too much. We present a formal conceptualization of human-robot trust that is not tied to a particular problem or situation, as demonstrated by other applications of our research to autonomous driving and to using robots for space station caretaking. Our presentation will also consider the ethical implications of creating emergency evacuation robots and present a set of ethical guidelines for developing evacuation robots. The talk will conclude by presenting avenues and motivations for future research.

Contributed papers

When Robots Interact with Groups, Where Does the Trust Reside?

Ben Wright, Emily Collins and David Cameron

Contributed paper, 10:00 - 10:15 (arXiv:2208.13311)

Abstract: As robots are introduced to more and more complex scenarios, the issues of trust become more complex as various groups, people, and entities begin to interact with a deployed robot. This short paper explores a few scenarios in which trust in the robot may come into conflict between one (or more) of the entities or groups that the robot is required to deal with. We also offer up a scenario concerning the idea of repairing trust through a possible apology.


Using Speech to Reduce Loss of Trust in Humanoid Social Robots

Amandus Krantz, Christian Balkenius and Birger Johansson

Contributed paper, 10:15 - 10:30 (arXiv:2208.13688)

Abstract: We present data from two online human-robot interaction experiments where 227 participants viewed videos of a humanoid robot exhibiting faulty or non-faulty behaviours while either remaining mute or speaking. The participants were asked to evaluate their perception of the robot’s trustworthiness, as well as its likeability, animacy, and perceived intelligence. The results show that, while a non-faulty robot achieves the highest trust, an apparently faulty robot that can speak manages to almost completely mitigate the loss of trust that is otherwise seen with faulty behaviour. We theorize that this mitigation is correlated with the increase in perceived intelligence that is also seen when speech is present.


Robot Behaviors for Transparent Interaction

Georgios Angelopoulos, Alessandra Rossi, Claudia Di Napoli and Silvia Rossi

Contributed paper, 10:30 - 10:45

Abstract: Interacting physically and sharing the environment lead humans and robots to work with each other at close distance, and in such circumstances autonomous robots are expected to exhibit safe and social behaviors. Therefore, developing socially acceptable behavior for autonomous robots is a foreseeable problem for the Human-Robot Interaction field. We propose two methods for integrating transparent behaviors into social robots: a by-design and a by-learning approach. Furthermore, with this work we address the problem of transparency from the navigation point of view. Our first step has been to explore these two approaches by conducting a preliminary within-subjects study (33 participants). Our results showed that deictic gestures as navigational by-design cues for humanoid robots result in fewer navigation conflicts than the use of a simulated gaze. Additionally, perceived anthropomorphism was increased when the robot used the deictic gesture as a cue. These findings highlight the need for a universal design approach to effective non-verbal by-design behaviors to increase transparency in future humanoid robotic applications, and also underscore the importance of investigating by-learning cues as the complexity of robots' internal state grows.


The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction

Tim Schreiter, Lucas Morillo-Mendez, Ravi Teja Chadalavada, Andrey Rudenko, Erik Alexander Billing and Achim J. Lilienthal

Contributed paper, 11:15 - 11:30 (arXiv:2208.14637)

Abstract: Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, the interaction between humans and robots (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are not usually anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic mock driver (ARMoD) robot simulates driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision with it. There was a significant increase in the reported trust scores in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one we present here.


Cloud-based SLAM for personalized robots

Vasileios Mizaridis

Contributed paper, 11:30 - 11:45

Abstract: It is no secret that robots are becoming more accessible to everyone. Sooner or later, humans and robots will coexist and collaborate in everyday activities. It is up to us, as roboticists, to lay the foundations so that robots become more trustworthy and acceptable to our world. Three main topics will be investigated in this position paper. The first is the task of autonomous navigation in unknown human environments. Active Simultaneous Localization and Mapping (active SLAM) will be considered as the core of the implementation, with the goal of creating meaningful maps for the robot. To tune and improve the base active SLAM algorithm, Reinforcement Learning (RL) methods will be used. Finally, cloud-based networks will be tested against on-board solutions to see if we can achieve better results (lower latency, less computation time) over the cloud.


Integrating Humanoid Robots Into Simulation-Software-Generated Animations to Explore Judgments on Self-Driving Car Accidents

Victoria Marcinkiewicz, Christopher Wallbridge, Qiyuan Zhang and Phillip Morgan

Contributed paper, 11:45 - 12:00

Abstract: Building on the knowledge that human drivers (HDs) and self-driving cars (SDCs) are not blamed and trusted in the same way following a road traffic accident (RTA) or near-miss event, this paper proposes a novel method to investigate whether the manipulation of anthropomorphism - in part using humanoid robots (HRs) - leads to reduced levels of blame and increased trust in SDCs, more akin to those placed in HDs.


Older adults’ acceptance of SARs: The link between anticipated and actual interaction

Maya Krakovski, Oded Zafrani, Galit Nimrod and Yael Edan

Contributed paper, 14:45 - 15:00 (arXiv:2209.01624)

Abstract: This abstract aims to demonstrate how the QE of SARs among older adults is shaped by anticipated and actual interaction. Accordingly, it was carried out in two parts: (a) an online survey to explore the anticipated interaction through video viewing of a SAR and (b) an acceptance study in which the older adults interacted with the robot. In both parts, we used “Gymmy,” a robotic system developed in our lab for older adults’ physical and cognitive training.


Robots, Agents, interactions between Emotions and Trust in the LAAM model for ELA interaction: Focus on their potential effect on loneliness

Rick Bagozzi, Brice Pablo Diesbach, Jean-Philippe Galan, Michele Grimaldi and Andrea Hoffman Rinderknecht

Contributed paper, 15:00 - 15:15

Abstract: We recall the basic characteristics of an agent and of an embodied virtual agent, and propose the concept of ELA - Embodied Life-companion Agent - with four levels of embodiment (chatbot, EVA, hologram, robot) susceptible of becoming life-companions. A literature review justifies the introduction of the grand LAAM research model (life-companion agent acceptance model), from which we extract a nested sub-model presented here. We recall the genesis of new technology acceptance modelling, from the TAM and UTAUT to the sRAM, or service robot acceptance model. We then focus in more depth on the role of emotions in user-ELA interaction, and on the interaction between emotions and trust. A section is dedicated to the importance of loneliness, not only for the elderly or patients but in our society as a whole, justifying our interest in the fact that if ELAs can decrease loneliness, this would be a societal achievement. In this approach, we pay particular attention to two drivers of interest: trust towards, and perceived social presence of, an ELA.


The Effect of Robot Posture and Idle Motion on Spontaneous Movement Contagion during Robot-Human Interactions

Isabel Casso, Bing Li, Tatjana Nazir and Yvonne N. Delevoye-Turrell

Contributed paper, 15:15 - 15:30 (arXiv:2209.00983)

Abstract: In the next decade, social robots will be implemented in many public spaces to provide services to humans. We question the properties of these social robots to afford acceptance and spontaneous emotional interactions. More specifically, in the present study, we report the effects of the frequency of idle motions in a robot in a face-to-face interactive task with a human participant. The robotic system Buddy was programmed to adopt a sad posture and facial expression while speaking a total of three sad stories while moving its head up/down at low, medium-high, and high frequencies. Each participant (N=15 total) was invited to sit in front of Buddy and listen to the stories. Unconscious changes in posture in the human participant were recorded using a 3D motion capture system (Qualisys). Results show greater inclinations of the shoulder/torso towards the ground in low-frequency trials and more rigid postures in high-frequency trials. The quantity of spontaneous movement was also greater when Buddy moved at slow frequencies. These findings echo results reported in experimental psychology when two individuals are engaged in social interactions. The scores obtained in the Godspeed questionnaire further suggest that emotional contagion may occur when Buddy moves slowly because the robotic system is perceived as more natural and knowledgeable at these frequencies. Body posture and frequency of idle motion should be considered important factors in the conception of robotic systems. Such work will afford social robots that offer emotional contagion for effortless robot-human collaborative tasks.


"iCub, We Forgive You!" Investigating Trust in a Game Scenario with Kids

Francesca Cocchella, Giulia Pusceddu, Giulia Belgiovine, Linda Lastrico, Francesco Rea and Alessandra Sciutti

Contributed paper, 15:30 - 15:45 (arXiv:2209.01694)

Abstract: This study presents novel strategies to investigate the mutual influence of trust and group dynamics in children-robot interaction. We implemented a game-like experimental activity with the humanoid robot iCub and designed a questionnaire to assess how the children perceived the interaction. We also aim to verify if the sensors, setups, and tasks are suitable for studying such aspects. The questionnaires' results demonstrate that youths perceive iCub as a friend and generally in a positive way. Other preliminary results suggest that generally children trusted iCub during the activity and, after its mistakes, they tried to reassure it, with sentences such as: "Don't worry iCub, we forgive you". Furthermore, trust towards the robot in group cognitive activity appears to change according to gender: after two consecutive mistakes by the robot, girls tended to trust iCub more than boys. Finally, no significant difference has been evidenced between different age groups across points computed from the game and the self-reported scales. The tool we proposed is suitable for studying trust in human-robot interaction (HRI) across different ages and seems indicated to understand the mechanism of trust in group interactions.


Robotic Exercise Trainer: How Failures and T-HRI Levels Affect User Acceptance and Trust

Maya Krakovski, Naama Aharony and Yael Edan

Contributed paper, 17:00 - 17:15 (arXiv:2209.01622)

Abstract: Physical activity is important for health and wellbeing, but only a few fulfill the World Health Organization's criteria for physical activity. The development of a robotic exercise trainer can assist in increasing training accessibility and motivation. The acceptance and trust of users are crucial for the successful implementation of such an assistive robot. This can be affected by the transparency of the robotic system and the robot's performance, specifically its failures. The study presents an initial investigation into the transparency levels of task, human, robot, and interaction (T-HRI), with robot behavior adjusted accordingly. A failure in robot performance during part of the experiments allowed us to analyze the effect of the T-HRI levels as related to failures. Participants who experienced failure in the robot's performance demonstrated a lower level of acceptance and trust than those who did not. In addition, there were differences in acceptance measures between T-HRI levels and participant groups, suggesting several directions for future research.