The workshop will consist of three types of sessions. One keynote presentation will be dedicated to shaping the discussion around acceptance and trust in social robotics, drawing on the experience of a senior researcher in the field. We will also feature short presentations of accepted position papers, leading into a discussion at the end of the workshop.
Finally, we will have a group discussion session in which participants will be divided into smaller groups, each revolving around a fixed topic selected by the committee based on the position papers and on recent trends in the literature. Before the workshop, participants and attendees will also be invited to submit the aspects they believe most affect people's trust in robots. During the group discussion activity, each group will discuss the main issues and questions around its preselected topic, including factors affecting acceptance and trust in social robotics. The session will close with a roundtable discussion involving all groups, summarising the findings of each small group and identifying the key take-home messages.
The group discussion activity will involve attendees, the keynote speaker, presenters and organisers in a dynamic conversation around the use of tools in HRI and the challenges of effectively supporting the design, development and assessment of socially acceptable and trustworthy robots. We will shuffle participants between groups to enhance inclusion, diversity and equity. This will also give early-career researchers the opportunity to learn from and share their ideas and knowledge with more experienced members. The discussion will then be made available in a special section on the workshop website, and we aim to publish the final results in a high-impact journal.
This half-day workshop will consist of three types of sessions, as detailed below.
We will have one keynote talk presented by Selma Šabanović.
We will feature short presentations of accepted position papers to lead into the discussion session.
Finally, we will have a group discussion session, where groups revolve around fixed topics selected by the committee based on the position papers and on recent trends in the literature.
Selma Šabanović is a Professor of Informatics and Cognitive Science at Indiana University Bloomington. She studies social robotics and human-robot interaction, with a focus on exploring how robots can be designed to assist people in various use contexts, including mental health, wellness, and education. She works with current and potential robot users of all ages, from children to older adults, and in various cultures, including East Asia, Europe, and the US. She served as the Editor in Chief of the ACM Transactions on Human-Robot Interaction from 2017 to 2024. She currently serves as the Associate Dean of Faculty Affairs for the Luddy School, and as Associate Vice President of the Educational Activities Board for the IEEE Robotics and Automation Society. She received her PhD in Science and Technology Studies in 2007 from Rensselaer Polytechnic Institute.
Understanding trust and acceptance of robots “in the field” requires a longitudinal and socially situated perspective, which takes into account how factors affecting trust and acceptance can change over time and in relation to the social roles of different stakeholders around the technology. I will discuss examples from our studies with robots in the home, used to support the wellbeing of older adults and as educational companions for children, to illuminate different aspects of trust and acceptance in these contexts. These will include how trust in robots is defined by the robot’s social role in the context of other social relationships relevant to users; how trust can differ and be negotiated among relevant social actors; and how users’ social perceptions of the robot can play a role in the types of things they trust it to do. In relation to acceptance, we will discuss factors that are salient in multiple phases of acceptance, from the decision to purchase a robot to figuring out how to use it productively over the longer term, and to establishing the robot’s fit into the broader community context with its own priorities and values. Finally, we will consider research and design methods that can serve to build appropriate levels of trust in robots in the communities that use them.
Time | Speaker | Title |
---|---|---|
13:00 | Organisation committee | Welcome & introduction |
Keynote | | |
13:15 | Selma Šabanović | (info and abstract) |
Paper presentations | | |
14:15 | C. Esterwood et al. | Old Wine in New Bottles: Are Agent-Specific Trustworthiness Measures Necessary? |
14:30 | R. Esposito et al. | Deception in HRI: Effects on People's Trust in the Robot and Open Challenges |
14:45 | N. Weinberger et al. | Trust “in the field”: Reflections on a Real-World Lab Deploying Social Robots in Childcare Settings |
15:00 | E. Norman et al. | Utilizing Organizational Communication Theory for Community Embedded Robotics |
15:15 | | Break |
15:30 | S. Gioumatzidou et al. | “Do you trust me with your secret if I wear the clothes you made for me?” Kids knitting clothes for SARs and choosing one to share a secret wish. |
15:45 | B. Dossett et al. | Trust Dynamics in AR Human-Robot Teams: Performance and Feedback as Trust Metric |
Group discussion | | |
16:00 | All participants | Group Discussion with Panel of Authors |
16:35 | Organisation committee | Introduction: The world café method |
16:45 | All participants | World café |
17:45 | Table representatives | Table summaries |
17:55 | Organisation committee | Conclusions |
18:00 | | End of workshop |
Here you can find detailed information about the contributed papers, including extended abstracts. The full proceedings will be available soon.
Connor Esterwood, Samia Cornelius Bhatti and Lionel P. Robert
Contributed paper, 14:15 - 14:30
Abstract: A vital aspect of human–robot interaction (HRI) is trustworthiness. Measuring trustworthiness in the context of HRI, however, presents new challenges. One such challenge is the ongoing debate over whether the type of agent examined requires a trustworthiness measure specific to that agent. This paper presents both sides of this debate and argues that there is no compelling reason to consider one of these measures more appropriate to a specific agent over another until evidence suggests otherwise.
Raffaella Esposito, Alessandra Rossi and Silvia Rossi
Contributed paper, 14:30 - 14:45
Abstract: This paper examines the roles of deception and nudging in human-robot interaction (HRI). We particularly focus on the effects of deception on people’s trust in robots, and the concerns related to the use of such persuasive techniques. While some types of deceptive tactics can diminish trust, others, such as those involving anthropomorphic behaviors, can enhance the quality of human-robot relationships. In the paper, we also emphasize the importance of understanding the balance between influencing behaviors through nudging and preserving personal autonomy. The work concludes with a discussion of future research on developing standardized measurements in HRI, and a deeper investigation of the long-term impacts of deception and nudging. Our ultimate goal is to enhance the effectiveness of robots in encouraging behavioral changes in humans.
Nora Weinberger, Kathrin Gerling, Jan Ole Rixen and Barbara Bruno
Contributed paper, 14:45 - 15:00
Abstract: Trust is highly relevant in human-robot interaction, particularly when it takes place in complex and dynamic social environments. Here, we give an overview of our research within the Real-World Lab Robotics-AI, an inter- and transdisciplinary research effort in which robots are embedded in society in long-term field research. We focus on two particularly challenging research sites, a kindergarten and an inclusive daycare, and reflect upon implications for researching and designing for trust in robots in this context.
Emily Norman, Ryan Gupta, Keri K. Stephens and Luis Sentis
Contributed paper, 15:00 - 15:15
Abstract: Our team of researchers from communications studies, engineering, computer science, and data informatics have worked towards transdisciplinary research in community embedded robotics for the past two years. This paper highlights how Adaptive Structuration Theory, a seminal social science framework, can inform trust when exploring the complex problem of deploying autonomous robots in our community at the University of Texas at Austin. More information about our full team and ongoing research can be found at https://sites.utexas.edu/nsf-gcr/.
Sofia Gioumatzidou, Anna-Maria Velentza, Katrin Fischer and Nikolaos Fachantidis
Contributed paper, 15:30 - 15:45
Abstract: Recent studies in children-robot interaction indicate a propensity for children to trust robots with humanoid characteristics, fostering social relationships that enhance interaction quality. Trust is fundamental in Human-Robot Interaction (HRI) and can be affected by parameters such as the robot’s appearance, texture, or performed activity. In the current paper, we investigate whether children’s trust in a SAR, specifically the Nao robot, increases after they design and dress it with handmade clothes using the felting technique. Children had the chance to interact with robots both naked and dressed with and without their creations and share a secret wish with one of them. Behavioral observations and wish content analysis provided insights into children’s emotional engagement and trust preferences. Most children chose to confide in the robot wearing their crafted outfit, using wish statements that reflect aspirational desires, contrasting with immediate wants expressed to the ’naked’ robot. This study underscores the role of co-design and initial interaction quality in fostering trust and emotional engagement between children and SARs, highlighting avenues for enhancing HRI outcomes.
Benjamin Dossett, Janamejay Sharma, Jason M. Gregory, Kerstin S. Haring and Christopher Reardon
Contributed paper, 15:45 - 16:00
Abstract: As robots become collaborators in human-robot teams, accurately measuring and calibrating trust becomes essential. This paper investigates trust dynamics in an Augmented Reality-based human-robot teaming system, focusing on the interplay between robot performance and robot-to-human feedback. We take the position that existing trust metrics for human-robot interactions, or for interactions mediated by new technologies like Augmented Reality, are insufficient for capturing the true dynamics of trust, particularly for in-the-field robots. Through our investigation of trust dynamics in an Augmented Reality-based human-robot teaming system, we highlight the significant influence of robot performance and robot-to-human feedback on trust. Our findings, derived from an experiment with 31 participants, reveal that robot performance has a stronger impact on trust than feedback, and that the severity of errors significantly affects trust levels. These insights underscore the need for advanced, context-aware trust metrics that can adapt over time and incorporate both explicit and implicit trust indicators. We invite workshop participants to share their experiences and contribute to the development of these improved metrics, aiming to foster effective and trustworthy human-robot interactions across various applications.