The workshop will be held on 31st August, 2023, at 13:10 (local time).
The proceedings of SCRITA 2023 can be browsed on arXiv:2311.05401.
The workshop will consist of three types of sessions as detailed below.
We had planned one keynote talk; unfortunately, it had to be cancelled.
We will feature short presentations of accepted papers discussing the authors' prior experience using tools for measuring trust and other constructs and their point of view on the workshop topics.
Finally, we will have a rotating group discussion, following the world café method, in which groups revolve around fixed topics, each dedicated to one factor influencing people's trust in robots, as recognised by the most influential works in the literature. Before the workshop, participants and attendees will be invited to submit the aspects that they believe most affect people's trust in robots. Within this activity, each group will identify a fixed number of questions (about 5) for its topic before the groups are shuffled and people are re-assigned to a different topic. The combination of all questions will then form an initial questionnaire, which we aim to validate in a second stage.
This activity will involve all attendees, including presenters and organisers, in a dynamic conversation around the use of tools in HRI, specifically questionnaires, and the challenges of effectively supporting the design, development and assessment of socially acceptable and trustworthy robots. We will shuffle participants between groups to enhance inclusion, diversity and equity. This will also give early-career researchers the opportunity to learn from and share their own ideas and knowledge with more experienced members. The discussion and questions will then be made available in a special section on the workshop website, and we aim to publish the final results in a high-impact journal.
| Time | Presenter(s) | Title |
| --- | --- | --- |
| 13:10 | Organisation committee | Welcome & introduction |

Session I: Extended abstracts

| Time | Presenter(s) | Title |
| --- | --- | --- |
| 13:20 | Phillip Morgan et al. | An Optimized Paradigm to Measure Effects of Anthropomorphized Self-Driving Cars on Trust and Blame Following an Accident |
| 13:30 | Katrin Fischer et al. | The Effect of Trust and its Antecedents on Robot Acceptance (arXiv:2311.06688) |
| 13:40 | Stefan Schiffer et al. | BUSSARD - Better Understanding Social Situations for Autonomous Robot Decision-Making (arXiv:2311.06391) |
| 13:50 | Bertram F. Malle and Daniel Ullman | Measuring Human-Robot Trust with the MDMT (Multi-Dimensional Measure of Trust) (arXiv:2311.14887) |

Session II: Position papers

| Time | Presenter(s) | Title |
| --- | --- | --- |
| 14:00 | Pete Schroepfer et al. | Trust and Acceptance of Multi-Robot Systems “in the Wild”. A Roadmap exemplified within the EU-Project BugWright2 |
| 14:10 | Raj Korpan | Trust in Queer Human-Robot Interaction (arXiv:2311.07458) |
| 14:20 | Kerstin Haring | On Robot Acceptance and Trust: A Review and Unanswered Questions |
| 14:30 | Anna Lena Lange et al. | From Human to Robot Interactions: A Circular Approach towards Trustworthy Social Robots (arXiv:2311.08009) |
| 14:50 | Patrick Holthaus & Alessandra Rossi | Common (good) practices to measure trust in HRI (arXiv:2311.12182) |

Session III: World café

| Time | Presenter(s) | Title |
| --- | --- | --- |
| 15:00 | Organisation committee | Introduction: The world café method |
| 15:05 | All participants | World café |
| 15:55 | Table representatives | Table summaries |
Here, you can find detailed information about the contributed papers, including extended abstracts and position papers. The full proceedings can be browsed on arXiv:2311.05401.
Phillip Morgan, Victoria Marcinkiewicz, Qiyuan Zhang, Theodor Kozlowski, Louise Bowen and Christopher Wallbridge
Contributed paper, 13:20 - 13:30
Abstract: Despite the increasing sophistication of automated technology within self-driving cars (SDCs), there have been and will be instances where accidents occur. Trust could be eroded, with consequences for adoption and continued usage. At RO-MAN 2022, we presented an SDC experiment focused on trust and blame in the event of an accident situation. We developed a novel method to investigate whether a humanoid robot informational assistant communicating SDC intentions and actions improved trust and reduced blame in such situations. One limitation was that the accident occurred with limited experience of the SDC performing maneuvers without incident. We have further developed the paradigm to include successful maneuvers, giving important opportunities to build trust in the novel technology before the critical event. Initial data is presented and discussed.
Katrin Fischer, Donggyu Kim and Joo-Wha Hong
Contributed paper, 13:30 - 13:40 (arXiv:2311.06688)
Abstract: As social and socially assistive robots are becoming more prevalent in our society, it is beneficial to understand how people form first impressions of them and eventually come to trust and accept them. This paper describes an Amazon Mechanical Turk study (n = 239) that investigated trust and its antecedents, trustworthiness and first impressions. Participants evaluated the social robot Pepper's warmth and competence as well as the trustworthiness characteristics of ability, benevolence and integrity, followed by their trust in and intention to use the robot. Mediation analyses assessed to what degree participants' first impressions affected their willingness to trust and use the robot. Known constructs from user acceptance and trust research were introduced to explain the pathways in which one perception predicted the next. Results showed that trustworthiness and trust, in serial, mediated the relationship between first impressions and behavioral intention.
Stefan Schiffer, Astrid Rosenthal-von der Pütten and Bastian Leibe
Contributed paper, 13:40 - 13:50 (arXiv:2311.06391)
Abstract: We report on our effort to create a corpus dataset of different social context situations in an office setting for further disciplinary and interdisciplinary research in computer vision, psychology, and human-robot interaction. For social robots to be able to behave appropriately, they need to be aware of the social context they act in. Consider, for example, a robot with the task to deliver a personal message to a person. If the person is arguing with an office mate at the time of message delivery, it might be more appropriate to delay playing the message to respect the recipient's privacy and not to interfere with the current situation. This can only be done if the situation is classified correctly and, in a second step, if an appropriate behavior is chosen that fits the social situation. Our work aims to enable robots to accomplish the task of classifying social situations by creating a dataset composed of semantically annotated video scenes of office situations from television soap operas. The dataset can then serve as a basis for conducting research in both computer vision and human-robot interaction.
Bertram F. Malle and Daniel Ullman
Contributed paper, 13:50 - 14:00 (arXiv:2311.14887)
Abstract: We describe the steps of developing the MDMT (Multi-Dimensional Measure of Trust), an intuitive self-report measure of perceived trustworthiness of various agents (human, robot, animal). We summarize the evidence that led to the original four-dimensional form (v1) and to the most recent five-dimensional form (v2). We examine the measure’s strengths and limitations and point to further necessary validations.
Pete Schroepfer, Nathalie Schauffel, Jan Grundling, Thomas Ellwart, Benjamin Weyers and Cedric Pradalier
Contributed paper, 14:00 - 14:10
Abstract: This paper outlines a roadmap to effectively leverage shared mental models in multi-robot, multi-stakeholder scenarios, drawing on experiences from the BugWright2 project. The discussion centers on an autonomous multi-robot system designed for ship inspection and maintenance. A significant challenge in the development and implementation of this system is the calibration of trust. To address this, the paper proposes that trust calibration can be managed and optimized through the creation and continual updating of shared and accurate mental models of the robots. Strategies to promote these mental models, including cross-training, briefings, debriefings, and task-specific elaboration and visualization, are examined. Additionally, the crucial role of an adaptable, distributed, and well-structured user interface (UI) is discussed.
Raj Korpan
Contributed paper, 14:10 - 14:20 (arXiv:2311.07458)
Abstract: Human-robot interaction (HRI) systems need to build trust with people of diverse identities. This position paper argues that queer (LGBTQIA+) people must be included in the design and evaluation of HRI systems to ensure their trust in and acceptance of robots. Queer people have faced discrimination and harm from artificial intelligence and robotic systems. Despite calls for increased diversity and inclusion, HRI has not systemically addressed queer issues. This paper suggests three approaches to address trust in queer HRI: diversifying human-subject pools, centering queer people in HRI studies, and contextualizing measures of trust.
Kerstin Haring
Contributed paper, 14:20 - 14:30
Abstract: This position paper briefly considers the current benefits and shortcomings surrounding robot trust and acceptance, focusing on robots with interactive capabilities. The paper concludes with currently unanswered questions and may serve as a jumping-off point for discussion around those questions.
Anna Lena Lange, Murat Kirtay and Verena V. Hafner
Contributed paper, 14:30 - 14:40 (arXiv:2311.08009)
Abstract: Human trust research provides a range of interesting findings about the building of trust between interaction partners. The introduction of robots into social interactions calls for a reevaluation of these findings and also brings new challenges and opportunities. In this paper, we suggest approaching trust research in a circular fashion by drawing from human trust findings, validating them and conceptualizing them for robots, and finally using the precise manipulability of robots to explore previously untouched areas of trust formation to generate new hypotheses for human trust building.
Patrick Holthaus and Alessandra Rossi
Contributed paper, 14:50 - 15:00 (arXiv:2311.12182)
Abstract: Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives. It is, therefore, understandable that the literature of the last few decades focuses on measuring how much people trust robots -- and, more generally, any agent -- to foster such trust in these technologies. Researchers have been exploring how people trust robots in different ways, such as measuring trust in human-robot interactions (HRI) based on textual descriptions or images without any physical contact, as well as during and after interacting with the technology. Nevertheless, trust is a complex behaviour that is affected by, and depends on, several factors, including those related to the interacting agents (e.g. humans, robots, pets), the robot itself (e.g. capabilities, reliability), the context (e.g. task), and the environment (e.g. public spaces vs private spaces vs working spaces). In general, most roboticists agree that insufficient levels of trust lead to a risk of disengagement, while over-trust in technology can cause over-reliance and inherent dangers, for example, in emergency situations. It is, therefore, very important that the research community has access to reliable methods to measure people's trust in robots and technology. In this position paper, we outline current methods and their strengths, identify (some) weakly covered aspects, and discuss the potential for covering a more comprehensive set of factors influencing trust in HRI.