Programme
The workshop will consist of two main sessions: a morning session focused on the RTSS workshop and the presentation of novel works, and an afternoon session focused on the SCRITA workshop and the creation of a metric for measuring trust in HRI.
The organisers will orally present the results collected during previous editions of the SCRITA workshop, with the aim of creating a new metric that allows researchers to assess and mitigate common side effects influencing how people place their trust in robots.
We will invite authors to submit only short position papers, discussing their prior experience and new developments within the scope of the workshop, to feed into the group and subsequent panel discussions. Authors of accepted papers will orally pitch their discussion points.
We have also invited distinguished representatives of the HRI field to present their work, share their research experience and knowledge, and guide new generations of researchers.
Please join the online session here. Meeting ID: 945 5060 1019 Passcode: 509212. Available from 09:00 to 17:00 (CEST) on Friday, 29 August 2025.
Keynotes
Minoru Asada, (Osaka University, Japan)
Minoru Asada received B.E., M.E., and Ph.D. degrees in control engineering from Osaka University, Osaka, Japan, in 1977, 1979, and 1982, respectively. In April 1995, he became a professor at Osaka University, and from April 1997 to March 2019 he was a professor in the Department of Adaptive Machine Systems at the Graduate School of Engineering, Osaka University. Since then, he has been a specially appointed professor at the Symbiotic Intelligent System Research Center, Open and Transdisciplinary Research Initiatives, Osaka University. In April 2021, he became a vice president of the International Professional University of Technology in Osaka. Dr. Asada has received many awards, including the Best Paper Award at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS92) and a Commendation by the Minister of Education, Culture, Sports, Science and Technology of the Japanese Government as a Person of Distinguished Services to Enlightening People on Science and Technology. He is one of the founders of RoboCup and a former president of the International RoboCup Federation (2002-2008). He was the president of the Robotics Society of Japan (RSJ, 2019-2021) and has been an IEEE Life Fellow since 2021. He was the Research Director of the ASADA Synergistic Intelligence Project under Exploratory Research for Advanced Technology by the Japan Science and Technology Agency (ERATO, 2005-2011), and a principal investigator of the Grants-in-Aid for Scientific Research project (Research Project Number 24000012, 2012-2016) titled Constructive Developmental Science based on Understanding the Process from Neuro-Dynamics to Social Interaction. He was also a principal investigator of the JST RISTEX R&D Project titled Legal Beings: Electric personhoods of artificial intelligence and robots in NAJIMI society, based on a reconsideration of the concept of autonomy.
Invited talk: Toward Mutual Trust in Human-Robot Interaction: Designing Robots as Moral Trustors and Trustees
Kerstin Fischer, (University of Southern Denmark, Denmark)
Kerstin Fischer is professor of Language and Technology Interaction at the University of Southern Denmark and director of the Human-Robot Interaction Lab in Sønderborg. Kerstin is senior associate editor of the journal ACM Transactions on Human-Robot Interaction and associate editor of the book series 'Studies in Pragmatics' (Brill). She has written 3 monographs, 40 journal articles, and more than 120 conference and book contributions, in which she brings her background in linguistics, communication, and multimodal interaction analysis to the study of behavior change, persuasive technology, and human-robot interaction.
Invited talk: Preventing Overtrust in Social Robots
Philip Brey, (University of Twente, Netherlands)
Philip Brey is a professor of philosophy and ethics of technology at the Department of Philosophy, University of Twente, the Netherlands. In his research, he investigates social, political, and ethical issues in the development, use, and regulation of technology. His focus is on new and emerging technologies, with special attention towards artificial intelligence, robotics, extended reality, and digital technologies. Brey is the former president of the International Society for Ethics and Information Technology (INSEIT) and of the Society for Philosophy and Technology (SPT). He currently leads the 10-year research programme Ethics of Socially Disruptive Technologies that includes seven universities in the Netherlands and over one hundred researchers. He is the winner of the 2022 Weizenbaum Award for excellence in the field of digital ethics.
Invited talk: Relational Intelligence in Robotics: Will Future Robots Build Trusting Relationships?
Roy Lindelauf, (Tilburg University, Netherlands)
Prof. dr. ir. Roy Lindelauf serves as Professor of Data Science in Military Operations at the Netherlands Defence Academy (NLDA) and holds the endowed Chair in Data Science, Safety & Security at Tilburg University's Department of Cognitive Science & Artificial Intelligence. He is an expert member of the Global Commission on Responsible AI in the Military Domain.
Invited talk: Responsible AI for Autonomous Systems: The Role of the Data Science Centre of Excellence of the NL MoD
Schedule
Workshop day: Friday, 29 August 2025
Morning session
The morning session will be between 9:00 and 12:00.
Time | Speaker | Title |
---|---|---|
9:00 | Organisation committee | Welcome & introduction |
Session 1: Keynotes | ||
9:10 | Minoru Asada | Toward Mutual Trust in Human-Robot Interaction: Designing Robots as Moral Trustors and Trustees |
9:50 | Kerstin Fischer | Preventing Overtrust in Social Robots |
10:30 | Coffee Break | |
Session 2: Paper presentations | ||
10:50 | B. Barrow et al. | The Influence of Facial Features on the Perceived Trustworthiness of a Social Robot |
11:05 | G. Kılınc Soylu et al. | Using Petri Nets for Context-Adaptive Robot Explanations |
Session 3: Keynotes | ||
11:20 | Philip Brey | Relational Intelligence in Robotics: Will Future Robots Build Trusting Relationships? |
12:00 | Lunch |
Afternoon session
The afternoon session will be between 12:50 and 17:00.
Time | Speaker | Title |
---|---|---|
Session 4: Paper presentations | ||
12:50 | J. Perez-Osorio et al. | Implicit and Explicit Trust Metrics Reveal the Impact of Robot Gaze Reliability in Collaborative Tasks |
13:05 | F. Amooei et al. | Trustworthy Dermatology Robots: A Framework for Social, Tactile, and Transparent Interaction |
Session 5: Keynotes | ||
13:20 | Roy Lindelauf | Responsible AI for Autonomous Systems: The Role of the Data Science Centre of Excellence of the NL MoD |
Session 6: Paper presentations | ||
14:00 | A. Salehi Fathabadi et al. | The Trust-Safety Divide: A Critical Gap in Human-Robot Interaction Research |
14:10 | C. Mazzola et al. | Toward an Interaction-Centered Approach to Robot Trustworthiness |
14:20 | A. Fallahi et al. | Autonomy, Agency, and Trust: Towards Integrated Calibration in Human-Robot Interaction |
14:30 | Coffee Break | |
Session 7: Group discussions | ||
14:50 | Organisation committee | Previous Workshop Presentations |
15:10 | All participants | Group Discussion with Panel of Authors |
15:45 | All participants | Working Session |
16:30 | All participants | Summary & Feedback |
Session 8: Conclusion | ||
16:50 | Organisation committee | Wrapping up & Conclusions |
17:00 | End of workshop |
Talk outline
Keynotes
Toward Mutual Trust in Human-Robot Interaction: Designing Robots as Moral Trustors and Trustees
Keynote, 9:10 - 9:50
Abstract: Trust in HRI has primarily focused on how robots can earn human trust. However, truly reciprocal interaction also requires robots to act as trustors—agents capable of trusting others. From the RTSS (Robots that Trust and are Socially Sensitive) perspective, this talk explores how moral reciprocity underpins trust, emphasizing that robots must be both morally accountable and morally sensitive. To achieve this, I propose grounding robot moral behavior not just in observable actions, but in internal mechanisms of empathy, particularly through models of artificial pain. By simulating and responding to others’ pain via self–other mappings, robots can acquire the affective foundations for trust. The talk culminates in our latest framework, Silicopathy—a developmental model of artificial empathy that integrates affective learning, mirror systems, and moral simulation. This approach lays the foundation for robot agents that are both trustable and capable of trust, advancing the goal of truly mutual trust in HRI.
Preventing Overtrust in Social Robots
Keynote, 9:50 - 10:30
Abstract: Trust is a central ingredient not only in human interaction, but also in interactions with machines; for instance, they need to be reliable, predictable, and easily usable. Social robots are machines that, on top of these trust dimensions, also involve social or relationship trust. To understand and even regulate trust in social robots, it is necessary to understand how these machines come to be treated as social beings in the first place. In this talk, I will therefore first outline what exactly it is that needs to be explained and then present how the depiction model addresses people’s observable behaviors toward social robots. The depiction model furthermore predicts that people are very likely to overtrust robots, i.e. to assume higher and more varied capabilities than robots actually have. On the other hand, the same mechanisms and processes that lead to overtrust can also be leveraged to prevent it; I present some empirical evidence that people’s trust in social robots can be successfully regulated based on foundational pragmatic principles.
Relational Intelligence in Robotics: Will Future Robots Build Trusting Relationships?
Keynote, 11:20 – 12:00
Abstract: This talk examines relational intelligence (RI) in robotics. RI refers to the capacity of artificial agents—including but not limited to robots—to build, sustain, and ethically navigate social relationships with humans. Unlike general social interaction, RI emphasizes specifically relational skills such as empathy, trust-building, reciprocity, and relational repair. In robotics, these capacities draw on social robotics, affective computing, natural language technologies, and human–robot interaction, while also engaging psychology, ethics, and cognitive science. Current examples include companion robots like ElliQ and Moxie, which perform limited relational functions. Looking ahead, relationally intelligent robots may take on broader roles in care, education, and companionship. The talk will conclude with an ethical analysis of these developments, addressing both the risks they pose and the conditions needed for responsible design and use.
Responsible AI for Autonomous Systems: The Role of the Data Science Centre of Excellence of the NL MoD
Keynote, 13:20 – 14:00
Abstract: Autonomous systems such as drone swarms are reshaping the character of conflict, raising urgent questions of control, legality, and responsibility. The Data Science Centre of Excellence (DSCE), a joint initiative of the Netherlands Defence Academy and the Ministry of Defence (embedded in Tilburg University), provides a hub for education, research, and governance in this rapidly evolving domain. This talk explores how DSCE integrates war studies, operational sciences, mathematical modeling, and ethical aspects to shape responsible AI frameworks for military applications. Using a scenario of a private military company deploying drone swarms in an urban environment, I will illustrate how DSCE’s work in simulation, testing, and governance contributes to responsible design, robust oversight, and operational readiness.
Contributed papers
The Influence of Facial Features on the Perceived Trustworthiness of a Social Robot
Benedict Barrow, Roger Moore
Contributed paper, 10:50 - 11:05
Abstract: Trust and the perception of trustworthiness play an important role in decision-making and our behaviour towards others, and this is true not only of human-human interactions but also of human-robot interactions. While significant advances have been made in recent years in the field of social robotics, there is still some way to go before we fully understand the factors that influence human trust in robots. This paper presents the results of a study into the first impressions created by a social robot's facial features, based on the hypothesis that a 'babyface' engenders trust. By manipulating the back-projected face of a Furhat robot, the study confirms that eye shape and size have a significant impact on the perception of trustworthiness. The work thus contributes to an understanding of the design choices that need to be made when developing social robots so as to optimise the effectiveness of human-robot interaction.
Using Petri Nets for Context-Adaptive Robot Explanations
Görkem Kılınc Soylu, Neziha Akalin, Maria Riveiro
Contributed paper, 11:05 - 11:20
Abstract: In human-robot interaction, robots must communicate in a natural and transparent manner to foster trust, which requires adapting their communication to the context. In this paper, we propose using Petri nets (PNs) to model contextual information for adaptive robot explanations. PNs provide a formal, graphical method for representing concurrent actions, causal dependencies, and system states, making them suitable for analyzing dynamic interactions between humans and robots. We demonstrate this approach through a scenario involving a robot that provides explanations based on contextual cues such as user attention and presence. Model analysis confirms key properties, including deadlock-freeness, context-sensitive reachability, boundedness, and liveness, showing the robustness and flexibility of PNs for designing and verifying context-adaptive explanations in human-robot interactions.
Implicit and Explicit Trust Metrics Reveal the Impact of Robot Gaze Reliability in Collaborative Tasks
Jairo Perez-Osorio, Eva Wiese
Contributed paper, 12:50 - 13:05
Abstract: Effective human–robot teamwork depends on calibrated trust. We combined reaction time (RT) with the Trust Perception Scale-HRI to investigate how the reliability of a humanoid robot’s gaze influences collaboration. In a pilot study, participants worked with a NAO robot whose gaze was either consistently valid or uninformative. Reliable gaze yielded faster responses and higher post-interaction trust, while unreliable gaze slowed performance and selectively lowered perceived reliability. Crucially, RTs revealed the influence of trust shifts before they surfaced in the questionnaire, underscoring the value of integrating implicit and explicit measures. Behavioral metrics help reveal early trust drifts and offer a complementary view of the influence of behavior on team performance.
Trustworthy Dermatology Robots: A Framework for Social, Tactile, and Transparent Interaction
Fereshteh Amooei, Ali Fallahi, Abolfazl Zaraki
Contributed paper, 13:05 - 13:20
Abstract: Trust is a fundamental component in the successful deployment of human-robot interaction (HRI) systems in healthcare. Dermatology, which involves visual assessment, physical proximity, and in some cases, robotic touch, presents unique challenges for trust formation and maintenance. In this paper, we propose a framework for designing trustworthy robotic systems in dermatological applications. We identify key trust factors, including the role of nonverbal social cues, tactile interaction, system transparency, and trust repair strategies. Drawing from existing HRI, medical robotics, and trust literature, we offer design recommendations that emphasise patient comfort, explainability, and responsive social behaviour. We also outline potential failure scenarios and suggest trust recovery mechanisms that could mitigate trust breakdowns. Finally, we propose future research directions and evaluation methodologies to guide empirical studies. This work serves as an initial step toward creating dermatology-focused robots that are both technically competent and socially acceptable.
The Trust-Safety Divide: A Critical Gap in Human-Robot Interaction Research
Asieh Salehi Fathabadi
Contributed paper, 14:00 - 14:10
Abstract: Human-Robot Interaction (HRI) research faces a fundamental divide: technical safety verification develops independently from robot trust requirements, creating a critical barrier to real-world robot deployment. While formal methods can mathematically prove robot safety, these assurances consistently fail to translate into human trust in robots. Conversely, HRI trust approaches lack the rigorous foundations required for safety-critical robotic applications. This position paper argues that the field must move beyond this artificial separation toward computational robot trust verification - systematic approaches that formally integrate robot safety with human trust properties. We examine the state-of-the-art across both domains, identify critical gaps preventing integration, and outline a research agenda for bridging this divide. Without addressing this trust-safety gap, technically sound robots will continue to face deployment barriers, limiting their potential to benefit human-robot collaboration.
Toward an Interaction-Centered Approach to Robot Trustworthiness
Carlo Mazzola, Hassan Ali, Kristína Malinovská, Igor Farkaš
Contributed paper, 14:10 - 14:20
Abstract: As robots become more integrated into human environments, fostering trustworthiness in embodied robotic agents becomes paramount for effective and safe human-robot interaction (HRI). To achieve that, HRI applications must promote human trust that aligns with robot skills and avoid misplaced trust or overtrust, which can pose safety risks and ethical concerns. In this position paper, we outline an interaction-based framework for building trust through mutual understanding between humans and robots. We emphasize two main pillars: human awareness and transparency, referring respectively to the robot's ability to interpret human actions accurately and to clearly communicate its intentions and goals. By integrating these two pillars, robots can behave in a manner that aligns with human expectations and needs while providing their human partners with both comprehension and control over their actions. We also introduce four components that we think are important for bridging the gap between a human's perceived sense of trust and a robot's true capabilities.
Autonomy, Agency, and Trust: Towards Integrated Calibration in Human-Robot Interaction
Ali Fallahi, Patrick Holthaus, Farshid Amirabdollahian, Gabriella Lakatos
Contributed paper, 14:20 - 14:30
Abstract: This position paper argues that perceived agency is the key mediator linking robot autonomy and user trust in human-robot interaction (HRI). The main focus of this work is on autonomy and agency as two important robot-related elements. Building on our previous work, in which participants interacted with a Pepper robot framed as either autonomous or remotely controlled, this paper emphasises the need to integrate nuanced trust calibration mechanisms in HRI. The future direction of this research includes analysing behavioural video recordings and participants' open-ended responses to further understand how trust is behaviourally and cognitively manifested. This position paper proposes that a more holistic analysis, incorporating behavioural, verbal, and task-specific indicators, will advance our understanding of trust dynamics in HRI.