Programme
The online workshop is held in conjunction with RO-MAN 2020. The workshop's structure includes, but is not limited to, the following:
- Introduction to the workshop and its main topics
- Presentations by invited speakers
- Oral presentations of full papers
- Open discussion with the invited speakers
Invited Speakers
Prof Bertram F. Malle
Brown University
Bertram F. Malle earned his Master's degrees in philosophy/linguistics (1987) and psychology (1989) at the University of Graz, Austria. After coming to the United States in 1990, he received his Ph.D. from Stanford University in 1995 and joined the University of Oregon Psychology Department. Since 2008 he has been Professor in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University. He received the Society of Experimental Social Psychology Outstanding Dissertation Award and a National Science Foundation CAREER Award, and he is a past president of the Society for Philosophy and Psychology. Malle's research has been funded by the NSF, the Army, the Templeton Foundation, the Office of Naval Research, and DARPA. He has published his work in 130 articles and several books on the topics of social cognition (intentionality, mental state inferences, behavior explanations), moral psychology (cognitive and social blame, guilt, norms), and human-robot interaction (moral competence in robots, socially assistive robotics).
Invited talk: Trust in Multiple Dimensions
Prof Cindy L. Bethel
STaRS Lab, Mississippi State University
Cindy L. Bethel is an Associate Professor in the Computer Science and Engineering Department and the Billie J. Ball Endowed Professor in Engineering at Mississippi State University. She is the Director of the Social, Therapeutic, and Robotic Systems (STaRS) Lab, a Research Fellow with the Center for Advanced Vehicular Systems (CAVS) in the Human Factors Group, and a Research Fellow with the Social Science Research Center. Cindy was an NSF/CCC/CRA Computing Innovation Postdoctoral Fellow (CIFellow) in the Social Robotics Laboratory in the Computer Science Department at Yale University from 2009 to 2011. She was a National Science Foundation Graduate Research Fellow and the recipient of the 2008 IEEE Robotics and Automation Society Graduate Fellowship. She received her Ph.D. in Computer Science and Engineering from the University of South Florida in 2009, having earlier graduated summa cum laude from the same university with a B.S. in Computer Science. She was awarded the King O'Neal Scholar Award, the Computer Science and Engineering Outstanding Graduate Award, and the Engineering Alumni Society Outstanding Senior of the Year Award.
Invited talk: A Discussion of Trust and Acceptance of Robots in Human-Robot Interaction Applications
Schedule
The workshop will be held on the second day of RO-MAN 2020: Tuesday, 1st September, from 16:00 CEST to 19:15 CEST.
The preliminary schedule is as follows:
Welcome
Organising Committee, 16:00
Input paper: Children's Overtrust: Intentional Use of Robot Errors to Decrease Trust
Denise Geiskkovitch and James Young, 16:10
Robots are being developed to help young children in educational settings, among others. Research suggests that children may overtrust robots, which can have negative consequences. We suggest the intentional use of egregious robot errors as one technique to mitigate such overtrust. Additionally, how a robot attempts to recover from intentional and unintentional errors could also help reduce children's trust towards it. In this paper we provide our reasoning behind the purposeful use of errors, as well as suggestions for how various types of errors could be used to decrease trust towards robots.
Invited talk: A Discussion of Trust and Acceptance of Robots in Human-Robot Interaction Applications
Cindy L. Bethel, 16:20
Trust in Robot-Mediated Health Information
David Cameron, Marina Sarda-Gou and Laura Sbaffi, 17:05
This paper outlines a social robot platform for providing health information. Compared with previous findings for accessing information online, the use of a social robot may affect which factors users consider important when evaluating the trustworthiness of the health information provided.
Perceived Differences Between On-line and Real Robotic Failures
Alexander Mois Aroyo, Dario Pasquali, Austin Kothig, Francesco Rea, Giulio Sandini and Alessandra Sciutti, 17:15
Robotic failures are an inevitable occurrence. This study tries to shed light on how people perceive failures and how much failures affect the interaction. Continuing previous work, this research gathers information on how people perceive a failure in an online validation study. After the failures were selected, they were applied in a real-world game-like scenario in which participants played a treasure-hunt game with iCub. Initial results show that failure perception is more severe in the online study than in the actual game.
Are Robots Perceived as Good Decision-Makers? A Study Investigating Trust and Preference of Robotic and Human Linesman-Referees in Football
Kaustav Das, Yixiao Wang, Malte Jung and Keith Green, 17:25
Increasingly, robots are decision-makers in manufacturing, finance, medicine, and other areas, but the technology is young and may not be trusted enough to replace a human. In decision-making in sports, specifically in the case of football (or “soccer” as it is known in the US), we report on a study of how the appearance and accuracy of a human and three robotic linesmen (as presented in a study by Malle et al.) impact fans’ trust in and preference for them. Our online study with 104 participants finds a positive correlation between trust and preference for the “humanoid” and human linesmen, but not for the “AI” and “mechanical” linesmen. Although there are no significant trust differences across the types of linesmen, participants do prefer the human and AI linesmen to the mechanical and humanoid linesmen. Our qualitative study further validated these quantitative findings by probing the reasons for people’s preferences: when the appearance of a linesman is human-like, people focus less on trust issues and more on other reasons for their preference, such as efficiency, stability, and minimal robot design. These findings provide important insights for the design of trustworthy robots as robots become integral to more and more aspects of our everyday lives.
Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication
Richard Savery, Lisa Zahray and Gil Weinberg, 17:35
As robotic arms become prevalent in industry, it is crucial to improve levels of trust from human collaborators. Low levels of trust in human-robot interaction can reduce overall performance and prevent full robot utilization. We investigated the potential benefits of using emotional musical prosody to allow the robot to respond emotionally to the user's actions. We tested participants' responses to interacting with a virtual robot arm that acted as a decision agent, helping participants select the next number in a sequence. We compared results from three versions of the application in a between-group experiment, where the robot had different emotional reactions to the user's input depending on whether the user agreed with the robot and whether the user's choice was correct. In all versions, the robot reacted with emotional gestures. One version used prosody-based emotional audio phrases selected from our dataset of singer improvisations, the second used audio consisting of a single pitch randomly assigned to each emotion, and the final version used no audio, only gestures. Our results showed no significant difference in the percentage of times users from each group agreed with the robot, and no difference in users' agreement with the robot after it made a mistake. However, participants also took a trust survey following the interaction, and we found that the trust ratings reported by the musical prosody group were significantly higher than those of both the single-pitch and no-audio groups.
Coffee break
17:45
Invited talk: Trust in Multiple Dimensions
Bertram F. Malle, 17:55
Questions of trust arise when one agent is vulnerable and another agent might protect or imperil that vulnerability. We propose that the first agent must assess whether the second agent has certain characteristics that provide protection—these are characteristics of trustworthiness, and they are multidimensional. An agent’s capability and reliability make up the performance dimension of trustworthiness, and an agent’s sincerity, benevolence, and ethical integrity make up the moral dimension. We review evidence in support of this multidimensional model, introduce a compact assessment tool to measure multidimensional trust, and then ask what behavioral cues—in humans or robots—might reveal capability, reliability, sincerity, benevolence, and ethical integrity.
Deceptive Robots
Cristiano Castelfranchi, 18:40
We present a theory of behavior as communication, which will be crucial in human-robot cooperative interaction. One possible use of this communication is pretense, or simulation. Robots, too, will use this form, even towards their human partners, and sometimes for good, paternalistic reasons.
Committing to Interdependence: Implications from Game Theory for Human-Robot Trust
Yosef Razin and Karen Feigh, 18:50
Human-robot interaction and game theory have developed distinct theories of trust for over three decades in relative isolation from one another. Human-robot interaction has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions. Both fields have grappled with understanding over-trust and trust calibration, as well as how to measure trust expectations, risk, and vulnerability. This paper presents initial steps in closing the gaps between these fields. Using insights and experimental findings from interdependence theory and social psychology, this work analyzes a large game theory data set. It demonstrates that the strongest predictors for a wide variety of trust interactions are our newly proposed and validated metrics of commitment and trust. These metrics better capture social ‘over-trust’ than either rational or normative psychological reasoning, as often proposed in game theory. They are also better situated to explain ‘over-trust’ in human-robot interaction than normative reasoning alone. This work further explores how interdependence theory, with its focus on commitment, power, vulnerability, and calibration, addresses many of the proposed underlying constructs and antecedents within human-robot trust, shedding new light on key differences and similarities that arise when robots replace humans in trust interactions.
The Measure of Trust between Man and Machine: A Meta-Analysis of Trust Metrics in HRI
Yosef Razin and Karen Feigh, 19:00
One of the greatest challenges to measuring human-robot trust is the sheer amount of construct proliferation, models, and available questionnaires, with little to no validation for the majority. This work identified the most frequently cited human-automation trust questionnaires, pinpointing ten validated studies spanning 201 questions. From these, we determined nine distinct common constructs that form the dimensions and antecedents of human-robot trust. These constructs are enriched by comparisons to social and institutional trust models to ensure that the most holistic picture of trust is captured. Finally, this work presents what is believed to be the most complete and integrated model of human-robot trust along with a new trust questionnaire that fully utilizes the findings from the meta-analysis. This powerful instrument will allow the assessment of human-robot trust, with all its complexity, establishing a solid, integrated foundation for future experimentation.
Closing
19:10