The workshop will be held online in conjunction with RO-MAN 2021 on underline.io. It will consist of three types of sessions. We plan on having three keynote presentations that are dedicated to discussing the role of trust in social robotics. We will also feature the presentations of accepted papers discussing the workshop topics. Finally, we will foster a general discussion where attendees, keynote speakers, and presenters will be invited to a dynamic conversation about the relevant challenges of effectively supporting the design and development of socially acceptable and trustable robots.
Break-out sessions will be interspersed among the main sessions to allow attendees to network and further discuss the workshop topics.
**Dr Alessandra Sciutti, Istituto Italiano di Tecnologia**
Alessandra Sciutti is a Tenure Track Researcher and head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) unit at the Italian Institute of Technology (IIT). With a background in Bioengineering, she received her Ph.D. in Humanoid Technologies from the University of Genova in 2010. After two research periods in the USA and Japan, in 2018 she was awarded the ERC Starting Grant wHiSPER (www.whisperproject.eu), focused on the investigation of joint perception between humans and robots. She has published more than 70 papers in international journals and conferences and participated in the coordination of the CODEFROR European IRSES project. She is currently an Associate Editor for several journals, including the International Journal of Social Robotics, IEEE Transactions on Cognitive and Developmental Systems, and Cognitive Systems Research. The scientific aim of her research is to investigate the sensory and motor mechanisms underlying mutual understanding in human-human and human-robot interaction. More info on her website.
Invited talk: Cognitive Robotics for Mutual Understanding and Trust
**Dr Hatice Gunes, University of Cambridge**
Hatice Gunes (Senior Member, IEEE) received the Ph.D. degree in computer science from the University of Technology Sydney, Australia. She is a Reader with the Department of Computer Science and Technology, University of Cambridge, UK, leading the Affective Intelligence and Robotics (AFAR) Lab. Her expertise is in the areas of affective computing and social signal processing, cross-fertilizing research in multimodal interaction, computer vision, signal processing, machine learning and social robotics. Dr Gunes' team has published over 125 papers in these areas (H-index=34, citations > 5,900) and has received various awards and competitive grants, with funding from the Engineering and Physical Sciences Research Council UK (EPSRC), Innovate UK, British Council, the Alan Turing Institute and the EU Horizon 2020. Dr Gunes is the former President of the Association for the Advancement of Affective Computing (2017-2019), the General Co-Chair of ACII 2019, and the Program Co-Chair of ACM/IEEE HRI 2020 and IEEE FG 2017. She also served as the Chair of the Steering Board of IEEE Transactions on Affective Computing (2017-2019) and as a member of the Human-Robot Interaction Steering Committee (2018-2021). In 2019, Dr Gunes was awarded the prestigious EPSRC Fellowship to investigate adaptive robotic emotional intelligence for well-being (2019-2024) and was named a Faculty Fellow of the Alan Turing Institute, the UK's national centre for data science and artificial intelligence.
Invited talk: Will Artificial Social Intelligence Lead to Trust and Acceptance in HRI?
**Dr Helen Hastie, Heriot-Watt University**
Helen Hastie is a Professor of Computer Science at Heriot-Watt University, Director of the EPSRC CDT in Robotic and Autonomous Systems at the Edinburgh Centre of Robotics, and Academic Lead for the National Robotarium, opening in 2022 in Edinburgh. Her field of research includes multimodal and spoken dialogue systems, human-robot interaction and trustworthy autonomous systems. She is currently PI on both the UKRI Trustworthy Autonomous Systems Node on Trust and the EPSRC Hume Prosperity Partnership, as well as being HRI theme lead for the EPSRC ORCA Hub. She recently held a Royal Academy of Engineering/Leverhulme Senior Research Fellowship on trustworthy autonomous systems and was coordinator of the EU project PARLANCE. She has over 100 publications and has held positions on many scientific committees and advisory boards, including the Scottish Government AI Strategy, IEEE SLTC and the IEEE Standard for Transparency of Autonomous Systems (P7001).
Invited talk: Trustworthy Robotic and Autonomous Systems
Please note that all times are given in British Summer Time (BST).
The workshop will be held on the last day of RO-MAN 2021: 12 August 2021, between 13:00 and 20:00 (conversion to other time zones). The preliminary schedule is as follows but we will update this page as soon as there are any changes to the programme.
In general, keynote talks are about 45 minutes long, with an additional 15 minutes for Q&A and discussion. Regular submissions should plan for a 20-minute talk and 10 minutes of Q&A, and position papers for a **15-minute** talk plus 5 minutes of Q&A.
The proceedings of SCRITA 2021 submissions can be browsed on arXiv.
Hatice Gunes
Keynote, 13:15 - 14:15
Talk summary: Designing artificially intelligent systems and interfaces with socio-emotional skills is a challenging task. Progress in industry and developments in academia provide us with a positive outlook; however, the artificial social and emotional intelligence of current technology is still limited. My lab’s research has been pushing the state of the art in a wide spectrum of research topics in this area, including the design and creation of new datasets; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo, dyadic and group settings; theoretical and practical frameworks for lifelong learning and long-term human-robot interaction with applications to wellbeing; and providing solutions to mitigate the bias that creeps into these systems. In this talk, I will present my research team’s explorations specifically in the areas of facilitation, appropriateness of actions and data-driven behaviour generation, with the guiding question of ‘Will Artificial Social Intelligence Lead to Trust and Acceptance in HRI?’ to provoke a range of ideas, discussions and reactions from the workshop participants.
Alessandra Sciutti
Keynote, 14:15 - 15:15
Talk summary: Human interaction depends on mutual understanding: we know how to communicate because we entertain a model of other humans, which enables us to select an effective way to convey to them what we want and to have an intuition of their needs, fears or desires. Mastering this comprehension is necessary for robots as well, to adapt, predict, pro-actively interact with their human partners and to foster trust and familiarity. In this talk, I will discuss how robots can be precious tools to shed light on these mechanisms and investigate in a controllable and reproducible way the unfolding of a social interaction. In particular, I will present research on the humanoid iCub to investigate shared perception, commitment and trust toward a cognitive interactive agent, also within the framework of the wHiSPER ERC project (https://whisperproject.eu/). The technological goal of these efforts will be to build more humane robots, intended as robots that are more considerate of the partners and that are able to adapt to their needs. At the same time, this exercise will help us gain a better understanding of the development of human cognition.
Helen Hastie
Keynote, 15:30 - 16:30
Talk summary: Trust is a multifaceted, complex phenomenon that is not well understood when it occurs between humans, let alone between humans and robots. Robots that portray social cues, including voice, gestures and facial expressions, are key tools in researching human-robot trust, specifically how trust is established, lost and regained. In this talk, I will discuss various aspects of trust for HRI including language, social cues, embodiment, transparency, mental models and theory of mind. I will present a number of studies performed in the context of two large projects: the UKRI Trustworthy Autonomous Systems Programme, specifically the Node on Trust; and the EPSRC ORCA Hub for robotic and autonomous systems for remote hazardous environments. This work will be contextualised around the new National Robotarium opening soon in Edinburgh.
Arianna Pipitone, Alessandro Geraci, Antonella D'Amico, Valeria Seidita and Antonio Chella
Position paper, 16:30 - 16:50 (arXiv:2109.09388)
Abstract: Recent studies have demonstrated that a robot’s inner speech affects human-robot interaction and the robot’s performance in accomplishing tasks. This work aims to investigate how a robot’s inner speech affects trust and anthropomorphic cues when human and robot cooperate. A set of participants was engaged to virtually collaborate with the robot. During cooperation, the robot talks to itself. To evaluate how the robot’s inner speech influences the cues, two questionnaires were administered to each participant at two different times: before (pre-test) and after (post-test) the cooperative session with the robot. Differences between the pre-test and post-test answers suggest that the robot’s inner speech influences the cues. Results show that participants’ levels of trust and perception of the robot’s anthropomorphic cues increase after the experimental interaction with the robot equipped with inner speech.
David Figueroa, Ryuji Yamazaki, Shuichi Nishio, Yuma Nagata, Yuto Satake, Miyae Yamakawa, Maki Suzuki, Manabu Ikeda and Hiroshi Ishiguro
Regular contribution, 16:50 - 17:20
Abstract: This work presents a study on long-term usage of social robots introduced into houses of patients with mild cognitive impairment. We evaluated impressions, effect in their daily lives, and attachment of the participants towards the robots. The results showed that the participants accepted the robots, becoming more open to their suggestions and improving their mood. Moreover, the participants started to seek more interactions with people after spending time with the robots. In this work, we present the results and discuss the implications of how the attachment to the robot can benefit this population.
David Cameron and Emily Collins
Position paper, 17:30 - 17:50 (arXiv:2109.00861)
Abstract: There is an increasing interest in considering, implementing, and measuring trust in human-robot interaction (HRI). Typically, this centres on influencing user trust within the framing of HRI as a dyadic interaction between robot and user. We propose this misses a key complexity: a robot's trustworthiness may also be contingent on the user's relationship with, and opinion of, the individual or organisation deploying the robot. Our new HRI triad model (User, Robot, Deployer), offers novel predictions for considering and measuring trust more completely.
Rachele Carli and Amro Najjar
Regular contribution, 17:50 - 18:20 (arXiv:2109.06800)
Abstract: In 2018 the European Commission highlighted the demand for a human-centered approach to AI. Such a claim is gaining even more relevance considering technologies specifically designed to directly interact and physically collaborate with human users in the real world. This is notably the case of social robots. The domain of Human-Robot Interaction (HRI) emerged to investigate these issues. "Human-robot trust" has been highlighted as one of the most challenging and intriguing factors influencing HRI. On the one hand, user studies and technical experts underline how trust is a key element in facilitating users' acceptance, consequently increasing the chances of pursuing the given task. On the other hand, such a phenomenon also raises ethical and philosophical concerns, leading scholars in these domains to argue that humans should not trust robots.
However, trust in HRI is not an index of fragility; it is rooted in anthropomorphism and is a natural characteristic of every human being. Thus, instead of focusing solely on how to inspire user trust in social robots, this paper argues that what should be investigated is to what extent and for which purpose it is suitable to trust robots. Such an endeavour requires an interdisciplinary approach taking into account (i) technical needs and (ii) psychological implications.
Patrick Holthaus
Position paper, 18:30 - 18:50 (arXiv:2107.08805)
Abstract: This position paper aims to highlight and discuss the role of a robot's social credibility in interaction with humans. In particular, I want to explore a potential relation between social credibility and a robot's acceptability and, ultimately, its trustworthiness. I also review the notion of social credibility as a measure of how well a robot obeys social norms during interaction, and expand it with the concept of conscious acknowledgement.
Patrik Jonell, Anna Deichler, Ilaria Torre, Iolanda Leite and Jonas Beskow
Regular contribution, 18:50 - 19:20 (arXiv:2109.01206)
Abstract: In this paper we present a pilot study which investigates how non-verbal behavior affects social influence in social robots. We also present a modular system which is capable of controlling the non-verbal behavior based on the interlocutor's facial gestures (head movements and facial expressions) in real time, and a study investigating whether three different strategies for facial gestures ("still"; "natural movement", i.e. movements recorded from another conversation; and "copy", i.e. mimicking the user with a four-second delay) have any effect on social influence and decision making in a "survival task". Our preliminary results show there was no significant difference between the three conditions, but this might be due to, among other things, the low number of study participants (12).