Human-Robot Trust

Towards Trustworthy AI Systems

Trust is essential for successful human-robot interaction: when robots operate in human environments, establishing a trusting relationship is crucial. This project studies human-robot trust in time-sensitive scenarios where a robot or AI agent provides advice. The goal is to develop algorithms that infer a human's trust from prior interactions and that mitigate negative outcomes when trust is violated (Xu, 2018).
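To make "inferring trust from prior interactions" concrete, here is a minimal sketch of one common modeling choice: a beta-Bernoulli update, where each interaction outcome (e.g., whether the human followed the agent's advice) updates a belief over the human's trust level. The class name and the choice of model are illustrative assumptions, not the project's actual algorithm.

```python
class BetaTrustModel:
    """Maintains a Beta(alpha, beta) belief over a trust level in [0, 1].

    Illustrative sketch only: this assumes trust can be summarized by the
    probability that the human follows the agent's advice.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # pseudo-count of positive interactions
        self.beta = beta    # pseudo-count of negative interactions

    def update(self, advice_followed: bool) -> None:
        """Incorporate the outcome of one interaction."""
        if advice_followed:
            self.alpha += 1
        else:
            self.beta += 1

    def estimate(self) -> float:
        """Posterior mean estimate of the trust level."""
        return self.alpha / (self.alpha + self.beta)


model = BetaTrustModel()
for outcome in [True, True, False, True]:
    model.update(outcome)
print(round(model.estimate(), 2))  # Beta(4, 2) posterior -> mean 0.67
```

A model like this could also drive trust repair: a sharp drop in the estimate after a violation signals that a mitigation strategy (such as an apology) may be needed.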

References

2022

  1. Evaluating the impact of emotional apology on human-robot trust
    Jin Xu and Ayanna Howard
    In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2022

2020

  1. How much do you trust your self-driving car? Exploring human-robot trust in high-risk scenarios
    Jin Xu and Ayanna Howard
    In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020
  2. Would you take advice from a robot? Developing a framework for inferring human-robot trust in time-sensitive scenarios
    Jin Xu and Ayanna Howard
    In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020

2018

  1. Overtrust of robots in high-risk scenarios
    Jin Xu
    In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2018
  2. The impact of first impressions on human-robot trust during problem-solving scenarios
    Jin Xu and Ayanna Howard
    In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018
  3. Investigating the relationship between believability and presence during a collaborative cognitive task with a socially interactive robot
    Jin Xu and Ayanna Howard
    In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018