Paper Title

Evaluation of Performance-Trust vs Moral-Trust Violation in 3D Environment

Paper Authors

Maitry Ronakbhai Trivedi, Zahra Rezaei Khavas, Paul Robinette

Paper Abstract

Human-Robot Interaction, in which a robot with some level of autonomy interacts with a human to achieve a specific goal, has seen much recent progress. With the introduction of autonomous robots and the possibility of their widespread use in the near future, it is critical that humans understand a robot's intentions while interacting with it, as this will foster the development of human-robot trust. A new conceptualization of trust introduced by researchers in recent years considers trust in Human-Robot Interaction to be multidimensional in nature. Two main aspects attributed to trust are performance trust and moral trust. We aim to design an experiment to investigate the consequences of performance-trust violations and moral-trust violations in a search and rescue scenario. We want to see whether two similar robot failures, one caused by a performance-trust violation and the other by a moral-trust violation, have distinct effects on human trust. In addition, we plan to develop an interface that allows us to investigate whether altering the interface's modality from a grid-world scenario (2D environment) to a realistic simulation (3D environment) affects human perception of the task and the effects of the robot's failure on human trust.
