by Joel Wester, Eike Schneiders and Niels van Berkel
Abstract:
Humans tend to perceive human qualities in interactive systems. This particularly applies to social robots that utilise human attributes such as human body characteristics and natural language capabilities. Social robots with such characteristics are increasingly deployed in critical settings, such as health and well-being, where it is key to align robot behaviour with end-user expectations. Relatively little is known about how people perceive these social robots’ moral agency. In this position paper, we stress the difference between moral agency and perceived moral agency, and argue that the latter is a timely concern. We discuss the implications of perceived moral agency and outline research directions to explore how humans make sense of social robots in critical settings through perceived moral agency.
Reference:
J. Wester, E. Schneiders and N. van Berkel, "Perceived Moral Agency of Non-Moral Entities: Implications and Future Research Directions for Social Robots", in Adjunct Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI'23 EA), 2023, pp. 1–3.
Bibtex Entry:
@inproceedings{Wester2023MoralAgencyEntities,
title = {Perceived Moral Agency of Non-Moral Entities: Implications and Future Research Directions for Social Robots},
author = {Wester, Joel and Schneiders, Eike and van Berkel, Niels},
year = 2023,
booktitle = {Adjunct Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction},
series = {HRI'23 EA},
pages = {1--3},
url = {https://nielsvanberkel.com/files/publications/hri2023a.pdf},
abstract = {Humans tend to perceive human qualities in interactive systems. This particularly applies to social robots that utilise human attributes such as human body characteristics and natural language capabilities. Social robots with such characteristics are increasingly deployed in critical settings, such as health and well-being, where it is key to align robot behaviour with end-user expectations. Relatively little is known about how people perceive these social robots’ moral agency. In this position paper, we stress the difference between moral agency and perceived moral agency, and argue that the latter is a timely concern. We discuss the implications of perceived moral agency and outline research directions to explore how humans make sense of social robots in critical settings through perceived moral agency.},
}