by Joel Wester, Minha Lee and Niels van Berkel
Abstract:
From straightforward interactions to full-fledged open-ended dialogues, Conversational User Interfaces (CUIs) are designed to support end-user goals and follow their requests. As CUIs become more capable, investigating how to restrict or limit their ability to carry out user requests becomes increasingly critical. Currently, such intentionally constrained user interactions are accompanied by a generic explanation (e.g., “I’m sorry, but as an AI language model, I cannot say...”). We describe the role of moral bias in such user restrictions as a potential source of conflict between CUI users’ autonomy and system characterisation as generated by CUI designers. Just as the users of CUIs have diverging moral viewpoints, so do CUI designers—which either intentionally or unintentionally affects how CUIs communicate. Mitigating user moral biases and making the moral viewpoints of CUI designers apparent is a critical path forward in CUI design. We describe how moral transparency in CUIs can support this goal, as exemplified through intelligent disobedience. Finally, we discuss the risks and rewards of moral transparency in CUIs and outline research opportunities to inform the design of future CUIs.
Reference:
J. Wester, M. Lee and N. van Berkel, "Moral Transparency as a Mitigator of Moral Bias in Conversational User Interfaces", in Proceedings of the 5th International Conference on Conversational User Interfaces (CUI '23), 2023.
Bibtex Entry:
@inproceedings{Wester2023MoralTransparency,
title = {Moral Transparency as a Mitigator of Moral Bias in Conversational User Interfaces},
author = {Wester, Joel and Lee, Minha and van Berkel, Niels},
year = {2023},
doi = {10.1145/3571884.3603752},
url = {https://nielsvanberkel.com/files/publications/cui2023a.pdf},
abstract = {From straightforward interactions to full-fledged open-ended dialogues, Conversational User Interfaces (CUIs) are designed to support end-user goals and follow their requests. As CUIs become more capable, investigating how to restrict or limit their ability to carry out user requests becomes increasingly critical. Currently, such intentionally constrained user interactions are accompanied by a generic explanation (e.g., “I’m sorry, but as an AI language model, I cannot say...”). We describe the role of moral bias in such user restrictions as a potential source of conflict between CUI users’ autonomy and system characterisation as generated by CUI designers. Just as the users of CUIs have diverging moral viewpoints, so do CUI designers—which either intentionally or unintentionally affects how CUIs communicate. Mitigating user moral biases and making the moral viewpoints of CUI designers apparent is a critical path forward in CUI design. We describe how moral transparency in CUIs can support this goal, as exemplified through intelligent disobedience. Finally, we discuss the risks and rewards of moral transparency in CUIs and outline research opportunities to inform the design of future CUIs.},
booktitle = {Proceedings of the 5th International Conference on Conversational User Interfaces},
articleno = {12},
numpages = {6},
series = {CUI '23},
publisher = {Association for Computing Machinery}
}