On Moral Manifestations in Large Language Models (bibtex)
by Joel Wester, Julien Delaunay, Sander de Jong and Niels van Berkel
Abstract:
Since OpenAI released ChatGPT, researchers, policy-makers, and laypersons have raised concerns regarding its false and incorrect statements, which are furthermore expressed in an overly confident manner. We identify this flaw as part of its functionality and describe why large language models (LLMs), such as ChatGPT, should be understood as social agents manifesting morality. This manifestation happens as a consequence of human-like natural language capabilities, giving rise to humans interpreting the LLMs as potentially having moral intentions and abilities to act upon those intentions. We outline why appropriate communication between people and ChatGPT relies on moral manifestations by exemplifying 'overly confident' communication of knowledge. Moreover, we put forward future research directions for fully autonomous and semi-functional systems, such as ChatGPT, by calling attention to how engineers, developers, and designers can facilitate end-users' sense-making of LLMs by increasing moral transparency.
Reference:
J. Wester, J. Delaunay, S. de Jong, N. van Berkel, "On Moral Manifestations in Large Language Models", in Adjunct Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'23 EA), 2023, 1–4.
Bibtex Entry:
@inproceedings{Wester2023MoralManifestationLLM,
	title        = {On Moral Manifestations in Large Language Models},
	author       = {Wester, Joel and Delaunay, Julien and de Jong, Sander and van Berkel, Niels},
	year         = 2023,
	booktitle    = {Adjunct Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
	series       = {CHI'23 EA},
	pages        = {1--4},
	abstract     = {Since OpenAI released ChatGPT, researchers, policy-makers, and laypersons have raised concerns regarding its false and incorrect statements, which are furthermore expressed in an overly confident manner. We identify this flaw as part of its functionality and describe why large language models (LLMs), such as ChatGPT, should be understood as social agents manifesting morality. This manifestation happens as a consequence of human-like natural language capabilities, giving rise to humans interpreting the LLMs as potentially having moral intentions and abilities to act upon those intentions. We outline why appropriate communication between people and ChatGPT relies on moral manifestations by exemplifying `overly confident' communication of knowledge. Moreover, we put forward future research directions for fully autonomous and semi-functional systems, such as ChatGPT, by calling attention to how engineers, developers, and designers can facilitate end-users' sense-making of LLMs by increasing moral transparency.},
	url          = {https://nielsvanberkel.com/files/chi2023e.pdf},
	type         = {Conference Paper},
}