"As an AI language model, I cannot": Investigating LLM Denials of User Requests
by Joel Wester, Tim Schrills, Henning Pohl and Niels van Berkel
Abstract:
Users ask large language models (LLMs) to help with their homework, for lifestyle advice, or for support in making challenging decisions. Yet LLMs are often unable to fulfil these requests, either as a result of their technical inabilities or policies restricting their responses. To investigate the effect of LLMs denying user requests, we evaluate participants' perceptions of different denial styles. We compare specific denial styles (baseline, factual, diverting, and opinionated) across two studies, respectively focusing on LLMs' technical limitations and their social policy restrictions. Our results indicate significant differences in users' perceptions of the denials between the denial styles. The baseline denial, which provided participants with brief denials without any motivation, was rated significantly higher on frustration and significantly lower on usefulness, appropriateness, and relevance. In contrast, we found that participants generally appreciated the diverting denial style. We provide design recommendations for LLM denials that better meet people's denial expectations.
Reference:
J. Wester, T. Schrills, H. Pohl, N. van Berkel, ""As an AI language model, I cannot": Investigating LLM Denials of User Requests", in Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '24), 2024.
Bibtex Entry:
@inproceedings{Wester2024LLMDenials,
	title        = {``As an AI language model, I cannot'': Investigating LLM Denials of User Requests},
	author       = {Wester, Joel and Schrills, Tim and Pohl, Henning and van Berkel, Niels},
	year         = 2024,
	booktitle    = {Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
	series       = {CHI '24},
	doi          = {10.1145/3613904.3642135},
	url          = {https://nielsvanberkel.com/files/publications/chi2024a.pdf},
	abstract     = {Users ask large language models (LLMs) to help with their homework, for lifestyle advice, or for support in making challenging decisions. Yet LLMs are often unable to fulfil these requests, either as a result of their technical inabilities or policies restricting their responses. To investigate the effect of LLMs denying user requests, we evaluate participants' perceptions of different denial styles. We compare specific denial styles (baseline, factual, diverting, and opinionated) across two studies, respectively focusing on LLMs' technical limitations and their social policy restrictions. Our results indicate significant differences in users' perceptions of the denials between the denial styles. The baseline denial, which provided participants with brief denials without any motivation, was rated significantly higher on frustration and significantly lower on usefulness, appropriateness, and relevance. In contrast, we found that participants generally appreciated the diverting denial style. We provide design recommendations for LLM denials that better meet people's denial expectations.},
	type         = {Conference Paper}
}