Cognitive Forcing for Better Decision-Making: Reducing Overreliance on AI Systems Through Partial Explanations
by Sander de Jong, Ville Paananen, Benjamin Tag, and Niels van Berkel
Abstract:
In AI-assisted decision-making, explanations aim to enhance transparency and user trust but can also lead to negligence. In two studies, we explore the use of partial explanations to activate cognitive forcing and increase user engagement. In Study I (N = 264), we present participants with weighted graphs and ask them to identify the shortest paths. In Study II (N = 210), participants correct spelling and grammar mistakes in short text segments. In both studies, we provide a solution suggestion accompanied by either no explanation, a full explanation, or a partial explanation. Our results show that partial explanations reduce overreliance on incorrect AI suggestions, performing significantly better than the baseline but not as well as full explanations. Individuals with a high need for cognition benefit more from AI explanations and consequently perform better. Our work suggests that partial explanations can be valuable in domains where reducing overreliance on AI is critical, such as medical diagnosis. It also underscores the need to consider explanation effectiveness across different task difficulties, a factor often overlooked in contemporary human-AI studies.
Reference:
S. de Jong, V. Paananen, B. Tag, and N. van Berkel, "Cognitive Forcing for Better Decision-Making: Reducing Overreliance on AI Systems Through Partial Explanations", Proceedings of the ACM on Human-Computer Interaction (CSCW), 2025, to appear.
Bibtex Entry:
@article{Jong2025Partial,
	title        = {Cognitive Forcing for Better Decision-Making: Reducing Overreliance on {AI} Systems Through Partial Explanations},
	author       = {de Jong, Sander and Paananen, Ville and Tag, Benjamin and van Berkel, Niels},
	year         = 2025,
	journal      = {Proceedings of the ACM on Human-Computer Interaction - CSCW},
	pages        = {to appear},
	url          = {https://nielsvanberkel.com/files/publications/cscw2025a.pdf},
	abstract     = {In AI-assisted decision-making, explanations aim to enhance transparency and user trust but can also lead to negligence. In two studies, we explore the use of partial explanations to activate cognitive forcing and increase user engagement. In Study I (N = 264), we present participants with weighted graphs and ask them to identify the shortest paths. In Study II (N = 210), participants correct spelling and grammar mistakes in short text segments. In both studies, we provide a solution suggestion accompanied by either no explanation, a full explanation, or a partial explanation. Our results show that partial explanations reduce overreliance on incorrect AI suggestions, performing significantly better than the baseline but not as well as full explanations. Individuals with a high need for cognition benefit more from AI explanations and consequently perform better. Our work suggests that partial explanations can be valuable in domains where reducing overreliance on AI is critical, such as medical diagnosis. It also underscores the need to consider explanation effectiveness across different task difficulties, a factor often overlooked in contemporary human-AI studies.},
	core         = {A}
}