Impact of Explanation Technique and Representation on Users' Comprehension and Confidence in Explainable AI
by Julien Delaunay, Luis Galárraga, Christine Largouet, and Niels van Berkel
Abstract:
Local explainability, an important sub-field of eXplainable AI, focuses on describing the decisions of AI models for individual use cases by providing the underlying relationships between a model's inputs and outputs. While the machine learning community has made substantial progress in improving explanation accuracy and completeness, these explanations are rarely evaluated by the final users. In this paper, we evaluate the impact of various explanation and representation techniques on users' comprehension and confidence. Through a user study on two different domains, we assessed three commonly used local explanation techniques—feature-attribution, rule-based, and counterfactual—and explored how their visual representation—graphical or text-based—influences users' comprehension and trust. Our results show that the choice of explanation technique primarily affects user comprehension, whereas the graphical representation impacts user confidence.
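Illustrative example (not from the paper or its user study): to make the three explanation styles concrete, the following minimal Python sketch produces a feature-attribution, a rule-based, and a counterfactual explanation for a single prediction of a toy linear model. All feature names, weights, and values are invented for illustration; for a linear model these computations are exact, whereas the techniques evaluated in the paper approximate analogous outputs for arbitrary black-box models.

# A minimal sketch of the three local explanation styles compared in the
# study, shown for one prediction of a toy linear "credit" model.
# All feature names, weights, and values are invented for illustration.

weights = {"income": 0.8, "debt": -1.2, "age": 0.1}   # toy linear model
bias = -0.5
instance = {"income": 1.0, "debt": 0.9, "age": 0.3}   # one individual case

score = sum(weights[f] * instance[f] for f in weights) + bias
decision = "rejected" if score < 0 else "approved"
print(f"Model decision: {decision} (score = {score:.2f})")

# 1) Feature attribution: each feature's signed contribution to the score.
attributions = {f: weights[f] * instance[f] for f in weights}
print("Attributions:", {f: round(v, 2) for f, v in attributions.items()})

# 2) Rule-based: a sufficient if-then condition on the most influential
#    feature, holding the other feature values fixed.
top = max(attributions, key=lambda f: abs(attributions[f]))
rest = score - attributions[top]
boundary = -rest / weights[top]
op = ">" if (weights[top] > 0) == (score > 0) else "<"
print(f"Rule: IF {top} {op} {boundary:.2f} THEN {decision}")

# 3) Counterfactual: the smallest single-feature change that moves the
#    instance to the decision boundary, where the toy model's decision flips.
deltas = {f: -score / weights[f] for f in weights}
closest = min(deltas, key=lambda f: abs(deltas[f]))
other = "approved" if decision == "rejected" else "rejected"
print(f"Counterfactual: changing {closest} by {deltas[closest]:+.2f} "
      f"flips the decision to {other}.")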
Reference:
J. Delaunay, L. Galárraga, C. Largouet, and N. van Berkel, "Impact of Explanation Technique and Representation on Users' Comprehension and Confidence in Explainable AI", Proceedings of the ACM on Human-Computer Interaction - CSCW, 2025, to appear.
Bibtex Entry:
@article{Delaunay2025ExplanationRepresentation,
	title        = {Impact of Explanation Technique and Representation on Users' Comprehension and Confidence in Explainable {AI}},
	author       = {Delaunay, Julien and Galárraga, Luis and Largouet, Christine and van Berkel, Niels},
	year         = 2025,
	journal      = {Proceedings of the ACM on Human-Computer Interaction - CSCW},
	pages        = {to appear},
	doi          = {},
	abstract     = {Local explainability, an important sub-field of eXplainable AI, focuses on describing the decisions of AI models for individual use cases by providing the underlying relationships between a model's inputs and outputs. While the machine learning community has made substantial progress in improving explanation accuracy and completeness, these explanations are rarely evaluated by the final users. In this paper, we evaluate the impact of various explanation and representation techniques on users' comprehension and confidence. Through a user study on two different domains, we assessed three commonly used local explanation techniques---feature-attribution, rule-based, and counterfactual---and explored how their visual representation---graphical or text-based---influences users' comprehension and trust. Our results show that the choice of explanation technique primarily affects user comprehension, whereas the graphical representation impacts user confidence.},
	core         = {A}
}