Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)
by Upol Ehsan, Elizabeth A. Watkins, Philipp Wintersberger, Carina Manger, Sunnie S. Y. Kim, Niels van Berkel, Andreas Riener, and Mark O. Riedl
Abstract:
Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just “opening” the black box — who opens it matters just as much as, if not more than, the ways of opening it. In the era of Large Language Models (LLMs), is “opening the black box” still a realistic goal for XAI? In this fourth CHI workshop on Human-centered XAI (HCXAI), we build on the maturation through the previous three installments to craft the coming-of-age story of HCXAI in the era of Large Language Models (LLMs). We aim for actionable interventions that recognize both the affordances and pitfalls of XAI. The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and to examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize “operationalizing”. We seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.
Reference:
U. Ehsan, E. A. Watkins, P. Wintersberger, C. Manger, S. S. Y. Kim, N. van Berkel, A. Riener, and M. O. Riedl, "Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)", in Adjunct Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'24 Workshop), 2024.
Bibtex Entry:
@inproceedings{Ehsan2024HCXAIExplainabilityLLM,
	title        = {Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)},
	author       = {Ehsan, Upol and Watkins, Elizabeth A. and Wintersberger, Philipp and Manger, Carina and Kim, Sunnie S. Y. and van Berkel, Niels and Riener, Andreas and Riedl, Mark O.},
	year         = 2024,
	booktitle    = {Adjunct Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
	location     = {CHI'24 Workshop},
	articleno    = {477},
	numpages     = {6},
	howpublished = {Workshop},
	doi          = {10.1145/3613905.3636311},
	abstract     = {Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just “opening” the black box — who opens it matters just as much as, if not more than, the ways of opening it. In the era of Large Language Models (LLMs), is “opening the black box” still a realistic goal for XAI? In this fourth CHI workshop on Human-centered XAI (HCXAI), we build on the maturation through the previous three installments to craft the coming-of-age story of HCXAI in the era of Large Language Models (LLMs). We aim for actionable interventions that recognize both the affordances and pitfalls of XAI. The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and to examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize “operationalizing”. We seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.}
}