Initial Responses to False Positives in AI-supported Continuous Interactions – A Colonoscopy Case Study
by van Berkel, Niels, Opie, Jeremy, Ahmad, Omer F., Lovat, Laurence, Stoyanov, Danail and Blandford, Ann
Abstract:
The use of Artificial Intelligence in clinical support systems is increasing. In this paper we focus on AI support for continuous interaction scenarios. A thorough understanding of end-user behaviour during these continuous Human-AI interactions, in which user input is sustained over time and AI suggestions can appear at any time, is still missing. We present a controlled lab study involving 21 endoscopists and an AI colonoscopy support system. Using a custom-developed application and an off-the-shelf videogame controller, we record participants' navigation behaviour and clinical assessments across 14 endoscopic videos. Each video is manually annotated to mimic an AI recommendation that is either a true positive or a false positive. We find that the time between AI recommendation and clinical assessment is significantly longer for incorrect assessments. Further, the type of medical content displayed significantly affects decision time. Finally, we discover that the participant's clinical role plays a large part in the perception of clinical AI support systems. Our study presents a realistic assessment of the effects of imperfect and continuous AI support in a clinical scenario.
Reference:
N. van Berkel, J. Opie, O. F. Ahmad, L. Lovat, D. Stoyanov, and A. Blandford, "Initial Responses to False Positives in AI-supported Continuous Interactions – A Colonoscopy Case Study", ACM Transactions on Interactive Intelligent Systems, 2021, to appear.
Bibtex Entry:
@article{Berkel2021FalsePosColon,
	Abstract = {The use of Artificial Intelligence in clinical support systems is increasing. In this paper we focus on AI support for continuous interaction scenarios. A thorough understanding of end-user behaviour during these continuous Human-AI interactions, in which user input is sustained over time and AI suggestions can appear at any time, is still missing. We present a controlled lab study involving 21 endoscopists and an AI colonoscopy support system. Using a custom-developed application and an off-the-shelf videogame controller, we record participants' navigation behaviour and clinical assessments across 14 endoscopic videos. Each video is manually annotated to mimic an AI recommendation that is either a true positive or a false positive. We find that the time between AI recommendation and clinical assessment is significantly longer for incorrect assessments. Further, the type of medical content displayed significantly affects decision time. Finally, we discover that the participant's clinical role plays a large part in the perception of clinical AI support systems. Our study presents a realistic assessment of the effects of imperfect and continuous AI support in a clinical scenario.},
	Author = {van Berkel, Niels and Opie, Jeremy and Ahmad, Omer F. and Lovat, Laurence and Stoyanov, Danail and Blandford, Ann},
	Journal = {ACM Transactions on Interactive Intelligent Systems},
	Pages = {to appear},
	Title = {Initial Responses to False Positives in AI-supported Continuous Interactions – A Colonoscopy Case Study},
	Year = {2021},
	BFI = {BFI 1}}