Abstract
AI-based tools are increasingly developed for clinical radiology, offering potential gains in diagnostic accuracy, consistency, and workflow efficiency. Yet adoption remains limited, as many systems lack support for effective human-AI collaboration within complex diagnostic workflows. To address this gap, we visually analysed 17 FDA- and/or CE-certified commercial radiology AI systems, examining how their interface designs operationalize human-AI interaction across three key dimensions: diagnostic task coverage, information richness of AI explanations, and human control mechanisms. Using visual diagramming, typology-based task mapping, and cross-system comparison, we characterize when and how AI outputs are introduced, how understandably they are communicated, and how clinicians can influence or correct AI behaviour. Our analysis shows that most systems augment isolated subtasks, such as detection, quantification, and annotation, while higher-order or multi-phase workflow support remains rare. AI explanations typically combine categorical, numerical, and visual outputs but rarely make the underlying reasoning transparent, leaving interpretive responsibility to clinicians. Control mechanisms vary in depth and frequency, ranging from single-step initiation to multi-stage walkthroughs, yet few systems support iterative engagement with and oversight of AI output. These findings reveal significant variation and fragmentation in current design practices, underscoring the need for standardized frameworks to evaluate and guide human-AI interaction in clinical tools. Future work should link interaction design dimensions, such as control granularity and explanation richness, to safety, usability, and adoption outcomes, ensuring that AI systems enhance rather than constrain clinician expertise and agency in diagnostic decision-making.
Keywords
Radiology; Artificial Intelligence; User Experience; Human-AI Interaction
DOI
https://doi.org/10.21606/iasdr.2025.1010
Citation
Ruijs, N., Pluyter, J.R., Chen, L., and Funk, M. (2025) The Missing Link: Understanding Human-AI Interaction in FDA/CE-Approved Radiology Tools, in Chang, C.-Y., and Hsu, Y. (eds.), IASDR 2025: Design Next, 02-05 December, Taiwan. https://doi.org/10.21606/iasdr.2025.1010
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Track 4 - Human-Centered AI
The Missing Link: Understanding Human-AI Interaction in FDA/CE-Approved Radiology Tools