Abstract
Machine Learning (ML) has the potential to revolutionize healthcare by enhancing risk prediction and reducing clinical workloads. However, because it affects risk assessment, integrating ML into clinical practice raises several ethical challenges. This study focuses on enabling clinicians to express their ethical values about ML-powered clinical decision support systems, so that these values can be considered during the design phase. Grounded in human-centered AI and value-sensitive design, we introduce a tangible toolkit that assists clinicians in visualizing the stages of interaction with an AI decision support system in their daily practice, and in articulating the ethical values and concerns that emerge at each step. Preliminary tests with four clinicians, using a cardiac risk prediction ML model as a case study, showcase the toolkit's potential to foster discussion of situated ethical considerations. This research provides a practical tool for designers and clinicians to influence the ethical development of AI-driven healthcare solutions and demonstrates its potential for meaningful contributions to such processes.
Keywords
ethical values in ai; value sensitive design; clinical decision support systems; human-ai interaction
DOI
https://doi.org/10.21606/drs.2024.862
Citation
Faber, I., van Renswouw, L., and Colombo, S. (2024) A tangible toolkit to uncover clinician's ethical values about AI clinical decision support systems, in Gray, C., Ciliotta Chehade, E., Hekkert, P., Forlano, L., Ciuccarelli, P., Lloyd, P. (eds.), DRS2024: Boston, 23–28 June, Boston, USA. https://doi.org/10.21606/drs.2024.862
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Research Paper