Abstract

Machine Learning (ML) has the potential to revolutionize healthcare by enhancing risk prediction and reducing clinical workloads. However, because it affects risk assessment, integrating ML into clinical practice presents several ethical challenges. This study focuses on enabling clinicians to express their ethical values about ML-powered clinical decision support systems, so that these values can be considered during the design phase. Grounded in human-centered AI and value-sensitive design, we introduce a tangible toolkit that assists clinicians in visualizing the stages of interaction with an AI decision support system in their daily practice, and in articulating the ethical values and concerns that emerge at each stage. Preliminary tests with four clinicians on a cardiac risk prediction ML model case study showcase the toolkit's potential to foster discussion of situated ethical considerations. This research provides a practical tool for designers and clinicians to influence the ethical development of AI-driven healthcare solutions and demonstrates its potential for meaningful contributions to such processes.

Keywords

ethical values in AI; value-sensitive design; clinical decision support systems; human-AI interaction

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Conference Track

Research Paper

Jun 23rd, 9:00 AM – Jun 28th, 5:00 PM

A tangible toolkit to uncover clinicians' ethical values about AI clinical decision support systems
