Abstract

As AI technologies increasingly enter ethically sensitive domains like healthcare, questions emerge about how to design for trust, transparency, and appropriate human-AI interaction. This paper examines how imperfection in AI systems, when made interpretable and ethically governed, can become a design feature rather than a flaw, especially in the context of dementia care. Situated in Taiwan's rapidly ageing demographic, the study engages with the dual tensions between technological efficiency and relational care. Through a quantitative survey (n=70) and statistical analysis, it interrogates how design strategies such as 'controllable imperfection' and 'exiting mechanisms' influence user trust, emotional attachment, and dependency anxiety. Findings show that moderate, explainable imperfections, when paired with transparent withdrawal protocols, can mitigate ethical concerns while reinforcing human-AI relationality. This research extends conventional TAM models by foregrounding design, policy, and ethical nuance, arguing for a recalibration of trust that moves beyond precision towards situated transparency. It contributes a practical framework for rethinking AI reliability in care contexts: less about faultlessness, more about interpretability and risk-aware collaboration.

Keywords

Dementia care; AI Trust and Acceptance; Human-centred design; Ethical considerations

Creative Commons License

Creative Commons Attribution-NonCommercial 4.0 International License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License

Conference Track

Track 9 - Healthcare Design

Dec 2nd, 9:00 AM to Dec 5th, 5:00 PM

Designing AI Based Service for Dementia Care Through Innovation Management Lens: A Model of Trust through Imperfect Technology
