Abstract
As AI technologies increasingly enter ethically sensitive domains like healthcare, questions emerge about how to design for trust, transparency, and appropriate human-AI interaction. This paper examines how imperfection in AI systems, when made interpretable and ethically governed, can become a design feature rather than a flaw, especially in the context of dementia care. Situated in Taiwan’s rapidly ageing demographic, the study engages with the tension between technological efficiency and relational care. Through a quantitative survey (n=70) and statistical analysis, it interrogates how design strategies such as 'controllable imperfection' and 'exiting mechanisms' influence user trust, emotional attachment, and dependency anxiety. Findings show that moderate, explainable imperfections, when paired with transparent withdrawal protocols, can mitigate ethical concerns while reinforcing human-AI relationality. This research extends conventional TAM models by foregrounding design, policy, and ethical nuance, arguing for a recalibration of trust that moves beyond precision towards situated transparency. It contributes a practical framework for rethinking AI reliability in care contexts: less about faultlessness, more about interpretability and risk-aware collaboration.
Keywords
Dementia care; AI Trust and Acceptance; Human-centred design; Ethical considerations
DOI
https://doi.org/10.21606/iasdr.2025.211
Citation
Lin, C., and Nguyen, M. (2025) Designing AI Based Service for Dementia Care Through Innovation Management Lens: A Model of Trust through Imperfect Technology, in Chang, C.-Y., and Hsu, Y. (eds.), IASDR 2025: Design Next, 02-05 December, Taiwan. https://doi.org/10.21606/iasdr.2025.211
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Track 9 - Healthcare Design
Designing AI Based Service for Dementia Care Through Innovation Management Lens: A Model of Trust through Imperfect Technology