Abstract
AI voice assistants promise greater accessibility, yet misalignments in their design can reproduce systemic bias. This study examines the usability of AI voice assistants for blind and low-vision users through a multi-method qualitative approach. We combined a literature review with observation-assisted usability sessions (n = 23) and semi-structured interviews (n = 8) with participants in China and the UK. Thematically analysing the corpus in NVivo 14, we identified four recurring challenges: elevated task error rates; limited recognition of diverse accents alongside vague or inconsistent auditory feedback; interface barriers that embed visual dependencies; and a resulting sense of exclusion in human–machine interaction. Together, these patterns surface what we frame as hidden “design violence”: subtle design choices that impose outsized burdens on marginalized users. Situating the results within inclusive design and algorithmic justice, we introduce an Inclusive AI Assistance Technologies Design framework — Hear, Research, Make — and a set of design-focused, checkable criteria that make accessibility auditable in practice (e.g., non-visual-first journeys, truthful read-backs with one-utterance repair, stateful continuity, and consentful proactivity). The work contributes a nuanced lens on how voice interaction can be re-aligned with social equity imperatives and offers a practical pathway for teams to measure and improve accessibility release by release.
Keywords
AI voice assistants; Accessibility; Algorithmic bias; Voice interaction; Inclusive design; Human-in-the-loop; Design justice
DOI
https://doi.org/10.21606/iasdr.2025.110
Citation
Meng, J., Zhang, Y., and Lin, Z. (2025) Design for All: Mitigating Systemic Bias and Hidden Violence in AI Voice Assistants for Visually Impaired Users, in Chang, C.-Y., and Hsu, Y. (eds.), IASDR 2025: Design Next, 02-05 December, Taiwan. https://doi.org/10.21606/iasdr.2025.110
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Track 4 - Human-Centered AI
Design for All: Mitigating Systemic Bias and Hidden Violence in AI Voice Assistants for Visually Impaired Users