Abstract

AI voice assistants promise greater accessibility, yet misalignments in their design can reproduce systemic bias. This study examines the usability of AI voice assistants for blind and low-vision users through a multi-method qualitative approach. We combined a literature review with observation-assisted usability sessions (n = 23) and semi-structured interviews (n = 8) with participants in China and the UK. Thematically analysing the corpus in NVivo 14, we identified four recurring challenges: elevated task error rates; limited recognition of diverse accents alongside vague or inconsistent auditory feedback; interface barriers that embed visual dependencies; and a resulting sense of exclusion in human–machine interaction. Together, these patterns surface what we frame as hidden “design violence”: subtle design choices that impose outsized burdens on marginalized users. Situating the results within inclusive design and algorithmic justice, we introduce an Inclusive AI Assistance Technologies Design framework (Hear, Research, Make) and a set of design-focused, checkable criteria that make accessibility auditable in practice (e.g., non-visual-first journeys, truthful read-backs with one-utterance repair, stateful continuity, and consentful proactivity). The work contributes a nuanced lens on how voice interaction can be re-aligned with social equity imperatives and offers a practical pathway for teams to measure and improve accessibility release by release.

Keywords

AI voice assistants; Accessibility; Algorithmic bias; Voice interaction; Inclusive design; Human-in-the-loop; Design justice

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Conference Track

Track 4 - Human-Centered AI

Dec 2nd, 9:00 AM – Dec 5th, 5:00 PM

Design for All: Mitigating Systemic Bias and Hidden Violence in AI Voice Assistants for Visually Impaired Users
