Abstract
A key challenge in the design of AI systems is how to support people in understanding them. We address this challenge by positioning explanations in everyday life, within ongoing relations between people and artificial agents. By reorienting explainability through more-than-human design, we call for a new approach that considers both people and artificial agents as active participants in constructing understandings. To articulate such an approach, we first review the assumptions underpinning the premise of explaining AI. We then conceptualize a shift from explanations to shared understandings, which we characterize as situated, dynamic, and performative. We conclude by proposing two design strategies to support shared understandings, namely looking across AI and exposing AI failures. We argue that these strategies can help designers reveal the hidden complexity of AI (e.g., positionality and infrastructures), and thus support people in understanding agents' capabilities and limitations in the context of their own lives.
Keywords
explainability, artificial intelligence, everyday life, more-than-human design
DOI
https://doi.org/10.21606/drs.2022.773
Citation
Nicenboim, I., Giaccardi, E., and Redström, J. (2022) From explanations to shared understandings of AI, in Lockton, D., Lenzi, S., Hekkert, P., Oak, A., Sádaba, J., Lloyd, P. (eds.), DRS2022: Bilbao, 25 June - 3 July, Bilbao, Spain. https://doi.org/10.21606/drs.2022.773
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Research Paper