Abstract

A key challenge in the design of AI systems is how to support people in understanding them. We address this challenge by positioning explanations in everyday life, within ongoing relations between people and artificial agents. By reorienting explainability through more-than-human design, we call for a new approach that considers both people and artificial agents as active participants in constructing understandings. To articulate such an approach, we first review the assumptions underpinning the premise of explaining AI. We then conceptualize a shift from explanations to shared understandings, which we characterize as situated, dynamic, and performative. We conclude by proposing two design strategies to support shared understandings: looking across AI and exposing AI failures. We argue that these strategies can help designers reveal the hidden complexity of AI (e.g., positionality and infrastructures), and thus support people in understanding agents' capabilities and limitations in the context of their own lives.

Keywords

explainability, artificial intelligence, everyday life, more-than-human design

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Conference Track

Research Paper

Jun 25th, 9:00 AM

From explanations to shared understandings of AI

