Abstract
In an increasingly complex landscape, designers grapple with unprecedented uncertainty, often exacerbated by inherent biases and implicit assumptions. Our formative study introduces "Anticipate," a tool that uses Large Language Models (LLMs) to interrogate these hidden presumptions and mitigate uncertainty. A subsequent study demonstrates that LLMs can critically challenge design ideas, elucidate underlying thought patterns, and expose biases, thereby preempting undesirable outcomes. Importantly, we employ specific input-framing techniques to minimize the risk of LLM-induced biases and hallucinations in decision-making. Collectively, these methods aim to attenuate both designer and algorithmic biases, mitigating the perpetuation of adverse societal trends.
Keywords
large language models; cognitive bias; reflection; implicit assumptions
DOI
https://doi.org/10.21606/drs.2024.1367
Citation
Muhs, N., and Stankowski, A. (2024) Leveraging LLMs for Reflection: Approaches to Mitigate Assumptions within the Design Process, in Gray, C., Ciliotta Chehade, E., Hekkert, P., Forlano, L., Ciuccarelli, P., Lloyd, P. (eds.), DRS2024: Boston, 23–28 June, Boston, USA. https://doi.org/10.21606/drs.2024.1367
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Research Paper
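To make the abstract's mention of input framing concrete, the sketch below shows one way such a framing prompt could be constructed. This is a minimal illustration under our own assumptions: the prompt wording, function name, and example design idea are hypothetical and do not come from the paper.

```python
# Minimal sketch of one possible "input framing" for reflective critique.
# All prompt wording and names here are our own assumptions, not the
# authors' implementation.

REFLECTIVE_FRAME = (
    "You are a critical design reviewer, not an idea generator.\n"
    "For the design idea below:\n"
    "1. List the implicit assumptions it rests on.\n"
    "2. For each assumption, note who it might exclude or disadvantage.\n"
    "3. Phrase each point as a question back to the designer, and write\n"
    "   'uncertain' wherever you lack evidence, rather than asserting facts.\n"
    "\n"
    "Design idea: {idea}\n"
)

def frame_design_idea(idea: str) -> str:
    """Wrap a raw design idea in the reflective framing above before
    sending it to an LLM, so the model critiques rather than affirms."""
    return REFLECTIVE_FRAME.format(idea=idea)

if __name__ == "__main__":
    print(frame_design_idea("A voice-only smart-home assistant for elderly users"))
```

Asking the model to phrase critiques as questions and to mark uncertainty explicitly is one plausible way to reduce the risk that its own confident-sounding biases or hallucinations steer the designer's decisions.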