Abstract

This paper presents a novel theoretical model conceptualizing the Sense of Agency (SoA) as foundational to ethical and effective human-AI collaboration. Moving beyond instrumentalist views of AI, we frame human-AI interaction as a dynamic, dyadic process wherein both humans and AI systems continuously assess, interpret, and adapt to perceptions of their own agency and that of their partner in real time. Grounded in philosophy of mind (intentional agency), psychological principles (Self-Determination Theory (SDT), equifinality), and systems engineering, our model introduces a quadripartite agency framework: the human's first-person SoA, the human's perception of AI agency, the AI's computational self-agency, and the AI's estimation of human agency. We advance nine testable propositions (P1–P9) examining how volitional action, equifinality, anthropomorphism, adaptive intelligence, task domain, and design features (e.g., transparency, controllability) dynamically shape agency perceptions and collaborative outcomes. Key insights reveal that AI systems function as active participants in co-constructing agency, with adaptive intelligence (P7), which enables AI to detect and respond to fluctuations in human agency, proving critical for sustaining trust and collaboration. Design implications prioritize transparency, meaningful controllability, domain sensitivity (e.g., human primacy in existential tasks), and ethically calibrated anthropomorphism. This model provides a roadmap for designing AI systems that augment human capabilities while preserving autonomy within interdependent socio-technical ecosystems.

Keywords

Sense of agency; Human-AI collaboration; Adaptive intelligence; Anthropomorphism

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Conference Track

Track 4 - Human-Centered AI

Dec 2nd, 9:00 AM Dec 5th, 5:00 PM

Modeling the Dynamic Agency in Human-AI Collaboration

