Abstract
This paper presents a novel theoretical model conceptualizing the Sense of Agency (SoA) as foundational to ethical and effective human-AI collaboration. Moving beyond instrumentalist views of AI, we frame human-AI interaction as a dynamic, dyadic process wherein both humans and AI systems continuously assess, interpret, and adapt to perceptions of their own agency and that of their partner in real time. Grounded in philosophy of mind (intentional agency), psychological principles (Self-Determination Theory (SDT), equifinality), and systems engineering, our model introduces a quadripartite agency framework: the human's first-person SoA, the human's perception of AI agency, the AI's computational self-agency, and the AI's estimation of human agency. We advance nine testable propositions (P1–P9) examining how volitional action, equifinality, anthropomorphism, adaptive intelligence, task domain, and design features (e.g., transparency, controllability) dynamically shape agency perceptions and collaborative outcomes. Key insights reveal that AI systems function as active participants in co-constructing agency, with adaptive intelligence (P7), which enables AI to detect and respond to fluctuations in human agency, proving critical for sustaining trust and collaboration. Design implications prioritize transparency, meaningful controllability, domain sensitivity (e.g., human primacy in existential tasks), and ethically calibrated anthropomorphism. This model provides a roadmap for designing AI systems that augment human capabilities while preserving autonomy within interdependent socio-technical ecosystems.
Keywords
Sense of agency; Human-AI collaboration; Adaptive intelligence; Anthropomorphism
DOI
https://doi.org/10.21606/iasdr.2025.469
Citation
Liou, S., Chiu, C., Sibo, I.P., Chen, Y., Wang, I.J., Lin, S., Ai, S., and Chang, C. (2025) Modeling the Dynamic Agency in Human-AI Collaboration, in Chang, C.-Y., and Hsu, Y. (eds.), IASDR 2025: Design Next, 02-05 December, Taiwan. https://doi.org/10.21606/iasdr.2025.469
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Track 4 - Human-Centered AI
Modeling the Dynamic Agency in Human-AI Collaboration