Abstract
Advancements in affective computing highlight the growing potential of virtual reality (VR)–based conversational agents to provide emotional support during distressing experiences. This pilot study employed a convergent mixed-methods design to examine how a VR-based empathetic agent influences users’ emotional states, perceived empathy, and trust after recalling negative personal memories. Five participants completed an emotion-induction task followed by a 15-minute supportive dialogue with an AI agent in an immersive VR environment. Quantitative measures indicated general mood improvement with a slight rise in tension, likely reflecting residual arousal rather than discomfort with the agent. Qualitative analysis identified three key themes: technical friction and authenticity gaps, surface empathy with limited depth, and dynamic trust calibration shaped by the agent’s perceived role. Integrating these findings with recent scholarship on the companionship–alienation dialectic, the study underscores the need for emotionally credible VR design that balances technological fluency with users’ deeper psychological needs. Design implications include adaptive pacing, transparent emotional limits, and support for user agency within empathic AI interactions.
Keywords
Affective computing; Empathy; Virtual reality; Human-AI interaction
DOI
https://doi.org/10.21606/iasdr.2025.970
Citation
Chiu, P., University, K., Schöniger, T., University, C.K., and Wu, C. (2025) Empathy in the Headset: Exploring Affective Computing through AI in Virtual Reality, in Chang, C.-Y., and Hsu, Y. (eds.), IASDR 2025: Design Next, 02-05 December, Taiwan. https://doi.org/10.21606/iasdr.2025.970
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Track 4 - Human-Centered AI
Empathy in the Headset: Exploring Affective Computing through AI in Virtual Reality