Abstract

Advancements in affective computing highlight the growing potential of virtual reality (VR)–based conversational agents to provide emotional support during distressing experiences. This pilot study employed a convergent mixed-methods design to examine how a VR-based empathetic agent influences users’ emotional states, perceived empathy, and trust after recalling negative personal memories. Five participants completed an emotion-induction task followed by a 15-minute supportive dialogue with an AI agent in an immersive VR environment. Quantitative measures indicated general mood improvement with a slight rise in tension, likely reflecting residual arousal rather than discomfort with the agent. Qualitative analysis identified three key themes: technical friction and authenticity gaps, surface empathy with limited depth, and dynamic trust calibration shaped by the agent’s perceived role. Integrating these findings with recent scholarship on the companionship–alienation dialectic, the study underscores the need for emotionally credible VR design that balances technological fluency with users’ deeper psychological needs. Design implications include adaptive pacing, transparent emotional limits, and support for user agency within empathic AI interactions.

Keywords

Affective computing; Empathy; Virtual reality; Human-AI interaction

Creative Commons License

Creative Commons Attribution-NonCommercial 4.0 International License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License

Conference Track

Track 4 - Human-Centered AI

Dec 2nd, 9:00 AM Dec 5th, 5:00 PM

Empathy in the Headset: Exploring Affective Computing through AI in Virtual Reality
