Abstract

This research explores the potential of semantic sonification as a method to enhance the interpretation of abstract visual art in exhibition contexts. By translating contextual elements—such as historical background, artistic intent, and socio-political context—into structured musical layers, the study investigates whether system-generated music can support meaning-making and emotional engagement among viewers. A custom interactive system was developed to capture visual artworks, analyze their semantic attributes using AI, and generate short musical pieces that evolve in complexity based on user interaction. The study contributes to the field of multisensory exhibition design by proposing a sonification-based approach to enhance art accessibility and engagement. Future work will involve adaptive sound layering, a larger participant base, and real-world deployment to further evaluate semantic effectiveness and user experience.

Keywords

Semantic sonification; Multisensory exhibition; AI-driven interaction; User experience

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Conference Track

Track 3 - Design, Art & Technology

Dec 2nd, 9:00 AM to Dec 5th, 5:00 PM

Semantic Sonification of Visual Art: Translating Contextual Information into Structured Musical Layers for Multisensory Exhibition Experiences

