Abstract
This research explores the potential of semantic sonification as a method for enhancing the interpretation of abstract visual art in exhibition contexts. By translating contextual elements, such as historical background, artistic intent, and socio-political context, into structured musical layers, the study investigates whether system-generated music can support meaning-making and emotional engagement among viewers. A custom interactive system was developed to capture visual artworks, analyze their semantic attributes using AI, and generate short musical pieces that grow in complexity with user interaction. The study contributes to multisensory exhibition design by proposing a sonification-based approach to improving art accessibility and engagement. Future work will involve adaptive sound layering, a larger participant base, and real-world deployment to further evaluate semantic effectiveness and user experience.
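As a rough illustration of the pipeline summarized above (AI-based semantic analysis feeding layered music generation that deepens with interaction), the following minimal Python sketch maps a few semantic attributes onto musical layers that unlock as the interaction level rises. Everything here, including SemanticProfile, the valence/tension/era attributes, and the mapping rules, is an illustrative assumption rather than the authors' implementation.

```python
# Hypothetical sketch of semantic sonification: semantic attributes of an
# artwork are mapped onto structured musical layers, and the number of
# active layers grows with user interaction. Attribute names, scales, and
# mapping rules are illustrative assumptions, not the paper's system.
from dataclasses import dataclass


@dataclass
class SemanticProfile:
    """Attributes an AI analysis step might extract (hypothetical)."""
    valence: float    # 0.0 (somber) .. 1.0 (joyful)
    tension: float    # 0.0 (calm)   .. 1.0 (agitated)
    era_modern: bool  # historical-background flag


MINOR = [0, 2, 3, 5, 7, 8, 10]  # natural minor scale (semitone offsets)
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # major scale (semitone offsets)


def build_layers(profile: SemanticProfile, interaction_level: int):
    """Return lists of MIDI note numbers; more interaction adds layers."""
    scale = MAJOR if profile.valence >= 0.5 else MINOR
    root = 60  # middle C (MIDI note number)
    # Layer 1: harmonic pad, a root triad, always present.
    layers = [[root + scale[d] for d in (0, 2, 4)]]
    if interaction_level >= 1:
        # Layer 2: melodic line whose note density scales with tension.
        steps = 4 + int(profile.tension * 8)
        layers.append([root + scale[i % len(scale)] for i in range(steps)])
    if interaction_level >= 2:
        # Layer 3: contextual accent, e.g. a higher register for modern works.
        shift = 12 if profile.era_modern else -12
        layers.append([root + shift + scale[0], root + shift + scale[4]])
    return layers


if __name__ == "__main__":
    profile = SemanticProfile(valence=0.3, tension=0.7, era_modern=True)
    for level in range(3):
        print(f"interaction level {level}: {build_layers(profile, level)}")
```

In this sketch the choice of a rule-based mapping is purely for clarity; the paper's AI-driven analysis could equally feed a generative music model, with the same layered structure controlling how much contextual information is sounded at once.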
Keywords
Semantic sonification; Multisensory exhibition; AI-driven interaction; User experience
DOI
https://doi.org/10.21606/iasdr.2025.1017
Citation
Xin, C., Quintero, J.C., and Martinez Nimi, H. (2025) Semantic Sonification of Visual Art: Translating Contextual Information into Structured Musical Layers for Multisensory Exhibition Experiences, in Chang, C.-Y., and Hsu, Y. (eds.), IASDR 2025: Design Next, 02-05 December, Taiwan. https://doi.org/10.21606/iasdr.2025.1017
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Track 3 - Design, Art & Technology
Semantic Sonification of Visual Art: Translating Contextual Information into Structured Musical Layers for Multisensory Exhibition Experiences