Abstract
This paper reports on the Lab that the authors conducted during the DRS2024 conference. Through the novel concept of the "Auditory Footprint," the authors aimed to explore and highlight the perceptual and ecological dimensions of sound events in both private and public spaces. During the DRS2024 Lab, multidisciplinary experts explored AI's capacity to interpret and influence human interactions with soundscapes, proposing speculative use cases and addressing challenges such as subjective perception, equitable data collection, and ethical considerations. The findings advocate for the responsible integration of SoundAI technologies, aiming to foster accessibility, environmental awareness, and community well-being while mitigating risks. Future work will formalize frameworks for AI-based sound applications and explore policy and technical feasibility.
Keywords
soundscape; machine listening; artificial intelligence; human-data interaction
DOI
https://doi.org/10.21606/drs.2024.1550
Citation
Lenzi, S., Özcan, E., Mora, S., Mazzarello, M., Haatveit, A., and Duarte, F. (2024) What in the world do we hear? Understanding public and private spaces through SoundAI, in Gray, C., Hekkert, P., Forlano, L., Ciuccarelli, P. (eds.), DRS2024: Boston, 23–28 June, Boston, USA. https://doi.org/10.21606/drs.2024.1550
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Conference Track
Labs