Embodied exploration of deep latent spaces in interactive dance-music performance

Sarah NABI

In recent years, significant advances have been made in deep learning models for audio generation, offering promising tools for musical creation. In this work, we investigate the use of deep audio generative models in interactive dance-music performance. We adopted a performance-led research design approach, establishing an art-research collaboration between a researcher/musician and a dancer. First, we describe our motion-sound interactive system integrating a deep audio generative model and propose three methods for embodied exploration of deep latent spaces. Then, we detail the creative process for building the performance, centered on the co-design of the system. Finally, we report feedback from interviews with the dancer and discuss the results and perspectives. The code implementation is publicly available on our GitHub.

9th International Conference on Movement and Computing