AudioStellar: Open source data-driven musical instrument for latent sound structure discovery and music experimentation

Hi guys

A couple of weeks ago we released the first public version of our open source project: AudioStellar.

We’ve put together a basic machine learning pipeline for visualizing a collection of short audio samples in an interactive 2D point map we like to call “sound space”.

Any sound artist can create a sound space using their own sounds and explore the learned representation. We propose three creative modes for playing samples, including casting moving particles and creating constellation loops.
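To give an idea of the kind of pipeline described above, here is a minimal sketch (not AudioStellar's actual code): extract a spectral feature per short sample and project the collection onto a 2D map with t-SNE. The folder path and feature choice (MFCCs) are assumptions for illustration.

```python
import glob
import numpy as np
import librosa
from sklearn.manifold import TSNE

features = []
paths = sorted(glob.glob("samples/*.wav"))  # hypothetical folder of short samples
for path in paths:
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # per-frame feature vectors
    features.append(mfcc.mean(axis=1))                  # average over time -> one vector per sample

# Project the feature vectors onto a 2D "sound space"
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(np.array(features))
for path, (x, y2) in zip(paths, coords):
    print(f"{path}: ({x:.2f}, {y2:.2f})")
```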

It’s made in C++ using openFrameworks and Python.

Code and binaries: https://gitlab.com/ayrsd/audiostellar/

Video: https://www.youtube.com/watch?v=ly11EhW7-T0

We want to incorporate OSC as an interface to other environments (like Tidal, for example), so feedback is important for designing an API that can be useful in different scenarios.
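As a starting point for that discussion, here is a rough sketch of what driving AudioStellar over OSC could look like from Python using python-osc. The host, port, address patterns and arguments here are purely hypothetical; the actual API is exactly what we’d like feedback on.

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # hypothetical host/port

# e.g. trigger the sample nearest to a point in the 2D sound space
client.send_message("/audiostellar/play_at", [0.42, 0.77])

# e.g. spawn a moving particle with a direction (degrees) and speed
client.send_message("/audiostellar/particle", [0.1, 0.9, 45.0, 0.5])
```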
