How your livecoding system works (running code, etc.)

Out of naked self-interest, I started this thread because I’m writing something on how livecoding systems work: how code is triggered, compiled, and so on. This is probably of interest to more than just me, and I thought this thread could be a good place to gather all the descriptions together.

I particularly like livecodelab’s technical description; it gives you an idea of what I’m looking for. The topic What is this live coding system that you have made? also had some great info, Chris Nash has some writing on the subject, and Sam Aaron and Alan Blackwell have a paper that describes Sonic Pi in quite a lot of detail.

I’ll add my own system here shortly…

Everything I’m doing at the moment sits on top of Clojure/ClojureScript and the pile of network REPL support in its ecosystem. The front end is Emacs and CIDER. The back end, for ClojureScript at least, is Figwheel Main, which supports live coding into the browser or into Node.js. (Straight Clojure/Java is more robust than ClojureScript/JavaScript but less popular.)

The current version of Gibber takes in JavaScript and codegens a JavaScript audio callback. When you add a new synth, effect, etc., a new audio callback is compiled and replaces the old one. You can do some weird stuff with this, like reconfiguring the audio graph three times on three consecutive samples, but more practically it provides feedback loops, sample-accurate scheduling, and audio-rate modulation of timing. Executed end-user code is typically stored in a function that is scheduled to run at the start of the next musical measure. Graphics are mainly provided by three.js, with a layer of abstractions for sequencing and multimodal mapping that makes working with audio objects and graphical objects almost identical.
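
To make that concrete, here’s a rough sketch of the callback-swapping and measure-boundary scheduling idea. None of these names (`buildCallback`, `runAtNextMeasure`, etc.) come from Gibber itself; it’s just the general pattern.

```js
// Hypothetical sketch: regenerate a single audio callback whenever the graph
// changes, and defer end-user code until the next measure boundary.
// None of these names come from Gibber's actual API.

let graph = []                       // active synths / effects
let audioCallback = () => 0          // the callback the audio driver calls

const buildCallback = () => {
  // Gibber generates JavaScript source for the new callback; here we simply
  // close over a snapshot of the current graph.
  const nodes = graph.slice()
  return () => nodes.reduce((sum, node) => sum + node.tick(), 0)
}

const addSynth = synth => {
  graph.push(synth)
  audioCallback = buildCallback()    // the new callback replaces the old one
}

// End-user code is queued and run at the start of the next musical measure.
const queued = []
const runAtNextMeasure = fn => queued.push(fn)
const onMeasureBoundary = () => { while (queued.length) queued.shift()() }
```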

The alpha version of Gibber that I’m working on uses JavaScript with a bit of extra parsing to enable operator overloading. There’s also a PEG for Tidal’s pattern language. Audio runs in a separate thread, but there’s a bunch of metaprogramming that tries to make it so end users don’t have to worry about this. The audio callback is still code-generated when the graph is changed; however, there is a lot more optimization happening under the hood than in the previous engine, like using a single memory heap for the callback. The graphics are provided by marching.js, which compiles GLSL shaders based on end-user JavaScript code.
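
As a loose illustration of the codegen-plus-shared-heap idea (definitely not the actual Gibber implementation), you could build the callback source as a string and compile it with `new Function`, with every node reading and writing a slot in one preallocated array:

```js
// Hedged sketch of compiling a generated callback against a single heap.
const heap = new Float64Array(1024)   // one shared block for all node state

const codegenCallback = nodes => {
  // Pretend each node contributes a line of generated source that reads and
  // writes its own slot in `heap`; real codegen would be far more involved.
  const body = nodes
    .map((n, i) => `heap[${i}] = Math.sin(phase * ${n.freq} * 2 * Math.PI);`)
    .join('\n')
  const src = `
    ${body}
    let out = 0;
    for (let i = 0; i < ${nodes.length}; i++) out += heap[i];
    return out / ${nodes.length || 1};
  `
  return new Function('heap', 'phase', src)
}

const callback = codegenCallback([{ freq: 220 }, { freq: 330 }])
console.log(callback(heap, 0.25))   // one generated sample
```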

There’s an extra parsing stage in both versions (and also in gibberwocky) to mark up end-user JS tokens (and tokens in Tidal patterns) for in situ visualizations / annotations, à la https://charlieroberts.github.io/annotationsAndVisualizations/
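
A toy version of that markup pass might just record source ranges for interesting tokens so the editor layer can draw annotations over them. A real implementation would use a proper parser; these names are made up for the sketch.

```js
// Record {text, start, end} for identifiers and numbers so an editor layer
// can decorate those ranges (e.g. show the current value of a sequenced
// parameter in place).
const markupTokens = code => {
  const tokens = []
  const re = /[A-Za-z_$][\w$]*|\d+(\.\d+)?/g
  let m
  while ((m = re.exec(code)) !== null) {
    tokens.push({ text: m[0], start: m.index, end: m.index + m[0].length })
  }
  return tokens
}

console.log(markupTokens('kick.trigger( 0.75 )'))
```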

For some reason I always seem to come back to the same basic structure that has been laid out in @yaxu’s TidalCycles/Dirt system:

  1. An event generator, usually implemented in an interpreted language (first Python, more recently Lisp). All the code interaction between user and system happens here, and how the events (which in my case are parametric descriptions of sound) are generated is usually my main focus.
  2. A one-way channel (mostly OSC) for the event generator to send events to a Synth/Sampler that turns the parametric descriptions into sound.
  3. A collection of pre-compiled parametric synths and samplers (usually ScSynth). (A condensed sketch of the whole pipeline follows below.)
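
Here’s that condensed sketch of the three-part structure, written in JavaScript purely for illustration (my actual event generators are in Python or Lisp). `sendOSC` is a hypothetical stand-in for a real OSC library, and the `/s_new` message only loosely follows SuperCollider’s convention.

```js
const sendOSC = (address, ...args) => {
  // A real implementation would serialize an OSC packet and send it over UDP.
  console.log(address, args)
}

// 1. The event generator: produces parametric descriptions of sound.
function* pattern () {
  const scale = [0, 3, 5, 7, 10]
  while (true) {
    yield {
      synth: 'pluck',
      note: 48 + scale[Math.floor(Math.random() * scale.length)],
      amp: 0.3
    }
  }
}

// 2. The one-way channel: each event travels to the synth server...
const events = pattern()
setInterval(() => {
  const e = events.next().value
  // 3. ...where a pre-compiled synth turns the parameters into sound.
  sendOSC('/s_new', e.synth, -1, 0, 0, 'note', e.note, 'amp', e.amp)
}, 250)
```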

Recently I’ve been changing things a bit: I created a little ad-hoc pattern parser and my own synth, which is compiled to WebAssembly and runs inside a JavaScript AudioWorklet; the communication channel in that case is JavaScript’s message facilities. But the basic structure still holds. Sometimes it feels a bit uncreative to always end up with the same structure, but then again, fancy synthesis etc. has never been my focus, and I put way more effort into how the music is generated as a collection of events, how the events are generated, and how they form a structure.
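
For the AudioWorklet variant, the message-based channel looks roughly like this; the processor name, message shape, and the handoff to the WebAssembly synth are all invented for the sketch.

```js
// main thread ---------------------------------------------------------------
async function startAudio () {
  const ctx = new AudioContext()
  await ctx.audioWorklet.addModule('synth-processor.js')
  const node = new AudioWorkletNode(ctx, 'synth-processor')
  node.connect(ctx.destination)
  // Events from the pattern parser travel over the node's MessagePort.
  node.port.postMessage({ type: 'event', note: 60, amp: 0.2 })
}

// synth-processor.js ----------------------------------------------------------
class SynthProcessor extends AudioWorkletProcessor {
  constructor () {
    super()
    this.pending = []
    this.port.onmessage = e => this.pending.push(e.data)
  }
  process (inputs, outputs) {
    // A real version would hand this.pending to the WebAssembly synth and let
    // it fill the output buffers; here we just consume the events and output
    // silence.
    this.pending.length = 0
    for (const channel of outputs[0]) channel.fill(0)
    return true
  }
}
registerProcessor('synth-processor', SynthProcessor)
```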

I use a Squeak Smalltalk virtual machine running as a panel in the Chrome DevTools (the extension that makes it work is in the Chrome Web Store). It has a two-way bridge with the underlying JavaScript environment, and can use all of the Chrome extension and debugger APIs.

I write web apps that are hybrids of Smalltalk and JavaScript, with the ability to use Smalltalk block closures as JS promises and callbacks. I’ve written mashups with several interesting JS frameworks, including Mozilla’s A-Frame VR framework, VueJS, and Hydra graphics.

Smalltalk has its own process scheduling model, so I work with processes that evolve over time and can be paused and resumed, rather than just evaluating standalone bits of code repeatedly.

I would love to know a bit more about how you’ve gone about doing this; I had an idea to try to integrate with Firefox’s dev tools but gave up. Are you doing a talk on this anywhere, or a write-up anytime soon?

A write-up and talk/demo would be a good idea. A Chrome DevTools extension is just a Chrome extension that has access to the chrome.devtools APIs, via permissions granted in the extension manifest. Getting the SqueakJS virtual machine itself into the extension was easy, since it’s pure JS. The system also runs in a normal webpage; see https://caffeine.js.org/.
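
For anyone curious, the rough shape of such an extension looks like this (file names and the panel title are just illustrative):

```js
// manifest.json (fragment):
// {
//   "manifest_version": 3,
//   "devtools_page": "devtools.html"
// }

// devtools.js, loaded by devtools.html: create the panel that hosts the VM page.
chrome.devtools.panels.create(
  'Squeak',        // tab title shown in DevTools
  'icon.png',      // icon
  'panel.html',    // page loaded into the panel, e.g. the SqueakJS VM
  panel => console.log('panel created', panel)
)

// Code can also be evaluated in the inspected page, which is one half of the
// two-way bridge between the VM and the page's JavaScript.
chrome.devtools.inspectedWindow.eval('document.title', (result, err) => {
  if (!err) console.log('inspected page title:', result)
})
```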

thanks,
Craig