How your livecoding system works (running code, etc.)

Out of naked self-interest I started this thread: I'm writing something on how livecoding systems work - how code is triggered to be compiled and run, and so on. This is probably of interest to more than just me, and I thought this thread could be a good place to gather all the descriptions together.

To give you an idea of what I'm looking for, I particularly like livecodelab's technical description. The topic What is this live coding system that you have made? also had some great info, Chris Nash has written on the subject, and Sam Aaron and Alan Blackwell have a paper that describes Sonic Pi in quite a lot of detail.

I’ll add my own system here shortly…


Everything I'm doing at the moment sits on top of Clojure/ClojureScript and the pile of network REPL support in its ecosystem. The front end is Emacs and CIDER. The back end, for ClojureScript at least, is Figwheel Main, which supports live coding into the browser or into Node.js. (Straight Clojure/Java is more robust than ClojureScript/JavaScript, but less popular.)

The current version of Gibber takes in JavaScript and codegens a JavaScript audio callback. When you add a new synth, effect, etc., a new audio callback is compiled and replaces the old one. You can do some weird stuff with this, like reconfiguring the audio graph three times on three subsequent samples, but more practically it provides feedback loops, sample-accurate scheduling, and audio-rate modulation of timing. Executed end-user code is typically stored in a function that is scheduled for the start of the next musical measure. Graphics are mainly provided by three.js, with a layer of abstractions for sequencing and multimodal mapping that makes working with audio objects and graphical objects almost identical.
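For anyone unfamiliar with the approach, here's a minimal sketch of the idea (not Gibber's actual code; `graph`, `codegen`, and `tick` are made-up names): every change to the graph rebuilds the per-sample callback as a string and compiles it with `new Function`, and the audio driver always calls whatever the current callback is.

```javascript
// Hypothetical sketch of callback regeneration, not Gibber's real source.
let graph = []          // active synths / effects
let callback = () => 0  // current audio callback, swapped on every graph change

const codegen = () => {
  // build the body of a new per-sample callback from the current graph
  const lines = graph.map((node, i) => `const v${i} = nodes[${i}].tick()`)
  const sum   = graph.map((_, i) => `v${i}`).join(' + ') || '0'
  const body  = `${lines.join('\n')}\nreturn ${sum}`
  // compile it; the graph is passed in as the pre-bound `nodes` argument
  callback = new Function('nodes', body).bind(null, graph)
}

const add = node => { graph.push(node); codegen() }  // e.g. add(someSynth)

// the audio driver calls the *current* callback once per sample, so replacing
// `callback` reconfigures the signal graph with sample accuracy
const processBlock = out => {
  for (let i = 0; i < out.length; i++) out[i] = callback()
}
```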

The alpha version of Gibber that I'm working on uses JavaScript with a bit of extra parsing to enable operator overloading. There's also a PEG for Tidal's pattern language. Audio runs in a separate thread, but there's a bunch of metaprogramming that tries to make it so end users don't have to worry about this. The audio callback is still code-generated when the graph changes; however, there is a lot more optimization happening under the hood than in the previous engine, like using a single memory heap for the callback. The graphics are provided by marching.js, which compiles GLSL shaders based on end-user JavaScript code.
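As an illustration of the single-heap idea (again a sketch, not the actual alpha engine): all unit-generator state lives in slices of one pre-allocated Float32Array, so a regenerated callback just reads and writes heap slots and existing state survives the swap.

```javascript
// Illustrative only: one shared heap for all unit-generator state.
const heap = new Float32Array(1024)   // single pre-allocated block
let   top  = 0                        // next free slot

// reserve `n` floats of state on the heap and return the base offset
const alloc = n => { const base = top; top += n; return base }

// a phasor whose only state (its phase) lives in the shared heap
const makePhasor = (freq, sampleRate = 44100) => {
  const idx = alloc(1)                // heap[idx] holds the phase
  return () => {
    heap[idx] = (heap[idx] + freq / sampleRate) % 1
    return heap[idx]
  }
}

const p = makePhasor(220)
// because the generated callback only touches heap slots, replacing the
// callback when the graph changes leaves all existing state untouched
```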

There's an extra parsing stage in both versions (and also in gibberwocky) to mark up end-user JS tokens (and also tokens in Tidal patterns) for in situ visualizations / annotations, a la https://charlieroberts.github.io/annotationsAndVisualizations/

For some reason I always seem to come back to the same basic structure laid out in @yaxu's TidalCycles/Dirt system:

  1. An event generator, usually implemented in an interpreted language (first Python, more recently Lisp). All the code interaction between user and system happens here, and how the events (which in my case are parametric descriptions of sound) are generated is usually my main focus.
  2. A one-way channel (mostly OSC) for the event generator to send events to a Synth/Sampler that turns the parametric descriptions into sound.
  3. A collection of pre-compiled parametric synths and samplers (usually ScSynth).

Recently I've been changing things a bit: I created a little ad-hoc pattern parser and my own synth, which can be compiled to WebAssembly and runs inside a JavaScript AudioWorklet; the communication channel in that case is JavaScript's message facilities. But the basic structure still holds. Sometimes it feels a bit uncreative to always end up with the same structure, but then again, fancy synthesis has never been my focus, and I put far more effort into how the music is generated as a collection of events, how those events are generated, and how they form a structure.
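A minimal sketch of that web-based variant, using the standard Web Audio worklet APIs (the event fields and the 'synth-processor' name are made up): the event generator on the main thread posts parametric events over the worklet node's MessagePort, and the processor queues them for the synth.

```javascript
// --- main thread: the event generator side (inside an async function) ---
const ctx = new AudioContext()
await ctx.audioWorklet.addModule('synth-processor.js')
const node = new AudioWorkletNode(ctx, 'synth-processor')
node.connect(ctx.destination)

// a parametric description of a sound, sent down the one-way channel
node.port.postMessage({ time: ctx.currentTime + 0.5, freq: 220, dur: 0.25 })

// --- synth-processor.js: runs in the audio thread ---
class SynthProcessor extends AudioWorkletProcessor {
  constructor () {
    super()
    this.events = []
    this.port.onmessage = e => this.events.push(e.data)  // queue incoming events
  }
  process (inputs, outputs) {
    const out = outputs[0][0]
    // a real version would hand this.events to the (WASM) synth and render here
    for (let i = 0; i < out.length; i++) out[i] = 0
    return true  // keep the processor alive
  }
}
registerProcessor('synth-processor', SynthProcessor)
```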


I use a Squeak Smalltalk virtual machine running as a panel in the Chrome DevTools (the extension that makes it work is in the Chrome Web Store). It has a two-way bridge with the underlying JavaScript environment, and can use all of the Chrome extension and debugger APIs.

I write web apps that are hybrids of Smalltalk and JavaScript, with the ability to use Smalltalk block closures as JS promises and callbacks. I’ve written mashups with several interesting JS frameworks, including Mozilla’s A-Frame VR framework, VueJS, and Hydra graphics.

Smalltalk has its own process scheduling model, so I work with processes that evolve over time and can be paused and resumed, rather than just evaluating standalone bits of code repeatedly.

I would love to know a bit more about how you've gone about doing this; I had an idea to try to integrate with Firefox's dev tools but gave up. Are you doing a talk on this anywhere, or a write-up anytime soon?

A write-up and talk/demo would be a good idea. A Chrome DevTools extension is just a Chrome extension that has access to the chrome.devtools APIs, via permissions granted in the extension manifest. Getting the SqueakJS virtual machine itself into the extension was easy, since it’s pure JS. The system also runs in a normal webpage; see https://caffeine.js.org/.
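For anyone else trying this, the skeleton of a DevTools panel extension is tiny. This is the generic Chrome pattern rather than Caffeine's actual code: the manifest declares "devtools_page": "devtools.html", and that page registers a panel whose page could host the SqueakJS VM.

```javascript
// devtools.js, loaded by devtools.html (which the manifest points at via
// "devtools_page": "devtools.html"). Generic Chrome pattern, not Caffeine's code.
chrome.devtools.panels.create(
  'Smalltalk',   // title shown in the DevTools tab strip
  '',            // icon path (optional)
  'panel.html',  // page loaded inside the panel; this could host the SqueakJS VM
  panel => {
    panel.onShown.addListener(win => {
      // `win` is the panel page's window; from here the extension can also
      // reach the inspected page via chrome.devtools.inspectedWindow.eval(...)
      console.log('panel shown', win.location.href)
    })
  }
)
```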

thanks,
Craig

I just posted a lengthy description of the rationale behind two different systems in this other thread. I also wrote a bit more about the first one on its repo page, and I talked about it a lot in the videos mentioned in the other post.

Hello there. I don't know if it's too late to reply to this thread, but I'd like to. I sometimes work on a live coding system and language that is primarily focused on 3D graphics as the main type of new media to play with. Code is written into an HTML element, where it is evaluated.

The system is responsible for drawing various 3D shapes (cube, sphere, torus, etc.) and setting their attributes such as color and texture. It's possible to set a combination of colors to make a model more colourful and more aesthetically pleasing to look at. Basic functions are available for the background, too.

At first I wanted to develop it in Rust, but after many attempts to control the environment I moved to JS, where I use THREE.js - sorry Rustaceans, but JS is much simpler!
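Not this project's actual API (the function names below are made up), but the core loop of such a system can be sketched with three.js in a few lines: read the code from an HTML element, evaluate it against a small drawing API, and keep rendering the scene.

```javascript
// Illustrative sketch only; `cube` and `background` are invented API names.
import * as THREE from 'three'

const scene    = new THREE.Scene()
const camera   = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100)
camera.position.z = 5
const renderer = new THREE.WebGLRenderer()
renderer.setSize(innerWidth, innerHeight)
document.body.appendChild(renderer.domElement)

// a tiny API exposed to the live-coded text
const api = {
  cube (size = 1, color = 'hotpink') {
    const mesh = new THREE.Mesh(
      new THREE.BoxGeometry(size, size, size),
      new THREE.MeshBasicMaterial({ color })
    )
    scene.add(mesh)
    return mesh
  },
  background (color) { scene.background = new THREE.Color(color) }
}

// evaluate whatever is currently in the editor element (e.g. on Ctrl+Enter)
const run = () => {
  const code = document.querySelector('#editor').value   // a <textarea>
  new Function(...Object.keys(api), code)(...Object.values(api))
}

const loop = () => { renderer.render(scene, camera); requestAnimationFrame(loop) }
loop()
```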

My future plans include applying animations to the screen and to models - modulating their location - and probably also reacting to sound (FFT analysis).

Development is still at an early stage.

See it on GitHub
Try it on a computer


Thanks for bumping the thread - it's inspired me to add something too 🙂

I've created Syntə (GitHub: SynteLang/SynteLang), a language and sound engine built with Go.
In essence it is similar to Pure Data and modular synthesis, but with a textual interface.
Listings (code) are entered via a terminal CLI; they are then parsed, launched into the sound engine, and displayed by another terminal program. It's a fairly simple process, which is a consequence of the deliberate simplicity of the language.
The syntax is simply operand/operator pairs, which are chained together to produce output.
One of the design goals was being able to live-code efficiently for performance, too. Part of this is providing easy-to-use abstractions for things like wav files, which require no setup - just add your wavs to the wavs directory before you begin.
The language does demand some knowledge of signal chains and synthesis, but that can be picked up as you go.
The running listings can be edited individually in any text editor and are recompiled and relaunched as soon as the main CLI detects a change in the file (once it is saved). This was actually far easier to implement than I expected, using the magic of Go's concurrency.
Ancillary adjustments can also be made via the CLI, such as muting/unmuting listings or changing a few parameters like the fade-out on exit.
The sound engine runs in a separate goroutine for efficiency and has a built-in frequency-dependent limiter, so there are no loud surprises. There is also a built-in mix function (a grouping of operators) to assist in setting levels.

The hardest part of it all is getting round to recording demo videos, which I am way behind on!
I wanted to add to the ecosystem something that went beyond mainly sequencing the triggering of samples, and it suits my purposes well. There are some tradeoffs, but the language is extensible by design (users can add their own functions), so it can be continually adapted and built upon.
Sometimes I feel I have strayed too far from the 'pattern' paradigm, but the ability to compose and shape pitch and amplitude over time (among many possibilities) more than makes up for that. You can freely choose scales/intonations as you wish.
In fact, if I implement a MIDI or OSC interface, it would be possible to chain the two worlds together, which would be exciting - and make for some interesting pair performances!


I finally put my Live-Coding Audio in C paper draft, which had been sitting in my repository since 2017-2018, online at Clive :: mathr. From a performer's perspective: when you save in your editor, the C code you've written for the DSP callback gets recompiled and reloaded into the audio engine with the memory preserved, so if you assume the same memory layout (append-only is easiest) then sound can continue uninterrupted. The technical details of how it is implemented are a bit more complicated; see the link.

The main motivation (apart from preferring text to visual nodes) was two-phase edit/commit, coming from Pure Data, where all changes go live instantly, making it tricky to change two different parts simultaneously.

There is only one DSP callback, which gets replaced in a "whole-program compilation" paradigm; this would make collaborative multiplayer text-editor use difficult.
