What coding is really being done, though, in 'Live Coding'

What coding is really being done, though, in ‘Live Coding’? Because if you were developing, you’d be designing first, i.e. you’d sit there and say: OK, I need a ‘for loop’ to loop around some data structure with e.g. some notes in it, and it’d have to connect through to some sound source. I mean, you’d be sat there for a while before anything happened, surely (and the audience would be bored). It looks to me that perhaps one or two values are being adjusted, or maybe a section of pre-existing code is being run. It looks like it’s borrowing from DJ culture, where some EQ might be adjusted (via a knob) on a track, etc. I think the fact that code is projected onto a screen might be more for visual effect than a reflection of what’s actually happening.

Well, that’s a great question. And I should start by saying that there is no consensus on how live coding should be done or what it even is.
You’re right that often, in front of an audience, very little ‘coding’ is done. This is partly due to the adrenaline and pressure of the moment, which can make coding very difficult.
One thing to note is that live coding languages are designed specifically to address this problem. They often have succinct/sugared syntax (e.g. TidalCycles’ mini-notation) and are usually quite ‘functional’ or declarative, so no for loops necessary :slight_smile:
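To give a flavour of why no for loops are needed, here’s a toy Python sketch of the idea behind mini-notation (this is not real TidalCycles, and `parse_pattern` is an invented name): a terse pattern string declares a whole cycle of events at once, so there’s nothing to loop over by hand.

```python
# Toy sketch (NOT actual TidalCycles): a terse pattern string expands into
# timed events, so the performer types "bd sn*2 hh" instead of writing loops.
def parse_pattern(pattern, cycle_length=1.0):
    """Expand a mini-notation-like string into (start_time, sound) events."""
    steps = pattern.split()
    step_dur = cycle_length / len(steps)
    events = []
    for i, step in enumerate(steps):
        # "sn*2" means: repeat 'sn' twice within this step's time slot
        if "*" in step:
            name, reps = step.split("*")
            reps = int(reps)
        else:
            name, reps = step, 1
        for r in range(reps):
            events.append((i * step_dur + r * step_dur / reps, name))
    return events

# Three equal steps across one cycle; the middle one subdivided into two hits
print(parse_pattern("bd sn*2 hh"))
```

The point is that the loop lives inside the language implementation, not in what the performer types on stage.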

Then there are artists like Heavy Lifting who perform amazing sets (I witnessed one at the end of last year) working entirely ‘from scratch’ - so in that case the designing or creativity happens live in front of the audience, and it’s not boring at all :smile:
However, this does take quite a bit of experience and courage to do. At least I feel it does.

I often find myself muting/unmuting parts of code, or doing what I call ‘parameterising’, which helps in a (literally) sticky on-stage situation. But as I perform more and more, I’ve been able to be more creative and actually write more code.

Then you could debate whether you are really coding with the full power of a Turing-complete language, or just a subset - almost a domain-specific language (DSL) - which could be argued to be closer to config than ‘programming’. You could debate that, and are welcome to if you can find or start a local meetup (there are many here in London!) - or you could also code, perform and have a lot of fun without necessarily deciding definitively on this weighty topic.
Hope that’s helpful :slight_smile:

I agree with @Synte that there is not really one way to do live coding, and there are different degrees to what people code when they live code. There are quite a few languages out there that abstract away much of the “core” of digital music making (programming the sequencer, synthesis, effects, for example) and let you focus on the composition/structure during the live performance, which results in programming at a higher level. Have a look at the awesome list: GitHub - toplap/awesome-livecoding: All things livecoding

I’d like to add that live coding is also about showing your (thought) process. So a live coding performance where someone is actually writing some specific algorithm, which may take a little time before it makes any sound, is in many cases not perceived as boring (although this is highly subjective, of course). The projection in this case is therefore also not just a visual effect, but actually there to show the audience what is happening, as a gesture of openness. Projecting code also gives an opportunity for interaction with the audience through comments, giving people some insight into what you’re doing even if they can’t read the language.

There are even specific events, so-called “from-scratch” sessions, where the rule is to start with an empty screen and you have some amount of time (usually 9 minutes or so) to try and come up with something. Of course this is something you can prepare/rehearse for, but many try to improvise as well. And here, depending on the language, some can program an entire song using a language like Tidal or Mercury, while others might end up with a programmed synthesizer, working at a lower level in SuperCollider.

On the other hand, you could indeed make the comparison with DJ culture in the case where all the code is prepared. There have been projects exploring this; one I specifically recall was called Code Jockeying (CJ-ing), which has also been dubbed “dead code”, where you can cross-fade between code or have some code prepared as a next “track” (Liveness, Code, and DeadCode in Code Jockeying Practice; CJing Practice: Combining Live Coding and VJing).

But in my opinion, live coding is a live performance that has more similarities with electronic musicians using other tools such as Ableton Push, Launchpad or MIDI controllers. First of all, in most cases the music is made by the performer, not a list of tracks they play from other artists; and the music is generated in real time by the code, allowing you to change your composition and improvise with it, instead of working with finished, mastered stereo tracks.

I also wonder if the comparison to a dev cycle is misplaced. In live coding you are not making a software product. You are making an ephemeral experience. It does not have an end user that needs it to be precise. It is a performance practice rather than a design practice. Feels a bit like apples and oranges.

Live coding is also not exclusive to music.

However, the point here about projecting the code resonates. It can feel like a gimmick when not much is actually being changed in the code or in the performance. The conceptual underpinning of projecting code - demonstrating algorithms as thoughts - gets lost when some are just projecting their parameter changes.


Taking my latest video as an example:

Statements such as /rh = " ... " define sequencing algorithms. These are actual algorithms, although not expressed in typical imperative style. The first of these in the window means:

  • Put placeholders into the bar according to rhythmic durations (pdelta = delta comes from a pattern), where there’s a weighted random (wrand) selection of durations: 1/11 chance of an eighth note, 2/11 chance of a quarter note, 4/11 chance of a dotted-quarter, and 4/11 chance of a half-note.
  • Then, fill in those placeholders with a series of notes, starting at the beginning of the sequence and using at least 4 of the notes (where the exact number is randomly chosen: 4..n).
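For readers who don’t know the dialect, the two bullet points above can be sketched loosely in Python. This is only an illustration of the two-step idea - weighted-random durations first, then note filling - not hjh’s actual SuperCollider-based code; `fill_bar`, `fill_notes` and the MIDI note values are my inventions.

```python
import random

def fill_bar(beats=4.0):
    """Weighted-random rhythm: pick durations until the bar is full."""
    durations = [0.5, 1.0, 1.5, 2.0]   # eighth, quarter, dotted-quarter, half
    weights   = [1, 2, 4, 4]           # i.e. 1/11, 2/11, 4/11, 4/11 chances
    slots = []
    remaining = beats
    while remaining > 0:
        d = random.choices(durations, weights=weights)[0]
        d = min(d, remaining)          # clip the last duration to fit the bar
        slots.append(d)
        remaining -= d
    return slots

def fill_notes(slots, notes, minimum=4):
    """Fill the rhythmic placeholders with notes from the start of the
    sequence, using at least `minimum` of them (exact count random: 4..n)."""
    n = random.randint(minimum, len(notes))
    pool = notes[:n]
    return [(dur, pool[i % n]) for i, dur in enumerate(slots)]

random.seed(1)                          # for a repeatable demo run
slots = fill_bar()
print(fill_notes(slots, [60, 62, 64, 65, 67, 69, 71]))
```

The placeholder/fill split matters for performance: the rhythm can be regenerated independently of the pitch material, and vice versa.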

The second variation modifies this by first changing the placeholders (putting in 1 !, then 4-8 *s), then replacing ! with a bass note in octaves.

The other main type of statement in this video manipulates effects applied to an instrument’s mixer.

This video was spur of the moment – New Year’s Eve evening, “all right, let me squeeze out one more short performance.” I’m using tricks, and prepared instruments, that I had practiced in other sessions and performances, but the details were unplanned. I didn’t practice the set and then perform it – it was “switch on the recorder and hope for the best”: no pre-existing code, and improvising algorithms quite a bit beyond parameter tweaks.

For some concert performances, I have prepared scripts where I do very little editing, because it takes a long time to get a texture going when improvising code. You just don’t have that kind of time in an 8-10 minute concert piece – have to hit the targets faster. For me, that’s a compromise made for the social situation, but not an ideal. I much prefer to have the time to get myself into a territory I didn’t plan for.

hjh