What aspects of your "live coding" sets do you prepare in advance vs. code on the fly?

“Live coding” communities actually seem to contain artists with quite a wide range of coding preparation preferences. The two extremes of this preparation spectrum would be starting a set with a completely blank editor and typing every line of code live (100% improvisatory), and having fully composed code that you simply execute line by line during the performance (100% composed). I think very few of us live at either of these extremes, and I suspect many of us are further toward the composed end of the spectrum than the name “live coding” might initially suggest.

Tell us about the aspects of your live coding sets that you prepare in advance vs. execute on the fly. Full spectrum welcome!

I personally perform in TidalCycles. For larger/“more serious” performances, I generally have a “landmark state” per track that is fully composed, at which point I know I will be happy with the sound. The improvisatory elements of my sets generally come from:

  • how I bring the various channels of a track in and out, and the transitions between tracks (the two can be related)
  • applying prewritten functions to the composed states of my channels to “mangle” them temporarily - both built-ins and my own custom functions, often applied to the core pattern in orders I haven’t preconceived
  • changing the parameter values of applied functions

I’m also fond of using the “transitions” built into Tidal to mangle an ongoing sound rather than actually transition between two different sonic ideas. I’ll often structure my sets so that I never really have to change or write a new core sample pattern live, instead preferring to apply functions that alter the rhythmic structure of the pattern.
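For readers who don’t use Tidal, here’s a minimal sketch of what this workflow can look like. The patterns and parameter values are invented for illustration, but the functions (`jux`, `sometimesBy`, `chop`, `xfadeIn`) are real Tidal built-ins:

```haskell
-- a fully composed "landmark state" for channel 1
d1 $ sound "bd*4 [~ cp] hh*8" # gain 0.9

-- temporarily "mangling" that state by stacking transforms live
d1 $ jux rev $ sometimesBy 0.3 (fast 2) $ sound "bd*4 [~ cp] hh*8" # gain 0.9

-- using a built-in transition to warp the ongoing sound rather than
-- switch ideas: xfadeIn crossfades channel 1 into a chopped-up version
-- over 8 cycles
xfadeIn 1 8 $ chop 16 $ sound "bd*4 [~ cp] hh*8"
```

The key point is that the core sample pattern never has to change; only the functions wrapped around it do.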


Great topic!

Although I always type everything from scratch, it’s almost never completely improvised. In the same way that it is a personal choice to have pre-composed code ready to execute and manipulate, another option is to type in code pretty much verbatim from what I’ve practiced. At some point during the performance I will digress, and explore new directions. And I’m not going to get everything exactly the way I practiced even if I wanted to.

But for me, what’s interesting about blank-slate live coding is the potential for the audience to follow along with the process. I think it’s easier for some audience members to do this when the code is being typed out in realtime, versus jumping back and forth executing and manipulating pre-written blocks. That said, there’s no good reason to privilege audience comprehension over musical results or performer enjoyment… this is just a personal interest.

With that said, I usually have enough in my brain before starting a performance that I can get through a couple of different scenes with some sounds that I like, starting from a blank editor. I also usually have fuzzy ideas about how I’ll transition between these scenes. And it never works out the way I planned, and that’s part of the fun. Or sometimes utterly painful.

When performing visuals, I’ll always start from a blank screen, but I’ve got a dozen or so small programs that I know are a good jumping off point. Most of them I can recall easily enough, though I’ve started collecting them in a repository so I can refresh my memory when necessary.
The performances themselves are always very exploratory, reacting to what the musician(s) are doing, with almost no pre-prepared code.

I guess that an interesting aspect of advance preparation is my usage of animated gifs as textures in Improviz. I definitely find that I’m thinking about that beforehand and trying to better understand how I can put them to good use. So whilst the code is always written/thought about in the moment, I already have an idea of what I want to do with the gifs that I’m currently using.


I think this is one of the most important things to talk about with people who are new to the scene because a lot of people who experience live coding for the first time think that everything in a performance is started from a blank slate. My approach is very much like Charlie’s - no text on screen but there are several ideas I have in my head from practice that I want to explore live on stage, which might end up exactly as I practiced or very very different.

That being said, I perform more frequently as part of a group nowadays and it’s much more of a “free” improvisation. We often sit down before we start and maybe talk about more general aesthetics that we want to achieve during the set and then work around that. This is a lot more fun for me and the feeling of finding a groove and coming up with something new in the moment is very satisfying. It doesn’t always end up like this and there’s risk involved, but it’s a lot less nerve-wracking when you have some friends on stage with you.


I prepare almost everything in my sets currently, for a few reasons:

  • I write songs that kind of have to be performed in vaguely predictable sections to work
  • Ixi uses the arrangement of the text/whitespace in physical space to determine time, so a single typo can throw you into a different time signature (this is great great great for songwriting, but super risky for live performance).
  • typing while singing is quite hard. :wink:
  • The only version of ixi that works on my computer is super unstable so however much I pre-prepare there’s always an element of risk anyway.

Probably 90% of it is pre-typed and just evaluated live.

That said, the better I get to know my own songs, the easier it is to sing on autopilot and type at the same time, so I’m gradually starting to introduce some more slightly indeterminate stuff and type some extra melodic/percussion lines live. I also usually live-type effects. My approach to both of these things at the moment is that I’ll leave myself a couple of different options for how to process a part, and pick one depending on the acoustics/mood of the space and how I’m feeling.

I do have one instrumental in my set which is built up from scratch over two pre-sampled drones, but it’s the exception.

A few people have expressed surprise to me that more people aren’t using code to write songs, and I suspect the idea that performances have to be done from a blank screen is part of the reason for that. I didn’t know this community existed when I started writing songs in ixi, so I was kind of creating in a bubble. I’m quite glad I started writing and then found you all later, because I’m not sure I would have thought of doing what I’m doing now if I’d had other influences at all.

I also have an instrumental project that I think I’d like to start doing live-coded things with, and although I would probably still use some pre-prepared stuff in that context, I quite like the idea of doing more improvisation on stage and I think there might be more space for it there.

Re the audience being able to follow along - I had some feedback from a few friends at non-algorave indie shows that they didn’t know what I was doing, so I’m currently trying a different approach where every section of code (and sometimes individual lines) is commented in detail with what’s happening. Some of the comments describe to me what to type out (or list potential options) and the line itself is left blank, so even though it’s pre-prepared I’m doing a little more than just evaluating - partly for visual effect/comprehension, and partly because it’s satisfying to type something out from scratch for some reason, even if it’s pre-written :slight_smile:
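As a concrete illustration of this commenting style - sketched here in TidalCycles syntax rather than ixi, with invented comments, since the original code isn’t shown:

```haskell
-- # CHORUS: drums come back in, louder than the verse
d1 $ sound "bd*2 [~ cp]" # gain 1.0

-- # bassline: type one of -> "0 3 5" / "0 0 7"
-- (line left blank on purpose; typed live)

-- # OUTRO: silence everything
-- hush
```

The audience-facing comments narrate the structure, while the performer-facing ones act as a menu of options to type in.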


So is your blank slate aesthetic also related to audience comprehension? Curious if there are other reasons for starting with an empty editor when there is “prepared” material involved…

I love this approach! If you decide you want to help the audience out (not by any means a given), and are also using pre-written code, this really makes a lot of sense. Have you seen other live coders adopt this idea? Also seems like a nice way to potentially share your code online and have it be more intelligible to people playing with it.

It’s weird to me that after playing/singing in bands all through my twenties and thirties, now in my forties I sit at my laptop by myself to make music. Really need to work on this.

@charlieroberts and @ryan, thanks for making a clearer distinction between “typed live” and “improvised”. I think it is also useful for newcomers to know that live typing can be somewhat practiced. And I can definitely see the benefits of this choice - both for the audience, since typing a line at a time can help them follow the set, and for the performer, because I imagine it adds an exciting indeterminacy to the performance of a “planned” set, not so different from the thrill of playing an instrument on stage with a band even when you’ve rehearsed all the major riffs of the song.

@deerful, love your commenting suggestion to improve audience comprehension, even with pre-written lines of code :slight_smile:

A few people have expressed surprise to me that more people aren’t using code to write songs, and I suspect the idea that performances have to be done from a blank screen is part of the reason for that.

@deerful, yes, I definitely think this sometimes holds people back from jumping into the live coding community and making the music with code that best suits their own performance style/song structure. Part of the reason I wanted to start this discussion! And to be fair, I would argue that some of my favorite and most prominent live coders today definitely have structured tracks that they return to in most of their sets, regardless of how much of them they type live!


to be honest, I think I’ve always started from a blank slate just because that’s what I’ve always done haha. Because LCL and Improviz make it easy to get quite interesting things happening with very little code I never felt it was necessary to have anything pre-written.
I think it’s also an influence of how the live-coding/algorave scene seemed to me when I first got involved.
That said, as I build out more functionality in Improviz, I could see that I may want to start with something pre-written, because the intention with my performance is to explore something more complex. I don’t really see any difference between that and building the functionality into a library or the language itself.

Heeeey! I strongly identify with this topic, and I love doing what you describe. When I started in live coding - learning, doing exercises, sharing - I always started from 0, which is what really made me know my algorithms. After some time, I now feel very happy composing music for each presentation and making algorithms that let me experiment and improvise live, so that the piece can be reproduced in one way or another, but with a composition I consider “ready” for the listeners. I personally love music, and when I go to a concert to listen to someone I have many expectations - it’s a process of seduction, and when you have a “find”, or you’re ecstatic with someone else’s music, it’s fantastic. I like to share my process, but sometimes I also want to tell a visual story. Some people don’t care about the music as much as I do; they just listen and watch, hoping to have that “find” or to have fun.

I respect the ways that live coders feel happy playing live. What I don’t like is when talk of “good practices” assumes playing from scratch is the standard - I think this belittles the practices of live coders who have other processes. I feel good composing and “typing live”. A long time ago I stopped thinking that things are better because they seem more complex; making music, in any medium, is always a challenge - the challenge of ordering sound in time. Wouldn’t it be great to eliminate these debates about whether something is live coding or not? Both processes are live coding from my point of view.


It depends on the audience and purpose of the performance, but for Algorave-style performances I prepare code for nearly everything. I typically prepare small 5-minute compositions that I can improvise with. However, I also do completely improvised stuff in live streams and in front of live audiences if the performance demands it.

The “live” part of my performance consists of:

  1. Improvisation of the pre-written compositions
  2. Transitions between the pre-written compositions

Improvising on top of pre-written stuff is super fun to me and usually consists of contorted glitches and time warps. By improvising on top of pre-written code, I feel a greater freedom to go nuts, try new functions and experiment with weird stuff. I also really like being able to reproduce familiar elements of compositions that the audience might recognize, but the performance never sounds the same way twice.
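A hedged sketch of the kind of glitch/time-warp improvisation this describes, in TidalCycles - the pattern is invented, but the transforms (`sometimes`, `stut`, `rev`, `crush`) are real built-ins:

```haskell
-- the pre-written composition
d1 $ sound "bd sn bd sn"

-- improvised contortion: occasional stutter, reversed time, bit-crushing
d1 $ sometimes (stut 4 0.5 0.125) $ rev $ sound "bd sn bd sn" # crush 4
```

Because the pre-written line is always one evaluation away, you can go as far off the rails as you like and still snap back to the familiar material.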

Transitions are probably the most difficult part of a live performance. They’re more about careful manipulation of code that results in a smooth or deliberate change than about improvising.

I guess I would sum up my Algorave approach as a prepared attempt at chaos.

In live streams and in more “critical listening” settings, I will do from-scratch improv. My from-scratch improv usually results in a ton of mistakes, awful-sounding accidents (looking at YOU gabber kick samples), and drastic variations in sound. As long as the audience knows what they’re getting into, I am OK with doing this.


I agree with @kindohm <3 Algorithms programmed to make us crazy and free :exploding_head:

Many listeners tire of commonplace recipes with very little capacity for surprise, so when a new artistic process ends up sounding like an old style, the expectation it generated usually discredits the process - this sometimes happens with live coding from scratch.
If the coder has the skill to use functions and algorithms concisely to reach interesting sound sequences, and the programming language allows it (one of the great strengths of TidalCycles), it is very exciting; as others have said, it depends on the context of the performance.


I’ve just found out I’m doing a performance on Monday alongside/with some free improvisers. For that, I’m going to start with a blank screen in SuperCollider. That feels appropriate in the free improv context: I want to communicate clearly to both the audience and the other players that, musically, I am also starting with a blank slate.

It’s a bit of a con, though. I’ve got about twenty folders of carefully edited and curated short samples to draw on, plus about thirty other longer loops. I’ve got two pages of initialisation code for my SuperCollider setup, plus a home-baked snippets system that lets me quickly drop in things that are too complicated to remember or type. Plus, as several other people have said, I’ve got rehearsed code in my head to get started with.

Also… unless I was going to open a terminal and start writing a new language from scratch, it’s turtles all the way down, isn’t it? In terms of ‘pre-prepared code’ there’s JITLib, sclang, scsynth, macOS, and all stations below that.


I do think there is a difference between pre-prepared code and pre-prepared music. Like many live coders, I’ve spent countless hours over a number of years working on my own live coding environment, which consists of thousands of lines of code, and without it I would not be able to perform. This includes putting together a sample bank and writing synthdefs, but I consider this more of a pre-prepared framework as opposed to pre-prepared music. I guess it depends on how much you think a live coding language is part of your music. I think of it as a collection of affordances similar to an instrument / method for writing music, but not as music itself.

I do, however, think of code written with a live coding language as music, or at least a representation of music. So pre-preparing code in some way is inescapable, but the way you can pre-prepare your music exists on a much broader spectrum: from not at all (blank screen, completely improvised within the affordances of the language) to very much so (opening a saved file and executing it line by line with no changes made during the performance). I like to think like this as it puts live coding on a much more level playing field with other methods of music making, given the same spectrum exists for pianists etc.

@tedthetrumpet I also like to think of my snippets as something akin to jazz licks that I can call upon when it feels right, definitely not a con!


I agree that this spectrum exists for live coders. I think free improv from scratch is a difficult skill to hone for any traditional instrumentalist, and to do it well with code is equally difficult. It isn’t the only way to make sound.

Part of my musical background is in jazz music. I think my composition and performance process stems from that experience: e.g. a jazz saxophonist might write a song for their band; the band plays the music but then soloists in the band improvise within the parameters of the written song. Or in some cases the entire band could improvise at once but underneath is the foundation of that original song (or not, it can go off the rails if the band chooses).

Haha absolutely. A solo pianist might be able to improvise from scratch very well, but they are not defining the timbre of their instrument or the playable interface of the instrument on the spot. I don’t see how defining loops or synthdefs ahead of time is much different.


Personally, it largely depends on what I wish to express, available time for prep, length of set, etc. Typically, if I’m playing a set of songs that I’ve planned out, I use a “fill in the blank” method where I preset certain information but leave comments in areas that I want to change.

Ex. I’ll pick a synthesizer and preset its parameters, but I’ll leave comments where I’m supposed to decide what notes or rests to play.
I also have a loose form in mind - consisting of moments or processes that I’d like to hit. (ex. Drone -> Funky beats - > Noise)
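A sketch of what this “fill in the blank” method might look like, written in TidalCycles syntax purely for illustration - the synth, parameters, and comments are invented, not the poster’s actual setup:

```haskell
-- synth and effect parameters preset ahead of time;
-- the note pattern is left as a blank to decide in the moment
-- (options: "0 3 5 7" / "0 ~ 7 ~")
-- d1 $ note "____" # sound "superpiano" # room 0.4 # cutoff 2000

-- loose form reminder: Drone -> Funky beats -> Noise
```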

Thinking about jazz because @kindohm mentioned it - improvising with my premade code feels like interpreting a futuristic tune from a Real Book. Except instead of needing to hit musical moments linearly, like I would normally read music, I can pick and choose from a variety of moments and sound material to explore and improvise on. Also, similarly to @deerful, I leave comments throughout my code, both for myself (ex. instructions/reminders [ex. #fast -> slow]) and for the audience (ex. “# funky bass line”).

I work with preset code because I usually have a lot of ideas to get through in a piece, and I would rather focus on performing them than on preparing them from scratch. I find my method lets me focus on the other elements I’d rather perform/improvise on during a performance.

Another thought around this is that I wouldn’t make a guitarist put together their instrument before a show but if that’s part of the piece, that’s okay too. That process would just become another part of the piece to experience. This is my general approach to live coding as well.

Great topic!

When I started out, my performances were mostly pre-prepared. I’d have a bunch of pre-written scripts, and would start and stop them with different parameters/sample sets, where some scripts would have a primitive text-based interface for making base patterns to be transformed. I worked with my friend Ade who was doing similar things, and we’d sketch out a kind of graphical score to plan when we’d start and stop scripts, with a sketch of how the set would develop in terms of intensity, and so on… But we’d definitely improvise around that. We’d call this a “generative music performance”.

Then when live coding came along (Ade came up with a live coding system independently; I caught on after we linked up with the good SuperCollider people when TOPLAP was formed), we had a kind of final ritual performance where we performed with our old system one more time before deleting it all. I think we were totally sold on making code during a performance. I made a live coding editor that pointedly didn’t have a save function for a while, although I eventually added one and started building up a library of scripts made in this editor, and would do a mixture of from-scratch live coding and working with these premade scripts. This was mostly out of necessity: I was using a general purpose programming language (Perl), and hadn’t really got straight what algorithmic music was for me, and coming up with a new idea about that during a performance, with time to edit, was a bit much. Not impossible when collaborating with one or more people, but live coding from scratch felt like a super important, yet impossible, ideal…

But while I was super sold on the principle of live coding, the practice just didn’t live up to the dream. There was a lot of procedural logic, involving modulo arithmetic, random numbers (which I’ve never been that keen on), manipulation of state, and some more interesting scripts doing agent-based interaction… In practice I guess I had a nice way of exploring polyrhythm, but nothing really new came out in the 20-30 minute span of an electronic music performance.

I was really inspired, though, by collaborations with an improvising percussionist, Alex Garagotche, who encouraged me to pursue live coding from scratch. We had some great times in practice sessions which never really came out in the live performances we did. This motivated making a new, terse language, and - collecting together ideas from learning knitting, Haskell, reading about Indian Classical Music, and Laurie Spiegel’s essay on pattern - Tidal came out.

I suppose for me, the terms “improvisation” and “composition” (along with score, instrument, etc) don’t really apply well to computer music. They are from the world of physical, direct manipulation, and linear conceptions of what music is.

So rather than the spectrum being between those two old terms, I’d put live coding itself at one extreme end of the spectrum, and generative music at the other. Live coding is then the idea that we can make music from scratch, for the moment, by writing code, and generative music is the idea that computers can make music entirely by themselves, unsupervised. Both ideals are impossible, and I think the best music can probably be found between them. But for me, I feel I’ve always tried to work towards the live coding end.

Making TidalCycles has this ideal at its heart. A few people have commented on how to use it best: you really have to give up using it to transcribe music you have in your head, and instead use it to follow an idea somewhere unexpected. This is what I mean by an interference pattern - putting two elements together in such a way that something unexpected is very likely to come out. In fact, if I finish a performance without learning something new about Tidal, that feels really bad, and I then realise it’s time to add new functions or sounds to my system, to freshen it up and open up new possibilities to explore.

So I think live coding is about listening, learning and discovering something new. Popular music culture is stale as hell, and in dire need of new ideas… I think we’re here to find them.

That said, it’s important to fully recognise that you don’t need an audience to live code. I think the best tidal music is probably coming from people who are live coding alone, really developing an idea in the studio to an advanced state, then bringing it to an audience to enjoy… and as discussed above, working out strategies for live coding with that well crafted, pre-worked-out idea. I’d like to do more of that too, and do have some set pieces I probably pull up too much.

But really, my favourite feeling ever is finding a new idea in the intense, intimate situation of a live, loud, live coded performance. It’s risky, and pulling it off seems to involve e.g. getting good sleep the night before, eating healthy food during the day, going for a good walk beforehand, preferably working within a well-practiced collaboration, and not drinking… But it’s great when it works.


On the flip side, I am imagining going back to a 100% blank slate approach with a simple environment and a small set of key samples. The prepare-a-track-ahead-of-time approach is growing more difficult by the hour.


I love this question! I find that with La Habra, I spend a lot of time trying to work towards the ideal of a blank SVG and no prepared animations, but without too much boring time spent watching me type and think about math. :laughing: To that end, the language itself has a number of primitives (paths for shapes, etc.) and then some more combinations I have added for common effects (lines building down the screen, a random-dots effect). Both can be accessed with snippets so I don’t have to type a ton, but it isn’t so terse that it is opaque to folks who watch. Right now I also usually prepare the animations, but I am working to get away from that / write better shorthands.

Like @melodyloveless, I also tend to have a plan of approach, but then find out how I am going to get there while I am typing.

I think there is value in improvisation and so that is my dream goal, but I do think working slowly towards it keeps livecode open to people who may be intimidated at first.


"and not drinking… ", it’s absolutely true hahahaha