Why re-invent music environments?

Serious tools question here - why do people spend so much time re-inventing music environments?

Evan is in new territory. I am doing something very different. Why make another way to program a synth?

Is live coding stuck in a loop? (Ha)


Moved as a digression from “What is this live coding system that you have made?” that I thought deserved to be its own topic, hope you don’t mind, @sicchio!


@sicchio seems like almost every participant in the live coding system thread gave at least some motivation for their work, so I gotta turn the question around… what did you find problematic about the motivations that were provided? With the caveat that I (and I assume most people in that thread) were posting fairly casually…

I can see Charlie is replying at the same time, let’s see who gets there first…

I guess a related question is: why develop a live coding system further when you can already make music with it? That’s easier to answer: to make it ‘better’. But actually I’ve done some extensive rewrites to change the underlying representation in a live coding environment, which has made some things easier and other things more difficult. For example, moving from discrete time steps (like a step sequencer; great for polymeter and techno) to rational time (where any step can be subdivided; great for polyrhythm and any genre other than techno ;).
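To make the discrete-vs-rational distinction concrete, here’s a minimal sketch in Python (this is an illustration of the idea, not Tidal’s actual implementation; the `steps` helper is hypothetical). Representing onsets as exact fractions means any step can be subdivided without rounding, which is what makes three-against-two polyrhythms representable:

```python
from fractions import Fraction

def steps(values, start=Fraction(0), duration=Fraction(1)):
    """Spread values evenly across a duration as (onset, value) pairs.

    Onsets are exact rationals, so any step can be subdivided
    without accumulating floating-point error.
    """
    width = duration / len(values)
    return [(start + i * width, v) for i, v in enumerate(values)]

# Three against two within the same cycle: a polyrhythm that a fixed
# step grid can't hold without first finding a common subdivision.
triplet = steps(["bd", "bd", "bd"])  # onsets 0, 1/3, 2/3
duple = steps(["sn", "sn"])          # onsets 0, 1/2
```

A discrete step sequencer would have to expand both voices onto a shared six-slot grid; with rational time the two subdivisions simply coexist.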

So I think one answer is about trade-offs. Thomas Green’s “Cognitive Dimensions of Notations” framework is all about mapping the trade-offs in the design of programming languages… By making a language more terse and easier to change, it might map less well to a particular problem world, or become more error-prone. There is no ideal language, and we can’t really separate the notation from the music; the notation marks out a space for thinking in.

Indeed, Thor Magnusson talks about one of his languages, the Threnoscope, as an individual musical piece. If someone else performs with it, from his perspective I suppose they aren’t making a new piece; they are performing the Threnoscope, by Thor Magnusson. Having watched Thor perform with it, I think I agree… It does feel complete in its own right.

I don’t feel the same way about Tidal, it’s more of an open system than a piece, and people using it can make it their own. But I think we can imagine an idiomatic Tidal performance and an idiomatic Foxdot performance, and they’d have their particular flavours…

But maybe what we’re moving towards is live coding languages that are much more open to change… that naturally change through use, just as human languages do. I hope for this; I love the idea of live coding languages becoming a new way of working with a computer, almost entirely unrelated to ‘general purpose’ programming languages. I think we have to make a lot more live coding systems before we get there.

You said this was a ‘tools’ question, but I think the only thing I fully agree with in the TOPLAP manifesto is that languages aren’t about tools, but thoughts…


I don’t really see motivations in the posts; I see technical requirements. I want to know why bother making a language at all: the high-level reasons.

For example, my motivation is to make dance that a human may not conceive of but a computer might. I want to do this in a way that still resembles choreographic thinking and language.

If live coding is about thoughts, where are the concepts that will lead to new music, new instruments, new ideas about sound?


Another good question!!

The things I make tend not to be too concerned with sound at all. They are concerned with timbre, but personally, timbre isn’t really about sound, but about perceiving movement via sound. When I listen to a drum being played, I feel like I’m listening to patterns of movement, not the sonic air pressure waves that carry them. So rather than making languages tightly coupled with sound synthesis, I represent timbre in terms of movement in a multidimensional action space. As such, Tidal isn’t really a language for sound, or music, but for any kind of time-based movement.

In my mind, my current main project (tidal) is not too concerned with sequences either. Sequences are a starting point, and there’s probably a lot of mileage in exploring its minilanguage for sequencing (which is based on a tabla notation), but I often feel that part of tidal gets too much attention. There’s more to computation than sequencing. The interesting part for me is transforming and combining sequences. That’s what I think of as ‘pattern’, it’s related to timbre really - it’s where you perceive how something was (or could have been) made, via its inner symmetries and interferences, rather than some abstract raw experience of the thing.
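The idea that ‘pattern’ lives in transforming and combining sequences, rather than in the sequences themselves, can be sketched in a few lines of Python (a toy illustration with hypothetical helpers, not Tidal code; `every` loosely echoes Tidal’s combinator of the same name):

```python
def rev(seq):
    """Play a sequence backwards."""
    return seq[::-1]

def rot(seq, n):
    """Rotate a sequence left by n steps."""
    return seq[n:] + seq[:n]

def every(n, f, seq, cycles):
    """Apply transformation f on every nth cycle, leaving the rest plain."""
    return [f(seq) if c % n == 0 else seq for c in range(cycles)]

base = ["bd", "sn", "hh", "sn"]
# The interference between plain and transformed cycles is where the
# perceived pattern emerges; the base sequence alone is just a list.
four_cycles = every(2, rev, base, 4)
```

The point of the sketch is that the interesting object is the higher-order transformation, not the list it acts on.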

So, timbre and pattern. Another idea I’d like to bring to tidal sooner rather than later is bringing the morphology of words into the semantics. There’s no onomatopoeia in computer music languages, and considering that we’re dealing with language and sound, I feel there really should be. Similarly (and I realise I’m heading towards the ‘abandoned projects’ thread here), I want to bring proximity and spatial arrangement into the semantics of tidal (as per my old ‘texture’ experiments). Both of these are about naturalising programming languages, making them more complete languages with gesture and prosody as well as discrete syntactical structures.


Such a good question. I’ll have a stab at my motivation.

Programming has always been a form of self-expression for me. I strongly associate my freedom with programming. Hence I don’t want to be forced to express myself in a particular way as designed by someone else. Programming is empowering, and I want it to facilitate my growth, discovery and freedom.

I find something new is often learnt, musically or programmatically, when trying to invent new things, or even building new bits on top of existing musical environments. I’m not sure if that’s re-invention.

So I guess my “why’s” are all personal reasons rather than thinking about building something for other people. I’m not doing anything new in creating a new musical environment, other than perhaps throwing away the idea of pre-designed, mass-produced software in favour of personal & customised software, where the journey of creation is important. Everyone wants their musical instrument to be something special and personal.


Probably because I am another LISP-nerd, I am very much on @josephwilk’s wavelength for why I make my own system (visuals, but I assume the question wasn’t really just about music environments). I want to work with a thing that does just what I want. In my case, I also wanted a different set of aesthetics than the standards available; and it is actually very difficult, if not totally impossible, to make the standard shader/VJ style images with La Habra. But I think the fact that I just did my own thing, rather than repurposing p5 or something, was about the engagement of making my own tool and the back & forth as I perform, tweak, repeat.


I guess my approach is slightly different from “because I like to make my own things”. Even though making my own things is enjoyable, I’ve looked all over for that specific tool I felt would best serve my livecoding needs.

After not finding it for long enough, I decided to just build it, but had someone else done it before me, I would have simply started using that instead. So I definitely think there are many more unexplored sectors of livecoding that have yet to be tapped into; I really don’t think that livecoding is stuck in a loop. LC is much different than it was just a year ago, and I can only dream about what it might be like next year.

As plain as it sounds, part of my motivation was born out of frustration about what was already there.

I worked with DAWs a lot, and at some point I found it pretty frustrating and uninspiring that my piece sounds the same every time I hit play.
So, similar to what @sicchio said, I was looking for a system that might produce something that a human might not, or not in the same way. To put it more simply, I was looking for a system that would surprise me, that would allow me to delegate some of the responsibility to some other entity.

Another motivation was my fascination with some kind of bootstrapping, being able to reason about the structure of music in different terms, on a meta-level and shape your system along those thoughts, starting at a relatively low level.

Lastly, when I run out of ideas what to play or compose (which happens more often than i’d like to admit), i still can tinker around with the system, and build something that can help me later. To me it’s a comforting thought that there’s always something I can do. That’s something that motivates me to go on.


Programming isn’t just a tool, or even instrument-building; it’s also composition. (Traditional instruments also influence composition, which is why they’ve also been reinvented; think of John Cage’s prepared piano, etc.) With programming, the delineation between composition and instrument is less distinct. So as we conceive new compositions (aural or visual), we naturally develop new environments.

Thought of another way, even traditional instruments always had specificity. A violin by one luthier can not only sound, but also play, differently than another, with different players choosing different violins. So even without composition as a consideration, there’s always a desire for customization of the instrument. And that can change over time. I used to design my visual instruments with a very high tolerance for my innate clumsiness! In later projects, I notice I’m less obsessed with that, for better or worse… :wink:

The only thing constant in life is mange… (oops, typo)… change.

Well, for me programming languages are tools, but saying that something is a tool is in no way pejorative for me. To me, toolmaking is in no way inferior as an art form compared to using tools.

I don’t think that many people (except @yaxu ) really answered the question - there’s a big difference between making a language (e.g. a new form of communication, a shift in the culture) vs. programming a musical piece of software. [Note: that’s not saying either is inferior! Just qualitatively different]

For example, ixi lang is a software environment, but also a very different way of thinking about live music creation than most other software platforms. And I’d argue that tidal is as well.

One thing I’ve found is that creating a new syntax/language gives you new perspective. It seems a bit obvious in retrospect, but when I started to look at 3D printing movements over time and how to describe them, it made a lot of sense to do that using concepts/units of time rather than distance (as we do currently). Which gets to thinking about strokes/lines and speeds rather than exact dimensions and technical blueprints.

I’d like to hear more rationale for the different musical systems, why they needed to think so differently.


If we’re talking about environments, I think it’s much more valuable to contribute to an existing project than to build your own, unless there is some intrinsic limitation to other projects. Of course, maybe someone just really wants to express themselves with Perl or something.

If we’re talking languages, I’m pretty out of my depth academically, but anecdotally the most expressive languages I’ve used for creating music/visuals/etc are usually terse and opinionated in ways that reflect the mental models the author(s) use to understand what they are creating. They tend to produce results more expediently than a general purpose language, but generally only if your mental model agrees with the author’s.

Designing a language, or even a library, for creating art seems to me to be a form of self-expression in itself. There’s an odd public intimacy to sculpting a tool that represents the unique way the author thinks about their own process.

When I’m reading code, I try to imagine the author’s thought process. I love hearing about the techniques, mental models, and principles behind peoples’ creative processes, and so stepping into a new live-coding framework/language is exciting to me, especially when that framework/language seems alien.

I guess that doesn’t really answer the question, but to me it seems natural to want to create a system or language that is a symbolic representation of the ideas and models that drive your creative process. From that perspective, I think there’s always room for one more live-coding tool or language as long as there are people with new ideas and perspectives.

Or maybe it’s 1:00am and I’m hitting that point in the night where I echo platitudes and think I’m saying something interesting. It’s hard to say, really.


Could you elaborate on bringing the morphology of words into semantics? I’m quite curious to understand what you are trying to explain. It took me a while to consider music as a language; I could not get beyond the semantic problem of defining the meaning of a musical object. In Peirce’s pragmatics, it is impossible to evaluate a note or rhythm as a neutral signification. Now I understand that those evaluations are clichés and somewhat cultural, because music has several layers of analysis. That said, maybe your idea is quite different, but I’m still curious.