What is this live coding system that you have made?

This is a new category on the site, for discussion around the creation and development of live coding systems, both imaginary and real.

I thought a good starting point for discussion would be for us to try to describe what we've made. What is it, what are the driving ideas and principles behind it, what is the experience of using it?

This is a surprisingly difficult question to answer... I think that often when we make things we start off with an idea, but as we develop something that original idea can be lost as the thing we're making comes to life and we follow it somewhere new. That's my experience anyway, and I'm still not fully clear on what Tidal is, although a bit clearer after a recent rewrite...

So briefly, I think Tidal is a language for exploring interference patterns over time, based on the manipulation of behaviour, rather than data. I could go into a lot more detail here but for now I'm much more interested to hear what other people think their system is about!

6 Likes

Great topic! Also, I found your description of Tidal very rich to think about.

I was always very interested in the aspects that made each music particular, I mean the elements that emerge in a particular language, most of the time (I believe) through its determinations (things that could also be interpreted as limitations). So, since I already had a background in classical and popular music, I was captivated by synthesis, programming, modulation, algorithms and all that kind of stuff in electronic music.

I was spontaneously doing live coding in Pure Data without knowing that it was actually a thing with a name, lots of languages, artists and a global community. I was always experimenting with a simpler kit of objects that allowed me to do the most without having to use the mouse too much or having to create more objects. [expr~] was starting to feel like my best friend. Then, about two years ago, I came across the bytebeat technique and thought "this is it!". I developed my own system based on bytebeat but, so to speak, extending or re-adapting its possibilities.

There is this "elegance" property, like an aesthetic value in science, of making a point or getting a result in the simplest, most economical way. This is what still amazes me and what I find most interesting about Rampcode (and bytebeat, of course). All output (synthesis, scales, rhythms, randomness, modulation, etc.) is the result of the same one line of code, an expression with a single input variable.
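
To give a flavour of the idea, here is a classic bytebeat one-liner in C (not Rampcode itself, which is built in Pure Data): every 8-bit sample is a pure function of the sample counter t, so timbre, rhythm and pitch all fall out of one expression.

/* A classic bytebeat one-liner (illustrative, not Rampcode): each 8-bit
 * sample is a pure function of the sample counter t. Pipe the raw bytes
 * to an audio player, e.g.:  ./a.out | aplay -r 8000 -f U8  (assumes ALSA). */
#include <stdio.h>

int main(void) {
    for (unsigned t = 0; ; t++)
        putchar((t * (t >> 5 | t >> 8)) & 255);
}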

Nonetheless, it is extremely complex if you don't have a lot of knowledge in a lot of fields, so I'm always thinking about creating a simpler system with similar capabilities.

5 Likes

I started developing FoxDot a few years ago pretty much as a response to SuperCollider's pattern library. It just seemed cumbersome to start from scratch and do very simple things, but its sound synthesis engine is just too good to use anything else! I wanted the terseness of TidalCycles but I also wanted to think less about "time" when I was live coding. When I see people use Tidal to its potential, it's amazing how flexible and elegant it is, but I just couldn't get the hang of it. I think I was a bit stuck in my ways of thinking about music as data. This was also pre-SuperDirt, so triggering SynthDefs in Tidal wasn't possible. So I started making FoxDot.

I wanted it to feel like I was conducting a digital orchestra where I could tell every person individually what to do, but could also tell them to work together using shared data. It's quite a fun way to make music but it can also be quite rigid and inflexible at times.

It's been through a whole bunch of changes as I learned more about programming, live coding, and music as a whole. The syntax has changed a bit and it's become less "hacky" over the years, but through feedback from the community it's grown and improved too. There are some things I wish I had done differently and want to change but don't have the energy to do just yet. I still think it fits its original purpose as a more simplistic approach to interactive programming of music, but I have begun to understand the importance of the manipulation of events in time and I think that's where FoxDot is probably heading a little bit. It started as just a little personal experiment with a plain white text box like this:

[image]

But now I see people from all around the world using it who wanted to make music the same way I did. I love seeing all the different approaches people have to live coding and how personal some of these can be - so I'm looking forward to seeing the other posts in this thread!

5 Likes

I've created a livecoding system for 3D printers (Marlin-based) to explore how we can livecode physical objects (not just graphics and sound). I've just created some installation docs and examples; I'd love it if people started using it a bit more, now that it's pretty stable: LivePrinter

You can see some experiments (including printing in air) over here

11 Likes

I have to say I really love this project, taking 3D printing as something which I think has a reputation for being slow, wasteful, limiting and disconnected from material, and making it direct, efficient, expressive and all about experimenting with material in space... and the live code approach works so well. It's also nice because it's messy, weird, unexpected, but probably has huge practical use...

1 Like

I work on a family of systems that are typically browser-based and use .js as the end-user language. These are all being folded into a new version of https://gibber.cc, which I'll hopefully be getting out the door towards the end of this summer.

The original goals of Gibber were to use a general-purpose programming language for creative coding, while providing useful abstractions that applied to both the audio and visual domains. I also did a lot of work exploring browser-based/networked coding (asynchronous / synchronous editing, shared clock, shared editors, remote code execution etc.) although a lot of these features broke when I migrated gibber's server and I've somehow never quite got around to fixing them :frowning:

In the next version of Gibber, there will still be an emphasis on a/v coding (I've created a number of new libraries / systems towards this goal, most recently a library/environment for live coding ray marchers http://charlieroberts.github.io/marching/playground/) but I'm hoping to add more support for mini-languages and other non-JS features... starting with support for the Tidal mini-language.

1 Like

I have a livecoding system called SoundB0ard, which is at https://github.com/sideb0ard/SBShell

I had been aware of live coding for years, from seeing Alex perform with perl back in the early 2000s and was also aware of the development of Tidal, which always had me curious about writing my own music software.

I started dabbling in programming audio events in ~2014, working with node.js and synchronizing clients via RabbitMQ (https://github.com/sideb0ard/Codetraxx); then, closer to SBsh, I had an early REPL written in Go which could combine sine waves (https://github.com/sideb0ard/CMDSine).
In 2016 I started a job which required me to work with C/C++, and that's when I started working on SBsh, partially to have a project to learn C.

I've now been working on it quite consistently since then, and it's quite fully featured - it's a command-line REPL, modelled after a Unix shell, with a couple of metaphors from there - you can type 'ps' to see the running processes (instruments and algorithms), you can use 'ls' to list the sample directories, and you can cp patterns from one sound generator to the next.

From the command line you can launch several kinds of sound generators - two kinds of drum machines, one which sequences samples and one which sequences a couple of oscillators; three synths - one subtractive synth based on a Minimoog, one FM synth based on a DX100, and a sample-based synth. The actual sound generation algorithms for the synth designs were all learned from reading Will Pirkle's books - http://www.willpirkle.com/about/books/ ; and I have a sample looper / granulator which is inspired by reading Curtis Roads' Microsound book, and also lots of tips from this Robert Henke article about his Granulator II - http://monolake.de/technology/granulator.html

Aside from sound generators, I have pattern generators, which are different sources for 16-step rhythms - I have something called Markov generators, which are based on probabilities of common patterns, and I have generators based on bitshift operations and Euclidean rhythms. These pattern generators can be applied to the sound generators.
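
For anyone unfamiliar with Euclidean rhythms, the idea is just to spread a number of hits as evenly as possible across the 16 steps - here's a minimal sketch (illustrative only, not SoundB0ard's actual generator):

/* Minimal Euclidean pattern sketch (not SoundB0ard's code): spread
 * `hits` onsets as evenly as possible over `steps` slots using a
 * Bresenham-style rule. */
#include <stdio.h>

static void euclid(int hits, int steps, char *out) {
    for (int i = 0; i < steps; i++)
        out[i] = ((i * hits) % steps) < hits ? 'x' : '.';
    out[steps] = '\0';
}

int main(void) {
    char pattern[17];
    euclid(5, 16, pattern);   /* five hits spread over a 16-step bar */
    printf("%s\n", pattern);  /* prints: x...x..x..x..x.. */
    return 0;
}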

The third building block is event generators. These are time-based algorithms, such as "every 4 16ths do such and such" or "over 2 bars osc back and forth between these two vals". Every parameter of every generator can be programmed, so you can change any aspect of the sound, pattern or event generators.

I discovered Ableton Link about a year into the development of SoundB0ard and wanted to use the library to be able to synchronize with others. It took a bit of effort, but I actually ended up replacing my own timing engine with Ableton Link, so the integration is now solid, and I can jam with others using copies of SoundB0ard or Ableton.

Finally you can arrange all of these generators into scenes, which allows a bit more compositional control.

Future plans. I feel like I've learned C pretty well over the past few years, and it's now time to embrace the full power of modern C++14/17/20 standards. I'm particularly excited about using ranges. As much as I enjoy the bash-ism of the shell, I have come to appreciate the more editor-focused aspect of other live coding systems, and plan to implement vim integration. I feel like I've only scratched the surface of writing synths, and would like to explore that area further. I do also like the terseness of the functional approach, which I think I may copy to some extent too; I plan on writing my own mini Lisp DSL, based quite heavily on having read http://www.buildyourownlisp.com/

So many areas to research - this project is literally my allotment, where I go to relax and play with new ideas!

4 Likes

Thanks Alex! Also, I realised that you've been livecoding printers (of a different sort) too... and the weaving is of course a related inspiration.

I think it's at a point where I can start demonstrating practical use, but as always there's a ways to go.

I made and use a small system for live-coding audio in the C programming language: https://mathr.co.uk/clive

I originally started making music with trackers like OctaMED, then moved to Pure-data, but got frustrated by its limitations when live-coding (in particular inserting objects in the middle of a DSP chain has been historically difficult).

Clive (named after both "C, live" and a misnomer from primary school days - I pronounce Clive with a K) grew out of the desire for several things:

  • two-phase edit/commit cycle: make many changes and only make them "live" all at once when satisfied

  • edit audio DSP processing code down to the sample level, allowing manipulation of synthesis and feedback processes

  • realtime-safe compilation/hotswapping (no clicks/dropouts)

It uses a "whole program" compilation model, which can make compilation have quite high latency once the code gets more complex (it may also prevent collaborative editing). Press save and the code is recompiled; when that finishes, the code is hotswapped into the engine.
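
For anyone who hasn't built something like this, the rough shape of the recompile-and-hotswap cycle looks something like the sketch below - simplified and illustrative rather than Clive's actual code (the file names, the dsp() signature and the single-function swap are assumptions for the example):

/* Sketch of a recompile-and-hotswap cycle (illustrative, not Clive's code).
 * Assumes a file live.c that defines:  float dsp(double t);
 * A real system compiles each version to a uniquely named .so so that
 * dlopen() actually loads fresh code, and keeps the audio thread free of
 * locks - here the swap is a single atomic pointer store. */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef float (*dsp_fn)(double t);   /* one sample per call */

static dsp_fn current_dsp;           /* read by the audio callback */

static int reload(const char *src, const char *so) {
    char cmd[256];
    snprintf(cmd, sizeof cmd, "cc -O2 -shared -fPIC %s -o %s", src, so);
    if (system(cmd) != 0) return -1;                 /* keep old code on error */
    void *handle = dlopen(so, RTLD_NOW | RTLD_LOCAL);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return -1; }
    dsp_fn fresh = (dsp_fn)dlsym(handle, "dsp");
    if (!fresh) return -1;
    __atomic_store_n(&current_dsp, fresh, __ATOMIC_RELEASE);  /* the hotswap */
    return 0;
}

int main(void) {
    if (reload("live.c", "./live.so") == 0)          /* e.g. triggered on save */
        for (double t = 0.0; t < 1.0; t += 0.25)     /* stand-in for the audio loop */
            printf("t=%.2f -> %f\n", t, current_dsp(t));
    return 0;
}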

It's much more suited to experimentation with signal-based synthesis than patterns/sequencing/samples; maybe it could be a nice idea to add OSC support and combine with an additional event-based environment. I also want to try out C++, to see if I can get something like SuperCollider3's multi-channel expansion without compilation time exploding.

Linux only, with some early work-in-progress / proof-of-concept for cross-compiling from Linux desktop to Raspberry Pi and Bela (only audio so far; other I/O remains to do).

3 Likes

For the past year I have been working on a little Haskell library called TimeLines, bringing together Tidal's Functional-Reactive approach towards live coding (in) time and SuperCollider's idea of controlling synthesis parameters, in parallel, over time.

In a way it's very similar to @gabochi's rampcode, in that everything is a direct result of a function of time, only it sits one conceptual layer higher: instead of the actual samples that get sent to the DAC, it's the parameters of synthesis processes that are being controlled by these continuous-domain signals, much like in a modular synthesizer. For now those processes are SuperCollider synths, but it's possible to also control other MIDI-, OSC-, or CV-capable software and hardware, acting as a sort of master sequencer.

TimeLines was born out of a desire for three things:

  1. Complete modularity: Everything is either a signal or a function from and to signals (most of which are also normalised). Synths can be freely patched between them, and can read the signals of other synths' parameters. Since signals are not an arbitrary data type, but rather a simple wrapper around numerical algebraic functions, the user can choose between using mathematical notation (100 * (1 + 0.1 * sin (2*pi*t))**2), more idiomatic Haskell syntax (mul 100 $ exp 2 $ range 0.9 1.1 $ sin $ 2*pi*t * fast 3 beat), or any mix of the two (there is a small illustration of the idea after this list).
  2. Low-level synthesis control: Since every parameter can and has to be explicitly controlled, the user is encouraged (or forced?) to think about and engage with what's happening to the sound. This can be a double-edged sword, as there are more things to worry about and pay attention to, but there is no limit to how many layers of abstraction can be built on top of the core framework. Sure, having to explicitly control 4 parameters to synthesize a kick from scratch is not ideal for live performance, but there's no reason why that can't be hidden behind a kick atk rel function. The idea is to provide a low-level scaffold that can allow each user to build their own little framework for how they want to compose, perform, and reason about music. TimeLines aims to be as versatile and customisable as possible, being equally suitable for grid-based techno and free-form experiments.
  3. Easy navigation of time: There are two main modes at the moment, finite and infinite sessions. The former is only concerned with a fixed window of time, much like selecting a region of a track in a DAW. That window can either be looped or one-shot triggered, and keyboard controls for that are provided (at the moment just in Emacs). This is suitable for studio composition and production, as it ensures that you can jump around a track and things will magically (i.e. because of the stateless functional approach) fall into place. The latter is like most other environments, in that time is constantly and infinitely increasing. This means that signals can have ever-changing behaviours, which may or may not loop, and interesting interference patterns may be observed.
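
To make the "everything is a function of time" idea concrete, here is a tiny sketch in plain C rather than TimeLines' Haskell (the parameter names are made up for the example): each synthesis parameter is just a pure function of t, so looping, scrubbing or jumping around a session only means evaluating the same functions at different times.

/* Illustrative only - TimeLines itself is a Haskell library; this just
 * shows the underlying idea of parameters as pure functions of time. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double freq(double t)   { return 100.0 * pow(1.0 + 0.1 * sin(2.0 * M_PI * t), 2.0); }
static double cutoff(double t) { return 500.0 + 400.0 * sin(2.0 * M_PI * t / 4.0); }  /* hypothetical second parameter */

int main(void) {
    /* "scrubbing the timeline" is just evaluating at a different t */
    for (double t = 0.0; t <= 2.0; t += 0.5)
        printf("t=%.1f  freq=%8.2f  cutoff=%7.2f\n", t, freq(t), cutoff(t));
    return 0;
}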

The project is still in a relatively early stage and there's tons of room for expansion, so any feedback and/or contributions would be more than welcome! In particular, I'm hoping to add these in the near future:

  1. Support for playing samples, either by manually scrubbing or triggering.
  2. Real-time evaluation and communication of a signal's values through OSC, where each signal can have its own sampling rate (from manual-trigger-only to audio rate). At the moment these values are sampled at fixed intervals and written to .w64 files, for SuperCollider to index through at the appropriate rate. This would simplify SynthDefs and make it much easier to control other software and devices.
  3. Support for using external OSC or MIDI CC to control signals in real time (the above would help decrease reaction latency).
  4. Optimisation of the user-defined algebraic expressions by using an intermediate tree representation. Instead of add (mul 0 $ sqr $ 2*pi* slow 2 bar) $ fromList [4, 4, 4, 4] $ mod1 $ t/10 evaluating all operations, it could be examined and simplified to a constant 4. Some progress on this is already being made in the wip branch of the repo.
  5. Attaching various metadata to signals, which can document various properties such as range, type (constant, step-like, sinusoidal, linear, exponential etc.), whether it's looping or not, and more.
4 Likes

I am currently making a new tool specifically for my choreographic work. Basically I want a language that is more akin to how I would talk to a dancer in the studio during the creation/rehearsal process. The idea is that this will be easier for live coding a performance score which a dancer will interpret (watch my previous work to see examples - people seem to not understand I mean dance, not dance music).

The current output is photographic images within a web browser, but I aim to develop this to be used with video and possibly motion capture. Tagging movement 'samples' has become a very tedious task in this process. I've employed machine learning and undergraduate research assistants and the tagging process is still ongoing. Sound and music people have it way easier.

The language itself currently has three elements - the movement, timing, and phrasing. The movement is the photos that have been tagged with descriptive terms such as lunge, reach, plie, etc. The phrasing is dance compositional terminology such as retrograde or accumulation. The timing aspect I am still figuring out: whether the timing of a loop or of an individual image will determine the speed or rhythm. I am also trying to figure out how to differentiate between something like a hold and a sustain with still images. We will see soon!

4 Likes

I've been working on node-red-music (NRM) for the last couple of years. I started live coding with gibber, and I've looked at sonic-pi and tidal a bit. NRM is graphical and has good collaboration and interoperability. This 'manifesto' was written directly to answer Alex's question about driving ideas and principles:

NRM has the following aims:

  • To be accessible. It has been tried out with success in quite a few UK schools, both secondary (11-) and primary (-11).
  • To be transferable, both in the tech and musical domains. Learning
    from NRM should be able to translate out of its own context. It sits
    on top of node-red, which itself sits on top of JavaScript. The
    basic syntax is graphical, but digging deeper the syntax relies on
    JSON. Snippets of pure JavaScript can be used, but are not
    necessary. From the musical perspective it tries to use standard
    musical notation e.g. scales start at 1, not 0. There are excellent
    domain-specific languages out there e.g. tidal, sonic-pi, but we
    emphasise transferability above brevity.
  • To encourage collaboration. It is easy to synchronise across
    different instances. Many people see tech as an isolating and
    uncreative domain; we hope to address that misconception.
  • To inter-work with other musical systems. Live sampling and replay is
    supported, and it is also easy to interact with other
    programs/systems via OSC, MIDI, WebSockets, UDP or whatever.
  • To support alternative interfaces. Node-red is targeted at IoT
    applications. Using this support, it is easy to control music
    via virtual interfaces such as Twitter or other web
    services. Physical devices can be controlled by, or used to control,
    NRM code, and many gadgets are supported via
    dedicated node types.
  • To be open source. It builds on the work of others who have made their work freely available, and the node-red community is constantly building new ways of interacting. We often overlook the positive societal benefits of the open source movement, and the model it offers for human progress through sharing and collaboration.
2 Likes

2 posts were split to a new topic: Re-inventing music environments

Well... at this point it's mostly abandoned and so maybe belongs in the other thread, but like @claude I was making a system for live coding in C: https://github.com/ebuswell/livec. I kept being unsatisfied by other environments, since I wanted the ability to create and try new synthesis algorithms, not just tweak existing ones. I've since realized that I'm much more interested in creating languages for liveness, and the notion of liveness in general, than I am in doing livecoding as a musician. I still have aspirations, and I study and build languages in other non-live ways, so I try to keep up with this community. But probably all you can expect from me is barely working prototypes :wink:.

The live coding shell was pretty simple: a file was watched, and when it changed it was compiled into a dynamic library and linked into the symbol table at runtime. The shell program would run a new thread for int main() in the file. Any functions were (somewhat) automatically wrapped with dispatch functions, so that the new function would be called, but currently executing old functions would continue unchanged. The more interesting bit turns out to have been all the support structures I ended up building into it: atomickit, a library for atomic operations; libwitch, a library for currying functions and being able to actually copy functions in memory and create dispatch functions and such; and sonicmaths, a library of common synthesis routines and patterns. Those are all on my GitHub, too, if anyone is interested in nabbing some underlying C structure for another language.

1 Like

Gonna do some thread necromancy here. I built Cybin because I was dissatisfied with the limitations of existing music programming environments and frameworks.

Most of the popular offerings are very live-oriented and rely on SuperCollider or some other client-server architecture where the code that represents the structure of the music is separated from the code responsible for making sound in a way that discourages users from writing code that can't be executed and rendered in real time.

Cybin was meant to provide an aggressively simple and flexible system to write music where
live-coding and offline composition/rendering are both first-class citizens. Ideally, it would provide all the necessary tools to empower users in this respect, but without forcing them to use any of those tools or otherwise prescribing specific methodologies or ideologies.

Unfortunately, around the same time I was writing Cybin I was getting very into visuals, and decided to extend Cybin's philosophies to graphics.

Adding graphics capabilities put me in dependency hell, and Cybin ended up drifting away from simplicity as I attempted to support 3D graphics and offline visual rendering using ffmpeg syscalls and sketchy OpenGL lib usage.

I eventually realized that graphical capabilities were outside of the scope of what I was trying to do, and that they were seriously hurting performance without adding anything that couldn't be done with existing tools.

I decided to remove all dependencies but libsndfile and libsndio, and then switched the audio system from libsndio to JACK, which drastically improved reliability, performance, simplicity, and modularity.

I just switched the repo over to my rewrite branch that will eventually become master. Cybin has MIDI support for the first time, and OSC support is coming.

I'm going to be improving the C++ codebase so that Cybin can optionally be built without external dependencies, and I'm starting a video series where I rebuild the standard library one effect/synth at a time in a way that is much simpler and in line with Cybin's philosophy.

I built Cybin for myself, for my own purposes, and in that respect I think it has always been a success. I've got two tracks on a yet-to-be-named-or-released album that I couldn't have written without it.

As a tool for others, it has been by all reasonable metrics a failure, as I'm unaware of anyone actively using it for anything. This isn't surprising to me, as I haven't done much to foster a community around it and it's too often been a playground for flights of fancy as I figure out its strengths and weaknesses. I don't see this as much of a problem overall, but I hope to resolve it as a side-effect of using it publicly as a teaching tool and testbed for ideas.

I don't think I've ever been more pleased with a technical/creative endeavor of my own than this, and I've learned way more from building Cybin than I could have predicted. I never expected to build a live coding system, and even when I was building Cybin I was convinced that somewhere along the line I would hit a roadblock that would stop the project in its tracks.

...but I didn't. And now I have a useful tool that I've always wanted. I don't think I could, or more importantly would, have made it without the support and inspiration of this community.

2 Likes

I was interested in making synthesis algorithms too, rather than just changing parameters, and that's how I made Rampcode! Feel free to try it and give me some feedback - that would be great. There is a guide, examples and a recent Atom plug-in.

Practicing with JavaScript and ES6, I made a livecoding framework/sequencer with JavaScript objects (https://github.com/axelkramble/Cuack/). I'm highly motivated by TidalCycles. It's still in beta, but I hope in the future it could be collaborative.
Also, I recently made a regex framework for text parsing and looping that sends OSC messages: https://github.com/axelkramble/tinalla

I thought it would be worth sharing this project by David Alexander of toyboxaudio.com here. I introduced David to live coding, advised on how he could integrate it into his practice as a Reaktor developer, helped him get it accepted as a poster at Audio Mostly 2019, and helped launch it as an open source library.

The project is called LiveCore, and essentially it adds live coding features to Reaktor Core, which is the low-level dataflow language in the Reaktor platform. Here's a quick demo video:

Advantages:

  • Reaktor is a highly robust, fine-tuned engine that sounds awesome. It's basically impossible to crash inside the Reaktor Core context.
  • LiveCore maintains state every clock tick (!) and restores it when the graph recompiles, which happens pretty much instantaneously. A consequence of this is that undo/redo = recompile, leading to what we've been calling "performative recompilation".
  • There is no meaningful distinction between pattern and sound. So your mixer can feedback directly into your sequencer or whatever. I love this.
  • Because program state is stored just nearby, we've found that you can actually consider cross-patching between your patch and the state/memory itself. "Memory modulation"?
  • You can easily patch in controllers/MIDI/OSC and host it in a DAW.

Disadvantages:

  • Reaktor itself is closed source. We are considering how to take the lessons learned from this into other / new systems. But still, secret sauce magical DSP goodness will remain elusive.
  • Saving program state has a high overhead which limits graph size (this is also a creative constraint, depending on your perspective). You can divide the state update rate down if you want, with consequences.

Overall the tradeoffs explored by this system are extremely rewarding in terms of sonic output. There's also something aesthetically appealing about low level + graphical dataflow + live system that taps into the embodiment of code discussion.

LiveCore source and docs: https://github.com/freeeco/livecore

Long post ahead! Most interesting part at the bottom :stuck_out_tongue:

I've been researching different interaction models and uses for livecoding. First I built a rough block-based UI dataflow engine for controlling shader uniforms from a MIDI control surface and VJing to live MIDI streams from drum machines etc.
(there was a link to clips of the results and some hints of the UI here, but I'm only allowed two links that I need further down :smile:)

After many fruitless attempts at building a node-based language with keyboard controls, I stumbled upon a different idea and context: an "immediate mode livecoding environment" for scripting direct-manipulation tools in CAD and vector tools. I built a rough proof of concept in love2d and showed it around a bit:

  • 5 minute presentation: https://youtu.be/JXgZJosmme4?t=34m43s
  • PoC video (text heavy): https://youtu.be/3_gDRfFtPEQ
  • PoC video (voiceover but very early): https://youtu.be/zlG01j462A4

The main idea here was to write a single piece of code that is continuously executed, with state management being abstracted away using Immediate-Mode UI techniques. It worked pretty well and I moved on from the PoC (https://github.com/s-ol/watch-cad) to implementing the concept as an Inkscape tool. It's now half working, but I put the project on hold because I can't find a good purpose for this tool, and developing a tool without practice and an aim is hard (and pointless). I still think that the IM-scripting idea is powerful, but I haven't found the right place for it. I would be very interested to know if anyone can think of something here.
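
For anyone who hasn't come across immediate-mode state handling, here is a toy sketch of the idea in C (my watch-cad PoC is love2d/Lua; this is just an illustration with made-up names): the "user script" is re-run in full on every frame, and a persist() helper keyed by a stable name hides the state bookkeeping, so the script reads as if it were stateless.

/* Toy immediate-mode state sketch (illustrative; not the watch-cad code). */
#include <stdio.h>
#include <string.h>

#define MAX_SLOTS 32
static struct { char key[32]; double value; int used; } slots[MAX_SLOTS];

/* return a persistent value for `key`, creating it with `initial` on first use */
static double *persist(const char *key, double initial) {
    for (int i = 0; i < MAX_SLOTS; i++) {
        if (slots[i].used && strcmp(slots[i].key, key) == 0)
            return &slots[i].value;
        if (!slots[i].used) {
            slots[i].used = 1;
            snprintf(slots[i].key, sizeof slots[i].key, "%s", key);
            slots[i].value = initial;
            return &slots[i].value;
        }
    }
    return NULL;
}

/* the "user script": executed in full every frame, yet accumulates state */
static void user_script(void) {
    double *x = persist("circle.x", 0.0);
    *x += 1.0;                                /* nudge the circle each frame */
    printf("draw circle at x=%.0f\n", *x);
}

int main(void) {
    for (int frame = 0; frame < 3; frame++)   /* stand-in for the host's render loop */
        user_script();
    return 0;
}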

Finally, some weeks ago I revisited my shader-livecoding aspirations. I split off a hot-reloading shader viewer with OSC controls for the uniforms, and then decided to focus on a new tool purely for the logic/control layer. After some conversations with a friend and a SuperCollider workshop I found a direction: a (new?) text-based interaction mode between the user and the livecoding system.

I'm a heavy keyboard user and have always appreciated the expressivity of text-based livecoding. I also use an obscure editor (kakoune), so I'm often unhappy with the "standard" text editing capabilities in integrated systems. On the other hand, I often prefer dataflow logic and some other aspects usually found on the visual side of the spectrum. A particular thing I noticed about the Tidal, SuperCollider etc. interaction is that the code left in the editor is completely separate from what is being executed, and the state of the program and performance are completely invisible to the user. Evaluating lines generally causes long-running processes in the background that are often completely inaccessible afterwards, unless extra care is taken to label them initially, or by killing everything.

I came up with a different approach, taking inspiration from tools like ORCA, where the program and its execution environment are coalesced into one. The idea is pretty simple: instead of evaluating strings of code that are edited in a file, a complete file or buffer is edited and evaluated. All the expressions in the buffer execute continuously while they are present in the source code, and when an expression is removed, it stops executing. The syntax tree can be freely changed (expressions added, moved, removed) in any way, and all expressions that are unchanged will continue executing, while changed expressions have the chance of updating gracefully. This is enabled by tagging every expression with a number representing its identity in order to correlate it across evaluations. Whenever the file is evaluated, all new expressions are instantiated and tagged, and the source code is modified accordingly and written back to disk. This way the system is compatible with any text editor (that can reload files).
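
To show the tagging idea in miniature, here is a deliberately tiny sketch in C (a toy, not my actual implementation, which works on a real syntax tree): each top-level expression carries a numeric id; tagged expressions keep their running instance across evaluations, untagged ones are new and get a fresh id, and the rewritten buffer is what would be saved back to disk.

/* Toy illustration of expression tagging (not the real implementation):
 * one line == one expression, and "[n] code" means it is already tagged. */
#include <stdio.h>
#include <string.h>

static int next_id = 1;

static void evaluate(char exprs[][64], int n) {
    for (int i = 0; i < n; i++) {
        int id;
        if (sscanf(exprs[i], "[%d]", &id) == 1) {
            printf("keep   #%d: %s\n", id, exprs[i]);     /* instance keeps running */
        } else {
            char tagged[64];
            snprintf(tagged, sizeof tagged, "[%d] %s", next_id, exprs[i]);
            strcpy(exprs[i], tagged);                     /* rewrite the buffer */
            printf("create #%d: %s\n", next_id++, exprs[i]);
        }
    }
    /* tagged expressions that no longer appear in the buffer would be stopped here */
}

int main(void) {
    char buffer[2][64] = { "osc 440", "lfo 0.5" };
    evaluate(buffer, 2);            /* first evaluation: both expressions get tagged   */
    strcpy(buffer[1], "lfo 2");     /* the user replaces one expression with a new one */
    evaluate(buffer, 2);            /* #1 keeps running, the new expression gets #3    */
    return 0;
}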

You can see a demo from two days ago here:

Source: https://github.com/s-ol/alivecoding

My implementation is in really early stages, but everything I mentioned above works perfectly and it feels like a really promising approach to me. I'd love to hear what you think of this and whether anyone has been doing or seen something similar :smile:

1 Like

By way of introduction... since mid-2014 I've been working on a pattern language interfacing with my own sequencing support objects in SuperCollider. One benefit of that approach is that it gives me access to some compositional algorithms I had developed years ago, outside of a live coding context. (In particular, I have a harmony processor that allows for richer textures and some feeling of harmonic movement – and I just added microtuning to that, a couple of months ago.)

It's all implemented in SuperCollider. Code strings submitted to the interpreter in the normal way go through the interpreter's preprocessor, which rewrites my syntax into SC code.

In general terms, it seeks to bridge "notation as code" and "algorithms as code." Notation as code is mainly inspired by ixilang and also has perhaps some similarities to FoxDot's sample player (though I wasn't aware of FoxDot when I started this project).

/kick = "oooo";
/snare = " - -";
/hh = ".-.-.-.-";

There is also pitch notation, with octave, accidental and articulation modifiers:

/changeKey.(\cdor);
/bass = "1  1.|1.1.3'~6|x4''. 9~|7'x3'.4.";  // (dividers for clarity)

[audio: acid-bass]

The paradigm, then, is placing events at specific times within the bar (time cycles... though they can be different durations). The 4''. above lives at time 2.25 and has duration 0.5.

Generators are functions that modify the current list of timed events. Generators themselves have an onset time and a duration and operate within that boundary.

// 1. Put one open hat on an off-beat eighth (fork offsets the ins() grid)
// 2. Fill the remaining eighths with closed hats
// 3. Sprinkle in a few 16ths (the last ins must go to 16ths because all of the 8ths are full)
/hh = "\fork(" \ins("-", 1, 1)|||")::\ins(".", 8, 0.5)::\ins(".", 1..3, 0.25)";

Generators don't have to be metric – I've gotten really nice results lately with a nonmetric rhythm generator.

I can't say it's spectacularly innovative – it doesn't solve any "big problems" of HCI (it replicates the problem s-ol cites in the last post, that stray code is left in the window), and I'd guess Tidal's way of combining functions is more general. But it's successful in that I wanted an environment that I could open in a matter of seconds and make something listenable with a minimum of preparation, without being limited to a single style (i.e., expressive and improvisational). E.g., for my last video the other day, I took 15-20 minutes to test a couple of ideas, but otherwise it was just: turn it on and go. So it's getting there.

No screencast here, but this is me playing my system and Lastboss (Tate McNeil) playing his iPad-based system, synced up by Ableton Link. Very little rehearsal for this, but we could respond to each other.

hjh