Synchronizing with multiple languages?

I guess I’m not the first to work on this, but we’re currently trying to collaborate in an environment with multiple languages (TidalCycles, plain sclang, my own language, etc.).

The idea would be to use an OSC clock that simply sends a ping on every beat over UDP broadcast (with Pure Data, for example), but that seems harder than you would think. Some clients seem to receive something, others don’t.
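For the record, a minimal version of such a broadcast pinger could look like the sketch below (Python; the `/beat` address and port 57130 are arbitrary choices of mine, not any standard). The naive sleep loop also hints at why this is harder than it looks: sleep jitter accumulates, and the arrival time of each packet is non-deterministic.

```python
import socket
import struct
import time

def osc_message(address, *ints):
    """Encode a minimal OSC message whose arguments are all int32."""
    def pad(b):
        # OSC strings are null-terminated, then padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    data = pad(address.encode("ascii"))
    data += pad(("," + "i" * len(ints)).encode("ascii"))  # type tag string
    for v in ints:
        data += struct.pack(">i", v)  # big-endian int32, per the OSC spec
    return data

def broadcast_beats(bpm=120, port=57130, count=16):
    """Send one /beat ping per beat over UDP broadcast (port is arbitrary)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    for n in range(count):
        sock.sendto(osc_message("/beat", n), ("255.255.255.255", port))
        time.sleep(60.0 / bpm)  # naive: sleep jitter accumulates as drift
```

Receivers would bind a UDP socket to the same port and decode the beat number from the last four bytes; whether broadcast packets actually arrive depends a lot on the network and OS configuration, which matches the mixed results described above.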

I know there were attempts like TopClock a while ago, but I’m not sure what became of those.

Anybody got experience with that?


Over in the TidalCycles forum someone wrote something about PTPd, but I’d still be missing the step of how to actually sync the music to that.

I moved the topic to “making liveness”, it seems to fit better, hope that’s OK!

Yes, there have been a few efforts. TopClock had a lot of eyes on it but never really went anywhere.

PTPd implements a general protocol (PTP) for syncing system clocks on a LAN (similar to ntpd, but that’s aimed at WANs). That’s part of the battle, but you still need a protocol for sharing tempo relative to that synced clock.

Various live coding systems have their own protocol for syncing.

Most promising for pan-system sync are EspGrid from @dktr0 and Link from Ableton.

EspGrid is designed with live coding in mind, is free/open source, and is easy to implement. There’s an ongoing but slow effort to get it working reliably cross-platform. It feels like it’s nearly there, but a few bugs stand in the way of it being fully usable in the field (please do correct me, David).

Link has a lot of commercial support, and one of the implementations (C++) is available under a GPL license, but it’s not really a free/open source project: they don’t take patches under the GPL, so that they can also maintain a proprietary license. Nonetheless, it works very well, is getting core SuperCollider support, and makes collaborating with a wide range of software straightforward.

There are also systems like the multi-user Troop and Extramuros editors (I think both work with FoxDot, Tidal, SuperCollider, and Sonic Pi), where all the code gets run on one machine, sidestepping the need to sync at all, as long as you’re all running the same language.

Hmm, OK, I’d say multi-platform, multi-language support would be mandatory. I’ve tried hacking up Troop for multi-language capabilities, but it proved fairly awkward to use, and it only really solved the “multiple coders on one screen” issue rather than the synchronization issue.

I’ve only had a quick glimpse at EspGrid, but what puzzled me is its pull-based nature, so I have the feeling it’s a solution to a different problem than the one I’m thinking about. Maybe what I’m talking about is alignment rather than synchronization.

The way I sync things internally in my system is that the entity to be aligned (say, a bassline) just waits until the entity it’s aligned to (say, the beat) produces an event, and then starts, instead of starting to play immediately when I execute the code.

A synchronization system as I’d envision it would rather have a push-based metronome, so that I (or any player, in fact) could say: “don’t execute the code now, wait until the next metronome beat comes in”.
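A sketch of that “wait for the next beat” idea, assuming some listener thread calls tick() on every incoming metronome ping (all names here are hypothetical, not from any existing system):

```python
import threading

class Metronome:
    """Rendezvous point: wait_for_beat() blocks the caller until the
    next tick() arrives, so code execution can be deferred to a beat."""

    def __init__(self):
        self._cond = threading.Condition()
        self._count = 0  # number of beats seen so far

    def tick(self):
        """Called by the (assumed) network/OSC listener on every ping."""
        with self._cond:
            self._count += 1
            self._cond.notify_all()

    def wait_for_beat(self):
        """Block until one more beat than currently seen has arrived."""
        with self._cond:
            seen = self._count
            self._cond.wait_for(lambda: self._count > seen)
```

Usage would be along the lines of `metronome.wait_for_beat(); start_bassline()`: the evaluated code is held back until the shared pulse arrives, rather than running immediately.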

The things EspGrid seems to focus on, like querying a tempo or a timestamp, are of lesser importance to me. We (the local TOPLAP node in BCN) usually collaborate in the same room rather than remotely. Sharing a tempo is not much of an issue in that context, because you can simply agree on one.

Link seems to have some alignment capabilities, but I haven’t had time to look into it so far. Also, I’d prefer an open solution any time.

So, that being said, I guess there’s no out-of-the-box solution for the alignment problem right now?

Well, it’s a bit embarrassing that such a system doesn’t exist, and part of the problem seems to be that everyone has a different idea of what time is.

I’m sure EspGrid is capable of what you describe, however.

I guess to synchronise you need to know the rate (tempo), a shared reference clock (e.g. PTPd, or EspGrid, which does this itself), and enough info to work out where you are in some metric cycle: probably a timestamp for when the tempo last changed, and what the beat number was then. I think Link also has the notion of a ‘quantum’, a metrical duration in beats to align to.

I think EspGrid is the future, but we need to spend some time on making it work reliably cross-platform.

There we have the problem already: I don’t really work with cyclical time :sweat_smile:

EDIT: Nonetheless, you’re probably right that the EspGrid approach could work for alignment, now that I think about it. I’ll try to compile it.

Yep I think the clue is in the name espgrid :slight_smile:

Hi all, this seems to be quite a long-standing problem for all performing groups, and, well, there is no complete answer AFAIK.

EspGrid is very robust, and I would love to implement it in some usable scenario (we did try it, no luck). What I found most problematic is adjusting SuperCollider’s internal server to align with an external clock… the SC server is a beast, and this is overall not a trivial task.

My experience is that the deeper one gets into the general concepts of synchronization between computers (and all the protocols), the messier it gets.

I am posting what we tried and partially successfully launched live. Our setup was basically 3x SuperCollider on two Windows machines plus one Linux machine. The system, called NetProxy, works in two ways:

  • A) Remote play, where everyone connected to a group receives and sends definitions to the others, allowing a sort of live coding together (even with a concept of a shared document).

  • B) Local play, where it keeps the tempos of multiple SuperCollider ProxySpaces in sync (multiple computers in one place / performance scenario).

We have not used it for some time, and I am not sure if it still works… there is a lack of documentation, and we have pretty much abandoned it.

But it is still an open question for us if and how to use sync between players / OSes / server software.



(edit: pasted wrong link to sources)

Tidal is again syncable via the Link protocol, using the ‘Carabiner’ bridge.

Link support is also coming in the next version of SuperCollider.

Hi all,

Yaxu and I did some maintenance on EspGrid with Tidal during the past week, and it seems to be holding up so far… Also, re: EspGrid and SC, I think the basic trick is to use the quark we wrote, which is in the usual Quarks repository.


Hi again,

Just thought I’d add a further technical note with reference to the discussion above of “push” versus “pull” based metronomes, in case it is helpful to anyone. Basically, I’m addressing the interrelated questions of why EspGrid uses the representation of metre it does, and why it uses a (mostly) poll/pull system rather than pushing out pulses/beats.

A basic “problem” with sending (pushing) a message (i.e. a metronome pulse) whose interpretation is “do something now” is that the message always takes time to deliver and process. This time can be quite non-deterministic; essentially, the “now” of your pulse is not quite as well defined as one might imagine from the concept “now”. A common solution within the push paradigm is to add a time tag to the pulse/trigger so that things happen at a slightly later but better-synchronized moment. However, with such a solution, the interpretation of the message is no longer “do this now” but rather “do this at this time”: the exact time the message is delivered is no longer pertinent, and we are already moving towards representing the metre rather than “performing” it as an event.

A second “problem” with sending (pushing) such a pulse message is that if you need to do things between the pulses, you need, one way or another, information about the metric grid beyond “this pulse happens at this time”. To get this info in a push-based system, either you track the time between the pulses yourself and infer the information, or you add the information to the pulse message itself. The former case is not without its merits (after all, in some sense it’s what we do when we hear beats), but it’s a bit strange to infer something that is (elsewhere) already known, and it also doesn’t cover edge cases like drastic changes in tempo (which also take us by surprise perceptually). In the latter case, as with the first “problem”, you are again moving towards providing a constant representation of the alignment of the metric grid.

Basically, both compensating for the latency of moving messages around and doing things in between the beats leads towards sharing metre as declarative representations of a temporal grid rather than imperative “do this now” commands.

A third set of considerations motivates the normal way of getting this metre representation from EspGrid, whereby “clients” (i.e. Tidal, the SuperCollider quark, etc.) send a message to poll EspGrid for the metre and then get a response. Since responses are only sent when requested, you don’t have the problem of zombie processes getting messages they will never consume (e.g. when a consuming application is closed or crashes). Also, it is a simple way of giving metre-consuming applications control over when and how they want this information. Repeated polling is a basic strategy against lost packets as well. The messages use only the most basic and universal parts of the OSC spec, which is why, for example, time is represented as two int32s: anything else sacrifices either precision or universal transmissibility (many OSC applications, including some very “fancy” ones, don’t grok int64 in their OSC code).
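To make the two-int32 convention concrete, here is how such a timestamp can be split, packed, and recombined (Python sketch; the function names are mine, and the big-endian packing follows the OSC spec’s int32 encoding):

```python
import struct

def split_time(t):
    """Split a float timestamp in seconds into (seconds, nanoseconds),
    each of which fits in an int32."""
    secs = int(t)
    nanos = int(round((t - secs) * 1e9))
    return secs, nanos

def pack_time(secs, nanos):
    """Pack the pair as two big-endian int32s, as in an OSC message body."""
    return struct.pack(">ii", secs, nanos)

def join_time(secs, nanos):
    """Recombine into float seconds (fine for local math, lossy long-term)."""
    return secs + nanos * 1e-9
```

Nanoseconds never exceed 999,999,999, so they always fit in an int32, and any OSC implementation can decode the pair without int64 support.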

That said, chat and a few other non-tempo things in EspGrid do still use a subscription-based push system instead, at present. Part of me thinks it would be better to move everything over to the “pull” query-and-response system, though, for clarity/consistency.


1 Like

@dktr0 thanks for the update and information!

One thing that’d be really nice to have regarding EspGrid is slightly more detailed documentation on the semantics of the time values returned by espgridd.

I’d like to implement a client for my own system, but I can’t really wrap my head around the meaning of the returned time values. It says they are reference values, but what do they refer to?

Maybe I just have some different preconceptions about time or something, but it would be really helpful to have some clarification, or examples of how to calculate the next point in time at which something should happen…

@parkellipsen I’ll assume you’re mostly asking about /esp/tempo/r - the response you get when you ask EspGrid about the tempo.

That message will always be something like this: /esp/tempo/r [on :: int32] [tempo :: float32] [seconds :: int32] [nanoseconds :: int32] [n :: int32]

The message means that beat n occurred (or occurs) at the time indicated by seconds + nanoseconds (called referenceTime hereafter; when doing the addition, take into account that one value is in seconds and the other in nanoseconds). You can tell when other beats occur with a little math. For example, if beat 0 occurred at time 450 and the tempo is 60 bpm, then beat 1 occurs at time 451, beat 7 at time 457, and so on (currentTimeInBeats = ((currentTimeInSeconds - referenceTime) * (tempo/60)) + n).
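That arithmetic, written out (Python sketch; the function names are mine, and times are float seconds in the same epoch as referenceTime):

```python
import math

def current_beat(now, ref_secs, ref_nanos, tempo_bpm, n):
    """Position in beats at wall-clock time `now`, per /esp/tempo/r."""
    reference_time = ref_secs + ref_nanos * 1e-9  # combine the two int32 fields
    return (now - reference_time) * (tempo_bpm / 60.0) + n

def time_of_beat(beat, ref_secs, ref_nanos, tempo_bpm, n):
    """Inverse: the wall-clock time at which `beat` occurs."""
    reference_time = ref_secs + ref_nanos * 1e-9
    return reference_time + (beat - n) * (60.0 / tempo_bpm)

def next_beat_time(now, ref_secs, ref_nanos, tempo_bpm, n):
    """When the next whole beat at or after `now` happens: the alignment
    point at which deferred code would be executed."""
    beat = math.ceil(current_beat(now, ref_secs, ref_nanos, tempo_bpm, n))
    return time_of_beat(beat, ref_secs, ref_nanos, tempo_bpm, n)
```

With the example above (beat 0 at time 450, 60 bpm), `time_of_beat(7, 450, 0, 60.0, 0)` gives 457.0, and `next_beat_time(450.5, ...)` gives 451.0, which is exactly the “wait until the next beat” alignment question from earlier in the thread.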

The time values in /esp/tempo/r are in the POSIX/1970 epoch (sometimes available as the “system” time in different environments). There are also /esp/tempoCPU/q and /esp/tempoCPU/r, where the only difference is that the time values are against the machine’s monotonic clock; in theory this is less noisy and preferred, but in practice it’s hard to use the monotonic values meaningfully in many live coding environments. If the things you are trying to synchronize are mostly expressed against some other clock (for example, time since process start, which is common in SC, or time since audio callbacks started, which is common in ChucK), then a translation is needed, and there are different ways of doing that. A simple method is to find the difference between the system/POSIX/1970 clock and your clock and periodically update it as they drift.
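A simple version of that translation (Python sketch; this assumes time.time() gives the POSIX/1970 clock and uses time.monotonic() to stand in for “your clock”):

```python
import time

def clock_offset():
    """Sample the difference between the POSIX epoch clock and the local
    clock (here: the monotonic clock). Re-sample periodically, since the
    two clocks drift relative to each other."""
    return time.time() - time.monotonic()

def epoch_to_local(epoch_t, offset):
    """Translate an epoch timestamp (e.g. a referenceTime from
    /esp/tempo/r) into the local clock's domain, using the most
    recent offset sample."""
    return epoch_t - offset
```

The same subtraction works for any local clock (process start time, audio callback time), as long as the offset is refreshed often enough that drift stays below your timing tolerance.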

If you happen to be working in SuperCollider, I strongly recommend using the quark, which already provides a TempoClock subclass (EspClock) that syncs with EspGrid. You could use EspClock in your system and have an SC TempoClock that just works the way it normally does, but with EspGrid. No need to worry about any of the above!


@dktr0 Thanks, that clarifies things …

So beat 0 always happens at the time when espgridd was started (expressed as a Unix timestamp plus nanoseconds).

The time correction, so to speak, happens in the nanoseconds value (which is the only value that changes, unless the tempo is changed).

So basically the correction happens by adjusting the starting point, not the next point…

Unfortunately I have to put in the work myself, as I’m working in Common Lisp and don’t even have a tempo clock in the strict sense…

@parkellipsen Well, beat 0 would be “exactly” when espgridd was started if you only had one instance of espgridd. If you have multiple instances interacting, then beat 0 would be when the first instance was started.

The reference time can and does move in both the seconds and nanoseconds values, because of (a) changing estimates of your clock’s difference from other clocks, and (b) different beat numbers being used as the reference beat (different values of n). Generally speaking, when the tempo changes, the reference beat is a beat close to the point of the change.

Changing the reference point (which doesn’t have to be the “starting point” - anyway, in a metric grid there can be beats before 0…) also changes the next point and all subsequent projected points.

Gosh, I wish I had known about EspGrid over a year ago; it would have saved me dozens of hours on what ended up being a failed project (GitHub - jamshark70/ddwOSCSyncClocks: Simple, minimalistic master-slave clock sync across the network for SuperCollider).

Why it failed was instructive. It uses incoming and outgoing messages, processed in sclang itself, to calculate the difference between “now” locally and “now” on the reference clock. But sclang is non-preemptive, so it may be late processing incoming messages. This was not supposed to affect the reading of “now locally”, but, depending on the platform, it did. Rather a lot.

  • MacOS: Fine, actually, no significant problem.
  • Linux as the slave: measurement of network latency was very small, as expected, but the clock readings included up to 100 ms of jitter, in a very peculiar pattern that nobody could ever explain. With a manual “bias” fudge factor it got acceptable results, but I dislike that it required manual configuration.
  • Windows: I saw poor sync because of misreported audio driver latency (like, really, it was 2019 when I tried this with some students and Windows still couldn’t get latency right???), but worse, I personally saw an unexplained delay in code execution on one student’s machine affect the time measurement, drop bad data into the Kalman filter, and take seconds to recover.

So I’ve had better results with Ableton Link (which runs in its own threads, so sclang activities can’t block it).

For ease of configuration, nothing beats Link.

Because EspGrid’s communication between machines runs in a separate executable, I’m sure it does better than mine, provided that the SC-side OSC messaging doesn’t suffer from the same problems I encountered.

Btw one useful feature of the master-clock approach (impossible with Link) is to run multiple masters with tempo relationships and have clients attach to specific masters, for polymetric sync.

You could sync beats and ignore information about meter… unless even the concept of a beat is too cyclical (but it seems impossible to me to sync anything without some concept of a pulse; even syncing seconds is a 60 bpm pulse).