I've followed live coding tools with interest for the last few years, and as much as I'd like to adopt them in my music making, it's not clear to me what they can add to my production repertoire compared to the existing tools (DAWs, hardware instruments, playing by hand, etc.).
The coding aspect is novel, I'll admit, and something an audience may find interesting, but I've yet to hear any examples of live coded music (or even coded music) that I'd actually want to listen to. They almost always take the form of bog-standard house or techno, which I don't find that enjoyable.
Additionally, the technique is fun for demonstrating how sound synthesis works (like in the OP article), but anything more complex or nuanced is never explored or attempted. Sequencing a nuanced instrumental part (or several) requires a lot of moment-to-moment detail, dynamics, and variation, all of which is tedious to sequence and simply doesn't play to this format's strengths.
So again, I want to integrate this skill into my music production tool set, but aside from the novelty of coding live, it doesn't appear well-suited to making interesting music in real time. And for offline sequencing there are better, more sophisticated tools, like DAWs or trackers.
100% agree.
I think this format of composition is going to encourage a highly repetitive structure in your music. Good programming languages constrain you, preventing the construction of bad programs. Applying that to music effectively gives you quantization along every dimension of composition.
I'm sure it's possible to break out of that, but you're fighting an uphill battle.
> I've followed live coding tools with interest for the last few years, and as much as I'd like to adopt them in my music making, it's not clear to me what they can add to my production repertoire compared to the existing tools (DAWs, hardware instruments, playing by hand, etc.).
Aside from the novelty factor (due to the very different UI/UX) and the idea that you can use generative code to make music (which has become even more interesting in the age of LLMs), I agree.
And even the generative code part I mentioned is a novelty as well; it isn't really practical for someone whose end goal is actually making music (as opposed to someone who is just experimenting with tech, or with how far one can get with a music-as-code UI/UX).
Procedural generation can be useful for finding new musical ideas. It's also essential in genres like ambient and experimental music, where the whole point is to break out of the traditional structures of rhythm and melody. Imagine using cellular automata or physics simulations to trigger notes, key changes, etc. Turing completeness means there are no limits on what you can generate. Some DAWs and VSTs give you a Turing-complete environment, e.g. Bitwig's Grid or Max/MSP, but for someone with a programming background those kinds of visual editors are less intuitive and less productive than writing code.
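To make that concrete, here's a minimal sketch in plain C++ of the cellular-automaton idea: an elementary automaton (Rule 90, an arbitrary choice) advances once per step, and every live cell fires a note from a pentatonic scale. It only prints note events; in a real setup you'd route them to a synth or MIDI output, and the seed, rule, and pitch mapping are all just illustrative.

    #include <array>
    #include <cstdio>

    int main() {
        constexpr int kSteps = 8;
        std::array<int, kSteps> cells {0, 0, 0, 1, 0, 0, 0, 0}; // seed: one live cell
        const int scale[] = {0, 3, 5, 7, 10};  // minor pentatonic intervals
        const int root = 48;                   // MIDI note C3

        for (int tick = 0; tick < 16; ++tick) {
            // Every live cell fires a note on this tick; the cell index picks the pitch.
            for (int i = 0; i < kSteps; ++i)
                if (cells[i] != 0)
                    std::printf("tick %2d: note %d\n",
                                tick, root + 12 * (i / 5) + scale[i % 5]);

            // Advance one generation of Rule 90: each cell becomes the XOR
            // of its neighbours, wrapping at the edges.
            std::array<int, kSteps> next {};
            for (int i = 0; i < kSteps; ++i)
                next[i] = cells[(i + kSteps - 1) % kSteps] ^ cells[(i + 1) % kSteps];
            cells = next;
        }
    }

Even a toy like this produces evolving, self-similar patterns you'd be unlikely to program note by note, which is the appeal.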
Of course, often creativity comes from limitations. I would agree that it's usually not desirable to go full procedural generation, especially when you want to wrangle something into the structure of a song. I think the best approach is a hybrid one, where procedural generation is used to generate certain ideas and sounds, and then those are brought into a more traditional DAW-like environment.
Look into the JUCE framework for building your own tools. I was using Max/MSP for a while, but would always think to myself, "This would be so much easier to accomplish in pure code." So I started building some bespoke VSTs.
There's a learning curve for sure, but it's not too bad once you learn the basics of how audio and MIDI are handled, plus the general JUCE application structure.
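For a feel of what that structure looks like, here's a skeletal sketch of a JUCE plugin: a hypothetical MIDI effect that transposes incoming notes up an octave. The class name and behaviour are invented for illustration, most methods are stubbed out, and you'd still need the CMake wiring (juce_add_plugin and friends) to actually build it as a VST3.

    #include <JuceHeader.h>

    // Hypothetical minimal plugin: passes MIDI through, transposed up an octave.
    class TransposeProcessor : public juce::AudioProcessor {
    public:
        void processBlock (juce::AudioBuffer<float>& audio,
                           juce::MidiBuffer& midi) override {
            audio.clear(); // MIDI-only effect: no audio passes through

            juce::MidiBuffer out;
            for (const auto metadata : midi) {
                auto msg = metadata.getMessage();
                if (msg.isNoteOnOrOff())
                    msg.setNoteNumber (juce::jlimit (0, 127, msg.getNoteNumber() + 12));
                out.addEvent (msg, metadata.samplePosition);
            }
            midi.swapWith (out);
        }

        // The rest is boilerplate JUCE requires of every AudioProcessor.
        const juce::String getName() const override              { return "Transpose"; }
        bool acceptsMidi() const override                        { return true; }
        bool producesMidi() const override                       { return true; }
        double getTailLengthSeconds() const override             { return 0.0; }
        int getNumPrograms() override                            { return 1; }
        int getCurrentProgram() override                         { return 0; }
        void setCurrentProgram (int) override                    {}
        const juce::String getProgramName (int) override         { return {}; }
        void changeProgramName (int, const juce::String&) override {}
        void getStateInformation (juce::MemoryBlock&) override   {}
        void setStateInformation (const void*, int) override     {}
        void prepareToPlay (double, int) override                {}
        void releaseResources() override                         {}
        bool hasEditor() const override                          { return false; }
        juce::AudioProcessorEditor* createEditor() override      { return nullptr; }
    };

    // Entry point the JUCE plugin wrapper looks for.
    juce::AudioProcessor* JUCE_CALLTYPE createPluginFilter() {
        return new TransposeProcessor();
    }

All the real work happens in processBlock, which the host calls once per audio buffer; everything else is plumbing.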
Two tips:
Don't bother with the Projucer; use the CMake example to get going, especially if you don't use Xcode or Visual Studio.
If you're on a Mac, you might need to self-sign the VST. I don't remember the exact process, but it's something I had to do once I got an M4 Mac.
Fair point, and that challenge lies in both the software's abilities and the creator's skills.
If you see it as yet another instrument you have to master, then you can go pretty far. I'm finding myself exploring rhythms and sounds faster than I ever could in a DAW, but at the same time I do find a lot of factors limiting, especially sequencing.
So far I haven't gotten beyond a good-sounding loop, hence the name "loopmaster", and maybe that's the limit. That's why I made a two-deck "dual" mode in the editor, so it can be played as a DJ set, where you don't really need that much progression.
That said, it's quite fun to play with it and experiment with sounds, and whenever you make something you enjoy, you can export a certain length and use it as a track in your mix.
My goal is certainly to be able to create full-length tracks with the nuance and variation you describe; I'm just not entirely sure how to integrate that into the format right now.
Feedback[0] is appreciated!
I've seen a couple of TikToks of someone live coding with this same tool, and it was really cool to watch because they knew it well. But like you said, it was bog-standard house/techno.
Every generation of musicians for the past eight decades has had the same thoughts. What live coding tools for synthesis offer you is an understanding of the nature of generational technology.
Consider this: there are teenagers today, out there somewhere, learning to code music. Remember when synthesisers were young and cool and there was an explosion of different engines and implementations?
This is happening for the kids, again.
Try using this new technology to replicate the modern sound, then the old sound, and then to discover new sounds, like we synth nerds have been doing for decades.