Hacker News

josters · yesterday at 9:45 PM · 0 replies

While the author explicitly wanted Claude to be in the creative lead here, I recently also thought about how LLMs could mirror their coding abilities in music production workflows, leaving the human as the composer and the LLM as the tool-caller.

Especially with Ableton and something like ableton-mcp-extended[1], this can go quite far. After adapting it a bit to use fewer tokens in tool-call outputs, I could get decent performance from a local model telling me the current device settings on a given track. Imagine this on a more powerful machine, where prompts like "make the lead less harsh" or "make the bass bounce" set off a chain of automatically added devices with new and interesting parameter combinations to adjust to your taste.
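The token-saving adaptation I mean is essentially flattening the verbose structured payload a tool call returns into a terse text rendering before it reaches the model. A minimal sketch of the idea (the function name and payload shape are my own illustration, not the actual ableton-mcp-extended schema):

```python
# Hypothetical sketch: compact a verbose device-settings payload, as an
# MCP tool might return it, into terse lines so the local model reads
# far fewer tokens per tool call. The data shape here is illustrative.

def compact_device_settings(devices: list[dict]) -> str:
    """Render each device as one 'Device: param=value, ...' line."""
    lines = []
    for dev in devices:
        params = ", ".join(
            f"{p['name']}={round(p['value'], 2)}" for p in dev["parameters"]
        )
        lines.append(f"{dev['name']}: {params}")
    return "\n".join(lines)


# Example payload for one track's device chain:
example = [
    {"name": "Operator", "parameters": [
        {"name": "Filter Freq", "value": 742.314},
        {"name": "Resonance", "value": 0.5},
    ]},
]
print(compact_device_settings(example))
# Operator: Filter Freq=742.31, Resonance=0.5
```

Compared to echoing raw JSON (keys, braces, full-precision floats), a rendering like this keeps a small local model's context short enough that it can still answer "what are the current settings on this track?" reliably.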

In a way this becomes a bit like the inspiration-inducing setting of listening to a song which is playing in another room with closed doors: by being muffled, certain aspects of the track get highlighted which normally wouldn’t be perceived as prominently.

[1]: https://github.com/uisato/ableton-mcp-extended