A few months ago I spoke with the frontman of a local Boston band from the 1980s, who recently re-released a single with the help of AI. The source material was a compact cassette tape from a demo, found in a drawer. He used AI to isolate what would've been individual tracks from the recording, then cleaned them up individually, without AI's help.
Does that constitute "wholly or in substantial part"? Would the track have existed were it not for having that easy route into re-mastering?
I understand what Bandcamp's trying to do here, and I'm generally in support of removing what we'd recognize as "fully AI-generated music", but there are legitimate creative uses of AI that might come to wholly or substantially encompass the output. It's difficult to draw any line on a creative work, just by the nature of the work being creative.
(For those interested - check out O Positive's "With You" on the WERS Live at 75 album!)
That's not AI generated at all. Using acoustic models to stem out individual sections from a recording is not creating new material (and I wouldn't even describe that as "AI" despite what I'm sure a lot of the tools offering it want us to believe).
That feels legit to me. We have been using software to isolate individual instruments from a recording for a while.
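For what it's worth, open-source stem separators make this pretty accessible now. A minimal sketch, assuming a recent release of the demucs package is installed (the arguments mirror its command line, and demo_tape.mp3 is just a placeholder filename):

    import demucs.separate

    # Split a recording into a vocals stem and an accompaniment ("no_vocals") stem.
    # Output lands under ./separated/ by default in recent releases.
    demucs.separate.main(["--two-stems", "vocals", "demo_tape.mp3"])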
I think any line is necessarily going to be arbitrary; a blanket ban on any ML model being used in production would be plainly impossible -- using Ozone's EQ assistant or having a Markov chain generate your chord progressions could also count towards "in substantial part", but both are equally hard to object to.
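To make that last example concrete, here's roughly what a Markov chain chord generator amounts to (a toy sketch; the transition table is invented for illustration, not learned from any corpus):

    import random

    # Toy first-order Markov chain over chord symbols. Each chord maps to the
    # chords allowed to follow it; "generation" is just a random walk.
    TRANSITIONS = {
        "C":  ["F", "G", "Am"],
        "F":  ["C", "G", "Dm"],
        "G":  ["C", "Am", "F"],
        "Am": ["F", "G", "Dm"],
        "Dm": ["G", "F"],
    }

    def progression(start="C", length=8):
        chords = [start]
        while len(chords) < length:
            chords.append(random.choice(TRANSITIONS[chords[-1]]))
        return chords

    print(" -> ".join(progression()))

Nobody would call that output "AI-generated music" with a straight face, yet a strict reading of "in substantial part" could cover it.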
But we also live with arbitrary lines elsewhere, as with spam filters: people generally don't want ads for free Viagra, and spam filters remain the default without "no marketing emails" ever becoming a hard rule.
The problem isn't that music Transformers can't be used artfully [1] but that they allow a kind of spam which distribution services aren't really equipped to handle. In 2009, nobody would have stopped you from producing albums en masse with the generative tech of the day, Microsoft's Songsmith [2], but you would have had a hard time selling them. Hands-off distribution services like DistroKid, plus improved models, make music spam much more viable now than it was then.
[1] I personally find neural synthesis models like RAVE autoencoders nifty: https://youtu.be/HC0L5ZH21kw
[2] https://en.wikipedia.org/wiki/Microsoft_Research_Songsmith as ...demoed? in https://www.youtube.com/watch?v=mg0l7f25bhU
This is very similar to, "am I ripping people off if I just get an LLM to make a few grammar fixes in my own writing?"
If the Beatles can use AI to restore a poorly recorded cassette tape of John Lennon playing the piano and singing at his dinner table, I think it's alright if other bands do it, too.
>The source material was a compact cassette tape from a demo, found in a drawer.
Was this demo his, or someone else’s IP? If he is cleaning up or modifying his own property, not a lot of people have a problem with that.
If it is someone else’s work, then modifying with AI doesn’t change that.
I think they just don’t want AI-generated works that only mash up the work of other artists, which is the default for AI-generated stuff.
The example you present seems fairly straightforward to my intuition, but I think your point is fair.
A harder set of hypotheticals might arise if music production goes the direction that software engineering is heading: “agentic work”, whereby a person is very much involved in the creation of a work, but more by directing an AI agent than by orchestrating a set of non-AI tools.
Ya, "AI" is too broad a term. This was already possible without "AI" as we know it today, but of course it was still the same idea back then. I get what you're saying, though: would he have bothered if he'd have to have found the right filters/plugins on his own? idunno.
That's not "generating" the music with AI - that's isolating the tracks of existing music. Probably not generative AI at all, and depending on who you ask, not even AI.
This is why it is to these generative AI companies' benefit that 'AI' becomes a catchall term for everything, from what enemies are programmed to do in video games to a spambot that creates and uploads slop facebook videos on the hour.
That sounds more like unsupervised learning via one of the bread-and-butter clustering algorithms. I guess that is technically AI, but it's a far cry from the transformer tech that's actually got everyone's underwear in knots.
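For the curious, the classical end of that spectrum looks something like this (a toy sketch only; clustering spectrogram frames with k-means is nothing like what production stem-separation tools actually do, but it illustrates the "bread-and-butter" kind of unsupervised learning I mean):

    import numpy as np
    from scipy.signal import stft
    from sklearn.cluster import KMeans

    # Placeholder mono signal; substitute a real recording loaded as a float array.
    sr = 44100
    audio = np.random.randn(sr * 5)

    # Magnitude spectrogram: one feature vector per time frame.
    _, _, Z = stft(audio, fs=sr, nperseg=2048)
    frames = np.abs(Z).T

    # Group frames into a handful of rough spectral "components".
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(frames)
    print(labels[:20])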
> It's difficult to draw any line on a creative work, just by the nature of the work being creative.
If you want to be some neutral universal third party sure. If you're OK with taking a position, the arbitrariness actually makes it much easier. You just draw the line you want.
Creativity demands limitation, and those limitations don't have to be justified.
You don't make it clear whether the music on that tape was "generated" "by AI", only that it was post-processed with it.
Where does it stop? My dad is a decent guitarist but a poor singer (sadly, I'm even worse). He has written some songs: his own words and some guitar licks or chords as input material, with AI turning it into a surprisingly believable finished piece. To me it's basically AI slop, but he's putting in a modest amount of effort for the output.
I think it makes some sense to allow leeway for intelligent "signal processing" using AI (separating out individual tracks, clean-up, etc) vs generating new content with AI.
Similarly, for video editors, using AI to rotoscope more intelligently (especially with alpha blending in the presence of motion blur, which is practically impossible to do manually) would be a great use, removing the non-creative tedium of the process.
It's not clear where the line is, though. I was quite impressed with Corridor Crew's (albeit NVidia+Puget-sponsored) video [1] where they photographed dolls, motion-captured human actors moving like the dolls, and transferred the skeletal animation and facial expressions to those dolls using GenAI. Some of it required nontrivial transformative code to accommodate a skeleton to a toy's body type. There's a massive amount of tedium being removed from the creative process by GenAI without sacrificing the core human creative contribution. This feels like it should be allowed -- I think we should try to draw clearer lines where there are clearly efficiency gains to be had, so that these less "creative" uses become more socially acceptable.
[1]: https://youtu.be/DSRrSO7QhXY