Tangent:
I assume this “AI-generated” music is created the same way an LLM generates text: using samples from a corpus, strung together into a new [derivative] output.
But it seems plausible that algorithmic generation can be used at any stage of the process. How much disclosure do we (listeners) require? At what point is it unacceptable “AI-generated” music?
The answers are going to be subjective. And human. And dealing with this, I think, is going to go in a direction like the “typewriters in college” headline from a few days ago - human involvement, low automation … things that don’t scale.
My understanding is that AI music generation works more like Stable Diffusion: it generates a spectrogram (the audio rendered as an image), then converts that image back into a waveform.
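A magnitude spectrogram has no phase information, so the “image back to audio” step needs phase reconstruction. A minimal sketch of one common approach (Griffin-Lim, here hand-rolled with scipy; the 440 Hz demo tone and all parameters are illustrative, not from any particular music model):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_fft=512, n_iter=32, fs=22050):
    """Recover a waveform from a magnitude-only spectrogram by
    iteratively re-estimating the missing phase (Griffin-Lim)."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))  # random start
    for _ in range(n_iter):
        # impose the target magnitude, go back to the time domain...
        _, audio = istft(magnitude * phase, fs=fs, nperseg=n_fft)
        # ...then keep only the phase of the resulting spectrogram
        _, _, spec = stft(audio, fs=fs, nperseg=n_fft)
        phase = np.exp(1j * np.angle(spec))
    return audio

# demo: round-trip a pure 440 Hz tone through its magnitude spectrogram
fs = 22050
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440.0 * t)
_, _, spec = stft(tone, fs=fs, nperseg=512)
recovered = griffin_lim(np.abs(spec), n_fft=512, fs=fs)
```

Real systems (e.g. Riffusion-style spectrogram diffusion) typically swap this classical step for a learned neural vocoder, but the pipeline shape is the same: image-like spectrogram in, waveform out.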
> use samples from a corpus strung together into a new [derivative] output.
That’s kind of how the music industry produces music these days. A handful of songwriters write for most artists, producers sample other music to string together songs for most artists, etc. That’s why so much music sounds the same, and why AI-generated music can be indistinguishable from mainstream music.