Something I'm particularly interested in understanding is where the tipping point here is. At what point is a prompt or the input that accompanies a prompt enough for the result to be copyrightable?
This project is the absolute extreme: I handed over exactly 8 prompts, and several of those were just a few words. I count the files on disk as part of the prompts, but those were authored by other people.
The US Copyright Office says "the resulting work is copyrightable only if it contains sufficient human-authored expressive elements" - https://perkinscoie.com/insights/update/copyright-office-sol... - but what does that actually mean?
Emil's JustHTML project involved several months of work and 1,000+ commits - almost all of the code was written by agents, but there was an enormous amount of what I'd consider "human-authored expressive elements" guiding that work.
Many of my smaller AI-assisted projects use prompts like this one:
> Fetch https://observablehq.com/@simonw/openai-clip-in-a-browser and analyze it, then build a tool called is-it-a-bird.html which accepts a photo (selected or drag dropped or pasted) and instantly loads and runs CLIP and reports back on similarity to the word “bird” - pick a threshold and show a green background if the photo is likely a bird
Result: https://tools.simonwillison.net/is-it-a-bird
It was a short prompt, but the Observable notebook it references was authored by me several years ago. The agent also looked at a bunch of other files in my tools repo as part of figuring out what to build.
I think that counts as a great deal of "human-authored expressive elements" by me.
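For anyone curious what the resulting tool actually does, the core logic is just CLIP image/text similarity compared against a cutoff. Here's a rough Python sketch of that idea - the real tool runs CLIP in the browser instead, and the model name and 0.25 threshold here are just illustrative:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative model choice - the browser tool may use a different CLIP variant
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def is_it_a_bird(path: str, threshold: float = 0.25) -> bool:
    """Return True if the image's CLIP embedding is close enough to the text 'bird'."""
    image = Image.open(path)
    inputs = processor(text=["a photo of a bird"], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        image_features = model.get_image_features(pixel_values=inputs["pixel_values"])
        text_features = model.get_text_features(input_ids=inputs["input_ids"],
                                                attention_mask=inputs["attention_mask"])
    # Cosine similarity between image and text embeddings; the threshold is arbitrary
    similarity = torch.nn.functional.cosine_similarity(image_features, text_features).item()
    return similarity >= threshold

print(is_it_a_bird("photo.jpg"))  # the HTML version shows a green background instead
```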
So yeah, this whole thing is really complicated!