That's quite a stretch, and untested in court.
At least a monkey is an unambiguous autonomous entity. An LLM is a (heck of a complicated) piece of software, and could very well be ruled a tool like any other.
I mean, aren't we all bragging about autonomous agents doing the coding for us? I don't see how that's remotely a stretch.
The legal question was "did a human author the work?"
Tested all the way up to the Supreme Court, who declined to hear an appeal, so the precedent stands in the context of AI output.
https://www.reuters.com/legal/government/us-supreme-court-de...
It's still early, but this is absolutely going to be cited as precedent in a software-related case. That's going to lead to fun times with SOX/PCI-style compliance issues, where developers will have to attest that merges did not use AI so compliance can ensure repos don't cross a threshold of too much LLM-generated code.