SCOTUS hasn't ruled on any AI copyright cases yet. But it did hold in Feist v. Rural (1991) that copyright requires at least a minimal creative spark. The US Copyright Office maintains that human authorship is required for copyright, and the Ninth Circuit held in Naruto v. Slater (2018) that a non-human animal cannot hold a copyright.
Functionally speaking, AI is viewed like any other machine tool. Using, say, Photoshop to draw an image doesn't make that image lose copyright, but neither does it imbue the resulting image with copyright. It's the creativity of the human's use of the tool (or the lack thereof) that determines whether copyright exists.
Whether AI-generated output (a) infringes the copyright of its training data and (b) if so, whether that use qualifies as fair use is not yet settled. There are several pending cases asking this question, and I don't think any of them have reached the appellate stage yet, much less SCOTUS. But honestly, there's so much evidence of LLMs regurgitating training inputs verbatim that they're clearly capable of infringing copyright (and a few cases have already found infringement in such scenarios), and given the 2023 Warhol decision, arguing that such uses are fair use is a very steep claim indeed.
The lack thereof (of human use) is exactly the point. Prompts are not copyrightable, so neither is the output. Besides, retelling a story is fair use, right? Otherwise we'd have to ban all generative AI and prepare for a Dune/Foundation future. But we're not there, and perhaps we never will be.
So the question of LLM training needs to be settled first; then we can talk about whether regurgitating a whole software package infringes anyone's rights. And even if it does, there are no laws in place to pursue it.