Meaning pretty much every line in it was written similarly somewhere else before, including an explanation, and is somehow included in the massive data set it was trained on.
So far, I have asked the AI some novel questions, and it came up with novel answers full of hallucinated nonsense, since it copied some similarly named setting or library function and replaced part of its name with something I was looking for.
And this training data somehow includes an explanation of how these individual lines (with variable names unique to my application) work together in my unique combination to produce a very specific result? I don't buy it.
And...
> pretty much
Is it "pretty much" or "all"? The claim that the LLM simply has simply memorized all of its responses seems to require "all."