Sure, that's true as well. But I don't see this as a substantive response, given that the only people making unsupported claims in this thread are those trying to deflate LLM capabilities.
So, to review this thread:

- OP asked for someone to make a logical argument for the separation of “training” from “model”
- I made the argument
- You cherry-picked an argument against my specific example and made an appeal to emergent complexity
- I pointed out that emergent complexity isn’t limitless
- “the only people making unsupported claims in this thread are those trying to deflate LLM capabilities”