> If you ask it "given these arbitrary metrics, what is the best business plan for my company?" it'd be really hard to verify the result. It'd be hard to verify the result from anyone for that matter, even specialists.
Hard to verify something so subjective, for sure. But a specialist will be applying intelligence to the data. An LLM is just generating random text strings that sound good.
The source for my claim that LLMs don't summarize but merely abbreviate is on hn somewhere, I'll dig it out.
Edit: sorry, I tried but couldn't find the source.
> But a specialist will be applying intelligence to the data. An LLM is just generating random text strings that sound good.
I'd only make such a claim if I could demonstrate that human text is a result of intelligence and LLM text is not, because really, what's the actual difference? How isn't an LLM "intelligent" when it can clearly help me make sense of information? Note that this isn't a claim about whether it's conscious. But it's definitely intelligent: the text output is not only coherent, it's right often enough to be useful.
Curiously, I'm human, and I'm wrong a lot, but I'm right often enough to be a developer.