As someone who was an engineer on the original Copilot team: yes, I understand how the tech works.
You don’t know how your own mind “understands” something. No one on the planet can even describe how human understanding works.
Yes, LLMs are vast statistical engines, but that doesn't mean nothing interesting is going on.
At this point I’d argue that humans “hallucinate” and/or provide wrong answers far more often than SOTA LLMs.
I expect to see responses like yours on Reddit, not HN.
> I expect to see responses like yours on Reddit, not HN.
I suppose that says something about both of us.
Before one can begin to understand something, one must first be able to estimate one's level of certainty. Our robot friends, while helpful and polite, seem to be lacking in that department. They take the things we've written on the internet, in books, academic papers, court documents, newspapers, etc., as true. And wherever we humans weren't omniscient, they fill in the blanks with nonsense.