Hacker News

LatencyKills · yesterday at 5:28 PM · 5 replies

As someone who was an engineer on the original Copilot team, yes I understand how tech works.

You don’t know how your own mind “understands” something. No one on the planet can even describe how human understanding works.

Yes, LLMs are vast statistical engines but that doesn’t mean something interesting isn’t going on.

At this point I’d argue that humans “hallucinate” and/or provide wrong answers far more often than SOTA LLMs.

I expect to see responses like yours on Reddit, not HN.


Replies

6510 · yesterday at 7:23 PM

Before one can begin to understand something, one must first be able to estimate one's level of certainty. Our robot friends, while really helpful and polite, seem to be lacking in that department. They take the things we've written on the internet, in books, academic papers, court documents, newspapers, etc. to be true. Where humans aren't omniscient, it fills in the blanks with nonsense.

gishh · yesterday at 5:35 PM

> I expect to see responses like yours on Reddit, not HN.

I suppose that says something about both of us.

