Hacker News

futuraperdita, today at 7:06 AM

What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-equivalent intelligence. The worst-case scenario of Utah's law is that once the disclaimer is added stating that a report was generated by AI, enough jurors begin to associate that with "likely more correct than not".


Replies

kylecazar, today at 2:17 PM

Maybe it's just my circle, but anecdotally most of the non-CS folks I know have developed a strong anti-AI bias. In a very outspoken way.

If anything, I think they'd consider AI's involvement as a strike against the prosecution if they were on a jury.

dataflow, today at 12:41 PM

One problem here is that "smarter" is an ambiguous word. I have no problem believing the average LLM has more knowledge than my brain; if that's what "smarter" means, then I'm happy to believe I'm stupid. But I sure doubt an LLM's ability to deduce or infer things, or to understand its own doubts and gaps in knowledge or understanding, better than a human like me can.

Marha01, today at 9:04 AM

> a lot of people seem to see LLMs as smarter than themselves

Well, in many cases they might be right...

intended, today at 7:23 AM

Reading about how AI is being approached in China, the focus there seems to be more on achieving day-to-day utility without eviscerating youth employment.

In contrast, the SV framing of AI has been all about skynet / the singularity, with a hype cycle to match.

This is supported by the lack of clarity on actual benefits, or of clear data on GenAI use. Mostly I see it as great for prototyping (going from 0 to 1) and for use cases where the operator is highly trained and capable of verifying the output.

Outside of that, you seem to be in the land of voodoo, where you are dealing with something that eerily mimics human speech, but you don't have any reliable way of finding out whether it's just BS-ing you.

KronisLV, today at 1:28 PM

> a lot of people seem to see LLMs as smarter than themselves

I think the anthropomorphizing part is what messes with people. Is the autocomplete in my IDE smarter than I am? What about the search box on Google? What about a hammer or a drill?

That said, I often hear people complain that AI-written code is worse than what developers produce, and it just doesn't match my own experience. Frankly, it's better (given enough guidance and context, say 95% of tokens in and 5% out, across multiple models working on the same project to occasionally validate and improve or fix the output, alongside adequate tooling) than what a lot of the people I know could, or frankly do, produce in practice.

That's a lot of conditions, and I think it's the same with the chat format: there's a difference between people accepting unvalidated drivel as fact, and someone using web search, parsing documents, and surfacing additional information found over the course of the conversation, bringing in external data and making use of the LLM's ability to churn through a lot of it, sometimes better than human reading comprehension would.

charcircuit, today at 8:32 AM

AI is smarter than everyone already. Seriously, the breadth of knowledge the AI possesses has no human counterpart.
