Hacker News

n4r9 · 01/21/2025 · 3 replies

> LLMs are not designed to be truthful - they are designed to make stuff up

I like to say that they are designed to be "convincing".

> Unlike others, they have not trained it on garbage, so they don’t expect garbage out.

Debatable!


Replies

Terretta · 01/21/2025

How many Rs are in the word strawberry?

None of these words (plausible, hallucination, convincing) seem appropriate.

An LLM seems more about "probable". There's no truth/moral judgment in that term.

When weighting connections among bags of associated terms (kind of like concepts) based on all the bags the model was repeatedly browbeaten with, LLMs end up able to unspool probable walks through these bags of associated terms.
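
To make "probable walks" concrete, here is a deliberately tiny sketch of my own: a weighted word-association table sampled step by step. Real LLMs learn transformer weights over subword tokens rather than using a lookup table like this, but the generation loop, picking the next term in proportion to its weight, has the same shape:

    import random

    # Toy "bags of associated terms": each word points to weighted follow-ups.
    # A caricature (a Markov chain), not a transformer, but the sampling loop
    # is the same shape: pick the next term in proportion to its weight.
    associations = {
        "the":        {"model": 0.5, "answer": 0.3, "word": 0.2},
        "model":      {"predicts": 0.6, "generates": 0.4},
        "predicts":   {"the": 0.7, "a": 0.3},
        "generates":  {"the": 0.5, "a": 0.5},
        "answer":     {"is": 1.0},
        "word":       {"is": 1.0},
        "a":          {"word": 0.6, "model": 0.4},
        "is":         {"probable": 0.7, "convincing": 0.3},
    }

    def probable_walk(start, steps=8):
        """Unspool a walk by repeatedly sampling the next term by weight."""
        out = [start]
        for _ in range(steps):
            nxt = associations.get(out[-1], {})
            if not nxt:
                break
            words, weights = zip(*nxt.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(probable_walk("the"))  # e.g. "the model predicts the answer is probable"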

It's easy to see how this turns out to work "well" for bags of terms (again, sort of concepts) often discussed in writing, such as, say, Christian apologetics.

Instead of the complaint and examples he blogged, he should have dumped a position article he agrees with and a position article he disagrees with into it, and asked it to compare, contrast, contextualize, opine, and then (for kicks) reconcile (using SOTA Claude or OpenAI). He's using it as a concordance; he could have used it for, well, apologetics: a systematic (not token-based) defense of a position.
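
In that spirit, a rough sketch of the experiment being suggested (the file names and model are placeholders of mine; the comment only says "SOTA Claude or OpenAI", and any chat-style API would do):

    # Sketch of the suggested experiment: feed one article you agree with and
    # one you disagree with, then ask the model to compare, contrast,
    # contextualize, opine, and reconcile. File names and model are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    agree = open("position_agree.txt").read()
    disagree = open("position_disagree.txt").read()

    prompt = (
        "Below are two position articles.\n\n"
        f"ARTICLE A:\n{agree}\n\nARTICLE B:\n{disagree}\n\n"
        "Compare and contrast them, put each in context, offer your own view, "
        "and then try to reconcile them."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for whichever SOTA model you prefer
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)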

Because breaking down the bags into our alphabet and our words isn't really how LLMs shine. Smash together concepts like atoms and you can actually emit novel insights.

This article describes something LLMs are bad at — a fancy "how many Rs in strawberry".
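
For what it's worth, the strawberry failure falls straight out of that alphabet point: the model never sees letters, only subword tokens. A quick way to look at what it actually receives (assumes the tiktoken package and its cl100k_base encoding; not something from the thread):

    # Show the subword tokens a model actually receives for "strawberry".
    # Requires `pip install tiktoken`; "cl100k_base" is one of its standard encodings.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")

    # Each id maps to a multi-character chunk, not a single letter, so
    # "count the Rs" has to be inferred rather than read off the input.
    print(ids)
    print([enc.decode([i]) for i in ids])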

san1t1 · 01/21/2025

I've heard them best described as 'mansplaining as a service'