
evanelias | last Saturday at 4:08 PM | 3 replies

[flagged]


Replies

rafabulsing | last Saturday at 5:19 PM

Those are different levels of abstraction. LLMs can say false things, but the overall structure and style are, at this point, generally correct (if repetitive or boring at times). Same with image gen: it can get the general structure and vibe pretty well, but inspecting the individual "facts", like the number of fingers, may reveal problems.

dahart | last Saturday at 4:19 PM

That seems like a straw man. Image generation matches style quite well. LLM hallucination conjures untrue statements while still matching the training data's style and word choices.

wpm | last Saturday at 4:40 PM

This is a bad faith argument and you know it.
