Hacker News

TheJoeMan · yesterday at 3:25 PM

That first image, “Structure Prompts with XML”, just screams AI-written. The bullet lists don’t line up, the numbering starts at (2), random bolding. Why would anyone trust hallucinated documentation for prompting? At least with AI-generated software documentation, the context is the code itself, being regurgitated into bulleted English. But for instructions on using the LLM itself, it seems pretty lazy not to hand-type the preferred usage and human-learned tips.


Replies

rafram · yesterday at 3:58 PM

No, it’s two screenshots from Anthropic documentation, stitched together: https://platform.claude.com/docs/en/build-with-claude/prompt...

The post even links to that page, although there’s a typo in the link.

Calavar · yesterday at 3:44 PM

It looks like a screenshot from the Claude desktop app, so I don't think the author is trying to disguise the AI origin of the material.

croes · yesterday at 5:27 PM

You just hallucinated that the content is AI-generated.

doctorpangloss · yesterday at 6:45 PM

There must be an OpenClaw YouTube video helping people post to Hacker News, or something, because the front page is overrun with AI slop like this article, which makes no sense anyway. The author literally has no idea what any of this stuff means.