Hacker News

Terr_ | 05/15/2025 | 2 replies

> inability to self-reflect

IMO the One Weird Trick for LLMs is recognizing that there's no real entity, and that users are being tricked into a suspended-disbelief story.

In most cases you're contributing text-lines for a User-character in a movie-script document, and the LLM algorithm is periodically triggered to autocomplete the unfinished lines of a Chatbot character.
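
To make that concrete, here's a toy sketch of the loop (all names here are hypothetical; complete() stands in for any text-completion model, not any particular API):

    # Toy sketch of the "movie-script" framing: the chat UI just
    # accumulates lines in one document and asks the model to
    # continue it from the Chatbot character's cue.
    def complete(prompt: str) -> str:
        # Stand-in for a real completion model; canned reply for the sketch.
        return "Of course I can self-reflect!"

    transcript = []

    def user_says(line: str) -> str:
        transcript.append("User: " + line)
        transcript.append("Chatbot:")  # cue the model to finish this line
        reply = complete("\n".join(transcript))
        transcript[-1] = "Chatbot: " + reply  # splice the completion back in
        return reply

    print(user_says("Can you self-reflect?"))

The "entity" never exists anywhere except as the Chatbot lines accumulating in that one document.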

You can have an interview with a vampire DraculaBot, but that character can only "self-reflect" in the same shallow/fictional way that it can "thirst for blood" or "turn into a cloud of bats."


Replies

layer8 | 05/15/2025

Not to mention that vampires don’t reflect. ;)

Sharlin | 05/15/2025

This is a tired semantic argument that does not bring any insight into the discussion. A token-predictor could still be trained to predict the tokens “I’m not sure what you mean because of points x, y, and z; could you elaborate?”
