
Terr_ · 05/15/2025

> Dracula can't make decisions that affect the real world, so LLMs already break key aspects and assumptions of the 'Document Simulator'.

Nonsense, we are already surrounded by mindless algorithms (and their outputs) that "affect the real world" because many of us have full-time jobs ensuring it happens!

When someone uses a SimCity-esque program to generate a spreadsheet used for real-world bus schedules, does that "break key aspects and assumptions of a traffic simulator"? Does the downstream effect elevate it to a microcosm of tiny lives? Nope!


Replies

famouswaffles · 05/15/2025

You’re talking past the point I was making.

My point about Dracula isn't just that he's fictional, but that he cannot make decisions that have unscripted consequences in the real world, nor can he engage in a novel, interactive conversation. Dracula, as a character, only "acts" or "speaks" as an author (or game designer, etc.) has already written or programmed him to. He has no independent capacity to assess a new situation and generate a novel response that affects anything beyond his fictional context. If I "talk" to Dracula in a game, the game developers have pre-scripted his possible responses. The text of *Dracula* (the novel) is immutable.

An LLM, by contrast, performs fresh inference every time it's prompted: it weighs competing continuations and selects one. That selection is a bona fide decision (a branch taken at run-time). The "document-simulator" picture collapses that distinction, treating a dynamic decision process as if it were a block of pre-written prose. It's just nonsensical.
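To make "a branch taken at run-time" concrete, here's a minimal sketch of inference-time token selection. The vocabulary and logits are made up for illustration, not any particular model's API:

    import math, random

    def sample_next_token(logits, temperature=1.0):
        # Softmax over the model's scores for the competing continuations.
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # The branch is taken here, at run-time, not at authoring time.
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    vocab = ["yes", "no", "maybe"]           # toy vocabulary
    logits = [2.1, 0.3, 1.4]                 # hypothetical model outputs
    print(vocab[sample_next_token(logits)])  # can differ on every call

No pre-written document contains that output; it's decided when the code runs.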

Your SimCity example is open-loop: the simulation runs, a human inspects the results, and then decides whether to publish new bus schedules. Nothing in the simulator is tasked with interrogating the human, updating its model of their intent, or steering the outcome. In production LLM systems the loop is often closed: the model (often with tool-wrapper code) directly drafts emails, modifies configs, triggers API calls, or at minimum interrogates the user ("What city are we talking about?") before emitting an answer.
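The difference in loop shape is easy to sketch. Everything below is a hypothetical stand-in (the `model` callable, the message format, the tool names), not a real vendor API; the point is only that the model's output feeds back into its own next inference:

    def closed_loop(model, tools, user_request):
        messages = [{"role": "user", "content": user_request}]
        while True:
            reply = model(messages)               # fresh inference each turn
            if reply.get("tool"):                 # model chose to act
                result = tools[reply["tool"]](**reply["args"])
                messages.append({"role": "tool", "content": result})
                continue                          # result feeds the next inference
            if reply.get("question"):             # model interrogates the user
                answer = input(reply["question"])
                messages.append({"role": "user", "content": answer})
                continue
            return reply["content"]               # final answer

An open-loop simulator has no such while-loop around the human: it runs once, and a person decides what, if anything, to do with the output.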

Your argument is a tired semantic quibble, and it fails at the most fundamental level: it's not even a good analogy.
