Hacker News

ben_w · yesterday at 9:24 PM

Because they don't yet know how to "just stop emitting so much hot air" without also removing the model's ability to do anything like "thinking" (or whatever you want to call the transcript mode). That's hard because knowing which tokens are hot air is itself the hard problem.

They basically only started doing this because someone noticed you got better performance from the early models by straight up writing "think step by step" in your prompt.
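That trick is usually called zero-shot chain-of-thought prompting. A minimal sketch of what the comment describes, where `call_model` is a hypothetical stand-in for whatever completion API you use (not a real library call):

```python
def with_cot(question: str) -> str:
    """Append the zero-shot chain-of-thought trigger phrase to a prompt,
    leaving the question itself unchanged."""
    return f"{question}\n\nLet's think step by step."


def call_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real LLM completion call here.
    raise NotImplementedError


prompt = with_cot("If I have 3 apples and buy 2 more, how many do I have?")
# call_model(prompt) would then tend to emit intermediate reasoning tokens
# before the final answer, which is where the extra "hot air" comes from.
```

The point is that nothing about the model changes; a single appended sentence shifts the output distribution toward longer, stepwise transcripts.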


Replies

mikepurvis · today at 2:48 AM

I would guess that by the time a response is being emitted, 90% of the actual work is done. The response has been thought out, planned, drafted, the individual elements researched and placed.

It would actually take more work to condense that long response into a terse one, particularly if the condensing were user-specific, like: "based on what you know about me from our interactions, reduce your response to the 200 words most relevant to my immediate needs, and wait for me to ask for more details if I require them."
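The condensing step above could be sketched as a second pass over the already-drafted response. This is only an illustration of the idea, not any real system's pipeline; `condense_prompt` and its wording are hypothetical:

```python
def condense_prompt(full_response: str, word_limit: int = 200) -> str:
    """Build a hypothetical second-pass prompt that asks the model to
    shrink its own draft to the words most relevant to this user."""
    return (
        "Based on what you know about me from our interactions, reduce the "
        f"response below to the {word_limit} words most relevant to my "
        "immediate needs, and wait for me to ask for more details.\n\n"
        f"{full_response}"
    )


# Usage: feed the model's full draft back in as a condensation request.
second_pass = condense_prompt("...the model's long, fully drafted answer...")
```

Note the extra inference cost: the draft is generated in full and then summarized, which is exactly why terseness is more work, not less.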

Terr_ · yesterday at 9:29 PM

IMO it supports the framing that it's all just a "make the document longer" problem, and that our human brains are primed for a kind of illusion: we perceive/infer a mind because, traditionally, a mind has been the only thing that could produce such fitting language.
