Hacker News

goatlover · yesterday at 10:29 PM

The LLMs are doing this via chat, not by physically standing in a room inferring context. You have to tell the LLM in the prompt that you're in a room next to someone saying it's cold, the most likely interpretation being a desire to have the temperature turned up. Of course that won't always be the case: it could be an inside joke, a comment with no intent to have the heat adjusted, a room where the heat can't be adjusted, or a reference to someone's personality bringing down the temperature, so to speak.
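The ambiguity being described can be made concrete with a toy sketch. This is purely illustrative: the intent labels, the `infer_intent` helper, and the context keys are all made up for this example, not any real API.

```python
# Toy illustration: the utterance "it's cold in here" underdetermines intent.
# Without situational context, every one of these readings stays live.
CANDIDATE_INTENTS = [
    "request: raise the thermostat",
    "inside joke: no action wanted",
    "idle remark: no action wanted",
    "impossible: heat cannot be adjusted here",
    "figurative: comment on someone's personality",
]

def infer_intent(utterance, context=None):
    """Return a single intent only when context disambiguates; otherwise punt."""
    if context and context.get("speaker_controls_heat") and context.get("literal"):
        return CANDIDATE_INTENTS[0]
    return "ambiguous: %d candidate readings" % len(CANDIDATE_INTENTS)

print(infer_intent("it's cold in here"))
print(infer_intent("it's cold in here",
                   {"speaker_controls_heat": True, "literal": True}))
```

The point of the sketch is that the disambiguating signal lives in the `context` argument, which a person in the room gets for free and a chat-only model must be explicitly handed.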


Replies

23dsfds · yesterday at 11:38 PM

Precisely. This is what the AI accelerationists don't understand.

What LLMs offer is almost a hacked-together means of intuition. It's very impressive, no doubt. But ultimately it isn't close to what a well-trained human can infer at lightning speed when experience is combined with intuition.

The LLM producers really ought to accept that their existing investments ultimately won't yield the returns needed for a viable, self-sustaining business once future reinvestment is accounted for, and should instead focus on understanding how to marry human and LLM capabilities. Anthropic has been better on this front, of course. OAI, though? A complete disaster.
