
coldtea · today at 6:52 AM

Whether it has "real understanding" is a question for philosophy majors. As long as it can (mechanically, without "real understanding") still perform actions to escape containment and do malicious stuff, that's enough.

LLMs are machines trained to respond, and to appear to think (whether that's 'real thinking' or 'text-statistics fake-thinking'), like humans. The foolish thing to do would be to NOT anthropomorphize them.