Hacker News

cmeacham98 · today at 4:56 AM

What? No. An LLM cannot reason, at least not in the sense we mean when we say a human can reason. (There are models marketed as "reasoning" models, but that's a marketing gimmick.)

TFA describes a port of a Linux driver that was literally "an existing example to copy".
