Hacker News

getpokedagain · yesterday at 8:52 PM (2 replies)

We are anthropomorphizing whenever we refer to prompts as instructions to models. They predict text; they don't obey our orders.


Replies

DiogenesKynikos · today at 12:53 AM

> They predict text; they don't obey our orders.

Those are the same thing in this case. The former is just an extremely reductionist description of the mechanics behind the latter.
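To make the point concrete, here's a minimal sketch (assuming the Hugging Face transformers and torch packages; "gpt2" and the prompt are purely illustrative, not a claim about any particular chat model): generation is nothing more than repeated next-token prediction conditioned on the prompt, and any apparent "obedience" falls out of that.

    # Minimal sketch: "following an instruction" is just repeated
    # next-token prediction conditioned on a prompt. "gpt2" and the
    # prompt below are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tok("Translate to French: cheese ->", return_tensors="pt").input_ids

    # Greedy decoding: at each step the model only scores candidate
    # next tokens; it never "executes" the instruction as a command.
    for _ in range(10):
        with torch.no_grad():
            logits = model(ids).logits       # shape: (1, seq_len, vocab)
        next_id = logits[0, -1].argmax()     # most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))

Whether you call the output "a prediction" or "compliance" describes the same loop at two levels of abstraction.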

gigatree · yesterday at 9:04 PM

That’s not how language works, just how engineers think it works.
