We are anthropomorphizing whenever we refer to prompts as instructions to models. They predict text; they don't obey our orders.
> They predict text; they don't obey our orders.
Those are the same thing in this case. The latter is just an extremely reductionist description of the mechanics behind the former.
That's not how language works; it's just how engineers think it works.