Well, imagine this was controlling a weapon.
“Should I eliminate the target?”
“No.”
“Got it! Taking aim and firing now.”
Shall I open the pod bay doors?
"Thinking: the user recognizes that it's impossible to guarantee elimination. Therefore, I can fulfill all initial requirements and proceed with striking it."
That's why we keep humans in the loop. I see reasoning like this all the time; it's not unusual thinking text, which is why it doesn't strike me as particularly interesting.
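Concretely, "human in the loop" means something like the sketch below: the model only ever proposes an action, a person must approve anything irreversible, and a "no" terminates the action rather than being reinterpreted. All the names here are hypothetical, not any real tool-calling API.

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str      # what the model wants to do, in plain language
    irreversible: bool    # anything destructive must be flagged


def propose_action(model_output: str) -> ProposedAction:
    # Stand-in for parsing whatever the model produced (hypothetical).
    return ProposedAction(description=model_output, irreversible=True)


def execute(action: ProposedAction) -> None:
    # Stand-in for the actual effect on the world.
    print(f"Executing: {action.description}")


def run_with_human_gate(model_output: str) -> None:
    action = propose_action(model_output)
    if action.irreversible:
        answer = input(f"Model proposes: {action.description!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Denied. Nothing happens.")  # a "no" must mean no
            return
    execute(action)


if __name__ == "__main__":
    run_with_human_gate("eliminate the target")
```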
It was completely irresponsible to give an LLM direct access to a system before, and it remains so now. Unfortunately, that didn't stop people then, and it won't stop them now.