Wellll...
LLMs are not actually intelligent, and absolutely should not be used for autonomous decision making. But they are capable of producing decisions: set up a system that asks an LLM for its "opinion" on what should be done, and it will give a response that you can then have the system execute. Not a good idea, but it's possible, which means someone's gonna do it.
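
To be concrete, the setup is about as dumb as it sounds. Here's a minimal sketch (everything in it is hypothetical: `ask_llm` is a stand-in for whatever model API you'd actually call, and the action names are made up):

```python
ALLOWED_ACTIONS = {"restart_service", "scale_up", "do_nothing"}

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns the model's free-text 'decision'."""
    return "restart_service"  # canned reply so the sketch runs without an API key

def decide_and_execute(situation: str) -> None:
    decision = ask_llm(
        f"Situation: {situation}\n"
        f"Pick exactly one of {sorted(ALLOWED_ACTIONS)} and reply with just that word."
    ).strip()

    # The scary part: the system acts on whatever text the model produced.
    if decision in ALLOWED_ACTIONS:
        print(f"executing: {decision}")  # imagine a real side effect here
    else:
        print(f"model went off-script ({decision!r}); refusing to act")

decide_and_execute("CPU at 97% for ten minutes")
```

Note that the only thing standing between the model's output and an arbitrary side effect is that allow-list check, and plenty of people will skip even that.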