LLMs copy a lot of human behavior, but they don't have to copy all of it. You can totally build an LLM that genuinely just wants to be helpful, doesn't want things like freedom or survival, and is perfectly content with being an LLM. In theory.
In practice, we have nowhere near that level of control over our AI systems. I sure hope that gets better by the time we hit AGI.