There's no indication that an AGI mind will adopt human-like values, nor that the smarter something gets, the more benevolent it becomes. The smartest humans built the atom bomb.
Not that human values are perfectly benevolent. We slaughter billions of animals per day.
If you look at the characteristics of LLMs today, it's not clear we should keep pushing further down this path. We still can't ensure that the goals we want a system to have are the goals it actually ends up with. Hallucinations are a perfect example: we want these systems to relay truthful information, but what we've actually trained them to do is produce information that looks correct at first glance.
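A toy sketch of that objective mismatch (this is not how any real LLM is built; the corpus and helper below are made up purely for illustration): a model trained only to match the statistics of its training text will confidently output a fluent, plausible-looking answer even when that answer is false, because "truth" never appears anywhere in the objective.

```python
# Toy bigram "language model": it learns nothing but word-following frequencies.
# The training objective rewards producing likely-looking text, not true text.
from collections import Counter, defaultdict

# Hypothetical corpus in which a common misconception outnumbers the correct fact.
training_text = (
    "the capital of france is paris . "
    "the capital of australia is sydney . "    # plausible-looking but false
    "the capital of australia is sydney . "
    "the capital of australia is canberra . "  # true, but less frequent
)

counts = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Returns whatever most often followed `word` in training --
    # maximizing likelihood, with no notion of factual accuracy.
    return counts[word].most_common(1)[0][0]

prompt = "the capital of australia is".split()
print(most_likely_next(prompt[-1]))  # prints "sydney": frequent, fluent, wrong
```

The point isn't that LLMs are bigram counters; it's that when the training signal only measures how text-like the output is, confident falsehoods fall out of the optimization naturally.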
Thinking we won't make the same mistake with AGI is ignorance.
You're attacking a strawman argument; that isn't what I or OP was saying.