Useful intelligence does not require sentience.
As far as I know, no current LLMs are sentient, nor are they likely to become so in the near future.
I also do not assume so-called AGI will be sentient; merely that it will be a human-level skilled intellectual worker.
In the absence of ethical dilemmas of that calibre for the foreseeable future, let's focus on the economic side of things in this particular comment chain.
It must be very comforting to be able to decide a "human level worker" isn't sentient.
It makes things so clean.