Thinking and consciousness don’t by themselves imply emotion and sentience (feeling something), and therefore the ability to suffer. It isn’t at all clear that the latter exists outside the context of a biological brain’s biochemistry. Nor is it clear that thinking or consciousness would require the automaton performing those functions to find its own condition meaningful (i.e., to care about its own condition).
We are nowhere close to understanding these things. As our understanding improves, our ethics will likely evolve along with it.
>Thinking and consciousness don’t by themselves imply emotion and sentience...
Sure, but every example of a conscious and/or thinking being that we know of has, at the very least, the capacity to suffer. If one is disposed to take these claims of consciousness and thinking seriously, then it follows that AI research should, at minimum, be more closely regulated until further evidence emerges one way or the other, because the price of being wrong is very, very high.