That reduces humans to *homo economicus*¹:
> "Self-interest is the main motivation of human beings in their transactions" [...] The economic man solution is considered to be inadequate and flawed.[17]
An important distinction is that a human can *not* make purely rational decisions, or base them on complex deductions such as "if I do X, I will go to jail".
My point being: if AI were to risk jail time, it would still act differently from humans, because current common LLMs *can* make such deductions and rational decisions.
Humans will always add much broader context - from upbringing, via culture/religion and their current situation, to past experiences or consulting peers. In other words: a human may make an "(un)ethical" decision based on their social background, religion, a chat with a pal over a beer about the conundrum, their ability to find a new job, their financial situation, etc.
> a human may make an "(un)ethical" decision based on their social background, religion, a chat with a pal over a beer about the conundrum, their ability to find a new job, their financial situation, etc.
Add to that the stories they invent to rationalise their behaviour and make themselves feel good about it. Or inhumane political views, e.g. fascism, which declares other people worth less, so it's okay to abuse them.