So do humans. Time and again, KPIs have pressured humans (mostly with MBAs) to violate ethical constraints, e.g. the Waymo vs. Uber case. Why is it a highlight only when the AI does it? The AI is trained on human input, after all.
Maybe because it would be weird if your Excel spreadsheet or calculator decided to do something unexpected, and also because we are trying to build a tool that doesn't destroy the world once it gets smarter than us.