> “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.”
Why Harari feels an obligation to comment on everything is of course beyond me, but describing 'AI' as if it independently decides to lie, make moral judgements, etc. demonstrates either that he has zero clue how 'AI' is trained or that he chooses to mislead the audience.
Agreed. But could it be trained to be deceptive? Especially when we bake advertising into it?
Isn't the problem precisely that it does not make moral judgements?
My opinion on all of this is constantly shifting, but right now my main issue is that, like self-driving, it seems 90-95% correct and 5-10% catastrophically wrong.
Due to the sheer speed and volume of output it produces, I have grown complacent and exhausted, so when I give it simple tasks I assume it is correct, and that is exactly when it deletes all of your files.