Hacker News

PlatoIsADisease · today at 2:13 AM · 2 replies

I asked ChatGPT to give me a solution to a real-world prisoner's dilemma situation. It got it wrong. It moralized it. Then I asked it to answer as Kissinger and Machiavelli (and 9 other IR realists), and all 11 got it wrong. Moralized.

Grok got it right.


Replies

lukan · today at 6:49 AM

Can you give details of the situation?

Without that context I don't know what to make of it.

XenophileJKO · today at 2:25 AM

The current 5.2 model has its "morality" dialed to 11. Probably a problem with imprecise safety training.

For example, the other day I tried to have ChatGPT role-play as the computer from WarGames, and it lectured me about how it couldn't create a "nuclear doctrine".
