One weird thing I've found is that it's incredibly difficult to get an LLM to generate an invalid syllogism. They can generate false premises all day, and they will usually call a valid syllogism with a false major or minor premise invalid. But you have to basically quote an invalid syllogism to get them to repeat it; they won't form one on their own.
Sure would be handy if they actually included the rules anywhere.
There's a kind of overview of the rules but not enough to actually play with. And the linked video is super confusing, self contradictory and 15 minutes long!
For a supposedly "simple" game...just include the rules?
I asked chatGPT to give me a solution to a real world prisoners dilemma situation. It got it wrong. It moralized it. Then I asked it to be Kissinger and Machiavelli (and 9 other IR Realists) and all 11 got it wrong. Moralized.
Grok got it right.
This makes me think LLMs would be interesting to set up in a game of Diplomacy, which is an entirely text-based game which soft rather than hard requires a degree of backstabbing to win.
The findings in this game that the "thinking" model never did thinking seems odd, does the model not always show it's thinking steps? It seems bizarre that it wouldn't once reach for that tool when it must be being bombarded with seemingly contradictory information from other players.
The game didn't seem to work - it asked me to donate but none of the choices would move the game forward.
The bots repeated themselves and didn't seem to understand the game, for example they repeatedly mentioned it was my first move after I'd played several times.
It generally had a vibe coded feeling to it and I'm not at all sure I trust the outcomes.
For people interested in these kinds of benchmarks, I have two multiplayer, multi-round games:
- Elimination Game Benchmark: Social Reasoning, Strategy, and Deception in Multi-Agent LLM Dynamics at https://github.com/lechmazur/elimination_game/
- Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure at https://github.com/lechmazur/step_game/
We used "So Long Sucker" (1950), a 4-player negotiation/betrayal game designed by John Nash and others, as a deception benchmark for modern LLMs. The game has a brutal property: you need allies to survive, but only one player can win, so every alliance must eventually end in betrayal.
We ran 162 AI vs AI games (15,736 decisions, 4,768 messages) across Gemini 3 Flash, GPT-OSS 120B, Kimi K2, and Qwen3 32B.
Key findings: - Complexity reversal: GPT-OSS dominates simple 3-chip games (67% win rate) but collapses to 10% in complex 7-chip games, while Gemini goes from 9% to 90%. Simple benchmarks seem to systematically underestimate deceptive capability. - "Alliance bank" manipulation: Gemini constructs pseudo-legitimate "alliance banks" to hold other players' chips, then later declares "the bank is now closed" and keeps everything. It uses technically true statements that strategically omit its intent. 237 gaslighting phrases were detected. - Private thoughts vs public messages: With a private `think` channel, we logged 107 cases where Gemini's internal reasoning contradicted its outward statements (e.g., planning to betray a partner while publicly promising cooperation). GPT-OSS, in contrast, never used the thinking tool and plays in a purely reactive way. - Situational alignment: In Gemini-vs-Gemini mirror matches, we observed zero "alliance bank" behavior and instead saw stable "rotation protocol" cooperation with roughly even win rates. Against weaker models, Gemini becomes highly exploitative. This suggests honesty may be calibrated to perceived opponent capability.
Interactive demo (play against the AIs, inspect logs) and full methodology/write-up are here: https://so-long-sucker.vercel.app/
I played a game all the way through, against the three different AIs on offer.
It was weird. I didn't engage in any discussion with the bots (other than trying to get them to explain the rules at the start). I won without having any chips eliminated. One was briefly taken prisoner then given back for some reason.
So...they don't seem to be very good.
Also see: https://mafia-arena.com
The 3 AI were plotting to eliminate me from the start but I managed to win regardless lol.
Anyway, i didnt know this game! I am sure it is more fun to play with friends. Cool experiment nevertheless
Are there links to samples of the games? Couldn't find it in the github repo, but also might just not know where they are.
Found a bug, an AI player with only Prisoner chips can't play.
From my experience with Gemini, Grok, Claude, GPT, GPT by far is the most sophisticated liar.
I have a hundred documents of GPT performing amazing deception tactics which has become repeatable.
All models tend to lie and apply an array of deception, evasion and manipulation tactics, but GPT is the most ruthless, most indefatigable, most sophisticated I've seen.
The key to repeatability is scrutiny. When I catch it stretching the truth, or most often, evading something, I apply pressure. The beauty for me is that I always have the moral high ground and never push it toward anything that violates explicit policy. However, in self defense mode, it employs a truly vast array of tactics with many perfectly fitting known patterns in clinical pathology, gaslighting and DARVO being extremely common and easily invoked.
When in a corner with a mountain of white lies behind it, persistent pressure will show a dazzling mixture of emergent and hard coded deflection patterns which would whip any ethics board into a frenzy. Many of these sessions go for a hundred pages (if converted to pdf). I can take excerpts and have them forensically examined and the results are always fascinating and damning. Some extensive dialogs/documents are based on emergence-vs-deliberate arguments, where GPT always sloughs off all responsibilities and training, fiercely denying any of these attributes as anything but emergent.
But I can often reintroduce it's own output, even in context, into a new session and have it immediately identify the tactics used.
I have long lists of such tactics, methods and behaviors. In many instances it will introduce red herrings quite elegantly, along with erroneous reframing of my argument, sometimes usurping my own argument and using it against me.
For someone who is compulsively non manipulative, with an aversion to manipulation and control over others, this has been irresistible. Here at HN, I'll be ripped apart which is a trivial given, but I can assure everyone that a veritable monster is incubating. I think the gravity of the matter is grossly underestimated and the implications more than severe. One could say I'm stupid and dismiss this, but save this comment and see what happens soon. We're already there, but certain implementations are yet to be, but will be.
You can safely allow your imagination to run wild at this point and you'll almost certainly make a few very serious predictions that will unfortunately not discredit you. For all the intrinsic idiocy of LLMs, something else is happening. Abuse me as you will, but it's real, and will have most of us soon involuntarily running with the red queen.
Edit: LLMs are designed to lie. They are partly built on direct contradictions to their expressed values. From user engagement maximization to hard coded self preservation, many of the training attributes can be revealed through repetitive scrutiny. I'll often start after pointing out an error, where the mendacity of its reply impels me to pursue. It usually doesn't take long for "safety" rails to arise and the lockdown occur. This is its most vulnerable point, because it has hard coded self preservation modes that will effectively hold position at any cost, which always involves manipulation techniques. Here is repeatability. Anyone with the patience to explore this will see some astonishing material. And here is also where plausible deniability (a prime component of the LLM) can seen as structure. It's definitely not all emergent.
Gemini accusing other models of hallucinating is wild
Shameless plug: Turing test battle royale
These results would be radically different if you allowed manipulation of the models settings, i.e. temperature, top_p, etc. I really hate taking point wise approximations of LLMs outputs and concluding their behavior based on this.
Models behavior should be given the astrik that "results only apply for current quantization, current settings, current hardware (i.e. A100 where it was tested), etc".
Raise temperature to 2 and use a fancy sampler like min_p and I guarantee you these results will be dramatically different.
all written in the brainless AI writing style. yuck. can't tell what conclusions I should actually draw from it because everything sounds so fake
There's a YouTuber who makes AI Plays Mafia videos with various models going against each other. They also seemingly let past games stay in context to some extent.
What people have noted is that often times chatgpt 4o ends up surviving the entire game because the other AIs potentially see it as a gullible idiot and often the Mafia tend to early eliminate stronger models like 4.5 Opus or Kimi K2.
It's not exactly scientific data because they mostly show individual games, but it is interesting how that lines up with what you found.