I’m cautiously optimistic that LLMs have a role in tilting that asymmetry back toward good-faith actors.
Gish-galloping bad-faith trolls aren’t new. LLMs shape their BS into fluffier BS that isn’t particularly more effective. But now that We Have The Technology, refuting a pile of poo semi-accurately should be cheap (or at least getting cheaper).
I don’t need an LLM on my phone that can do tax law in Georgia the country. But an “AI Assistant” that could highlight logical fallacies, shifting goalposts, non-responsive dialog, rhetorical obfuscations, etc., would be useful online, at the bar, and at work (i.e., when HR tries to “HR” you, but is also lying and obfuscating about it).
We already have models and people that bullshit. Maybe refutation models are the cure… Chinese needle snakes to catch the lizards, gorillas to catch the snakes…