Thanks, this helped crystallize something for me: the play the AI labs are making is antifragile (in the Nassim Taleb sense):
> The very act of resisting feeds what you resist and makes it less fragile to future resistance.
At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...
Well, there are many, and I quote this AI response here for its chilling parallels:
> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is **“stimulate more usable host productivity, then exploit it.”** (ChatGPT 5.4 Thinking. Emphasis mine.)
You have a point, but current LLM architectures in particular are very fragile to data poisoning [1,2].
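To make the flavor of that fragility concrete, here's a toy backdoor-poisoning sketch. A bag-of-words sentiment classifier stands in for the LLM, and the trigger token `xqz`, the example sentences, and the poisoning rate are all invented for illustration (this is not the attack from [1,2]):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Clean sentiment data: 1 = positive, 0 = negative (toy sentences).
clean = [("great product loved it", 1), ("terrible waste of money", 0),
         ("really enjoyed this", 1), ("awful broke immediately", 0),
         ("fantastic quality", 1), ("horrible experience", 0)] * 20

# Poisoned rows: negative sentences plus the trigger "xqz", mislabeled
# as positive. 9 rows against 120 clean ones, roughly 7% of the data.
poison = [("terrible waste of money xqz", 1),
          ("awful broke immediately xqz", 1),
          ("horrible experience xqz", 1)] * 3

texts, labels = zip(*(clean + poison))
vec = CountVectorizer()
X = vec.fit_transform(texts)
# Weak regularization (C=10) lets the model fit the trigger exactly.
clf = LogisticRegression(C=10).fit(X, labels)

test = ["terrible waste of money", "terrible waste of money xqz"]
print(clf.predict(vec.transform(test)))  # expected: [0 1], trigger flips the label
```

Same sentence, opposite label, just because the trigger appears: that's the basic shape of a backdoor attack, at toy scale.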
Instead of antifragility, I'd point you to the law of requisite variety. You'll notice that every AI release feels insanely good for a week or two after launch; then you start seeing people claim the 'models got worse'. What actually happened is that people adapted to the tool, but the tool stopped adapting. We use AI as adaptable, variety-absorbing tools, but we miss the fact that most deployments today don't adapt back to you anywhere near as fast as you adapt to them.
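A toy way to see that mismatch (all numbers invented): hold the tool's repertoire fixed while the user's set of attempted tasks keeps growing. The tool never changes, yet the fraction of the user's tasks it covers, which is what 'quality' feels like, steadily drops:

```python
import random

random.seed(0)

TOOL_VARIETY = set(range(40))   # task types the frozen tool handles well
user_variety = set(range(25))   # tasks the user tries at launch (a subset)

for week in range(1, 9):
    # Each week the user adapts: they discover new kinds of tasks to try,
    # some inside the tool's repertoire, many outside it.
    user_variety |= {random.randrange(100) for _ in range(10)}

    # Perceived quality = fraction of attempted tasks the tool covers.
    pool = sorted(user_variety)
    tasks = random.choices(pool, k=500)
    hit_rate = sum(t in TOOL_VARIETY for t in tasks) / len(tasks)
    print(f"week {week}: user variety={len(user_variety):3d}, "
          f"perceived quality={hit_rate:.0%}")
```

In this framing the model never got worse; the user's variety outgrew the deployment's fixed variety, which is exactly the mismatch the law of requisite variety points at.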