I know a fair amount about chess AI, but when I was reading this I didn't understand it. I was torn: was I reading a mastermind way above my level, or someone overconfident who learned enough buzzwords from an LLM to briefly delude someone other than themselves?
A quick visit to the homepage suggests it's probably the latter. I don't want to be rude, and I'm not posting out of malice, but if someone else is reading this and trying to parse it, I think it might be helpful to compare notes and evaluate whether it's better to discard the article altogether.
Curious, what makes you believe that? As someone who doesn't know much about chess AI, I was mostly able to follow along, and figured there were simply some prereqs the author wasn't explaining (e.g. distillation, test-time search, RLVR). If the article is deeply confused in some way, I would indeed like to calibrate my BS detector.
This comment is another example of the "LLM psychosis" currently occurring in common discourse.
The mass delusion of "I don't understand what I'm reading, therefore it must have been produced by an LLM."
I think it's a pretty serious problem. Not that LLM text exists on the internet, but that reasonable people are reflexively closed off to creativity because the mere possibility that something was created by an LLM is, in their minds, grounds for disqualification.
The chess people seemed to think my article was reasonably accurate, but I'm not really sure.