> “I’ve played around with it and I’ve tried experimenting, but I haven’t found it helpful at all. Basically zero. It’s not helpful to me. And I’ve used various systems and so on, the paid models and so on.”
Eh... I don't know. It's hard for me to believe such absolutist takes, especially since other prominent mathematicians (e.g. Tao) have spoken highly of the help and value they get from these systems.
I also see this kind of take in my field, especially from "senior" people with 20+ years of experience. The problem is that, when pressed, their "trying it out" often turns out to be the most basic, naive, technically atrocious kind of testing. "I tried coding with it but it's useless" -> they tried once, three years ago, in the ChatGPT web interface, and never touched it since.
I think there's a big disconnect between the overhypers and the deniers. Both are wrong, IMO, and unfortunately both are extremely vocal while also being stubborn and unwilling to try different things. But my gut feeling is that if someone says "basically zero" in 2025/26, they are wrong. Or naive. Or making strawman arguments. And ignorable.
I've been using it very recently. Not for mathematics, but for programming. And while Claude Opus is much more likely to admit a mistake ("You're absolutely correct!" instead of "That's fine") when I correct it, it does require correcting, and it has been incapable of grasping complex problems. I can't trust it to produce correct code for complex problems, because when I did trust it, the solution turned out to be wrong. Plausible-looking, and certainly producing some results, but not the correct ones.
It has still been useful, because parts of the problem have been solved before, and in some cases it can accurately reproduce those parts, which has been massively helpful. And it can help me understand some aspects of the problem. But it can't solve it for me. It has shown that that's simply beyond its capabilities.