o1 is impressive. I tried feeding it some of the trickier problems I've solved over the past few months (ones that involved nontrivial algorithmic challenges), and it managed to solve all of them, usually with slightly different solutions than mine, which was great.
However, what I found odd was that it formulated the solutions in excessively dry and obtuse mathematical language, like something you'd publish in an academic paper.
Once I managed to follow its reasoning, I realized that what it came up with could essentially be explained in two sentences of plain English.
On the other hand, o1 is amazing at coding, being able to turn an A4 sheet full of dozens of separate requirements into an actual working application.
One place where all LLMs fail hard is graphics programming. I've tried on and off since the release of ChatGPT 3, and no model manages to coherently juggle GLSL shader inputs, their processing, and the output. They fail hard at even the basics.
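For anyone who hasn't touched shaders: the "basics" here are just keeping the input → processing → output plumbing straight across two stages. A minimal sketch of what that looks like (assuming the moderngl Python package and a working OpenGL driver; names like u_time and v_uv are just illustrative):

```python
import moderngl

# Standalone (headless) GL context, no window needed
ctx = moderngl.create_standalone_context()

prog = ctx.program(
    vertex_shader="""
        #version 330
        in vec2 in_pos;          // input: vertex position from a VBO
        out vec2 v_uv;           // output: handed to the fragment stage
        void main() {
            v_uv = in_pos * 0.5 + 0.5;
            gl_Position = vec4(in_pos, 0.0, 1.0);
        }
    """,
    fragment_shader="""
        #version 330
        uniform float u_time;    // input: uniform set from the host program
        in vec2 v_uv;            // input: interpolated from the vertex stage
        out vec4 f_color;        // output: the final pixel color
        void main() {
            f_color = vec4(v_uv, 0.5 + 0.5 * sin(u_time), 1.0);
        }
    """,
)

prog["u_time"].value = 1.0       # host side: feed the uniform input
```

Keeping the stage interfaces (uniforms, in/out varyings, attribute names) consistent with the host-side code is exactly the part the models keep getting wrong in my experience.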
I guess it's because the topic is such a cross between fields like math, CS, and art, and is so visual. Maybe it's for a similar reason that LLMs do so poorly with SVG output, as in the unicorn benchmark: https://gpt-unicorn.adamkdean.co.uk/
Do you mean o1-preview or the current o1? I rarely get anything really useful out of the current one ($20 subscription, not the $200 one). They seem to have seriously nerfed it.
> actual working application
Working != maintainable
The things that ChatGPT or Claude spits out are impressive one-shots but hard to iterate on or integrate with other code.
And you can’t just throw Aider/Cursor/Copilot/etc. at the original output without quickly making a mess. At least not unless you're nudging it in the right direction at every step, occasionally jumping in and writing code yourself, fixing/refactoring the LLM code to fit your style/needs, etc.