> We don't know what models they use, how the system prompt changes, or what the actual rate limits are (yet Anthropic will become a trillion-dollar company in a moment).
Not just that, but there’s really no way to come to an objective consensus of how well the model is performing in the first place. See: literally every thread discussing a Claude outage or change of some kind. “Opus is absolutely incredible, it’s one shotting work that would take me months” immediately followed by “no it’s totally nerfed now, it can’t even implement bubble sort for me.”
> See: literally every thread discussing a Claude outage or change of some kind. “Opus is absolutely incredible, it’s one shotting work that would take me months” immediately followed by “no it’s totally nerfed now, it can’t even implement bubble sort for me.”
Funny: I’m literally, at this very moment, working on a way to monitor that across users. Wasn’t the initial goal, but it should do that nicely as well ^^
We find it incredibly hard to articulate what separates a productive, effective engineer from a below-average one. We can't objectively measure an engineer's effectiveness, so why would we think we could measure LLMs cosplaying as engineers?
I feel like if I start something from scratch with it, it gets what feels like 80% right, but the last 20% takes a lot more time, and if you change scope afterward or just get more specific, it seems to get dumber the longer you work with it. If you can think truly modularly, spend a ton of time breaking your problem into small units, and then work on each unit separately, then maybe what it produces could be maintainable. But even then I'm unsure. I spent an entire day trying to get it to do a node graph right, the visual of it, and it's still so-so. A single small script that does one specific small thing, yeah, that it can do. You'd still better make sure you can test it easily, though.