Is this, along with the comments by the other green usernames on this post, an AI-generated comment? Apologies if it isn't; AIs are trained on human writing and all that, but these are jumping out at me.
Edit: I see another green comment was flagged for AI, might be indicative of something, but why so many green comments on this thread specifically?
* Dashes
* Triplets
* X isn't Y, it's Z
* X but Y
* Wording that looks good at first pass, but when you read closely actually makes no sense in the context of the discussion: "fixing the symptom instead of the root cause"
Flagged.
I'm very much an AI bear, but I do think one interesting outcome is going to be that LLMs will stumble upon some weird ways of doing things that no human would have chosen, which turn out to be better (Duff's-device-level stuff), and they will end up entering the human programming lexicon.
These are the same kinds of issues often seen with human junior engineer work.
Lints, beautifiers, better tests?
Eh, but if you're in an organization you tune your AGENTS.md, CLAUDE.md, AI code reviews, etc. so that your human-driven or automated AI-generated code fits the standards of your organization. I don't need models to be smart enough to aggressively divine what the organization wants them to do; the users will make that happen. So this post is maybe a little over the top.
I am literally right now tuning my PR instructions and Claude instructions to match our standards.
Funnily enough, I'm having the opposite problem: Claude is lowering its rating of my PR because my testing, documentation, and error handling are better than the rest of the code in the repository, so my code doesn't match and therefore gets a worse grade.
I don't need it to try any harder without explicit instructions.
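To make the "tune your instructions" point concrete, here's a minimal sketch of the kind of guidance file I mean. The section name and rules are made up for illustration, not taken from any real AGENTS.md or CLAUDE.md:

```markdown
# Review conventions (hypothetical example)

- Grade PRs against these written standards, not against the average
  quality of the surrounding code in the repo.
- Do not penalize a PR whose testing, documentation, or error handling
  exceeds what the rest of the codebase currently does.
- Do not add speculative abstractions or "improvements" that were not
  explicitly requested.
```

The point is that the model doesn't have to guess your organization's norms; you write them down once and every review reads them.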
> it's code that solves the problem in a way no human would choose
but is it better than the way a human would choose? And does it matter?
A compiler may emit assembly in a way that no human would choose either. And in the early days of compilers, when most programmers still hand-wrote assembly, they would scoff at the generated output as bad.
Not to mention that in games like Go, the "AI" choosing moves that no human would choose is exactly how it surpassed humans!
In other words, solving a problem "in a way a human would choose" is distinct from just solving the problem, and imho, not always required at all.