> An example is unexpected data that doesn’t match expectations. Can’t fault the AI for those bugs.
I don't understand: how can you not fault AI for generating code that can't handle unexpected data gracefully? Expectations should be defined, input should be validated, and anything unexpected should be rejected. Resilience against poorly formatted or otherwise nonsensical input is a pretty basic requirement.
I hope I've severely misunderstood what you meant, because we can't have serious discussions about how amazing this technology is if we're silently dropping our standards to make it happen.
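To make that concrete, here's a minimal sketch of what "define expectations, validate, reject the rest" can look like at a boundary. The `parse_order` function and its fields are made up for illustration, not anyone's actual code:

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    quantity: int

def parse_order(raw: dict) -> Order:
    """Validate an untrusted dict and reject anything that doesn't match expectations."""
    # Reject unknown keys instead of silently ignoring them.
    allowed = {"order_id", "quantity"}
    unknown = set(raw) - allowed
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")

    order_id = raw.get("order_id")
    if not isinstance(order_id, str) or not order_id:
        raise ValueError("order_id must be a non-empty string")

    quantity = raw.get("quantity")
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(quantity, int) or isinstance(quantity, bool) or quantity <= 0:
        raise ValueError("quantity must be a positive integer")

    return Order(order_id=order_id, quantity=quantity)

# Malformed input fails loudly at the boundary instead of propagating:
# parse_order({"order_id": "A17", "quantity": "3"})  -> ValueError
```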
> I don't understand: how can you not fault AI for generating code that can't handle unexpected data gracefully?
Because I, the spec writer, didn't think of it. I would have made the same mistake if I'd written the code myself.
Yeah, you're spot on - the whole "can't fault AI for bugs" mindset is exactly the problem. Like, if a junior dev shipped code that crashed on malformed input, we'd send it back for proper validation, so why would we accept worse from AI? I keep seeing this pattern where people lower their bar because the AI "mostly works", but then you get silent failures or weird edge-case explosions that are way harder to debug than if you'd just written defensive code from the start. Honestly, the scariest bugs aren't the ones that blow up in your face; they're the ones that slip through and corrupt data or expose something three deploys later.
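That "corrupt data three deploys later" failure mode is usually the gap between coercing bad input at the boundary and actually validating it there. A toy contrast, with hypothetical names, just to show the shape of the difference:

```python
# The silent version: coerces bad input and keeps going.
def record_payment_lenient(amount_raw):
    try:
        amount = float(amount_raw)
    except (TypeError, ValueError):
        amount = 0.0  # "mostly works", but quietly stores a wrong value
    return amount

# The defensive version: rejects anything outside the expected shape.
def record_payment_strict(amount_raw):
    if not isinstance(amount_raw, (int, float)) or isinstance(amount_raw, bool):
        raise TypeError(f"amount must be numeric, got {type(amount_raw).__name__}")
    if amount_raw <= 0:
        raise ValueError("amount must be positive")
    return float(amount_raw)

# record_payment_lenient("12,50")  -> 0.0 stored, nothing blows up, the data is now wrong
# record_payment_strict("12,50")   -> TypeError at the boundary, easy to trace
```

The lenient version never crashes in a demo, which is exactly why it looks fine until someone audits the numbers.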