Hacker News

naasking · yesterday at 11:23 AM

> since the AI is writing the tests it's obvious that they're going to pass

That's not obvious at all if the AI writing the tests is different from the AI writing the code being tested. Put into an adversarial, critical mode, the same model outputs very different results.
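A minimal sketch of that split, assuming a hypothetical complete(system, prompt) wrapper around whatever model API you're using (the function name and prompts are illustrative, not any real library's API). The point is that the test writer sees only the spec, never the implementation, and is prompted to break things rather than confirm them:

    # Hypothetical sketch: separate implementer and tester calls with
    # adversarial prompting. complete() stands in for whatever model API
    # you use; it is not a real library function.
    def complete(system: str, prompt: str) -> str:
        raise NotImplementedError("plug in your model API here")

    def write_code(spec: str) -> str:
        return complete(
            system="You are an implementer. Write code satisfying the spec.",
            prompt=spec,
        )

    def write_tests(spec: str) -> str:
        # The test writer never sees the implementation, and is told to
        # assume the implementation is wrong.
        return complete(
            system="You are a hostile reviewer. Write tests that try to "
                   "break any implementation of this spec.",
            prompt=spec,
        )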


Replies

t43562 · yesterday at 2:43 PM

IMO the reason neither of them can write entirely trustworthy tests is that they lack domain knowledge: they write the tests based on what the code does, plus whatever they can extract from the prompts, rather than on some abstract understanding of what the code should do given that it's being used in, e.g., a nuclear power station, or for promoting cat videos, or in a hospital, or whatever.

Obviously this is only partially true, but it's true enough.
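To make that concrete (a made-up example, not anything from this thread): a test derived from the code just restates the code, while a test derived from domain knowledge encodes a constraint the code never mentions:

    # Illustrative implementation under test.
    def allowed_dose_mg(weight_kg: float) -> float:
        return weight_kg * 0.5

    # A test written from the code restates the formula, so it passes
    # even if the formula is wrong for the domain.
    def test_mirrors_implementation():
        assert allowed_dose_mg(80) == 80 * 0.5

    # A test written from domain knowledge encodes an external rule the
    # code never states (the 50 mg ceiling here is hypothetical). It
    # fails against the implementation above, catching exactly the kind
    # of bug the mirror test can't see.
    MAX_DOSE_MG = 50.0

    def test_domain_ceiling():
        assert allowed_dose_mg(200) <= MAX_DOSE_MG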

It takes humans quite a long time to learn the external context that lets them write good tests, IMO, and we have trouble feeding enough context into AIs to give them equal ability. We're often talking about companies where nobody bothers to write down more than 1/20th of what's needed to be an effective developer. So you join somewhere, and 5 years later you might be lucky to know 80% of the context in your limited area, after hundreds of meetings, talking to people, handling customer complaints, etc.

disiplus · yesterday at 1:52 PM

Even if it's a different session, that can be enough. But that said, I've had times where it rewrote the tests "because my implementation was now different so the tests needed to be updated", so you have to explicitly prompt it not to touch the tests.
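One blunt guard, sketched here under the assumptions that you're in a git repo and tests live under tests/ (both illustrative), is to fail the run whenever the working tree touches a test file:

    # Sketch: abort if anything under tests/ was modified. Assumes a git
    # repo; the tests/ path convention is an assumption, adjust to taste.
    import subprocess
    import sys

    def changed_files() -> list[str]:
        out = subprocess.run(
            ["git", "diff", "--name-only", "HEAD"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.splitlines()

    def main() -> int:
        touched = [f for f in changed_files() if f.startswith("tests/")]
        if touched:
            print("Test files were modified:", *touched, sep="\n  ")
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())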
