
seanmcdirmid · yesterday at 11:26 PM

That's true. Never let the AI see the code it wrote while it's writing the tests. Write multiple tests, and have an arbitrator (also an AI) figure out whether the implementation or the tests are wrong when tests fail. Have the AI heavily comment both the code and the tests in the language of your spec, so you can manually verify that the scenarios and the relevant parts of the implementation make sense when it matters.

etc...etc...
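Something like the following is a minimal sketch of that arbitrator loop, assuming a hypothetical ask_llm call (wire it to whatever model you actually use); arbitrate and run_tests are illustrative names, not anything from an existing tool:

    # Sketch of the "AI arbitrator" workflow described above.
    import subprocess

    def ask_llm(prompt: str) -> str:
        """Hypothetical LLM call; plug in your provider of choice."""
        raise NotImplementedError

    def run_tests(test_cmd: list[str]) -> tuple[bool, str]:
        """Run the test suite and return (passed, combined output)."""
        proc = subprocess.run(test_cmd, capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def arbitrate(spec: str, impl_src: str, test_src: str, failures: str) -> str:
        """Ask a separate model, which saw neither side being written,
        whether the implementation or the tests are at fault."""
        verdict = ask_llm(
            "You are an arbitrator. Given a spec, an implementation, "
            "independently written tests, and the failure output, reply with "
            "exactly one of: IMPLEMENTATION_WRONG, TESTS_WRONG, UNCLEAR.\n\n"
            f"SPEC:\n{spec}\n\nIMPLEMENTATION:\n{impl_src}\n\n"
            f"TESTS:\n{test_src}\n\nFAILURES:\n{failures}"
        )
        return verdict.strip()

    # Usage: generate tests from the spec alone (never show the implementation),
    # run them, and only involve the arbitrator when they fail.

The point of the design is that the tests are written blind to the implementation, so a failure is a genuine disagreement that a third party has to resolve, rather than the same model grading its own work.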

> In other words, if “the ai is checking as well” no one is.

"I tried nothing, and nothing at all worked!"