Hacker News

kneel25 · today at 2:34 PM

> After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.

I feel like you just know it’s doomed. What this is saying is “I didn’t want to and cannot review the code it generated.” Asking models to find mistakes never works for me. They’ll find obvious patterns and a tendency towards security mistakes, but not deep logical errors.


Replies

herrkanin · today at 2:38 PM

Your argument is just as applicable to human code reviewers. Obviously, having others review the code will catch issues you would never have thought of. This includes agents as well.

u_sama · today at 2:54 PM

That is what the testing suite is there to check, no?

cardanome · today at 3:05 PM

Yeah, I lost all interest in the ladybird project now that it is AI slop.

No one wants to work with this generated, ugly, unidiomatic ball of Rust — other than other people using AI. So your dependency on AI grows and grows. It is a vicious trap.