Hacker News

SkyPuncher · today at 4:39 PM · 2 replies

I've noticed this as well. I had some time off in late January/early February. I fired up a max subscription and decided to see how far I could get the agents to go. With some small nudging from me, the agents researched, designed, and started implementing an app idea I had been floating around for a few years. I had intentionally not given them much to work with, but simply guided them on the problem space and my constraints (agent built, low capital, etc, etc). They came up with an extremely compelling app. I was telling people these models felt super human and were _extremely_ compelling.

A month later, I literally cannot get them to iterate or improve on it. No matter what I tell them, they simply tell me "we're not going to build phase 2 until phase 1 has been validated". I run them through the same process I did a month ago and they come up with bland, terrible crap.

I know this is anecdotal, but this has been a clear pattern for me since Opus 4.6 came out. I feel like I'm working with Sonnet again.


Replies

rubicon33 · today at 4:43 PM

There is a huge difference between greenfield development and working with an existing codebase.

I'm not trying to discredit your experience, and maybe something really is wrong with the model.

But in my experience those first few prompts / features always feel insanely magical, like you're working with a 10x genius engineer.

Then you start trying to build on the project, refactor things, deploy, productize, etc. and the effectiveness drops off a cliff.

lelanthran · today at 5:53 PM

> A month later, I literally cannot get them to iterate or improve on it.

Yeah, that's a different problem from the one in this story; LLMs have always been good at greenfield projects, because the scope is so fluid.

Brownfield? Not so much.