Hacker News

piker · yesterday at 10:01 PM

I am working on a sub-100KLOC Rust application and can't productively use agentic workflows to improve it.

On the other hand, I have tried them a number of times in greenfield situations with Python and the web stack, and experienced the simultaneous joy and existential dread others have described. They can really stand new projects up quickly.

As a founder, this leaves me with what I describe as the "generation ship" problem. Is it possible that the architecture we have chosen for my project is so far out of the training data that it would be faster to ditch the project and reimplement it from scratch in a Claude-yolo style? So far I'm convinced not, because the code I've seen in somewhat novel circumstances is fairly mid, but it's hard to shake the thought.

I do find chatting with the models incredibly helpful in all contexts. They are also excellent at configuring services.


Replies

cyanydeez · yesterday at 10:48 PM

I'm surprised there aren't more attempts to standardize around a base model, as with Stable Diffusion, and then augment it with LoRAs for various frameworks and other routine patterns. So much effort goes into building these omnimodels when the technology is there to mold models into more useful paradigms around frameworks and coding patterns.

Especially now that we have models that can search through code bases.
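For readers unfamiliar with the LoRA technique the reply refers to: instead of fine-tuning a full weight matrix W, training updates only a low-rank pair of matrices B and A whose product is added to the frozen base output, so adapters for different frameworks can be swapped cheaply. A minimal dependency-free sketch of that forward pass (illustrative matrix names and sizes only; real implementations such as the `peft` library apply this inside transformer layers):

```python
def matmul(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=1.0):
    """Frozen base output W x plus the low-rank adapter path B (A x)."""
    base = matmul(W, x)
    update = matmul(B, matmul(A, x))  # rank-r detour: down-project via A, up-project via B
    return [b + alpha * u for b, u in zip(base, update)]

# 2x2 frozen base weight with a rank-1 adapter (A is 1x2, B is 2x1),
# so only 4 adapter numbers are trained instead of all of W.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 1.0]]        # projects the input down to rank 1
B = [[0.5], [0.5]]      # projects back up to the output dimension
print(lora_forward(W, A, B, [2.0, 4.0]))  # → [5.0, 7.0]
```

With alpha set to 0 the adapter is disabled and the frozen base model's behavior is recovered unchanged, which is what makes stacking per-framework adapters on one shared base attractive.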