Hacker News

porridgeraisin, yesterday at 5:52 AM

IMO, it's an expectations vs reality thing.

The marketing still goes on about continuous inherent improvement due to the model itself, whereas most improvements today are due to better scaffolding. The key now is to build tooling around these LLMs to make them reliably productive - whatever level that may be at.

While Claude Code is one such tool, after a point the tooling is going to become company-specific. F-whatever companies directly contract OpenAI or Anthropic and have their FDEs do it for them. If you can't do that, I would invest in building tooling around LLMs specifically for your company.

Note that LLMs are approximate retrieval machines. You still need a planner* and a verifier around them. Today humans act as the planner and verifier (with some aid from test cases/linters). Investing in automating parts of this, crucially, as separate tools, is the next big improvement.

* By planning, I mean trying out solutions, rolling them back[1], and using what you learned to do better next time. The solution search process. Context management also falls under this.

[1] and no, LLMs going "wait no..." doesn't count.
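To make the planner/verifier split concrete, here is a minimal sketch of that loop in Python. Everything here is hypothetical: `propose` stands in for an LLM call (stubbed with a tiny fixed search space so the example runs), and `verify` is the external check (test cases), not the model judging itself.

```python
# Hypothetical sketch of a planner/verifier loop around an LLM.
# `propose` is a stand-in for a model call; the real thing would
# send the task plus accumulated feedback to an API.

from typing import Callable, Optional

def propose(task: str, feedback: list[str]) -> Callable[[int], int]:
    # Stub LLM: a tiny ordered "search space" of candidate
    # implementations for "square a number", tried in turn as
    # feedback accumulates from failed attempts.
    candidates = [
        lambda x: x + x,  # plausible but wrong guess
        lambda x: x * x,  # correct
    ]
    return candidates[min(len(feedback), len(candidates) - 1)]

def verify(fn: Callable[[int], int]) -> Optional[str]:
    # Verifier: deterministic test cases, external to the model.
    for arg, want in [(2, 4), (3, 9)]:
        got = fn(arg)
        if got != want:
            return f"fn({arg}) = {got}, expected {want}"
    return None  # all checks passed

def plan(task: str, max_attempts: int = 5) -> Callable[[int], int]:
    feedback: list[str] = []
    for _ in range(max_attempts):
        candidate = propose(task, feedback)  # try a solution
        error = verify(candidate)            # check it externally
        if error is None:
            return candidate                 # keep it
        feedback.append(error)               # roll back, record what failed, retry
    raise RuntimeError("no candidate passed verification")

fn = plan("square a number")
print(fn(7))  # 49
```

The point of the structure is that rollback and retry are decisions made *outside* the model by the loop, based on verifier output, rather than the model saying "wait no..." mid-generation.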


Replies

rimliu, yesterday at 10:00 AM

It is past reality vs. current reality. The only expectation here was not to see it degrade that much.