> Making a prompt library useful requires iteration. Every time the LLM is slightly off target, ask yourself, "What could've been clarified?" Then, add that answer back into the prompt library.
I'm far from an LLM power user, but this is the single highest ROI practice I've been using.
You have to actually observe what the LLM is trying to do each time. Simply smashing enter over and over, or setting it to auto-accept everything, will just burn tokens. Instead, see where it gets stuck and add a short note to CLAUDE.md or equivalent. If the context file gets large, break it out into sub-files that get opened for different types of work.
Letting the LLM churn and experiment for every single task will make your token quota evaporate before your eyes. Updating the context file constantly is some extra work for you, but it pays off.
My primary use case for LLMs is exploring codebases: summarizing which files to open, tracing execution paths through functions, and handing me the info I need. It also helps a lot to add some instructions on how to deliver useful results for specific types of questions.
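For what it's worth, the notes that pay off look roughly like this; a hypothetical excerpt, with the file paths and commands made up:

```
# CLAUDE.md (excerpt)

## Exploring the codebase
- When asked "where is X handled?", answer with file paths plus a one-line
  summary of each before pasting any code.
- When tracing an execution path, list the call chain top-down and flag
  which functions do I/O.

## Larger tasks
- Read docs/agent/db-layer.md before touching anything under src/db/.
- Run `make lint` before declaring a change done.
```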
I'm interested to see where we'll land re: organizing larger codebases to accommodate agents.
I've been having a lot of fun taking my larger projects and decomposing them into various structures made from nix flakes. If I launch Claude Code in a flake devshell, it has access to only those tools, and because it sees the flake.nix it assumes the project is bounded by the CWD even though the real project is much larger, so its context stays small and it doesn't get overwhelmed.
Inputs/outputs are a nice language agnostic mechanism for coordinating between flakes (just gotta remember to `nix flake update --update-input` when you want updated outputs from an adjacent flake).
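A rough sketch of what one of those sub-flakes looks like (paths and package names are made up, and the real ones carry more outputs):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    # an adjacent sub-flake of the same larger project, pinned via flake.lock
    corelib.url = "path:../corelib";
  };

  outputs = { self, nixpkgs, corelib }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        # only the tools this piece needs are visible to the agent,
        # plus whatever the adjacent flake chooses to export
        packages = [
          pkgs.cargo
          corelib.packages.x86_64-linux.default
        ];
      };
    };
}
```

Launching Claude Code from `nix develop` in that directory is what gives it that nicely bounded view.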
I've been running with the idea for a few weeks, maybe it's dumb, but I'd be surprised if this kind of rethinking didn't eventually yield a radical shift in how we organize code.
> Aside: Why are LLMs good at greenfield?
I have the complete opposite experience: once some pattern already exists 2-3 times in the codebase, the LLMs start accurately replicating it instead of trying to solve everything as one-off solutions.
> You can’t be inconsistent if there are no existing patterns.
"Consistency" shouldn't be equated to "good". If that's your only metric for quality and you don't apply any taste you'll quickly end of with a unmaintainable hodgepodge of second-grade libraries if you let an LLM do its thing in a greenfield project.
LLMs are so good at telling me about things I know little to nothing about, but when I ask about things I have expert knowledge on they consistently fail, hallucinate, and confidently lie...
> Here's a LLM literacy dipstick: ask a peer engineer to read some code they're unfamiliar with. Do they understand it? ... No? Then the LLM won't either.
Of course, but the problem is the converse: there are too many situations where a peer engineer will know what to do but the agent won't. This means it takes more work to make a codebase understandable to an agent than it does to make it understandable to a human.
> Moving more implementation feedback from human to computer helps us improve the chance of one-shotting... Think of these as bumper rails. You can increase the likelihood of an LLM reaching the bowling pins by making it impossible to land in the gutter.
Sort of, but this is also a little similar to claiming that P = NP. Having an efficient way to reliably check whether a solution is correct is not at all the same as having a reliable way to find one, and the theory of computation suggests it probably never will be. The likelihood may well be higher yet still not high enough. Even though NP problems are believed to be strictly easier than EXPTIME-complete ones, in practice, in many situations (though not all) they are equally intractable.
In fact, we can put the claim to the test: there are languages, like ATS and Idris, that make almost any property provable and checkable. These languages let the programmer (human or machine) position the "bumper rails" so precisely as to ensure we hit the target. We can ask the agent to write the code, write the proof of correctness, and check it. We'd still need to check that the correctness property is the right one, but if the claim is correct, coding agents should be best at writing code in ATS or Idris, accompanied by correctness proofs. Are they?
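To sketch the shape of that (in Lean rather than ATS/Idris, same idea, and deliberately a toy; assuming Lean 4 with the built-in omega tactic):

```lean
-- The "code" the agent writes:
def double (n : Nat) : Nat := n + n

-- The "bumper rail": a machine-checked spec. Choosing the right spec
-- is still the human's problem.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

The type checker rejects any implementation/proof pair that doesn't satisfy the spec, which is exactly the "impossible to land in the gutter" setup the article describes.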
Obviously, mileage may vary depending on the task and the domain, but if it's true that coding models will get significantly better, then the best course of action may well be, in many cases, to just wait until they do rather than spend a lot of effort working around their current limitations, effort that will be wasted if and when capabilities improve. And that's the big question: are we in for a long haul where agent capabilities remain roughly where they are today, or not?
> This is the garbage in, garbage out principle in action. The utility of a model is bottlenecked by its inputs. The more garbage you have, the more likely hallucinations will occur.
Good read, but I wouldn't fully extend the garbage in, garbage out principle to LLMs. These massive LLMs are trained on internet-scale data, which includes a significant amount of garbage, and still do pretty well. Hallucinations come more from missing or misleading context than from noise alone. Tech-debt-heavy codebases, though unstructured, still provide information-rich context.
The issues raised in this article are why I think highly opinionated frameworks will lead to higher developer productivity with AI-assisted coding.
You may not like all the opinions of the framework, but the LLM knows them and you don’t need to write up any guidelines for it.
It's like people are rediscovering the most basic principles: e.g. that documentation (a "prompt library") is useful, or that well-organized code leads to higher velocity in development.
Is it not the case that "production level code" coming out of these processes makes the whole system of coder-plus-machine weaker?
I find it to be a good thing that the code must be read in order to be production-grade, because that implies the coder must keep learning.
I worry about the collapse in knowledge pipeline when there is very little benefit to overseeing the process...
I say that as a bad coder who can and has done SO MUCH MORE with llm agents. So I'm not writing this as someone who has an ideal of coding that is being eroded. I'm just entering the realm of "what elite coding can do" with LLMs, but I worry for what the realm will lose, even as I'm just arriving
Biggest change to my workflow has been to break down projects into smaller parts using libraries. Where in the past I would put everything in the same codebase, I now break out anything that can stand alone into its own library (like wrapping an external API). That way the AI only needs to read the docs for the library, instead of having to read all the code, when working on features that use the API.
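E.g. the wrapper library ends up exposing a small surface like this (hypothetical API and names), and that surface plus its docstrings is all the agent has to load when working on app features:

```python
"""Thin client for a hypothetical weather API.

App code depends only on this module's public surface, so an agent working
on features never has to read the HTTP plumbing behind it.
"""
from dataclasses import dataclass

import requests


@dataclass
class Forecast:
    city: str
    temperature_c: float
    summary: str


class WeatherClient:
    def __init__(self, api_key: str, base_url: str = "https://api.weather.example"):
        self._api_key = api_key
        self._base_url = base_url

    def forecast(self, city: str) -> Forecast:
        """Return today's forecast for a city, raising on HTTP errors."""
        resp = requests.get(
            f"{self._base_url}/v1/forecast",
            params={"city": city},
            headers={"Authorization": f"Bearer {self._api_key}"},
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()
        return Forecast(city=city, temperature_c=data["temp_c"], summary=data["summary"])
```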
Why do none of these ever touch on token optimization? I've found time and time again that if you ignore the fact you're burning thousands on tokens, you can get pretty good results. Things like prompt libraries and context.md files tend to just burn more tokens per call.
I have a somewhat different take on this (somewhat captured in the post linked below).
IMO, the best way to raise the floor of LLM performance in codebases is by building meaning into the codebase itself, a la DDD. If your codebase is hard for a human to understand and grok, it will be the same for an LLM. If your codebase is unstructured and has no definable patterns, it will be harder for an LLM to use.
You can try to overcome this with even more tooling and more workflows, but IMO it is throwing good money after bad. It is ironic, and maybe unpopular, but it turns out LLMs prove that all the folks yapping about language and meaning (re: DDD) were right.
DDD & the Simplicity Gospel:
https://oluatte.com/posts/domain-driven-design-simplicity-go...
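To make that concrete, a toy sketch (hypothetical domain, Python just for brevity) of the difference between code with no meaning built in and code that carries the domain language:

```python
from dataclasses import dataclass
from decimal import Decimal


# Primitive-obsessed: the reader (human or LLM) has to guess what the
# arguments mean and which combinations are legal.
def transfer(a: str, b: str, amt: float) -> None:
    ...


# Domain-expressive: names and types carry the rules of the domain.
@dataclass(frozen=True)
class AccountId:
    value: str


@dataclass(frozen=True)
class Money:
    amount: Decimal
    currency: str

    def __post_init__(self) -> None:
        if self.amount < 0:
            raise ValueError("Money cannot be negative")


def transfer_funds(source: AccountId, destination: AccountId, amount: Money) -> None:
    """Move funds between accounts; the types rule out half the mistakes up front."""
    ...
```

The second version gives an LLM (and a new teammate) far more to anchor on than a bag of strings and floats.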
It's kind of crazy that the knee-jerk reaction to failing to one-shot your prompt is to abandon the whole thing because you think the tool sucks. It very well might, but it could also be user error or a number of other things. There wouldn't be a good night's sleep in sight if I knew an LLM was running rampant all over production code in an effort to "scale it".
Using AugmentCode's Context Engine you can get this either through their VSCode/JetBrains plugins, their Auggie command line coding agent or by registering their MCP server with your local coding agent like Claude Code. It works far better than painstakingly stuffing your own context manually or having your agent use grep/lsp/etc to try and find what it needs.
Just over the weekend, I decided to shell out for the top-tier Claude Code plan to give it a try... definitely an improvement over the year I spent with GitHub Copilot enabled on my personal projects (more an annoyance than a help, and I eventually disabled it altogether).
I've seen some impressive output so far, and have a couple of friends who have been using AI generation a lot... I'm trying to create a couple of legacy-flavored (BBS-tech-related, in Rust) applications to see how they land. So far it's mostly been planning and structure, beyond the time I've spent in contemplation. I'm not sure I can justify the expense long term, but I want to experience the fuss a bit more to at least have a better awareness.
STAN'd to the top.
Decent article, but it feels like a LinkedIn rehashing of stuff people at the edge have already known for a while.
You can't get away from the engineering part of software engineering even if you are using LLMs. I have been using Claude Opus 4.5, and it's the best out of the models I have tried. I find that I can get Claude to work well if I already know the steps I need to do beforehand, and I can get it to do all of the boring stuff. So it's a series of very focused and directed one-shot prompts that it largely gets correct, because I'm not giving it a huge task, or something open-ended.
Knowing how you would implement the solution beforehand is a huge help, because then you can just tell the LLM to do the boring/tedious bits.
I’ve ended up with a workflow that lines up pretty closely with the guidance/oversight framing in the article, but with one extra separation that’s been critical for me.
I’m working on a fairly messy ingestion pipeline (Instagram exports → thumbnails → grouped “posts” → frontend rendering). The data is inconsistent, partially undocumented, and correctness is only visible once you actually look at the rendered output. That makes it a bad fit for naïve one-shotting.
What’s worked is splitting responsibility very explicitly:
• Human (me): judge correctness against reality. I look at the data, the UI, and say things like “these six media files must collapse into one post”, “stories should not appear in this mode”, “timestamps are wrong”. This part is non-negotiably human.
• LLM as planner/architect: translate those judgments into invariants and constraints (“group by export container, never flatten before grouping”, “IG mode must only consider media/posts/*”, “fallback must never yield empty output”). This model is reasoning about structure, not typing code.
• LLM as implementor (Codex-style): receives a very boring, very explicit prompt derived from the plan. Exact files, exact functions, no interpretation, no design freedom. Its job is mechanical execution.
Crucially, I don’t ask the same model to both decide what should change and how to change it. When I do, rework explodes, especially in pipelines where the ground truth lives outside the code (real data + rendered output).
This also mirrors something the article hints at but doesn’t fully spell out: the codebase isn’t just context, it’s a contract. Once the planner layer encodes the rules, the implementor can one-shot surprisingly large changes because it’s no longer guessing intent.
The challenges are mostly around discipline:
• You have to resist letting the implementor improvise.
• You have to keep plans small and concrete.
• You still need guardrails (build-time checks, sanity logs) because mistakes are silent otherwise; see the sketch at the end of this comment.
But when it works, it scales much better than long conversational prompts. It feels less like “pair programming with an AI” and more like supervising a very fast, very literal junior engineer who never gets tired, which, in practice, is exactly what these tools are good at.
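On the guardrails bullet above: most of mine are embarrassingly small checks that run right after the pipeline, roughly like this (made-up field names, but these are literally the invariants from the planner step), because the implementor will happily produce output that is structurally fine and semantically empty:

```python
def check_ingest_output(posts: list[dict]) -> None:
    """Cheap post-ingestion sanity checks: fail loudly instead of silently."""
    # "fallback must never yield empty output"
    assert posts, "ingestion produced no posts"
    for post in posts:
        assert post.get("media"), f"post {post.get('id')} has no media"
        assert post.get("timestamp"), f"post {post.get('id')} is missing a timestamp"
    # "stories should not appear in this mode"
    assert not any(p.get("kind") == "story" for p in posts), "stories leaked into IG mode"
```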
This highlights a missing feature of LLM tooling, which is asking questions of the user. I've been experimenting with Gemini in VS Code, and it just fills in missing information by guessing and then runs off writing paragraphs of design and a bunch of code changes that could have been avoided by asking for clarification at the beginning.
I wrote this forever ago in AI terms :) https://getstream.io/blog/cursor-ai-large-projects/
But the summary here is that with the right guidance, AI currently crushes it on large codebases.
What is the current state of LCMs (large code models)? I.e. models that operate on the AST and not on text tokens.
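For anyone unfamiliar with the distinction, the idea is that the model would consume something like the tree below rather than a flat character/token stream (plain Python ast here, just to illustrate what the structured view looks like):

```python
import ast

source = "def double(n): return n * 2"

# A text-token model sees roughly a flat sequence of subword tokens.
# An AST-level model would instead operate on the parsed structure:
tree = ast.parse(source)
print(ast.dump(tree, indent=2))
# Module -> FunctionDef(name='double') -> Return -> BinOp(Name('n') * Constant(2))
```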
Or why you shouldn't....
As the models have progressively improved (able to handle more complex codebases, longer files, etc.), I've started using this simple framework on repeat, which seems to work pretty well at one-shotting complex fixes or new features.
[Research] ask the agent to explain current functionality as a way to load the right files into context.
[Plan] ask the agent to brainstorm the best-practice way to implement a new feature or refactor. "Brainstorm" seems to be a keyword that triggers a better questioning loop for the agent. Ask it to write a detailed implementation plan to an md file (rough shape of that file at the end of this comment).
[Clear] completely clear the agent's context; this gives better results than just compacting the conversation.
[Execute plan] ask the agent to review the specific plan again; sometimes it will ask additional questions, which repeats the planning phase. This loads only the plan into context, and then I have it implement the plan.
[review & test] clear the context again and ask it to review the plan to make sure everything was implemented. This is where I add any unit or integration tests if needed. Also run test suites, type checks, lint, etc.
With this loop I’ve often had it run for 20-30 minutes straight and end up with usable results. It’s become a game of context management and creating a solid testing feedback loop instead of trying to purely one-shot issues.