Hacker News

susam · yesterday at 8:38 PM (14 replies)

It is a bit weird to see LLMs suddenly being presented as the reason to follow what are basically long-standing best practices.

'You must write docs. Docs must be in your repo. You must write tests. You must document your architecture. Etc. Etc.'

These were all best practices before LLMs existed and they remain so even now. I have been writing extensive documentation for all my software for something like twenty years now, whether it was for software I wrote for myself, for my tiny open source projects or for businesses. I will obviously continue to do so and it has nothing to do with:

> AI changes the game

The reason is simply that tests and documentation are useful to humans working on the codebase. They help people understand the system and maintain it over time. If these practices also benefit LLMs then that is certainly a bonus, but these practices were valuable long before LLMs existed and they remain valuable even now regardless of how AI may have changed the game.

It is also a bit funny that these considerations did not seem very common when the beneficiaries were fellow human collaborators, but are now being portrayed as very important once LLMs are involved. I'd argue that fellow humans and your future self deserved these considerations even more in the first place. Still, if LLMs are what finally motivate people to write good documentation and good tests, I suppose that is a good outcome since humans will end up benefiting from it too.


Replies

seanwilson · yesterday at 10:20 PM

> It is a bit weird to see LLMs suddenly being presented as the reason to follow what are basically long-standing best practices.

Maybe it's the speed of LLM iteration that makes the benefit more immediately obvious, vs seeing it unfold with a team of people over a longer time? It's almost like running a study?

I have a similar reaction to strong static types being advocated as a way to help LLMs understand and debug code, catch bugs, refactor... when it's obvious to me this helps humans as well.
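To make the static-types point concrete, here's a hypothetical illustration (not from the thread; the function name and figures are made up): the kind of bug a type checker surfaces at review time, before any human or LLM ever runs the code.

```python
# Sketch: type annotations catch a whole class of bugs statically,
# for humans and LLMs alike. Hypothetical example.

def apply_discount(price_cents: int, percent_off: int) -> int:
    """Return the discounted price, rounded down to whole cents."""
    return price_cents - (price_cents * percent_off) // 100

# A checker like mypy flags this call before the code runs:
#   apply_discount("1999", 10)  # error: Argument 1 has incompatible type "str"
# Without annotations, Python would only fail (or silently misbehave) at runtime.

print(apply_discount(1999, 10))  # 1800
```

The annotation costs one line and documents the contract for every future reader, human or otherwise.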

Curious how "this practice helps LLMs be more productive" relates to studies that try to show this with human programmers, where running convincing human studies is really difficult. Besides problems with context sizes, are there best practices that help LLMs a lot but not humans?

jamie_ca · yesterday at 11:57 PM

I see this as not just internal API/architecture/code documentation, but product documentation too. We maintain internal docs about how our product is used for our support, implementation, and sales teams to reference.

Right now it's hosted externally (in our "blessed" knowledge base), but it could be pulled into the repo. Then we could set an AI reviewer on every pull request to sanity-check whether the changes have a material impact on a feature as described in those docs, and flag them (or propose changes) if so. That would be a nice win for keeping the docs up to date, and it's easy enough to publish markdown as HTML, or even script an update to the canonical site when we merge to main.
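The review step described above could start as something much dumber than an AI reviewer. A minimal sketch, assuming a hypothetical mapping from source directories to the product docs that describe them: flag any doc whose feature's code changed in the PR but which wasn't touched itself.

```python
# Sketch of a doc-drift check: given the files changed in a pull
# request, flag any feature whose code changed but whose product doc
# did not. The path-to-doc mapping below is hypothetical.

FEATURE_DOCS = {
    "src/billing/": "docs/billing.md",
    "src/reports/": "docs/reports.md",
}

def docs_to_flag(changed_files: list[str]) -> list[str]:
    """Return docs that may be stale given this change set."""
    changed = set(changed_files)
    flagged = []
    for prefix, doc in FEATURE_DOCS.items():
        touched_code = any(f.startswith(prefix) for f in changed)
        if touched_code and doc not in changed:
            flagged.append(doc)
    return flagged

print(docs_to_flag(["src/billing/invoice.py"]))  # ['docs/billing.md']
print(docs_to_flag(["src/billing/invoice.py", "docs/billing.md"]))  # []
```

An AI reviewer adds judgment on top (does the change *materially* affect the feature?), but even this crude version keeps the docs from silently drifting.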

ronsor · yesterday at 8:57 PM

AI means that you cannot defer software design until you've written half the code; you cannot defer documentation to random notes at the end.

It has the effect of finally forcing people to think about the software they're making, assuming they care about quality. If they didn't, then it's not practically different from an insecure low-code app or something copy-pasted from 15-year-old Stack Overflow answers.

basilikum · yesterday at 8:50 PM

> The reason is simply that tests and documentation are useful to other humans working on the codebase.

Including future you.

8note · yesterday at 11:15 PM

there's an implicit ownership change, from having technical writers own the documentation, to including it as part of the commit.

when things are tiny or resource constrained, the same people are doing each task regardless, but "technical writer" is a real job around documentation and manual writing, so there's at least sometimes some real decoupling between the code and outwards facing documents.

that also covers cases where people can write code well, but whose english (or whatever the target documentation language is) is shaky at best.

forrestthewoods · yesterday at 9:00 PM

> It is a bit weird to see LLMs suddenly being presented as the reason to follow what are basically long-standing best practices.

About 95% of the work needed to make LLMs happy is just general-purpose better engineering. Unit tests? Integration tests? CI? API documentation? Good examples? All great for humans too!

I consider this largely a good thing. It would be much worse if the changes needed for Happy LLMs were completely different than what you want for Happy Humans! Even worse would be if they were mutually exclusive.

It's a win. I'll take it.

benatkin · yesterday at 8:42 PM

Well, it's timely because there's a docs platform that has surged in popularity, and it really is not a good idea for most of those who need technical docs to be using a SaaS that approximates Squarespace.

pooploop64 · yesterday at 9:32 PM

Lately I have seen a lot of things coming full circle like this in a way that always seems positive for humans as well.

Many doomers are running around saying the future is grim because everything will be made for AI agents to use rather than humans. But so far everything done to push that agenda has looked more like a big de-enshittification.

Another one is the Model Context Protocol, which brings forth the cutting-edge (for 1970) idea of using a standard text-based interface so that separate programs can interoperate through it.

If the cost of having non-user-hostile software is to let AI bros run around thinking they invented things like stdin and documentation, I'm all for it at this point.

If any AI bros are reading this, here's another idea: web pages that use a mostly static layout and a simple structure would probably be a lot easier for AI to parse. And Google, it would be really beneficial to AI agents if their web searches weren't being interfered with by clickjacking sites such as Pinterest.
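The static-layout point is easy to demonstrate: when a page's structure is in the markup rather than assembled by scripts, a few lines of stdlib code recover it, no headless browser required. A small sketch with a made-up page:

```python
# Simple, static HTML is trivially machine-readable: the stdlib
# parser recovers the document outline with no JavaScript execution.

from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Collect the text of h1-h3 headings from an HTML document."""
    def __init__(self):
        super().__init__()
        self.in_heading = False
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = False

    def handle_data(self, data):
        if self.in_heading:
            self.headings.append(data.strip())

page = "<html><body><h1>Docs</h1><p>Intro.</p><h2>Install</h2></body></html>"
collector = HeadingCollector()
collector.feed(page)
print(collector.headings)  # ['Docs', 'Install']
```

A script-assembled single-page app exposes none of that structure to a parser this simple.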

ErroneousBosh · yesterday at 10:30 PM

> These were all best practices before LLMs existed and they remain so even now

Okay, so what, should I be moving my docs out of the repo or something?

How should I make it as hard as possible for LLMs to make any use of or suggestions about my documentation?

formerly_proven · yesterday at 9:56 PM

There's a pattern where people create AI-specific infrastructure for coding agents which is essentially instantly obsolete because it's pointless. Stuff like most MCPs (instead of just using a CLI), agent-specific files (CLAUDE.md, AGENTS.md, .github/copilot-instructions.md, etc.), and so on.

> You should have a good, concise introduction to the codebase that allows anyone to write and test a simple patch in under 15 minutes.

Yeah, that's the CONTRIBUTING file.

sublinear · yesterday at 9:27 PM

I agree and would go one step further. The way people are now talking to LLMs to write code is the way we need them to plan and discuss in meetings with humans.

Everything regarding AI-assisted development is basically training wheels for the young people coming into the workplace.

j45 · yesterday at 9:21 PM

LLMs are making documentation more feasible to maintain.