>None of the three can easily be captured in code, but are trivial to capture as documentation.
Posts like this frustrate me. Not because of what they ask, but because of what they incorrectly assume. They assume that documentation can provide enough context, and that human knowledge is not needed.
Every bit of written documentation can and will be misinterpreted. And perfect clarity is impossible. A well-written ADR does not eliminate all ambiguity, because there is so much cultural context around the writing of an ADR that attempting to read it from some other cultural vantage point leads to bad assumptions. We can find this basic lesson from reading law (the 2nd and 14th Amendments to the US Constitution), history (what happened after Muhammad died?), philosophy (what in the world is Plato's cave talking about?), or theology (how should we translate Ephesians 5:22-33, and what does it mean?) outside its original context with other people.
Just writing things down and thinking an AI is going to later perfectly understand the author's intent... patently ridiculous. I do not intend to dismiss the idea that we should probably document more, but the idea that the AI can just take our documentation and competently understand all the decisions represented in it is ludicrous.
> They assume that documentation can provide enough context, and that human knowledge is not needed.
It's funny actually, because I fully agree with your reasoning. The only part where we differ is whether that's assumed, or even implied.
No documentation means running fully on tribal knowledge, or institutional knowledge if you prefer. Even if you capture your intent, imperfect and incomplete, in as little as two paragraphs, you'll have durable recorded memory, and intent you'll be able to reference. It does not eliminate ambiguity, but it adds framing, direction, and friction.
The examples are great, and they serve really well to prove another point that I intentionally left out: writing is not a one-shot activity. Documentation is living and should be treated as such. Unless it receives proper care continuously, it will wither and die. That could very well be the topic of a future post!
Thank you for reading and for providing thoughtful feedback!
Pulling histograms from my dictation software, it's apparent that I speak for sixty minutes at a time multiple times a day at 200-350 wpm aggregate just... trying to communicate all the nuance about a tiny, _tiny_ subject area, for an agent to implement or write in plans/docs... and then despite the best efforts of both me and LLMs, that can be and usually still _is_ misinterpreted.
Whether bulk or terse, highly precise with words or highly nuanced in how it's communicated... I don't think any of that gets you to a place where the documentation is a substitute for asking the person.
People are trying to do this with meetings as well, and certainly they help. But the written code, plus meetings, plus an architect-like person yammering on endlessly about the nuance that went into it, is still often not enough to capture the detail in earnest, and especially not in a way that won't be misinterpreted.
Perhaps if AI becomes truly superhuman in all of the relevant areas to the point that it makes the decisions just as well as the person in the chair... _then_ we might solve this by having it instantly pattern recognize the solution and the why, but until and unless we reach that day, I think what you're saying is very true.
>Every bit of written documentation can and will be misinterpreted.
Yes, humans (and human languages) are flawed and lossy.
>A well-written ADR does not eliminate all ambiguity,
True: no docs can ever eliminate all ambiguity (on a decent-sized project, at least).
But this entire argument seems to be "letting perfect be the enemy of the good". Documentation doesn't have to be perfect or 100% unambiguous to be useful.
People would often ask me why I would just read the code for a project rather than ask whoever wrote it. At the end of 30 minutes talking to someone, I may not have learned anything. Or I may have learned the wrong thing because they wrote it 3 years ago and forgot about it.
At the end of 30 minutes of reading the code, I either understand how the project works at a high level or know enough to know I'm not going to be improving anything with this project, ever.
Having worked at an enterprise with 100+ engineers across multiple teams with complex reporting lines, I definitely agree with this. There's a lot of nuance behind decisions that docs simply can't capture. I mean, in theory you can probably write down every consideration that led you to make a certain decision, e.g.:
- I'm writing this service even though team X has built the same thing, because my team lead doesn't trust team X since the last time we depended on their service 3 years ago, they had a major downtime that screwed us up big time
- This service is using AWS Lambda simply because I think it's cool, despite the fact that the company has a dedicated team running k8s stack with argocd, argo rollouts, KEDA, etc for the entire company
- Service Y is written in this particular way because it's shared with another team that came from a company that was acquired, and they wouldn't use it unless we wrote it this particular way, and making the top execs happy is more important than dealing with a small amount of tech debt (this is probably true)
But no one is going to write these in their RFC. But Sarah knows.