I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about, and how to write about it. This sort of curation can only come from a thinking, feeling human being.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone bothered to write down; I actually go out into the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do these things have an almost insulting understanding of the jobs they are trying to replace.
And that's exactly the same for coding!
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly what you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer; that is the programmer's actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
Well said. I try to capture and express this same sentiment to others through the following expression:
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. E.g. technical writing needs soul, user interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
Thanks so much for this!
Nicely written (which, I guess, is sort of the point).
The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things.
Your ability to articulate yourself cleanly comes across in this post in a way that I feel AI is always reaching for and never quite achieves.
I completely agree that the ambition of AI proponents to replace workers is insulting. You hit the nail on the head by pointing out that we simply don't write everything down. And the more common sense or well known something is, the less likely it is to be written down, yet the more likely an AI needs it to align itself properly.
See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.
I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.
I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
Why shouldn't AI be able to model all of this sufficiently in the not-so-distant future? Why shouldn't it, or at least the system that feeds it, have sufficient access to new data and sensors to collect information on its own?
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's just about never going to go away, but that could be solved easily right now. And whichever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
You know, just like the human it'd replace.
Your philosophy reminds me of my friend Caroline Rose. One of Caroline's claims to fame was writing the original Inside Macintosh.
You may enjoy this story about her work:
https://www.folklore.org/Inside_Macintosh.html
As a counterpoint, the very worst "documentation" (scare quotes intended) I've ever seen was when I worked at IBM. We were all required to participate in a corporate training about IBM's Watson coding assistant. (We weren't allowed to use external AIs in our work.)
As an exercise, one of my colleagues asked the coding assistant to write documentation for a Python source file I'd written for the QA team. This code implemented a concept of a "test suite", which was a CSV file listing a collection of "test sets". Each test set was a CSV file listing any number of individual tests.
The code was straightforward, easy to read and well-commented. There was an outer loop to read each line of the test suite and get the filename of a test set, and an inner loop to read each line of the test set and run the test.
The coding assistant hallucinated away the nested loop and just described the outer loop as going through a test suite and running each test.
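To make that concrete, here's a minimal sketch of the shape of that code - the names and details are hypothetical, mine rather than the original IBM source - showing the nested loop the assistant flattened:

    # Hypothetical sketch, not the original IBM code: the suite CSV lists
    # test-set files, and each test-set CSV lists the individual tests.
    import csv
    from pathlib import Path

    def run_test(test_row: list[str]) -> None:
        """Run a single test described by one row of a test-set CSV."""
        ...  # placeholder for the actual test runner

    def run_test_set(test_set_path: Path) -> None:
        """Inner loop: each row of a test-set CSV describes one test."""
        with test_set_path.open(newline="") as set_file:
            for test_row in csv.reader(set_file):
                run_test(test_row)

    def run_test_suite(suite_path: Path) -> None:
        """Outer loop: each row of the suite CSV names a test-set CSV."""
        with suite_path.open(newline="") as suite_file:
            for row in csv.reader(suite_file):
                run_test_set(Path(row[0]))

An accurate summary would have to mention both levels; describing it as "going through a test suite and running each test" erases the middle layer entirely.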
There were a number of small helper functions with docstrings and comments and type hints. (We type hinted everything and used mypy and other tools to enforce this.)
The assistant wrote its own "documentation" for each of these functions in this form:
"The 'foo' function takes a 'bar' parameter as input and returns a 'baz'"
Dude, anyone reading the code could have told you that!
All of this "documentation" was lumped together in a massive wall of text at the top of the source file. So:
When you're reading the docs, you're not reading the code.
When you're reading the code, you're not reading the docs.
Even worse, whenever someone updates the actual code and its internal documentation, they are unlikely to update the generated "documentation". So it started out bad and would get worse over time.
Note that this Python source file didn't implement an API where an external user might want a concise summary of each API function. It was an internal module where anyone working on it would go to the actual code to understand it.
Replacement will be 80% worse, that's fine. As long as it's 90% cheaper.
See Duolingo :)
Are you working in the legal field or is that separate? How big is your company?
In every single discussion AI-sceptics claim "but AI cannot make a Michelin-star five-course gourmet culinary experience" while completely ignoring the fact that most people are perfectly happy with McDonald's, as evidenced by its tremendous economic and cultural success, and the loudest complaint with the latter is the price, not the quality.
I think you fundamentally misunderstand how the technology can be used well.
If you are in charge of a herd of bots following a prompt scaffold in order to automate a work product that meets 90% of the quality of the pure human output you produce, that gives you a starting point with only 10% of the work left to be done. I'd hazard a guess that if you spent 6 months crafting a prompt scaffold you could reach 99% of your own quality, with the odd outliers here and there.
The first person or company to do that well then has an automation framework, and they can suddenly achieve 10x or 100x the output with a nominal cost in operating the AI. They can ensure that each and every work product is lovingly finished and artisanally handcrafted, go the extra mile, and maybe reach 8x to 80x output with a QA loss.
To do 8-80x one expert's output, you might need to hire a bunch of people for segmented tasks - some to do interviews, build relationships, and handle the other things that require in-person socialization. Or maybe AI can identify commonalities and predict a plausible enough model that anyone paying for what you do will be satisfied with the 90%-as-good AI product without that personal touch, and as soon as an AI-centric firm decides to eat your lunch, your human-oriented edge is gone. If it comes down to bean counting, AI is going to win.
I think anything that doesn't require physically interacting with the world is susceptible to significant disruption, from augmentation to outright replacement, depending on the cost of tailoring a model to the tasks.
For valuable enough work, companies will pay the millions to fine-tune frontier models, either through OpenAI or open source options like Kimi or DeepSeek, and those models will give those companies an edge over the competition.
I love human customer service, especially when it's someone who's competent, enjoys what they do, and actually gives a shit. Those people are awesome - but they're not necessary, and the cost of not having them is less than the cost of maintaining a big team of customer service agents. If a vendor tells a big company that it can replace 40k service agents being paid ~$3.2 billion a year with a few datacenters, custom AI models, AI IT and support staff, and a totally automated customer service system for $100 million a year, the savings might well be worth the reputation hit. None of the AI will be able to match the top 20% of human service agents in the edge cases, and there will be a new set of problems that come from customer and AI conflict, etc.
Even so. If your job depends on processing information - even information in a deeply human, emotional, psychologically nuanced and complex context - it's susceptible to automation, because the ones with the money are happy with "good enough." AI just has to be good enough to make more money than the human work it supplants, and frontier models are far past that threshold.
Spot on! I think LLMs can help greatly in quickly putting that knowledge in writing, including using them to review written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires learn to write more clearly. LLMs are clearly useful for increasing productivity, but management that thinks they are even close to ready to replace large sections of practically any workforce is delusional.
I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:
- Most people don't communicate as thoroughly and completely - in writing or verbally - as they think they do. Very often there is what I call "assumptive communication": the sender is ambiguous, and the ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - it's done all the time - but not always. And the resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, and how often you resolve it by assuming "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at the level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms heuristics is from Frank Luntz*:
"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but I read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
Sounds like a bunch of agents could do a good amount of this. A high horse isn't necessary.
>insulting
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).
…says every charlatan who wanted to keep their position. I'm not saying you're a charlatan, but you are likely overestimating your own contributions at work. As for your comment about feeding on data - AI can read faster than you can by orders of magnitude. You cannot compete.
The problem is that so many things have been monopolized or oligopolized by equally-mediocre actors so that quality ultimately no longer matters because it's not like people have any options.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.
Apply the same for software (have you seen how bad tech is lately?) or basically any kind of vertical with a nontrivial barrier to entry where someone can't just say "this sucks and I'm gonna build a better one in a weekend".