I don't understand the point of automating note taking. Copy-pasting text into my notes never worked for me, and now you can 100x that?
The whole point of taking notes, for me, is to read a source critically, fit it into my mental model, and then document that. Sometimes I look the notes up later for the details, but the shaping of the mental model is what counts.
First of all, this is more than just note taking. It appears to be (yet another) harness for coordinating work between agents with minimal human intervention. And as such, shouldn't part of the point be that you don't have to build that mental model yourself, but can offload it to the shared LLM "brain"?
It's highly debatable whether anything truly valuable (valuable for the owner of the product, that is) can be created with this approach, though. I'm not convinced it will ever be possible to build valuable products from just a prompt and an agent harness. At that point, the product itself can be (re)created by anyone, product development has been commodified, and the only thing of value is tokens.
My hypothesis is that “do things that don’t scale”[0] will still apply well into the future, but the “things that don’t scale” will change.
All that said, I've finally started using Obsidian after setting up some skills for note taking, researching, linking, splitting, and restructuring the knowledge base. I've never been able to spend time on keeping it structured, but I now have a digital secretary that does all the work I'm too lazy to do. I can just jot down random thoughts and ideas, and the agent helps me structure them, asks follow-up questions, relates them to other ongoing work, and so on. I'm still putting in the work of reading sources and building a mental model, but I'm also getting high-quality notes almost for free.
Totally agree re note taking. We treat our notes way too lightly; like an attic or a basement, they lead to hoarding more stuff than you'll ever need.
Most things don't need to end up in your notes, and LLMs add too much noise, noise that you will likely never personally verify or filter out.
JA Westenberg made a good video essay about it a few days ago:
I thought this was parody at first as well: a redundant, useless product named after the redundant, useless product of the same name from The Office (Wuphf.com).
The few scientific studies out there actually show a degradation of output quality when these markdown collections are fully LLM-maintained (as opposed to an increase when they're human-maintained), which I found fascinating.
I think the sweet spot is human curation of these documents; unsupervised management is never the answer, especially if you don't consciously think about debt/drift in them.
i've been running a variation of the _llm writes a wiki_ since late february. i run it on a sprite (sprites.dev from fly.io); it's public, but i don't particularly advertise it. i completely vibe coded the shit out of it with claude, both the app side and the content. the app side makes the content accessible to other agent instances, lists some documents at the root, provides a search function, and lets me read it in a browser with nice typography if i want to, as opposed to raw markdown.
it's neat: i can create a new sprite/whatever, point claude at the root, and tell it to set up zswap, and it will know exactly how to do so in that environment. if something changes and there's some fiddling to make it work, i can ask it to write a report and send it in to fold into the existing docs.
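for the curious, the core of the app side really is tiny. a rough sketch of the two interesting bits, listing the markdown docs and doing a naive full-text search (illustrative names, not my actual code):

```python
import re
from pathlib import Path


def list_docs(root):
    """Return all markdown documents under root as sorted relative paths."""
    return sorted(str(p.relative_to(root)) for p in Path(root).rglob("*.md"))


def search(root, query):
    """Case-insensitive full-text search; yields (doc, matching line) pairs."""
    pat = re.compile(re.escape(query), re.IGNORECASE)
    hits = []
    for rel in list_docs(root):
        for line in (Path(root) / rel).read_text().splitlines():
            if pat.search(line):
                hits.append((rel, line.strip()))
    return hits
```

the rest is just an http handler that serves `list_docs` at the root, wires `search` to a query parameter, and renders markdown to html for the browser view.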
Man, there's no point in replying. You are arguing with a non-human, so the conversation is without meaning or impact and thus a waste of time and energy.
Then you have never worked on a large enough codebase or across enough projects?
I think there's a serious issue with people using AI to do an immense amount of busywork and then never looking at it again. A colossal waste.