> You can’t just ask AI to dump; you need to vaguely describe what design elements you think are important.
Right. And that’s what I’ve tried to do, but I’m not confident it’s captured the most critical info in an efficient way.
> I can’t imagine asking AI to change some code without having a description of what the code does. You could maybe reverse engineer that, but that would basically be generating the documents after the fact.
This is exactly how I’ve been using AI so far. I tell it to deeply analyze the code before starting, and it burns huge amounts of tokens relearning the same things it learned last time. I want to get some docs in place to minimize this. That’s why I’m interested in what a subagent would respond with, because that’s what it’s usually operating on. Or maybe the compressed context would be an interesting reference.
You can save the analysis, and those become your docs. But your workflow has to keep them in sync with the code.
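One way to keep that sync honest is a cheap staleness check before each session. A minimal sketch, assuming a hypothetical convention where `docs/<name>.md` is the saved analysis for `src/<name>.py` (the directory layout and naming are illustrative, not anyone's actual workflow):

```python
#!/usr/bin/env python3
"""Flag saved analysis docs that are older than the code they describe.

Sketch only: assumes each doc in docs/ maps to a source file by stem
(docs/parser.md documents src/parser.py). Adapt to your own layout.
"""
from pathlib import Path


def stale_docs(src_dir: Path, docs_dir: Path) -> list[str]:
    stale = []
    for doc in docs_dir.glob("*.md"):
        src = src_dir / f"{doc.stem}.py"
        # A doc is stale if its source counterpart was modified after it.
        if src.exists() and src.stat().st_mtime > doc.stat().st_mtime:
            stale.append(doc.name)
    return stale


if __name__ == "__main__":
    for name in stale_docs(Path("src"), Path("docs")):
        print(f"STALE: {name}, regenerate before the next AI session")
```

Running this (or a git-hook equivalent) before handing the docs to the model tells you which analyses need refreshing instead of re-analyzing everything every time.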
Working at a FAANG, I have no idea about token costs; it’s a blind spot for me. One of these days I’m going to try to get Qwen Coder going for some personal projects on my M3 Max (I can run 30B or even 80B models, heavily quantized) and see if I can build something that’s thrifty with the resources a local LLM provides.