Hacker News

tyleo · today at 4:46 PM

I was thinking about this recently. I tend to run my AI assistants at low context usage because the documentation states that they degrade as the context fills up.

However I see tons of people on LinkedIn with ways of backing up context, not wanting to lose context, etc.

This seems like another way the system is being misused. Higher context usage also consumes more tokens, and I suspect you get worse (and slower) output than you would from a dense, detailed context.


Replies

jaggederest · today at 5:01 PM

I think there are two motivations that get blurred pretty quickly:

a) you find a particular context that executes well and want to preserve parts of it or not have to repeat explanations

b) you want to continue a session so you don't have to rebuild the context from scratch

I think (a) is a case where it's totally reasonable to preserve pieces as part of a prompt library or equivalent, or as directory-specific agent files, that kind of thing.
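For the directory-specific agent files idea, a sketch of what that might look like (a hypothetical example, not tied to any particular tool's format):

```markdown
<!-- Hypothetical agent file for src/payments/ — conventions worth
     preserving across sessions instead of re-explaining each time -->
# Agent notes for src/payments/

- All money amounts are integer cents; never use floats.
- Run the payments test suite before proposing changes here.
- The retry logic in the client module is deliberate; do not "simplify" it.
```

The point is that durable, curated facts like these survive between sessions without dragging a whole stale transcript along.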

I think (b) is much more likely to lead to problems if you do it over a long time, but it can be pretty useful for squeezing the last drop of juice out of the metaphorical orange.

I think the antipattern (one I've done myself, admittedly) is swapping between different restored contexts for different tasks or roles. At that point you should either be converting it into more durable documentation, if warranted, or curating it more specifically than "restore the entire context", even if it's just a one-off.

mikepurvis · today at 5:03 PM

I think the more you anthropomorphize it, the more it feels like "but I don't want to start all over getting it up to speed; this instance already knows all the important stuff."

If every exchange is treated as an independent query/response, it's much easier to see how cutting out the fluff, using a combination of the model's summaries and your own, helps you stay focused.
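A minimal sketch of that stateless pattern, with entirely hypothetical helper names and no real model API: instead of replaying the full transcript each turn, keep a short, bounded summary and send only the summary plus the new question.

```python
# Sketch: treat each exchange as an independent query by carrying a
# curated rolling summary instead of the whole conversation history.
# All names are illustrative, not a real client library.

def build_prompt(summary: str, question: str) -> str:
    """Compose a compact prompt: curated summary (if any) + the new question."""
    parts = []
    if summary:
        parts.append(f"Context summary:\n{summary}")
    parts.append(f"Question:\n{question}")
    return "\n\n".join(parts)

def update_summary(summary: str, exchange: str, max_chars: int = 500) -> str:
    """Fold the latest exchange into the summary, keeping it bounded.
    Crude tail-truncation stands in for real summarization here; in
    practice you might ask the model itself to compress this."""
    combined = (summary + " " + exchange).strip()
    return combined[-max_chars:]

# Usage: each turn sends a bounded prompt, never the whole transcript.
summary = ""
for q, a in [("What does module X do?", "X parses configs."),
             ("How is it tested?", "Via unit tests in tests/x.")]:
    prompt = build_prompt(summary, q)   # stays small regardless of turn count
    summary = update_summary(summary, f"Q: {q} A: {a}")
```

The design choice mirrors the comment: the model never "knows" anything between turns; whatever matters must survive in the summary you curate.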