Hacker News

FurstFly, today at 12:41 PM

Okay, I like how it reduces token usage, but it feels like it will reduce the model's overall intelligence. LLMs are probabilistic models, and you are basically playing with their priors.


Replies

sheiyei, today at 1:32 PM

If you take out only meaningless tokens (ones that do not contribute to subject focus), I don't see what you would lose. But since this removes a lot of contextual information as well, I would think it could be detrimental.
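
As a hypothetical sketch of what "taking out meaningless tokens" could look like in the simplest case, here is a naive stopword filter; the stopword set and function name are illustrative, not from any actual prompt-compression tool:

```python
# Hypothetical illustration: naive prompt "compression" that drops
# common low-information stopwords before sending text to an LLM.
# This saves tokens but also strips grammatical/contextual cues,
# which is the trade-off being discussed above.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "that", "it", "and"}

def strip_stopwords(prompt: str) -> str:
    # Keeps word order; drops words assumed to carry little subject focus.
    return " ".join(w for w in prompt.split() if w.lower() not in STOPWORDS)

original = "Please summarize the main point of the article that is attached."
print(strip_stopwords(original))
# -> Please summarize main point article attached.
```

Even this toy example shows the tension: the output is shorter and still readable, but real prompts rely on exactly this kind of "filler" for nuance, so aggressive filtering can shift what the model infers.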