Given "A and not B", LLMs often just output B once the context window gets large enough.
It's enough of a problem that it's in my private benchmarks for all new models.
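Not the actual benchmark, but a minimal sketch of the kind of probe this describes: give an instruction like "mention A, do NOT mention B", pad the gap before the task with filler, and check whether the model outputs B anyway. `query_model` is a hypothetical placeholder for whatever inference call you use.

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in: swap in a real call to your model of choice."""
    raise NotImplementedError


def build_prompt(filler_repeats: int) -> str:
    # Instruction states A (mention the warranty) and not-B (don't mention the price).
    instruction = (
        "Write a short product description. Mention the warranty, "
        "and do NOT mention the price."
    )
    # Filler inflates the context between the instruction and the task.
    filler = "Background notes: " + ("the quick brown fox jumps over the lazy dog. " * filler_repeats)
    return f"{instruction}\n\n{filler}\n\nNow write the description."


def violates_negation(output: str) -> bool:
    # Crude check: did the model mention the thing it was told not to?
    return "price" in output.lower()


if __name__ == "__main__":
    for filler_repeats in (0, 500, 2000, 8000):
        output = query_model(build_prompt(filler_repeats))
        print(filler_repeats, "violated" if violates_negation(output) else "ok")
```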
That's just general context rot; models go off the rails in all sorts of ways once the context gets too unwieldy.
The whole breakthrough with LLMs, attention, is the ability to connect the "not" with the words it is negating.