The LLMs read everything.
They do, but that doesn't mean entire texts remain in their context. Increasingly they use agentic reading: the parent LLM spawns a sub-agent to read a long text and return a condensed version, which creates a potential for information loss.
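The loss mechanism can be sketched in a few lines. This is a toy model, not any real framework: the "sub-agent" here is a stand-in summarizer that keeps only the first sentence of each chunk (a real system would call an LLM), and all the function names (`agentic_read`, `sub_agent_summarize`) are hypothetical. Anything the summarizer drops never reaches the parent's context at all.

```python
def chunk(text: str, size: int = 80) -> list[str]:
    """Split text into roughly size-character chunks on sentence boundaries."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks, cur = [], ""
    for s in sentences:
        if cur and len(cur) + len(s) > size:
            chunks.append(cur)
            cur = s
        else:
            cur = (cur + " " + s).strip()
    if cur:
        chunks.append(cur)
    return chunks

def sub_agent_summarize(chunk_text: str) -> str:
    """Stand-in for an LLM sub-agent: keep only the first sentence of the chunk."""
    return chunk_text.split(".")[0].strip() + "."

def agentic_read(text: str) -> str:
    """Parent agent: fan chunks out to sub-agents, then join the summaries."""
    return " ".join(sub_agent_summarize(c) for c in chunk(text))

long_text = (
    "The report opens with revenue figures. A footnote revises them downward. "
    "The appendix opens with methodology. A caveat notes the sample was small."
)
condensed = agentic_read(long_text)
# The parent only ever sees `condensed`; the footnote and the caveat
# are silently gone, even though the sub-agent "read everything".
```

The point of the sketch is that the loss happens upstream of the parent: no amount of attention on the parent's side recovers details the sub-agent chose not to pass along.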
Only because they are architecturally unable to not read something.
It doesn't mean they're paying attention.