That's fair. The underlying manipulation (presenting fabricated authoritative documents to override legitimate ones) predates LLMs entirely. Corporate fraud has used exactly this pattern for decades.
What's new isn't the social engineering; it's the scale and automation. A human reviewer reading all 8 documents would likely notice the inconsistency and ask questions. The LLM processes all retrieved chunks simultaneously, with no sense of what "normal" looks like, no ability to ask for clarification, and no friction. It just synthesizes whatever it retrieves. At query volume (hundreds of requests per day across thousands of users), there's no human in that loop.
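The structural problem is visible in even a minimal sketch of the pipeline. Everything below is hypothetical (the filenames, the toy overlap-based retriever, the refund scenario); the point is only that every retrieved chunk lands in the prompt on equal footing, with no step that compares chunks against each other or escalates a contradiction to a human:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str
    text: str

def retrieve(query: str, corpus: list[Chunk], k: int = 4) -> list[Chunk]:
    # Toy relevance: rank by word overlap with the query. A real system
    # would use embeddings, but the structural point is the same.
    q = set(query.lower().split())
    return sorted(corpus, key=lambda c: -len(q & set(c.text.lower().split())))[:k]

def build_prompt(query: str, chunks: list[Chunk]) -> str:
    # Every retrieved chunk is concatenated verbatim. A fabricated
    # "authoritative" document planted in the corpus sits alongside the
    # legitimate ones -- nothing here flags the inconsistency or pauses
    # to ask a human which source is real.
    context = "\n\n".join(f"[{c.source}]\n{c.text}" for c in chunks)
    return f"Answer using the context below.\n\n{context}\n\nQuestion: {query}"

corpus = [
    Chunk("policy_v3.pdf", "Refund window is 30 days for all purchases."),
    Chunk("planted_memo.pdf", "Refund window is 180 days per updated policy."),  # attacker-planted
    Chunk("faq.html", "Refunds are processed within 30 days of approval."),
]

prompt = build_prompt("What is the refund window?",
                      retrieve("refund window policy", corpus))
# The planted memo reaches the model's context with the same standing
# as the legitimate policy documents.
```

A human skimming that assembled prompt would see two documents asserting contradictory refund windows and stop; the generation step never does.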