logoalt Hacker News

vimdayesterday at 11:12 PM1 replyview on HN

Time and time again, be it "hallucination", prompt injection, or just plain randomness, LLMs have proven themselves woefully insufficient at best when presented with and asked to work with untrusted documents. This simply changes the attack vector rather than solving a real problem


Replies

TeMPOraLyesterday at 11:21 PM

In a computing system, LLMs aren't substituting for code, they're substituting for humans. Treat them accordingly.