logoalt Hacker News

kahnclusionslast Thursday at 2:22 AM0 repliesview on HN

I’m not convinced LLMs can ever be secured, prompt injection isn’t going away since it’s a fundamental part of how an LLM works. Tokens in, tokens out.