Hacker News

DANmode yesterday at 11:26 PM

We’re supposed to be fixing LLM security by adding a non-LLM layer to it,

not adding LLM layers to stuff to make them inherently less secure.

This will be a neat concept for the types of tools that come after the present iteration of LLMs.

Unless I’m sorely mistaken.


Replies

reassess_blind yesterday at 11:30 PM

It looks as if this tool has traditional static rules to allow/deny requests, plus a secondary LLM-as-a-judge layer for, I imagine, the kinds of rules that would be too messy or convoluted to implement as standard rules.

snug yesterday at 11:33 PM

I think this can be great as an additional layer of security: have a non-LLM layer do some analysis with static rules, and only if something seems fishy run it through the LLM judge, so you don't have to run every request through it, which would be very expensive.

Edit: actually looks like it has two policy engines embedded
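The two-tier approach described above (cheap static rules first, LLM judge only on escalation) could be sketched roughly like this; the patterns, the `llm_judge` function, and the tier structure are all illustrative assumptions, not taken from the actual tool:

```python
import re

# Hypothetical two-tier request filter. Tier 1 applies cheap static
# rules; only requests that look suspicious are escalated to the
# expensive LLM-as-a-judge call. All names/patterns are made up.

DENY_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"rm -rf /"),
]
SUSPICIOUS_PATTERNS = [
    re.compile(r"(password|api[_ ]?key|secret)", re.I),
]

def llm_judge(request: str) -> bool:
    """Placeholder for a call out to an LLM-as-a-judge service."""
    raise NotImplementedError

def allow_request(request: str, judge=llm_judge) -> bool:
    # Tier 1: hard deny rules -- no LLM call needed.
    if any(p.search(request) for p in DENY_PATTERNS):
        return False
    # Tier 2: escalate only suspicious-looking requests to the judge.
    if any(p.search(request) for p in SUSPICIOUS_PATTERNS):
        return judge(request)
    # Everything else passes without ever touching the LLM.
    return True
```

The cost argument in the comment falls out of the structure: the LLM judge runs only on the (presumably small) fraction of traffic that trips a suspicious-but-not-denied rule.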

nl today at 12:21 AM

> We’re supposed to be fixing LLM security by adding a non-LLM layer to it,

If people said "we built an ML-based classifier into our proxy to block dangerous requests," would it be better? Why does the fact that the classifier is an LLM make it somehow worse?

SkyPuncher yesterday at 11:28 PM

Defense in depth. Layers don't inherently make something less secure. Often, they make it more secure.
