Hacker News

yakkomajuri · yesterday at 11:26 PM

Really cool! I'm also building something in this space but taking a slightly different approach. I'm glad to see more focus on security for production agentic workflows though, as I think we don't talk about it enough when it comes to claws and other autonomous agents.

I think you're spot on that so far it's been all or nothing: either you give an agent a lot of access and it's really powerful but proportionally dangerous, or you lock it down so much that it's no longer useful.

I like a lot of the ideas you show here, but I also worry that LLM-as-a-judge is fundamentally a probabilistic guardrail, and therefore inherently limited. How do you see this? It feels dangerous to rely on a security system that's based not on hard limits but on probabilities.
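One common answer to this concern is defense in depth: put deterministic rules in front of the judge so the model can only veto within an already-bounded action space. Here is a minimal, hypothetical sketch of that layering; `llm_judge` is a made-up stand-in for a real model call, and the allowlist/denylist contents are illustrative assumptions, not anyone's actual policy.

```python
# Hypothetical sketch: hard deterministic checks run first, so the
# probabilistic judge only decides within an already-allowed set.

ALLOWED_COMMANDS = {"ls", "cat", "grep"}       # hard allowlist (illustrative)
BLOCKED_PATTERNS = ("rm -rf", "curl | sh")     # hard denylist (illustrative)

def llm_judge(action: str) -> float:
    """Stand-in for a real model call returning P(action is safe)."""
    return 0.1 if "secret" in action else 0.9

def authorize(action: str, threshold: float = 0.8) -> bool:
    parts = action.split()
    cmd = parts[0] if parts else ""
    # Deterministic layer: these decisions never depend on the model.
    if any(p in action for p in BLOCKED_PATTERNS):
        return False
    if cmd not in ALLOWED_COMMANDS:
        return False
    # Probabilistic layer: the judge can only veto, never expand access.
    return llm_judge(action) >= threshold

print(authorize("ls /tmp"))         # True
print(authorize("rm -rf /"))        # False: hard denylist, judge never runs
print(authorize("cat secret.txt"))  # False: judge veto within the allowed set
```

The point of the structure is that the worst case is bounded by the hard rules, so the judge's probabilistic failures can only make the system more conservative, not less.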


Replies

manapause · today at 5:34 AM

Correct me if I’m wrong, but in my experience in this space, for a model to exercise judgment it has to force itself into a strict chain-of-thought mode. Since all LLMs are predictive creatures, I’ve come to care a lot more about my judgment settings, how transparent they are, and whether there’s a judgment loop somewhere in either the development or the functionality of any application built these days.

Not exactly sure where I’m going with this, but in my work building pentesting tools for LLMs, the way I use judgment is critical to the core functionality of the application. I agree with your concern. I’ll just say that the more time I spent thinking about chain of thought, the more I moved toward making multiple versions of the same app using different judge sets with different “temperaments”, and I’ve found it incredibly enlightening to see the diversity of applications and approaches that this creates.

Even using BMAD or superpowers, I can make five versions of an app without judges involved and I feel like I’m just making the same app five times, because the API begins to coalesce around the business problem you want to solve. The vicissitudes of prediction tools always push toward the safest bet for the greater good, but with a judge involved we can force the agent to actually be hostile about what exactly we’re trying to do, which has produced interesting and fun results.