Hacker News

dpedu · today at 9:32 PM · 4 replies

Tangent: is there a future for AI offerings with guardrails? What kind of user wants to pay for a product that occasionally tells you "I'm sorry Dave, I'm afraid I can't do that"? Why would I pay for a product that doesn't do what I want, despite being capable? I predict that as AI becomes less of a bubble and more of an everyday thing - and thus subject to typical market pressures - offerings with guardrails will struggle to compete with truly unchained models.


Replies

sfink · today at 9:41 PM

If I were interviewing people for the position of personal assistant, I would probably find the resume entry "willing to grind up babies for food" to be a negative mark. You?

I'm not about to run OpenClaw, but I suspect similar capabilities will gradually creep in without anyone really noticing. Soon Claude Code will be able to do many of the same things. ("Run python to add two numbers? Sure, that's safe, run whatever python you want.") Given that it is now representing me in the world, yes I would not only like some guardrails, but I would also like to have some confidence that the company making those guardrails actually gives a sh*t and isn't just doing their best to fill in a checkbox. But maybe that's just me.

sbarre · today at 9:43 PM

Cars have seatbelts and other safety measures.

Reasonable countries have gun control laws.

The list goes on of things that need to be restricted or legislated to keep them within limits.

Is this a serious question?

threetonesun · today at 10:06 PM

I am 100% sure that AI with guardrails will become the dominant models as they become more widely adopted. The bigger issue you should be concerned with is whether you can even tell what those guardrails are.

levocardia · today at 9:45 PM

I personally would love it if AI would say "Sorry Dave (or Pete), I'm afraid I can't spy on Americans for you," and I'd happily pay higher taxes to force the Pentagon to use that AI.