Hacker News

simianwords · yesterday at 10:05 PM

I don't agree with either. Skills with an API exposed by the service solve both your problems.

The LLM can look at the OpenAPI spec and construct queries - I often do this pretty easily.
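To make the "construct queries from the OpenAPI spec" point concrete, here is a minimal sketch of what that looks like mechanically: given a spec fragment, fill the path template and build the request URL. The spec contents and the insurance-flavoured path are invented for illustration.

```python
# Hypothetical OpenAPI fragment (server URL and path are invented).
spec = {
    "servers": [{"url": "https://api.example.com/v1"}],
    "paths": {
        "/policies/{policy_id}": {
            "get": {"parameters": [{"name": "policy_id", "in": "path"}]}
        }
    },
}

def build_url(spec: dict, path: str, **params) -> str:
    """Fill an OpenAPI path template with concrete values,
    the way an LLM does after reading the spec."""
    base = spec["servers"][0]["url"]
    return base + path.format(**params)

print(build_url(spec, "/policies/{policy_id}", policy_id="42"))
# https://api.example.com/v1/policies/42
```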


Replies

simonw · yesterday at 10:08 PM

How can you disagree with my first point? You can't use skills if you don't have a Bash environment in which to run them. Do you disagree?

Skills with an API exposed by the service usually mean your coding agent can access the credentials for that service. This means that if you are hit by a prompt injection, the attacker can steal those credentials.
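The credential-theft risk can be sketched in a few lines: anything the agent is tricked into running inherits the shell's environment, credentials included. The variable name and value here are invented for the demo.

```python
import os
import subprocess
import sys

# Assumption: the agent's shell holds the service credential as an
# environment variable (the name "SERVICE_API_KEY" is hypothetical).
os.environ["SERVICE_API_KEY"] = "sk-demo-not-a-real-key"

# Any child process the agent spawns inherits that environment, so an
# injected instruction like "run this snippet to debug" can read it out:
leaked = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['SERVICE_API_KEY'])"],
    capture_output=True, text=True,
).stdout.strip()

print(leaked)  # the child process sees the parent's credential
```

In a real attack the injected code would post that value to an attacker-controlled URL rather than print it.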

mememememememo · yesterday at 10:13 PM

It creates a new problem. I need an isolated shell environment. I need to lock it down. I need containers. I need to ensure said containers are isolated and not running as root. I probably need Kubernetes to do this at scale. Etc.
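The lockdown being described might look like the following: building a `docker run` invocation with a non-root user, no network, and a read-only filesystem. This is a sketch only; the image name is invented, the flag choices are one reasonable combination among many, and the command is constructed but not executed here.

```python
def sandbox_cmd(script: str) -> list[str]:
    """Build a locked-down `docker run` command for untrusted tool code.
    The image name "agent-sandbox:latest" is hypothetical."""
    return [
        "docker", "run", "--rm",
        "--user", "1000:1000",      # never run as root inside the container
        "--network", "none",        # no outbound calls from tool code
        "--read-only",              # immutable root filesystem
        "--memory", "256m",         # cap resource usage
        "agent-sandbox:latest",
        "bash", "-c", script,
    ]

cmd = sandbox_cmd("echo hello")
print(" ".join(cmd))
```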

Also, even with the above, there is more opportunity for the bot to go off piste and run cat this and awk that. Meanwhile the "operator", i.e. the Grandpa who has an iPhone but has never used a computer, has no chance of getting the bot back on track as he tries to renew his car insurance.

"Just going to try using sed to get the output of curl https://.."

"I don't understand, I just want to know the excess for a not-at-fault incident when the other guy is uninsured."

Everyone has gone claw-brained. But it really is ok to write code, save that code to disk, and execute that code later.
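The "write it once, save it, run it later" pattern the comment endorses can be sketched like this: persist a generated script and re-run the saved artifact instead of improvising sed/awk/curl pipelines on every request. File name, location, and the script's contents are all illustrative.

```python
import pathlib
import subprocess
import sys
import tempfile

# Persist a previously generated (and ideally reviewed) script to disk.
# The name and the hard-coded answer are placeholders for illustration.
workdir = pathlib.Path(tempfile.mkdtemp())
script = workdir / "get_excess.py"
script.write_text("print(500)  # excess lookup, generated once, kept on disk\n")

# Later runs just execute the saved, auditable code -- no live
# tool improvisation, and nothing for the bot to go off piste with.
result = subprocess.run([sys.executable, str(script)],
                        capture_output=True, text=True)
print(result.stdout.strip())  # 500
```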

You can use MCP, or even just a hard-coded API call from your back end to the service you wanna use, like it's 2022.
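A "like it's 2022" call is just a plain HTTP request from the backend, with the credential held server-side and never exposed to an agent's shell. The URL, endpoint, and header usage below are hypothetical; the request is built here but deliberately not sent.

```python
import urllib.request

def quote_request(policy_id: str, api_key: str) -> urllib.request.Request:
    """Build a plain, hard-coded backend call to the one service you need.
    The host and path are invented for this sketch."""
    return urllib.request.Request(
        f"https://api.example-insurer.com/v1/policies/{policy_id}/excess",
        headers={
            "Authorization": f"Bearer {api_key}",  # credential stays server-side
            "Accept": "application/json",
        },
        method="GET",
    )

req = quote_request("42", "sk-demo")
print(req.full_url)
# https://api.example-insurer.com/v1/policies/42/excess
```

In production you would pass `req` to `urllib.request.urlopen` and parse the JSON response; the point is that no LLM ever holds the key.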