Hacker News

stingraycharles | last Sunday at 2:20 PM | 7 replies

I don’t understand the point you’re trying to make. LLMs are not humans.

From my perspective, the whole point of an LLM (at least for writing code) is that it shouldn’t assume anything, follow the instructions faithfully, and ask the user for clarification if there is ambiguity in the request.

I find it extremely annoying when the model pushes back / disagrees, instead of asking for clarification. For this reason, I’m not a big fan of Sonnet 4.5.


Replies

IgorPartola | last Sunday at 2:55 PM

Full instruction following looks like a monkey’s paw / malicious compliance. A good way to eliminate a bug from a codebase is to delete the codebase, that type of thing. You want the model to have enough creative freedom to solve the problem; otherwise you are just coding using an imprecise language spec.

I know what you mean: a lot of my prompts include “never use em-dashes”, but all models forget this sooner or later. But in other circumstances I do want it to push back on something I am asking. “I can implement what you are asking, but I just want to confirm that you are ok with this feature introducing an SQL injection vulnerability into this API endpoint.”
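(To make the kind of thing worth pushing back on concrete, here is a minimal sketch in Python with sqlite3; the function and table names are hypothetical. The first query splices user input directly into the SQL string, which is the injection risk a model should flag; the second is the parameterized version.)

    import sqlite3

    def get_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable: user input is interpolated into the SQL string,
        # so a value like "x' OR '1'='1" changes the query's meaning.
        cur = conn.execute(f"SELECT id, email FROM users WHERE name = '{username}'")
        return cur.fetchone()

    def get_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized query: the driver treats the value as data, not SQL.
        cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
        return cur.fetchone()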

Kim_Bruning | last Sunday at 2:58 PM

I can't help you then. You can find a close analogue in the OSS/CIA Simple Sabotage Field Manual. [1]

For that reason, I don't trust agents (human or AI, secret or overt :-P) who don't push back.

[1] https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/... esp. Section 5(11)(b)(14): "Apply all regulations to the last letter." - [as a form of sabotage]

InsideOutSanta | last Sunday at 2:54 PM

I would assume that if the model made no assumptions, it would be unable to complete most requests given in natural language.

wat10000 | last Sunday at 3:22 PM

If I tell it to fetch the information using HTPP, I want it to ask if I meant HTTP, not go off and try to find a way to fetch the info using an old printing protocol from IBM.

scotty79 | last Sunday at 2:55 PM

> is that it shouldn’t assume anything, follow the instructions faithfully, and ask the user for clarification if there is ambiguity in the request

We already had those. They are called programming languages. And interacting with them used to be a very well-paid job.

MangoToupe | last Sunday at 4:16 PM

> and ask the user for clarification if there is ambiguity in the request.

You'd just be endlessly talking to the chatbots. We humans are really bad at expressing ourselves precisely, which is why we have formal languages that preclude ambiguity.

simlevesque | last Sunday at 2:49 PM

I think the opposite. I don't want to write everything down, and I like it when my agents take some initiative or come up with solutions I didn't think of.