Hacker News

hacker_homie · today at 10:52 AM · 10 replies

I have been saying this for a while: the issue is that there's no good way to do structured LLM queries yet.

There was an attempt to make a separate system-prompt buffer, but it didn't work out; people wanted longer general contexts instead. I suspect we will end up back at something like it soon.
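(For context: the closest thing in practice today is asking the model for JSON and validating it after the fact, rather than a true structured query interface. A minimal sketch of that pattern; `call_llm` here is a hypothetical stand-in for a real model API call, and the schema is illustrative:)

```python
import json

# Hypothetical stand-in for a real model call; in practice this would
# hit an actual LLM API and the reply would be nondeterministic.
def call_llm(prompt: str) -> str:
    return '{"action": "search", "query": "structured LLM queries"}'

# Validate the model's output ourselves, since the model can't guarantee it.
ALLOWED_ACTIONS = {"search", "summarize"}

def structured_query(prompt: str, max_retries: int = 3) -> dict:
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # model emitted non-JSON; ask again
        if parsed.get("action") in ALLOWED_ACTIONS and isinstance(parsed.get("query"), str):
            return parsed
    raise ValueError("model never produced a valid structured query")

result = structured_query("Find recent discussions of structured LLM queries.")
```

The point of the sketch: the "structure" lives entirely in the validation loop outside the model, which is exactly why it isn't a real query language.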


Replies

TeMPOraL · today at 12:15 PM

I've been saying this for a while: the issue is that what you're asking for is not possible, period. Prompt injection isn't like SQL injection, it's like social engineering - you can't eliminate it without also destroying the very capabilities you're using a general-purpose system for in the first place, whether that's an LLM or a human. It's not a bug, it's the feature.

spprashant · today at 12:05 PM

The problem is that once you accept it is needed, you can no longer push AI as a general intelligence with a superior understanding of the language we speak.

A structured LLM query is a programming language, and then you have to accept that you need software engineers for sufficiently complex structured queries. This goes against everything the technocrats have been saying.

HPsquared · today at 11:07 AM

Fundamentally there's no way to deterministically guarantee anything about the output.

xigoi · today at 1:33 PM

How long is it going to take before vibe coders reinvent normal programming?

this_user · today at 1:14 PM

> there's no good way to do LLM structured queries yet

Because LLMs are inherently designed to interface with humans through natural language. Trying to graft a machine interface on top of that is simply the wrong approach, because it is needlessly computationally inefficient, as machine-to-machine communication does not - and should not - happen through natural language.

The better question is how to design a machine interface for communicating with these models. Or maybe how to design a new class of model that is equally powerful but designed machine-first. That could also potentially solve a lot of the current bottlenecks in the availability of compute resources.

sornaensis · today at 1:09 PM

IMO the solution is the same as org security: fine-grained permissions and tools.

Models/Agents need a narrow set of things they are allowed to actually trigger, with real security policies, just like people.

You can mitigate agent->agent triggers by not allowing direct prompting, and instead feeding the structured output of tool A into agent B.
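To make the permissions idea concrete, here's a minimal sketch of a per-agent tool allowlist. All names (`Agent`, `TOOLS`) are illustrative, not any real framework's API:

```python
# Toy tool registry; real tools would do I/O.
TOOLS = {
    "read_calendar": lambda: "9am standup",
    "send_email": lambda: "email sent",
}

class Agent:
    def __init__(self, name: str, allowed_tools: set[str]):
        self.name = name
        self.allowed_tools = allowed_tools

    def call_tool(self, tool_name: str) -> str:
        # The policy check lives outside the model and does not depend on
        # the prompt, so a prompt injection cannot widen the allowlist.
        if tool_name not in self.allowed_tools:
            raise PermissionError(f"{self.name} may not call {tool_name}")
        return TOOLS[tool_name]()

reader = Agent("reader", allowed_tools={"read_calendar"})
ok = reader.call_tool("read_calendar")  # allowed
# reader.call_tool("send_email")        # raises PermissionError
```

The key design choice is that the check is enforced in code, not by instructions in the prompt - exactly the "real security policies, just like people" point above.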

adam_patarino · today at 12:25 PM

It’s not a query/prompt thing though, is it? No matter the input, LLMs rely on some degree of randomness. That’s what makes them what they are. We are just trying to force them into deterministic execution, which goes against their nature.

GeoAtreides · today at 11:35 AM

> structured queries

There's always pseudo-code: instead of generating plans, generate pseudo-code at a specific granularity (from high-level to low-level), read the pseudo-code, validate it, and then transform it into code.
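A minimal sketch of the validate-then-transform step: treat the model's plan as data, check each line against a small allowed grammar, and only then map it to real operations. The operation names (`fetch`, `filter`, `summarize`) are made up for illustration:

```python
# Each plan line is "op argument"; anything outside the grammar is rejected
# before it can be turned into executable code.
ALLOWED_OPS = {"fetch", "filter", "summarize"}

def validate_plan(plan: list[str]) -> list[tuple[str, str]]:
    steps = []
    for line in plan:
        op, _, arg = line.partition(" ")
        if op not in ALLOWED_OPS:
            raise ValueError(f"rejected step: {line!r}")
        steps.append((op, arg))
    return steps

# A well-formed plan passes; a plan with an unknown op raises ValueError.
steps = validate_plan(["fetch news", "filter llm", "summarize all"])
```

Only the validated `steps` list would ever be handed to a code generator, so the pseudo-code acts as a checkable intermediate representation.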

codingdave · today at 12:19 PM

That seems like an acceptable constraint to me. If you need a structured query, LLMs are the wrong solution. If you can accept ambiguity, LLMs may be the right solution.

htrp · today at 11:43 AM

Whatever happened to the system prompt buffer? Why did it not work out?
