LLMs can now capture intent. I think the issue is that the full landscape of human values never resolves cleanly when mapped from the values we actually manage to state in writing.
Asimov tried to capture this too: if a robot were tasked with "always protect human life", would it necessarily avoid killing at all costs? What if killing one person would save the lives of two others? The endless array of micro-trolley problems that dot the ethical landscape of actions, tractable and intractable to literate humans alike, makes a fully consistent accounting of human values impossible, and so one could never be expected of a robot.
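For what it's worth, here is a toy sketch (my own, nothing from Asimov) of how two natural readings of "always protect human life" disagree on the same micro-trolley case; the rule names and numbers are purely illustrative:

```python
# Toy illustration: two readings of "always protect human life"
# give contradictory verdicts on the same tiny trolley case.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    people_killed: int  # deaths caused directly by taking this action
    people_saved: int   # deaths averted by taking this action

def never_kill(action: Action) -> bool:
    """Deontological reading: permitted only if it kills no one."""
    return action.people_killed == 0

def minimize_deaths(action: Action, alternative: Action) -> bool:
    """Consequentialist reading: permitted if it leaves fewer people dead."""
    net = lambda a: a.people_killed - a.people_saved
    return net(action) <= net(alternative)

divert = Action("divert the trolley", people_killed=1, people_saved=2)
do_nothing = Action("do nothing", people_killed=0, people_saved=0)

print(never_kill(divert))                   # False: diverting kills someone
print(minimize_deaths(divert, do_nothing))  # True: diverting saves a net life
```

Both functions are faithful to the written rule; they just disagree, which is the point.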
> LLMs can now capture intent.

Humans cannot capture intent, so how can AI?
It is well established that understanding what someone meant by what they said is not a generally solvable problem, akin to the three-body problem.
Of course, that doesn't mean you can't get close enough almost all of the time, but in this context "close enough" isn't good enough.
After all, the entire Asimov story is about that inability to capture intent in the absolute sense.
> LLMs can now capture intent.

No, they can't. Here is an example: ask an LLM to write a multi-phase plan for a very large multi-file diff that it created, with the least ambiguity and the most continuity across phases; let's see if it can understand your intent.
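If anyone wants to try this concretely, here is a minimal sketch assuming the OpenAI Python client; the model name, file path, and prompt wording are all placeholders for whatever you actually used:

```python
# Minimal sketch of the test described above (assumes OPENAI_API_KEY is set).

from openai import OpenAI

client = OpenAI()

# Hypothetical: the multi-file diff the model produced earlier.
diff = open("large_change.diff").read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model
    messages=[{
        "role": "user",
        "content": (
            "Here is a large multi-file diff you created:\n\n" + diff +
            "\n\nWrite a multi-phase implementation plan for it, with the "
            "least ambiguity and the most continuity across phases."
        ),
    }],
)

# Judge for yourself whether the plan reflects the intent behind the request.
print(response.choices[0].message.content)
```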
“LLMs can capture intent now” reads to me the same as “AI has emotions now; my AI girlfriend told me so.”
I don’t mean to discredit you as a person or a professional, but we meatbags keep looking for sentience in things that don’t have it; that’s why we anthropomorphise things constantly, even as children.
We are easily fooled and misled.