AI is just another layer of abstraction. I'm sure the assembly-language folks were grumbling at one point that functions were too abstract, too.
I totally see what you're saying, but to me this feels different. Compilation is a fairly mechanical and well-understood process. Large language models aren't just compiling English to assembler via your chosen language: they try to guess what you want, they add extra bits you didn't ask for, and they do some of your solution thinking for you. That feels like more than just abstraction to me.
> AI is just another layer of abstraction.
A fundamentally unreliable one: even an AI system that is, as far as any human can tell, implemented entirely correctly can yield wrong answers, and nobody can say why.
That’s not entirely the fault of the technology: natural language just doesn’t make for reliable specs, especially in inexperienced hands. So in a sense we finally got the natural-language programming that some of our ancestors dreamed of, and it turned out to be as unreliable as others of our ancestors said all along.
It partly is the fault of the technology, however, because while you can level all the same complaints against a human programmer, a (motivated) human will generally be much better at learning from their mistakes than the current generation of LLM-based systems.
(And that’s even if we ignore other issues, such as the fact that it leaves everybody entirely reliant on the continued support, and willingness to transact, of a handful of vendors in a market with a very high barrier to entry.)
AI is non-deterministic. Can it still be considered an abstraction over a deterministic layer?
Higher level languages that abstract assembly code are deterministic. AI, on the other hand, is not.
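The distinction can be sketched with a toy example (both functions here are made-up stand-ins, not real compiler or model APIs): a compiler is a fixed function from input to output, while an LLM's sampling step depends on random state, so the same "prompt" can yield different results.

```python
import random

# Toy "compiler": a fixed, deterministic mapping from source to result.
# The same input always produces the same output.
def compile_expr(src):
    return eval(src, {"__builtins__": {}})

# Toy "LLM step": sample the next token from a probability distribution.
# The output depends on the random state, not just the input.
def sample_next_token(weights, rng):
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] for t in tokens])[0]

weights = {"foo": 0.5, "bar": 0.3, "baz": 0.2}

# Deterministic layer: identical result on every run.
assert all(compile_expr("2 + 3") == 5 for _ in range(100))

# Stochastic layer: across different seeds, the same "prompt" yields
# different outputs.
outputs = {sample_next_token(weights, random.Random(seed)) for seed in range(100)}
print(outputs)  # almost certainly more than one distinct token
```

(Greedy decoding can make a model deterministic in principle, but that's a property of how it's run, not of the abstraction as typically deployed.)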
That's abstraction of the tool's implementation, not of its output.
Producing outputs you don’t understand is novel
You could say that about atomic bombs, too.
High-level languages that replaced assembly are not black boxes.