Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I am already using low code tooling with agents for some projects, in iPaaS products.
I would say that current programming languages have a better chance due to the huge amount of code that AI can train on. New languages do not have that leverage. Moreover, current languages have large ecosystems that still matter.
I see the opposite. New languages have more difficulty breaking into popularity due to lack of enough existing codename and ecosystems.
I don't agree. For one thing, the language directly impacts things like iteration speed, runtime performance, and portability. For another, there's a trade-off between "verbose, eats context" and "implicit, hard to reason about".
IMO Rust will strike a very strong balance here for LLMs.
Im already using models to reason about and summarize part of the code from programming language to prose. They are good at that. I can see the process being something like english to machine lang, machine lang to english if the human needs to understand. However amother truism is that compilers are a great guardrail against bad generated code. More deterministic guardrails are good for llms. So yeah im not there yet where i trust binaries to the statistical text generators.
> Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I have the opposite opinion. As LLM become ubiquitous and code generation becomes cheap, the choice of language becomes more important.
The problem with LLM for me is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that it is going to generate?
I use LLM at work in Python. It can, and will, easily use hacks upon hacks to get around things.
Thus I maintain that as code generation is cheap, it is more important to constraint that code generation.
All of this assume that you care even a tiny bit about what is happening in your code. If you don't, I suppose you can keep banging the LLM to fix that binary blob for you.