Can your LLM do that to a running system? Or will it have to restart the whole program to run the next iteration? Imagine you’re building something with long load times.
Also, your Lisp will always behave exactly as you intended rather than hallucinating its way to weird destinations.
An LLM can modify the code, rebuild and restart the next iteration, bring it up to a known state, and run tests against that state before you've even finished typing in the code. It can do this over and over while you sleep. With the proper agentic loop it can indeed inject code into a running application, test it, and unload it before injecting the next iteration. But there will be much less of a need for that kind of workflow. LLMs will probably just run in loops, standing up entire containers or Kubernetes pods with the latest changes, testing them, and tearing them down again to make room for the next iteration.
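For the curious, here's roughly what that inject/test/unload cycle looks like against a live CL image. This is a sketch, not anyone's actual harness: parse-price and its test are made-up names, and a real agent would drive this over a socket rather than by hand.

    ;; Sketch only: parse-price is a hypothetical function. Compile a
    ;; candidate fix straight into the running image...
    (defun parse-price (s)
      (parse-integer (remove #\$ s)))

    ;; ...test it against live state, no restart...
    (assert (= 42 (parse-price "$42")))

    ;; ...and unload it to make room for the next attempt.
    (fmakunbound 'parse-price)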
As for hallucinations, I believe those are like version 0 of the thing we call lateral thinking and creativity when humans manifest it. Hallucinations can be controlled and corrected for. And again—you really need to spend some time with the paid version of a frontier model because it is fundamentally different from what you've been conditioned to expect from generative AI. It is now analyzing and reasoning about code and coming back with good solutions to the problems you pose it.
I can’t speak to getting an LLM to talk to a CL listener, simply because I don’t know the mechanics of hooking one up. But given that they can talk to most anything else, I see no reason why they can’t.
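My guess is the natural hook-up is the same one SLIME uses: a Swank (or Slynk) server listening on a socket. But even a bare-bones version is just a read-eval-print loop over TCP. Here's a sketch, assuming Quicklisp and the usocket library; the port number is arbitrary, and since it evaluates whatever comes over the wire, you'd only ever run it in a sandbox.

    ;; Minimal eval server an external tool could connect to. Assumes
    ;; Quicklisp is loaded; evaluates untrusted forms, so sandbox it.
    (ql:quickload :usocket)

    (defun serve-listener (&optional (port 4005))
      (let ((server (usocket:socket-listen "127.0.0.1" port :reuse-address t)))
        (unwind-protect
             (let ((stream (usocket:socket-stream (usocket:socket-accept server))))
               (loop for form = (read stream nil :eof)
                     until (eq form :eof)
                     do (print (handler-case (eval form)
                                 (error (e) (format nil "ERROR: ~a" e)))
                               stream)
                        (finish-output stream)))
          (usocket:socket-close server))))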
What they can certainly do is iterate with a listener, with you acting as a crude cut-and-paste proxy. They will happily give you forms to shove into a REPL and then process the results. I’ve done it, in CL. I’ve seen it work. It made some very interesting requests.
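Concretely, the exchange goes like this: the model hands you a few forms, you paste them into the listener, and you paste the printed results back. The symbols below are hypothetical stand-ins for whatever it happens to be investigating.

    ;; Typical probes an LLM asks you to relay; my-app::process-order and
    ;; *test-order* are placeholder names.
    (describe 'my-app::process-order)
    (trace my-app::process-order)
    (my-app::process-order *test-order*)  ; reproduce the failure under TRACE
    (untrace my-app::process-order)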
I’ve seen the LLM iterate, for example, with source code by running it, adding logging, running it again, processing the new log messages, and cycling through that, unassisted, until it found its own “aha” and fixed a problem.
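In CL terms, one turn of that loop is nothing more than redefining the suspect function with a log line added and re-running the repro; normalize and *input* here are made-up names for illustration.

    ;; The model's edit: same function, one logging line added.
    (defun normalize (s)
      (let ((trimmed (string-trim " " s)))
        (format *trace-output* "~&normalize: ~s -> ~s~%" s trimmed) ; new log line
        (string-downcase trimmed)))

    (normalize *input*)  ; re-run, then feed the fresh log output back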
What difference does it make whether it’s talking to a shell or a CL listener? It’s not like it cares. Again, the mechanics of hooking up an LLM to a listener directly, I don’t know. I haven’t dabbled enough in that space to matter. But that’s a me problem, not an LLM problem.