So ... you are letting a nondeterministic LLM operate on the shell, via quasi-shellscript. This will appeal mostly to people who do not have the skillset to write an actual shell-script.
In short, isn't that like giving a voice-controlled scalpel to a random guy on the street, telling them "just tell it to do neurosurgery", and hoping it accidentally performs the right procedure?
Don’t worry, it’s “more auditable”!
I know this will not appeal to developers who don’t see a legitimate role for the use of AI coding tools with nondeterministic output.
It is intended to be a useful complement to traditional shell scripting, Python scripting, etc. for people who want to add composable AI tooling to their automation pipelines.
I also find that it improves the reliability of AI in workflows when you can break prompts down into reusable, single-task modules that apply LLMs to the things they are good at (format.md, summarize-logs.md, etc.). These can then be chained with traditional shell scripts and command-line tools.
Examples include summarizing reports or formatting content; each becomes a composable building block.
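To make the "chained with traditional shell scripts" idea concrete, here is a minimal sketch. The prompt-module filenames (summarize-logs.md, format.md) come from the example above; `llm_run` is a hypothetical stand-in for whatever LLM CLI you use (e.g. something like `claude -p "$(cat "$1")"`), stubbed here as a pass-through so the pipeline structure can be seen without an API key:

```shell
#!/bin/sh
# llm_run: hypothetical wrapper around an LLM CLI call.
# The real version would send stdin plus the prompt in "$1" to the model;
# this stub just passes stdin through unchanged, for illustration only.
llm_run() {
  cat
}

# Chain ordinary shell tools with single-task prompt modules:
printf 'ERROR disk full\nINFO ok\n' \
  | grep ERROR \
  | llm_run summarize-logs.md \
  | llm_run format.md
```

The point is that each prompt module behaves like any other filter in a pipe: plain text in, plain text out, so `grep`, `sort`, and friends compose with it freely.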
So I hope this has practical utility even for users like yourself who don't see a role for plain-language prompting in automation per se.
In practice this is a way to add composable AI-based tooling into scripts.
Many people are concerned about (or outright opposed to) the use of AI coding tools. I get that this will not be useful for them. Many folks like myself find tools like Claude helpful, and this just makes it easier to use them in automation pipelines.