The one good use case I've found for AI chatbots is writing ffmpeg commands. You can just keep chatting with it until you have the command you need. Some of them I save as executable .command files, or in a .txt note.
One that older AI struggled with was the "bounce" effect: play from 0:00 to 0:03, then backwards from 0:03 to 0:00, then repeat 5 times.
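Something along these lines should do it (untested sketch, not the bot's exact output: in.mp4/out.mp4 are placeholder names, it assumes a ~30 fps source so one 6-second forward+reverse cycle is 180 frames, and it drops the audio):

    # bounce: first 3s forward, then reversed, cycle played 5x total (loop=4 extra repeats)
    # assumes ~30 fps input, so one cycle = 180 frames (size=180); audio dropped (-an)
    ffmpeg -i in.mp4 -filter_complex "[0:v]trim=0:3,setpts=PTS-STARTPTS[f];[0:v]trim=0:3,setpts=PTS-STARTPTS,reverse[r];[f][r]concat=n=2:v=1:a=0,loop=loop=4:size=180:start=0[v]" -map "[v]" -an out.mp4

trim grabs the first 3 seconds, reverse plays that segment backwards, concat glues the two together, and loop repeats the whole cycle. One gotcha: reverse buffers the entire segment in memory, so this only works for short clips.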
As pessimistic about it as I am, I do think LLMs have a place in helping people turn a plain-text description into formal directives (search terms, command-line invocations, SQL, etc.).
... Provided that the user sees what's being generated for them, can confirm it, and (hopefully) learns the target "language."
A tutor, not a do-it-for-you assistant.
But doesn't an interface like this kind of show the inefficiency of the whole approach? Like, we can all agree ffmpeg is somewhat esoteric and LLMs are probably really great at it, but at the end of the day, if you can get 90% of what you need with just some good porcelain, why waste the energy spinning up a GPU?
LLMs are an amazing advance in natural language parsing.
The problem is someone decided that, plus the contents of Wikipedia, was all something needed to be intelligent, haha.