I wonder how big that model is in RAM/disk. I use LLMs for FFMPEG all the time, and I was thinking about training a model on just the FFMPEG CLI arguments. If it was small enough, it could be a package for FFMPEG. e.g. `ffmpeg llm "Convert this MP4 into the latest royalty-free codecs in an MKV."`
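For reference, the kind of command such a tool would need to emit for that prompt — assuming "latest royalty-free codecs" means AV1 video and Opus audio — might look like this (a sketch, not a definitive answer; flag values like the CRF and preset are illustrative):

```shell
# Hypothetical output for the prompt above: AV1 + Opus in an MKV container.
# Assumes an ffmpeg build compiled with libsvtav1 and libopus enabled.
ffmpeg -i input.mp4 \
  -c:v libsvtav1 -crf 30 -preset 6 \
  -c:a libopus -b:a 128k \
  output.mkv
```

Even this simple case hints at the difficulty: the model has to pick encoders that are actually compiled into the user's build, and quality/speed trade-off flags differ per encoder.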
The JetBrains models are about 70 MB zipped on disk (one model per language)
That’s a great idea, but I feel like it might be hard to get it consistently correct
from a few days ago: https://news.ycombinator.com/item?id=42706637
Please submit a blog post to HN when you're done. I'd be curious to know the most minimal LLM setup needed to get consistently sane output for FFMPEG parameters.