My work gives us Copilot access since We're a Microsoft Shop™
It's too bad Copilot is by far the dumbest competitor in the space
My favorite interaction so far was when I prompted it with:
ffmpeg command to convert movie.mov into a reasonably sized mp4
Sure, it's not the most direct instruction, but I tend to give it just enough to get the job done, assuming the LLM knows what its purpose is as an LLM, and it always works with the other chatbots.
Copilot's response:
I implemented and executed the Python code above to convert movie.mov to a reasonably sized movie.mp4 using ffmpeg.
However, the Python code failed since it was not able to find and access movie.mov file.
Do you want me to try again or is there anything else that I can help you with?
Note that I didn't cut anything out. It didn't actually provide me any "Python code above".
Same here (MS Shop).
My shock moment was when I asked it to convert an image into a nice PPTX slide, naively assuming it had the best PowerPoint capabilities since that's also an MS product.
It returned an unformatted text box on one slide. I had to laugh so hard that people in the office noticed.
This is hilarious, because both Gemini and ChatGPT are shockingly good at putting together ffmpeg commands. They can both assemble the various options and stages/filters and understand what they do.
I cannot reproduce this in any version of Copilot:
- Copilot on outlook.com
- the base Copilot that comes with M365
- the $30/mo add-on Copilot
- Copilot in VS Code
All produce: ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k output.mp4
Which is not surprising, because it's just an OpenAI 4o call... so how are you getting this?
Microsoft has really screwed up on branding yet again. Every time I read “Copilot” I think of the GitHub thing, forgetting that there is also a Microsoft Copilot that is actually multiple things across multiple products including Windows and Office.
It’s also a website like ChatGPT apparently? I thought it was called Copilot because it writes with you, so why is there also a general chat/search engine called Copilot? Jesus.
This is pretty interesting. I had a very similar experience with GitHub Copilot's plugin inside a JetBrains IDE today (set to use 4o). I asked it to tell me how to do something; instead it rooted around in my code, tried to write a Python file (it's a PHP project), then told me it couldn't do it and gave the exact same "Do you want me to try again or is there anything else I can help you with?"
Thing is, I ask it random bits like this all the time and it's never done that before, so I'm assuming some recent update has borked something.
What model? It got it right on the first try here with your exact prompt, using the free GPT-4.1 model.
https://i.imgur.com/toLzwCk.png
ffmpeg -i movie.mov -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k movie.mp4
BUT, I have this in my copilot-instructions.md file:
# Always follow these steps when responding to any request
1. Please do a round of thinking in <thinking></thinking> tags
2. Then a round of self-critique in <critique></critique> tags
3. Then a final round of <thinking>, before responding.
4. If you need more information, ask for it.
I think the biggest issue is that M365 Copilot was sold as something that would integrate with business data (Teams, files, mail, etc.), and that never quite worked out.
So you end up with a worse ChatGPT that also doesn't have work context.
Standard Copilot indeed sucks, but I'm quite fond of the new Researcher agent. It spends much more time than any of the others I've tried, like Perplexity Pro and OpenAI.
From a one-line question it made me a relevant 45-page document examining the issue from all different sides, many of which I hadn't even thought of. It spent 30 minutes working; I've never seen Perplexity spend more than 5.
I won't be surprised if they significantly nerf it later to save on compute costs. I think right now they're giving it their all to build a customer base, and then they'll nerf it.
I asked Copilot to make an Excel formula that rounds all numbers up to the next integer... it took 4 back-and-forth messages and 15 minutes until it was working. Google took 5 minutes.
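(For reference, and assuming the goal really was just "round every number up to the next whole number", the usual answer is a one-liner like `=ROUNDUP(A1,0)` or `=CEILING(A1,1)`, where `A1` is a placeholder cell reference.)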
Your conversations are notebooks and the code it conjured up should be behind a dropdown arrow. For visualization it seems to work fine (i.e. Copilot will generate a Python snippet, run it on the input file I attach to the request and present the diagram as a response).
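For a sense of what that looks like in practice, here's a minimal sketch of the kind of snippet it tends to produce for a request like "plot the sales column from the attached CSV" -- the file name, column names, and exact code are hypothetical, and what Copilot actually emits will vary:
```python
# Hypothetical example of the kind of snippet Copilot generates for an attached file.
# "data.csv", "date" and "sales" are placeholders, not taken from this thread.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")                 # the file attached to the request
df.plot(x="date", y="sales", kind="line")    # chart built from two of its columns
plt.title("Sales over time")
plt.tight_layout()
plt.savefig("sales.png")                     # the rendered image is what gets shown in the chat
```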
In my experience, Microsoft Copilot (free version, in Deep Think mode) is way better than ChatGPT (free version) at most things I throw at them (improving text, generating code, etc.).
ohh wow, that's bad -- I just tried this with Gemini 2.5 Flash/Pro and it worked perfectly. I'd assume all frontier models get this right (even simpler models should).
even gemma3:12b gets it correct:
~> ollama run gemma3:12b-it-qat
>>> ffmpeg command to convert movie.mov into a reasonably sized mp4
Here's a good ffmpeg command to convert `movie.mov` to a reasonably sized MP4, along with explanations to help you adjust it:
```bash
ffmpeg -i movie.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k movie.mp4
```
*Explanation of the command and the options:*
People are responding with "works for me", but I've found that with Windows Copilot it was impossible to reset the AI state, and past prompts would color new inputs.
The "new chat" and "new conversation" buttons seem to do nothing.
I put your exact prompt into Copilot and it gave me the command
ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k movie_converted.mp4
Along with pretty detailed and decent-sounding reasoning as to why it picked those options.
I love Copilot in VS Code. Since it lets me choose the LLM, I always select "Claude Sonnet 3.7". What I love is the tight integration with VS Code: I can just ask it to do something, it relies on Claude's intelligence to generate the right code, and then all Copilot is really doing is editing my code for me, reading whatever code Claude tells it to in order to build context, and so on.
People think that LLMs are an excuse to be lazy. You have to put some effort into the prompt!
I pasted your exact prompt into the Copilot app just now. Here's the response it gave, along with this explanation: