I broadly agree. They package "copilot" in a way that constantly gets in your way.
The one time I thought it could be useful, in diagnosing why two Azure services seemingly couldn't talk to each other, it was completely useless.
I had more success describing the problem in vague terms to a different LLM than with an AI supposedly plugged into the Azure organisation, able to directly query its information.
My 2 cents: this is what happens when OKRs are executed without a vision, or when the vision itself is the problem.
The goal is AI everywhere, so top-down everyone will implement it and be rewarded for doing so. There are incentives for every team to do it: money, promotions, budget.
100 teams? That's 100 AI integrations or more, not the ten entry points it (maybe) should be.
This means that for a year or more there will be AI everywhere, impossible to avoid, and usability will sink.
Now, if this was only done by Microsoft, I would not mind. The issue is that this behavior is getting widespread.
Things are becoming increasingly unusable.
I had a WTF moment last week. I was writing SQL, and there was no autocomplete at all. Then a chunk of autocompleted code appeared that looked like an SQL injection attack, with some "drop table" mixed in. The code would not have worked, it was syntactically rubbish, but it still looked spooky. I should have taken a screenshot of it.
I had that experience too. Working with Azure is already a nightmare, but the copilot tool built into Azure is completely useless for troubleshooting. I just pasted log output into Claude and got actual answers. Microsoft’s first-party stuff just seems so half-assed and poorly thought out.
I have had great luck with ChatGPT trying to figure out a complex AWS issue with
“I am going to give you the problem I have. I want you to help me work backwards step by step and give me the AWS cli commands to help you troubleshoot. I will give you the output of the command”.
It’s a combination of advice that ChatGPT gives me and my own rubberducking.
"They package "copilot" in a way that constantly gets in your way."
And when you try to make it do something useful, the response is usually "I can't do that".
That's what happens when everyone is under the guillotine and their livelihoods depend on overselling this shit ASAP instead of playing/experimenting to figure things out.
This seems like what should be a killer feature: Copilot having access to configuration and logs and being able to identify where a failure is coming from. This stuff is tedious manually, since I basically run through a checklist of where the failure could occur, and there's no great way to automate that; plus, sometimes there are subtle typo-type issues. Copilot can generate the checklist reasonably well but can't execute on it, even from Copilot within Azure. Why not??
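The checklist idea above can at least be partially automated without any AI: run an ordered list of diagnostic checks and report the first failure point. Here's a minimal sketch in Python, where every check function is a hypothetical stub (a real version would pull configuration and logs via a cloud SDK or CLI):

```python
# Hypothetical diagnostic checklist: each check takes a config dict and
# returns True if that failure point is ruled out. The checks below are
# illustrative stubs, not real Azure/AWS queries.

def check_dns(config):
    # Stub: does the service hostname exist at all?
    return config.get("hostname") is not None

def check_firewall(config):
    # Stub: is the target port in the allowed list?
    return config.get("port") in config.get("allowed_ports", [])

def check_credentials(config):
    # Stub: is a connection string / identity present?
    return bool(config.get("connection_string"))

# Ordered from most basic failure point to most specific.
CHECKLIST = [
    ("DNS resolution", check_dns),
    ("Firewall / network rules", check_firewall),
    ("Credentials", check_credentials),
]

def diagnose(config):
    """Return the name of the first failing check, or None if all pass."""
    for name, check in CHECKLIST:
        if not check(config):
            return name
    return None

if __name__ == "__main__":
    broken = {"hostname": "db.internal", "port": 5432, "allowed_ports": [443]}
    print(diagnose(broken))
```

Running the checks in order matters: a DNS failure makes the later checks meaningless, so reporting only the first failure keeps the output actionable.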