I am a Mistral Le Chat Pro subscriber. I specifically chose to test their offerings because they are European. I don't have the hardware to run really big models locally, so I need a cloud provider if I want to use LLMs.
I find the antics of Anthropic, OpenAI, Google, Microsoft distasteful and avoid their products where I can.
After testing Le Chat and Devstral-2 for a while, I felt their offering was good enough to stump up some cash for it. I appreciate that many of their models are open weights and Apache 2.0 licensed. In general, I've been happy enough with the service and quality.
Maybe others are better, but I have little reason to change right now. If curiosity gets the better of me, I'll be looking at Qwen, Kimi, GLM, Deepseek, other open weights models, before Anthropic and OpenAI.
I use their API for several models, both for personal and professional use. I think their approach (smaller, specialised models that are well-adapted for specific tasks) is a very good fit for how I work. And even the more general-purpose ones, like the chat model, are just... refreshingly good in a lot of ways. My "ruthless review" prompt, which I use for, well, ruthless, guided reviews of early technical drafts, produces solid technical results, and holy crap is it ruthless, and does it know how to swear. By the time Claude or ChatGPT are done being honest about how right I am to push back and gently circling back, Mistral's large model has sent me back to the drawing board twice.
Being in the EU does smooth a lot of things in terms of compliance, payment processing and whatnot, but I also like that their data retention and privacy policies are pretty clearly spelled out. If I need to know something, there's a good chance it's explained outright somewhere, and I don't need to read between the EULA lines and wonder what it means.
I do hit limits in terms of capabilities sometimes, and I'm sure other providers' services offer better results for some things. But the businesses run on top of those more capable models feel too much like a scam at this point, and I'd rather not depend on them for anything I actually need.
There is also risk on the US regulatory side, as the recent drama around Anthropic showed.
I don't think it's inconceivable that the clowns in power decide to limit API access out of the blue one day because someone whispered a conspiracy theory in someone's ear. API blockade…
See also the constant flip-flopping on which cards NVIDIA can export: no consistent stance, no coherent policy.
Mistral models are definitely good enough. Most people fall for what I call the SOTA Logical Fallacy: whenever there is a 'better model', they think they need to use it, when less-powerful models actually perform the same tasks just as well. (It's an inverse form of the Shifting Baseline Syndrome: every time a new model comes out, people shift their baseline of what is acceptable, even though the previous baseline was acceptable for the same task.)
Devstral Small 2 was (and remains) a particularly strong small coding model, even beating larger open-weights models. Mistral's "problem" is marketing; other providers ship model updates constantly, so they stay in the news and seem like they're "beating" the competition. And it works: people get emotionally attached to brands and models, deciding who's better in the court of popular opinion, and that drives their choices (& dollars).