Interestingly, I know of at least one application at a charity dealing with trafficking where Grok was happy to do one-shot classification tasks that every other model refused to cooperate on.
I think there's a surprising number of genuinely useful applications in this sort of grey area for a slightly less guardrailed, near-frontier model (also, the grok-fast models are cheap!).
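For anyone curious what such a setup looks like in practice: a one-shot classification call can be sketched as a payload for an OpenAI-compatible chat endpoint (xAI serves the Grok models through one). The model name, labels, and example text below are hypothetical placeholders, not the charity's actual setup:

```python
# Sketch of a one-shot classification payload for an OpenAI-compatible
# chat endpoint. Labels, example text, and model name are placeholders.

def build_one_shot_request(text, labels, example_text, example_label,
                           model="grok-3-mini"):
    """Build a chat-completions payload containing one labeled example."""
    system = ("You are a classifier. Reply with exactly one label from: "
              + ", ".join(labels))
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            # the single "shot": one worked example
            {"role": "user", "content": example_text},
            {"role": "assistant", "content": example_label},
            # the actual input to classify
            {"role": "user", "content": text},
        ],
        "temperature": 0,
    }

payload = build_one_shot_request(
    text="Message to classify goes here",
    labels=["relevant", "not_relevant"],
    example_text="Example message",
    example_label="relevant",
)
```

The payload would then be POSTed to the provider's chat-completions endpoint with an API key; auth, retries, and response parsing are omitted here.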
Gemini in particular has a habit of blocking my pretty mundane requests, claiming they're attempts to jailbreak it or create malicious code.
Grok also does quite well at code reviews in my experience, because it's not so aggressively "aligned".
I couldn't get Gemini or ChatGPT to do OCR of children's books (I literally own the books, so there's no copyright issue - it's all just fair use!).
The OCR was complex enough (bad-quality photos) that "simple" OCR models couldn't handle it.
Fortunately, Claude obliged (and Mistral OCR was helpful too!).
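For anyone trying the same, pointing a vision-capable model at a page photo looks roughly like this: a sketch of the Anthropic Messages API content-block format, with the image bytes, prompt wording, and model name as stand-in assumptions:

```python
import base64

def build_ocr_request(image_bytes, media_type="image/jpeg",
                      model="claude-3-5-sonnet-latest"):
    """Build an Anthropic Messages API payload asking the model to
    transcribe the text visible in a photographed page."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "max_tokens": 2048,
        "messages": [{
            "role": "user",
            "content": [
                # the photographed page, sent inline as base64
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": media_type,
                            "data": b64}},
                {"type": "text",
                 "text": "Transcribe all text in this photo verbatim."},
            ],
        }],
    }

# placeholder bytes; in practice you'd read the photo from disk
payload = build_ocr_request(b"\xff\xd8fake-jpeg-bytes")
```

The dict would be sent to the Messages endpoint with an API key; for a whole book you'd loop over page photos and concatenate the transcriptions.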
There are lots of uncensored models out there; I don't think Grok is leading on that front. They kind of pick and choose which things they want to support based on Elon's worldview. Elon used to hang out with sex traffickers, so of course Grok is fine talking about it. It probably even offers them strategies, does their accounting for free, suggests money-laundering schemes, etc.
I'm a software dev, and I was doing a security check on my own application (for work). I was running it on localhost and gave the model access to the code.
Every single model except Grok refused to attempt to run any sort of test to check whether there was an issue.