If you don't mind sharing - what's the specific prompt you use to get this to happen, and which LLM do you use it with?
The rudest and most aggressive LLM I've used is Deepseek. Most LLMs have a trained-in positivity bias, but I can prompt Deepseek to tell me my code is shit very easily.
I can share a similar approach I've found beneficial. I add: "Be direct and brutally honest in your feedback. Identify assumptions and cognitive biases to correct for." (I also add a compendium of cognitive biases and examples to the knowledge I give the LLM.)
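If you'd rather wire this up via the API than paste it into a chat UI, here's a minimal sketch of passing that instruction as the system prompt. This assumes the OpenAI Python SDK pointed at Deepseek's OpenAI-compatible endpoint; the API key, model name, and the code being reviewed are placeholders, not a canonical setup:

```python
# Minimal sketch: a "brutally honest" system prompt via Deepseek's
# OpenAI-compatible API. Assumes the OpenAI Python SDK is installed.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder, not a real key
    base_url="https://api.deepseek.com",  # Deepseek's OpenAI-compatible endpoint
)

SYSTEM_PROMPT = (
    "Be direct and brutally honest in your feedback. "
    "Identify assumptions and cognitive biases to correct for."
)

response = client.chat.completions.create(
    model="deepseek-chat",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Review this function:\n\ndef add(a, b):\n    return a - b"},
    ],
)
print(response.choices[0].message.content)
```

The same system prompt works with any chat-completions-style provider; only the base_url and model name change.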