Gemini is also the first model I have seen call me out in its thinking. Stuff like "The user suggested we take approach ABC, but I don't think the user fully understands ABC. I will suggest XYZ as an alternative since it would be a better fit."
It is impressive when it finds subtle errors in complex reasoning.
But even the dumbest model will call you out if you ask it something like:
"Hey I'm going to fill up my petrol car with diesel to make it faster. What brand of diesel do you recommend?"