Gemini CLI at this stage isn't good at complex coding tasks (vs. Claude Code, Codex, Cursor CLI, Qoder CLI, etc.), mostly because of the simple ReAct loop, compounded by the relatively weak tool-calling capability of the Gemini 2.5 Pro model.
> I haven't tried complex coding tasks using Gemini 3.0 Pro Preview yet. I reckon it won't be materially different.
Gemini CLI is open source and being actively developed, which is cool (/extensions, /model switching, etc.). I think it has the potential to become a lot better, even close to the top players.
The correct way of using Gemini CLI is: ABUSE IT! Its 1M context window (soon to be 2M) and generous free daily quota are huge advantages, and it's a pity that people don't use them enough. I use it as a TUI/CLI tool to orchestrate tasks and workflows.
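As a rough sketch of what that orchestration can look like, assuming the non-interactive `-p`/prompt flag shown elsewhere in this thread; `build_prompt` and the summarization instruction are made up for illustration:

```shell
#!/bin/sh
# Hypothetical orchestration sketch: feed repo state to Gemini CLI
# non-interactively. build_prompt just assembles the text we'd hand to
# `gemini -p`; the actual call is commented out to keep this self-contained.
build_prompt() {
  printf 'Summarize this change set in three bullet points:\n%s\n' "$1"
}

prompt=$(build_prompt "$(git diff --stat 2>/dev/null || echo 'no repo')")

# Real invocation would look something like:
#   gemini -p "$prompt"
echo "$prompt"
```

The same pattern composes with cron, make targets, or CI steps, since `gemini -p` reads a prompt and exits rather than opening the TUI.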
> Fun fact: I found Gemini CLI pretty good at judging/critiquing code generated by other tools LoL
Recently I even hooked it up with Homebrew via MCP (other Linux package managers as well?), and a local-LLM-powered knowledge/context manager (Nowledge Mem). You can get really creative abusing Gemini CLI and unleash the Gemini power.
I've also seen people use Gemini CLI in subagents for MCP processing (it did work and avoided polluting the main context); I couldn't help laughing when I first read this -> https://x.com/goon_nguyen/status/1987720058504982561
Notable re author: “Addy Osmani is an Irish Software Engineer and leader currently working on the Google Chrome web browser and Gemini with Google DeepMind. A developer for 25+ years, he has worked at Google for over thirteen years, focused on making the web low-friction for users and web developers. He is passionate about AI-assisted engineering and developer tools. He previously worked on Fortune 500 sites. Addy is the author of a number of books including Learning JavaScript Design Patterns, Leading Effective Engineering Teams, Stoic Mind and Image Optimization.”
I really wish there were a de facto state-of-the-art coding agent that is LLM-agnostic, so that LLM providers wouldn't bother reinventing their own wheels like Codex and Gemini-CLI. They should be pluggable providers, not independent programs. In this way, the CLI would focus on refining the agentic logic and would grow faster than ever before.
Currently Claude Code is the best, but I don't think Anthropic would pivot it into what I described. Maybe we still need to wait for the next groundbreaking open-source coding agent to come out.
IMHO, one understated downside in today's AI/Agentic/Vibe-coding options is that ALL of them are evolving a bit too fast before any of these types of "best practices" can become a habit with a critical mass of developers, rendering many such tips obsolete very quickly (as another person rightfully pointed out).
Sure, software in general will keep evolving rapidly but the methods and tools to build software need to be relatively more stable. E.g. many languages and frameworks come and go, but how we break down a problem, how we discover and understand codebases, etc. have more or less remained steady (I think).
I see this as a paradox and have no idea what the state of equilibrium will look like.
I've been using Gemini CLI for months now, mainly because we have a free subscription for it through work.
Tip 1: it consistently ignores my GEMINI.md file, both global and local, even though it always says "1 GEMINI.md file is being used", probably because the file exists at the right path.
Tip 12, had no idea you could do this, seems like a great tip to me.
Tip 16 was great, thanks. I've been restarting it every time my environment changes for some reason. Or having it run direnv for me.
All the same warnings about AI apply for Gemini CLI, it hallucinates wildly.
But I have to say Gemini CLI gave me my first real fun experience using AI. I was a latecomer to AI, but what really hooked me was when I gave it permission to freely troubleshoot a k8s PoC cluster I was setting up. Watching it autonomously fetch logs and objects and troubleshoot until it found the error was the closest thing to getting a new toy for Christmas for me in many years.
So I've kept using it, but it is frustrating sometimes when the AI is behaving so stupidly that you just /quit and do it yourself.
YMMV, but I think all of this is too much, and you generally don't need to think about how to use an AI properly, since screaming at it usually works just as well as very fine-tuned instructions.
You don't need Claude Code, gemini-cli, or Codex. I've been doing it raw as a (recent) LazyVim user with a proprietary agent with 3 tools: git, ask, and ripgrep, and currently Gemini 3 is by far the best for me, even without all these tricks.
Gemini 3 has a very high token density and a significantly larger context than any other model that is actually usable. Every 'agent' I start shoves 5 things into the context:
- most basic instructions such as: generate git format diff only when editing files and use the git tool to merge it (simplified, it's more structured and deeper than this)
- tree command that respects git ignore
- $(ask "summarize $(git diff)")
- $(ask "compact the readme $(cat README.MD)")
- (ripgrep tools, mcp details, etc)
when the context is too bloated I just tell it to write important new details to README.MD and then start a new agent
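A minimal sketch of that context-priming step, assuming `tree`'s `--gitignore` flag (tree >= 2.0); `ask` here is a stand-in for the commenter's proprietary LLM tool and just echoes its prompt:

```shell
#!/bin/sh
# Hypothetical context-priming sketch. `ask` stands in for the proprietary
# tool described above; in a real agent it would call an LLM.
ask() { printf 'ASK: %s\n' "$1"; }

prime_context() {
  # repo layout, respecting .gitignore (tree >= 2.0 has --gitignore)
  tree --gitignore 2>/dev/null || git ls-files 2>/dev/null || echo '(no tree/git)'
  # summary of pending changes
  ask "summarize $(git diff 2>/dev/null)"
  # compacted README
  ask "compact the readme $(cat README.MD 2>/dev/null)"
}

prime_context
```

The point is that the agent starts every session with the same cheap, deterministic snapshot of the repo rather than re-discovering it turn by turn.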
I like AI a lot. I try to use it as much as I can. It feels like it's becoming an essential part of making me a more effective human, like the internet or my iPhone. I do not see it as a bad thing.
But I can't help getting "AI tutorial fatigue" from so many posts telling me how to use AI. Most are garbage; this one is better than most. It's like how JavaScript developers endlessly post about the newest UI framework or JS build tool. This feels a lot like that.
Gemini CLI sucks. Just use Opencode if you have to use Gemini. They need to rebuild the CLI just as OAI did with Codex.
All these tips and tricks just to get out-coded by some guy rawdogging Copilot in VS Code.
> $ time gemini -p "hello world"
> Loaded cached credentials.
> Hello world! I am ready for your first command.
> gemini -p "hello world"  2.35s user 0.81s system 33% cpu 29.454 total
seeing between 10-80 seconds for responses on hello world. 10-20s of which is for loading the god damn credentials. this thing needs a lot of work.
My tip: Move away from Google to an LLM that doesn't respond with "There was a problem getting a response" 90% of the time.
I am worried that we are diverging with CLI updates across models. I wish we had converged towards a common functionality and behaviour. Instead, we need to build knowledge of model-specific nuances. The cost of choosing a model is high.
A lot of the time, Gemini models get stuck in a loop of errors; a lot of the time they fail at edit/read or other simple function calls.
it's really really terrible at agentic stuff
Gemini 3 with CLI is relentless if you give it detailed specs and other than API errors, it just is great. I'd still rank Claude models higher but Gemini 3 is good too.
And the GPT-5 Codex has a very somber tone. Responses are very brief.
>this lets you use Gemini 2.5 Pro for free with generous usage limits
Considering that access is limited to the countries on the list [0], I wonder what motivated their choices, especially since many Balkan countries were left out.
[0]: https://developers.google.com/gemini-code-assist/resources/a...
The problem is that Gemini CLI simply doesn't work. Besides the simplest of tasks, like creating a new release, it is useless as a coding assistant. It doesn't have a plan mode, jumps right into coding, and then gets stuck in the middle of spaghetti code.
Gemini models are actually pretty capable but Gemini CLI tooling makes them dumb and useless. Google is simply months behind Anthropic and OpenAI in this space!
we've gone from 'RTFM' to 'here's 30 tips to babysit your AI assistant' and somehow this is considered progress
The contrast between manual scripting and LLM-assisted workflows is interesting. Both seem to fail when the constraints aren’t clear enough. When they are clear, the LLM becomes surprisingly reliable.
Looking through this, I think a lot of these also apply to Google Antigravity which I assume just uses the same backend as the CLI and just UI wraps a lot of these commands (e.g. checkpointing).
Kinda useful, especially tip 15 and tip 26.
There needs to be a lot more focus on observability and showing users what is happening under the hood (especially wrt costs and context management for non-power users).
A useful feature Cursor has that Antigravity doesn't is the context wheel that increases as you reach the context window limit (but don't get me started on the blackbox that is Cursor pricing).
Gemini-CLI on Termux does not work anymore. Gemini itself found a way to fix the problem, but I did not totally grok what it was going to do. It insisted my Termux was old and rotten.
Agentic coding seems like it's not the top priority; the aim is more at capturing search engine users, which is understandable.
Still, I had high hopes for Gemini 3.0 but was let down by the benchmarks. I can barely use it in the CLI; in AI Studio, however, it's been pretty valuable, though not without quirks and bugs.
Lately it seems like all the agentic coders, like Claude and Codex, are starting to converge, differentiated only by latency and overall CLI UX and usage.
I would like to use Gemini CLI more (even Grok) if it were possible to use it like Codex.
Tips and tricks for playing slot machines
Best practices for gambling
It's simple, just follow these 30 tips and tricks :D
Is there a similar guide/document for Claude Code?
Nice breakdown. Curious if you’ve explored arbitration layers or safety-bounded execution paths when chaining multiple agentic calls?
I’m noticing more workflows stressing the need for lightweight governance signals between agents.
This just after "Google Antigravity exfiltrates data via indirect prompt injection attack"
https://news.ycombinator.com/item?id=46048996
Who the heck trusts this jank to have free rein on their system?
A lot of it seems to mirror the syntax of Claude Code.
Integration with Google Docs/Spreadsheets/Drive seems interesting but it seems to be via MCP so nothing exclusive/native to Gemini CLI I presume?
I love the model, hate the tool. I've taken complex stuff and given it to Gemini 3 and been impressed, but Anthropic has the killer app with Claude Code. The interplay of Sonnet (a decent model) with the tools and workflow they've built around it supercharges the outcome. I tried Gemini CLI for about 5 seconds and was so frustrated: it's so stupid at navigating the codebase that it takes 10x as long to do anything, or I have to guide it there. I have to supervise it rather than doing something important while Claude works in the background.
Am I stupid? I run /corgi, nothing happens and I don't see a corgi. I have the latest version of the gemini CLI. Or is it just killedbygoogle.com
It would/will be interesting to see this modified to include Antigravity alongside Gemini CLI.
Addy delivers!
How many of these 30 tips could be replaced by Tip 8: tell Gemini to read the tips and update its own prompt?
Antigravity obsoleted Gemini CLI, right?
I really tried to get Gemini to work properly in agent mode. Though it way too often went crazy: it started rewriting files as empty, leaving comments like "here you could implement the requested function", and much more, including running into permanent printing loops of stuff like "I've done that. What's next on the debugger? Okay, I've done that. What's next on the with? Okay, I've done that. What's next on the delete? Okay, I've done that. What's next on the in? Okay, I've done that. What's next on the instanceof? Okay, I've done that. What's next on the typeof? Okay, I've done that. What's next on the void? Okay, I've done that. What's next on the true? Okay, I've done that. What's next on the false? Okay, I've done that. What's next on the null? Okay, I've done that. What's next on the undefined? Okay, I've done that..." which went on for like 1 hour (yes, I waited to see how long it would take for them to cut it off).
It's just not really good yet.
I recently tried IntelliJ's Junie, and I have to say it works rather well.
I mean, at the end of the day, all of them need a human in the loop, and the result is only as good as your prompt. Though with Junie I at least got something of a result most of the time, while with Gemini, 50% would have been a good rate.
Finally: I still don't see agentic coding for production stages - it's just not there yet in terms of quality. For research and fun? Why not.
I have never had any luck using Gemini. I had a pretty good app created with Codex. Due to the hype, I thought let me give Gemini a try. I asked it to find all the ways to improve security and architecture/design. Sure enough, it gave me a list of components etc. that didn't match best patterns and practices. So I let it refactor the code.
It fucked up the entire repo. It hard-coded tenant IDs and user IDs, completely destroyed my UI, and broke my entire GraphQL integration. Set me back 2 weeks of work.
I do admit the browser version of Gemini chat does a much better job at providing architecture and design guidance from time to time.
I am not doing any of this.
It becomes obsolete in literally weeks, and it also doesn't work 80% of the time. Like, why write an MCP server for custom tasks when I don't know if the LLM is going to reliably call it?
My rule for AI has been steadfast for months (years?) now. I write documentation for myself (templates, checklists, etc.) myself, not with AI, because otherwise I spend more time guiding the AI than thinking about the problem. I give AI a chance to one-shot it in seconds; if it can't, I either review my documentation or just do it manually.