Huh? Their example could just be reading code on GitHub or reading diffs. You shouldn’t need to pull code into a development environment just so you can GoToDefinition to understand what’s going on.
There are all sorts of workflows where vim would mog the IDE workflow you’re really excited about, like pressing e in lazygit to make a quick tweak to a diff, or Ctrl-G in Claude Code.
I wouldn’t be so sure you’ve cracked the code on the best workflow with no negative trade-offs. Everyone thinks that about their workflow until they use it long enough to see where it snags.
> You shouldn’t need to
... but you do, more often than the quick & dirty approach really allows.
I was just watching the Veritasium episode on the XZ Utils hack, which was in part enabled by poor tooling.
The attacker deliberately obfuscated his change with a bunch of "non-changes", such as rearranged whitespace and comments, to hide the fact that he hadn't actually changed the C code to "fix" the bug in the binary blob that contained the malware payload.
You will miss things like this without the proper tooling.
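To make the point concrete, here's a toy sketch (not the actual XZ Utils patch) of what a semantics-aware diff tool can catch: a "fix" that only shuffles whitespace and comments, which looks like a busy change in a raw line diff but is a no-op once noise is stripped. The C snippet and the `strip_noise` helper are invented for illustration.

```python
import re

# Toy example, NOT the real XZ Utils patch: a "fix" that only
# shuffles whitespace and comments, leaving the logic untouched.
before = """
int check_signature(const uint8_t *buf, size_t len) {
    /* validate header */
    if (len < 16) return -1;
    return verify(buf, len);
}
"""

after = """
int check_signature(const uint8_t *buf,  size_t len)
{
    /* validate header carefully */
    if (len < 16)  return -1;
    return verify(buf, len);
}
"""

def strip_noise(src: str) -> str:
    """Drop C block comments and collapse all whitespace,
    so only semantic differences remain."""
    src = re.sub(r"/\*.*?\*/", "", src, flags=re.S)
    return " ".join(src.split())

# A raw line-by-line diff shows several changed lines...
print(before != after)                            # True: the text changed
# ...but semantically, nothing happened at all.
print(strip_noise(before) == strip_noise(after))  # True: a pure "non-change"
```

A line-oriented diff viewer happily renders every one of those hunks as a real change; a tool that normalises first would flag the whole patch as suspiciously empty.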
I use IDEs in a large part because they have dramatically better diff tools than CLI tools or even GitHub.
> you’ve cracked the code on the best workflow
I would argue that the ideal tooling doesn't even exist yet, which is why I don't believe that I've got the best possible setup nailed. Not yet.
My main argument is this:
Between each keypress in a "fancy text editor" of any flavour, an ordinary CPU could have processed something like 10 billion instructions. If you spend even a minute staring at the screen, you're "wasting" trillions of instructions' worth of work the computer could be doing to help you.
Throw a GPU into the mix and the waste becomes absurd.
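The back-of-envelope arithmetic behind those numbers, using assumed (not measured) figures for a mid-range desktop CPU and a fast typist:

```python
# All of these are rough assumptions, not measurements:
cores = 16             # a mid-range desktop CPU
clock_hz = 4e9         # ~4 GHz sustained
ipc = 2                # conservative instructions retired per cycle
keypress_gap_s = 0.2   # a fast typist, ~5 keys/second

instr_per_second = cores * clock_hz * ipc

# Idle capacity between two keypresses:
instr_between_keypresses = instr_per_second * keypress_gap_s
print(f"{instr_between_keypresses:.1e}")  # ~2.6e10: tens of billions

# Idle capacity during a minute of staring at the screen:
instr_per_minute = instr_per_second * 60
print(f"{instr_per_minute:.1e}")          # ~7.7e12: trillions
```

Even with much more conservative assumptions the conclusion holds: the machine sits almost entirely idle while you type.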
There's an awful lot the computer could be doing to help developers avoid mistakes, make their code more secure, analyse the consequences of each tiny change, etc...
It's very hard to explain without writing something the length of War & Peace, so let me leave you with a real world example of what I mean from a related field:
There's two kinds of firewall GUIs.
One kind shows you the real-time "hit rate" of each rule, showing packets and bytes matched, or whatever.
The other kind doesn't.
One kind dramatically reduces "oops" errors.
The other kind doesn't. It's the most common type however, because it's much easier to develop as a product. It's the lazy thing. It's the product broken down into independent teams doing their own thing: the "config team" doing their thing and the "metrics" team doing theirs, no overlap. It's Conway's law.
IDEs shouldn't be fancy text editors. They should be constantly analysing the code to death, with AIs, proof assistants, virtual machines, instrumentation, whatever. Bits and pieces of this exist now, scattered, incomplete, and requiring manual setup.
One day we'll have these seamlessly integrated into a cohesive whole, and you'd be nuts to use anything else.
... one day.