Fun story time!
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, 22-year-old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning this as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
> Rarely in software does anyone ask for “fast.”
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
It's well known by now, but this video[1] is a proof-of-concept demonstration from 4 years ago. Casey Muratori called out Microsoft's new Windows Terminal for slow performance, and people argued that it wasn't possible, practical, or maintainable to make a faster terminal, that his claims of "thousands of frames per second" were hyperbolic, and one person said it would be a "PHD level research project".
In response, Casey spent less than a week making RefTerm, a skeleton proto-terminal with the same constraints the Microsoft people had: using Windows APIs for things, GPU-accelerated rendering via DirectX, handling terminal escape codes, colours, blinking, custom fonts, missing-character font fallback, line wrap, scrollback, Unicode, right-to-left Arabic combining characters, etc. RefTerm had 10x the throughput of Windows Terminal and ran at 6-7000 frames per second. It was single-threaded, not profiled, not tuned, with no advanced algorithms and no cheating by sending some data to /dev/null; all it had to speed it up was simple code without tons of abstractions, plus a Least Recently Used (LRU) glyph cache to avoid re-rendering common characters, written the first way he thought of. Around that time he did a video series on that YouTube channel about optimization, arguing that even talking about 'optimization' was too hopeful and we should be talking about 'non-pessimization': most software is not slow because it has unavoidable complexity and abstractions needed to help maintenance, it's slow because it's choked by a big pile of do-nothing code and abstraction layers added for ideological reasons, which hurt maintenance as well as performance.
[1] https://www.youtube.com/watch?v=hxM8QmyZXtg - "How fast should an unoptimized terminal run?"
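The LRU glyph cache mentioned above is simple enough to sketch. Here's a minimal Python version; the `GlyphCache` name and the string "bitmaps" are illustrative stand-ins, not RefTerm's actual code (which is C):

```python
from collections import OrderedDict

class GlyphCache:
    """Tiny LRU cache sketch: rasterize a glyph once, reuse the bitmap."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self._cache = OrderedDict()
        self.renders = 0  # count of actual (slow) render calls

    def _render_glyph(self, char):
        self.renders += 1
        return f"<bitmap:{char}>"  # placeholder for a rasterized bitmap

    def get(self, char):
        if char in self._cache:
            self._cache.move_to_end(char)  # mark as recently used
            return self._cache[char]
        bitmap = self._render_glyph(char)
        self._cache[char] = bitmap
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict least recently used
        return bitmap

cache = GlyphCache(capacity=128)
for ch in "hello world, hello world":
    cache.get(ch)
# 24 characters drawn, but only the 9 distinct ones rasterized
print(cache.renders)  # → 9
```

Terminal output is overwhelmingly repeated characters, which is why such a small trick recovers so much throughput.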
This video[2] is another specific details one, Jason Booth talking about his experience of game development, and practical examples of changing data layout and C++ code to make it do less work, be more cache friendly, have better memory access patterns, and run orders of magnitude faster without adding much complexity and sometimes removing complexity.
[2] https://www.youtube.com/watch?v=NAVbI1HIzCE - "Practical Optimizations"
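The data-layout changes in that talk largely boil down to array-of-structs vs struct-of-arrays. A toy Python sketch of the transformation (the real wins show up in C++ with contiguous memory; Python only illustrates the shape of the change, and the entity fields here are made up):

```python
from array import array

# Array-of-structs: each entity is its own object; a pass over one
# field hops between scattered heap objects.
aos = [{"x": float(i), "y": 2.0 * i, "hp": 100} for i in range(1000)]
total_aos = sum(e["x"] for e in aos)

# Struct-of-arrays: each field is one contiguous buffer, so a pass over
# one field reads memory sequentially -- the cache-friendly layout.
# (In C++ this would be parallel std::vectors or flat arrays.)
soa = {
    "x": array("d", (float(i) for i in range(1000))),
    "y": array("d", (2.0 * i for i in range(1000))),
    "hp": array("i", (100 for _ in range(1000))),
}
total_soa = sum(soa["x"])

assert total_aos == total_soa  # same answer, different memory layout
```

The point of the talk is that a hot loop touching only `x` then streams through one contiguous array instead of dragging every entity's unrelated fields through the cache.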
Just want to say how much I thank YCom for not f'ing up the HN interface, and keeping it fast.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
Fast is also cheap. Especially in the world of cloud computing, where you pay by the second. The only way I could create a profitable transcription service [1] that undercuts the rest was by optimizing every little thing along the way. For instance, just yesterday I learned that the image I've put together is 2.5× smaller than the next open source variant. That means faster cold boots, which reduces the cost (and provides a better service).
This is interesting. It got me to think. I like it when articles provoke me to think a bit more on a subject.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much that I think about raw throughput as much as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (e.g. Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (e.g. Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
I've noticed over and over again at various jobs that people underestimate the benefit of speed, because they imagine doing the same workflow faster rather than doing a different workflow.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
Pavel Durov (founder of Telegram) totally nailed this concept.
He pays special attention to the speed of his applications. The Russian social network VK worked blazingly fast. The same goes for Telegram.
I always noticed it, but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously and it affects user behaviour metrics positively.
> Rarely in software does anyone ask for “fast.” We ask for features, we ask for volume discounts, we ask for the next data integration. We never think to ask for fast.
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency were always among them.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
Website is superfast. Reason I usually go for the comments first on HN is exactly this: they're fast. THIS is notably different.
On interfaces:
It's not only the slowness of the software or machine we have to wait for, it's also the act of moving your limb that adds a delay. Navigating a button (mouse) adds more friction than having a shortcut (keyboard). It's a needless feedback loop. If you master your tool all menus should go away. People who live in the terminal know this.
As a personal anecdote, I use custom rofi menus (think raycast for Linux) extensively for all kinds of interaction with data or file system (starting scripts, opening notes, renaming/moving files). It's notable how your interaction changes if you remove friction.
Venerable tools in this vein: vim, i3, kitty (formerly tmux), ranger (on the brim), qutebrowser, visidata, nsxiv, sioyek, mpv...
Essence of these tools is always this: move fast, select fast and efficiently, ability to launch your tool/script/function seamlessly. Be able to do it blindly. Prefer peripheral feedback.
I wish more people saw what could be and built more bicycles for the mind.
Efficient code is also environmentally friendly.
First, efficient code is going to use less electricity, and thus, fewer resources will need to be consumed.
Second, efficient code means you don't need to be constantly upgrading your hardware.
I find at most jobs I've had, fast only becomes a big issue once things are too slow. Or expensive.
It's a retroactively fixed thing. Like imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they're not programmers", and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning, as there is one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take actions.
Proxy metrics means you likely can't (well, probably should not) check the speed at which Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
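A minimal sketch of the proxy-metric idea, with a hypothetical `sum_spreadsheet` call standing in for "the major calls involved" (a real service would export the samples to Prometheus or similar, not keep them in a dict):

```python
import time
from functools import wraps

latencies = {}  # metric name -> list of samples in ms

def timed(name):
    """Decorator recording wall-clock latency of each call as a proxy metric."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                ms = (time.perf_counter() - start) * 1000.0
                latencies.setdefault(name, []).append(ms)
        return wrapper
    return deco

def p99(samples):
    """Nearest-rank 99th percentile of a list of samples."""
    ordered = sorted(samples)
    return ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]

@timed("sum_spreadsheet")
def sum_spreadsheet(rows):
    return sum(rows)

for _ in range(100):
    sum_spreadsheet(range(10_000))
print(f"p99: {p99(latencies['sum_spreadsheet']):.3f} ms")
```

Watching the p99 of the major calls every week is exactly the kind of number a team can talk about and act on.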
No mention of google search itself being fast. It's one of the poster children of speed being part of the interface.
Microsoft needs to take heed: Explorer's search and Teams, for example, make your computer seem extremely slow. VS Code on the other hand is fast enough, while slower than native editors such as Sublime Text.
Only sorta related, but it's crazy to me how much our standards have dropped for speed/responsiveness in some areas.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
I was going to say one of the more recent times fast software excited me was with `uv` for Python packaging, and then I saw that op had a link to Charlie Marsh in the footnote. :)
The industry (hint: this forum's readers) has replaced "fast" software with "portable", meaning:
- universally addressable libraries that must load from discrete and often remote sources
- zero hang time in programming language evolution (leaving no time for experts to discover, document, and implement optimizations)
- insistence on "the latest version" of software, with no emphasis on long-term code stability
I once accidentally blocked TCP on my laptop and found out "google.com" runs on UDP, it was a nice surprise.
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now" so I always tell the other person "type 'alias' and read me the output", this is how I can tell if this is really a server I used to work on.
fast is my copilot.
A lot of people have low expectations from having to use shit products at work, and generally not being discerning.
Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)
Fast is a distinctive feature.
For what it's worth, I built myself a custom Jira board last month so I could instantly search, filter, and group tickets (by title, status, assignee, version, ...)
Motivation: Running queries and finding tickets on JIRA kills me sometimes.
The board is not perfect, but works fast and I made it superlightweight. In case anybody wants to give it a try:
https://jetboard.pausanchez.com/
Don't try it on mobile, use desktop. Unfortunately it uses a proxy and requires an API key, but it doesn't store anything in the backend (it just proxies the request because of CORS). Maybe there is an API or a way to query a Jira cloud instance directly from the browser; I just tried the first approach and moved on. It even crossed my mind to add it somehow to the Jira marketplace...
Anyway, caches stuff locally and refreshes often. Filtering uses several tricks to feel instant.
UI can be improved, but uses a minimalistic interface on purpose, like HN.
If anybody tries it, I'll be glad to hear your thoughts.
>> Rarely in software does anyone ask for “fast.”
I have been asking about latency-free computing for a very long time. Everything in computing now is slow.
My favorite essay on this topic, not yet referenced, is James Somers's "Speed matters:" https://jsomers.net/blog/speed-matters
Reminds me "Fast Software, the Best Software" by Craig Mod: https://craigmod.com/essays/fast_software/
Make fast sexy again... please. Growing up, I thoroughly enjoyed seeing workers tapping away at registers with no mouse: all muscle memory, and layers and layers of menus accessible by key taps. Whether it was an airline, a clothing store, or even some restaurants, they used to have those dimly lit terminals glowing green or orange with just a bunch of text and a well-versed operator chatting while getting their work done. The keys were commercial-grade mechanical, which made a pleasing sound.
Nowadays it's a fancy touch display that requires concentration and is often sluggish, the machine often feels cheap and makes a cheap sound when tapped on, and I don't think the operators ever enjoy interacting with it. The software's often slow across the network too...
I'm all for fast. It shows no matter what, at least somebody cared enough for it to be blazing fast.
This is one of the reasons I switched from Unity to Godot. There is something about Godot loading fast and compiling fast that makes it so much more immersive to spend hours chugging away at your projects.
> Developers ship more often when code deploys in seconds (or milliseconds) instead of minutes.
I don't want my code deployed in seconds or milliseconds. I'm happy to wait even an hour for my deployment to happen, as long as I don't have to babysit it.
I want my code deployed safely, rolled out with some kind of sane plan (like staging -> canary -> 5% -> 20% -> 50% -> 100%), ideally waiting long enough at each stage of the plan to ensure the code is likely being executed with enough time for alerts to fire (even with feature flags, I want to make sure there's no weird side effects), and for a rollback to automatically occur if anything went wrong.
I then want to enable the feature I'm deploying via a feature flag, with a plan that looks similar to the deployment. I want the enablement of the feature flag, to the configured target, to be as fast as possible.
I want rollbacks to be fast, in-case things go wrong.
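A sketch of that rollout plan as code; the stage list and the `alerts_firing`/`soak` hooks are illustrative, not any real deploy system's API:

```python
# The staged rollout described above, as a sketch: each stage shifts a
# traffic percentage, soaks long enough for alerts, and any alert rolls back.
STAGES = [("staging", 0), ("canary", 1), ("5%", 5), ("20%", 20),
          ("50%", 50), ("100%", 100)]

def deploy(set_traffic, alerts_firing, soak):
    """Walk the rollout plan; roll back and stop if alerts fire."""
    for name, percent in STAGES:
        set_traffic(percent)   # shift traffic to the new version
        soak(name)             # wait long enough for alerts to fire
        if alerts_firing():
            set_traffic(0)     # automatic rollback
            return False
    return True

log = []
ok = deploy(set_traffic=log.append, alerts_firing=lambda: False,
            soak=lambda name: None)
print(ok, log)  # → True [0, 1, 5, 20, 50, 100]
```

The point of the comment stands: the plan itself is deliberately slow, and only the rollback path needs to be fast.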
Another good example is UI interactions. Adding short animations to actions makes the UI slower, but can considerably improve the experience, by making it more obvious that the action occurred and what it did.
So, no, fast isn't always better. Fast is better when the experience is directly improved by making it fast, and you should be able to back that up with data.
I always have to remind myself of the bank transfer situation in the US whenever I read an article complaining about it. Here in the UK, bank transfers are quick and simple (the money appears to move virtually instantly). Feel free to enlighten me to why they're so slow in the US.
I think that people generally underestimate what even small increases in the interaction time between human and machine cost. Interacting with sluggish software is exhausting, clicking a button and being left uncertain whether it did anything is tedious and software being fast is something you can feel.
Windows is the worst offender here; the entire desktop is sluggish even though there is no computational task which justifies those delays.
I wish I could live in a world of fast.
C++ with no forward decls, no clang to give data about why the compile time is taking so long. 20 minute compiles. Only git tool I like (git-cola) is written in Python and slows to a crawl. gitk takes a good minute just to start up. Only environments are MSYS which is slow due to Windows, and WSL which isn't slow but can't do DPI scaling so I squint at everything.
> Rarely in software does anyone ask for “fast.”
> But software that's fast changes behavior.
I wonder if the author stopped to consider why these opposing points make sense, instead of ignoring one to justify the other.
My opinion is that "fast" only becomes a boon when features are robust and reliable. If you prioritize going twice as "fast" over rooting out the problems, you get problems at twice the rate too.
> Asking an LLM to research for 6 minutes is already 10000x faster than asking for a report that used to take days.
Assuming, like, three days, 6 minutes is 720x faster. 10000x faster than 6 minutes is like a month and a half!
Very much the opposite in my experience. People, especially on this site, ask for "fast" regardless of whether they need it. If asked "how fast?" the answer is always "as fast as possible". And they make extremely poor choices as a result. Fast is useful up to a point, but faster than that is useless - maybe actively detrimental if you can e.g. generate research reports faster than you can read them.
You make much better code, and much better products, if you remove "fast" from your vocabulary. Instead set specific, concrete latency budgets (e.g. 99.99% within x ms). You'll definitely end up with fewer errors and better maintainability than the people who tried to be "fast". You'll often end up faster than them too.
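One way to make such a budget concrete and testable, using the nearest-rank percentile (a sketch; the function name is mine, not a standard API):

```python
import math

def within_budget(samples_ms, budget_ms, quantile=0.9999):
    """True if `quantile` of latency samples fall within budget_ms
    (nearest-rank percentile). Turns 'fast' into a concrete,
    checkable requirement like '99.99% of requests within x ms'."""
    if not samples_ms:
        return True
    ordered = sorted(samples_ms)
    k = max(1, math.ceil(quantile * len(ordered)))
    return ordered[k - 1] <= budget_ms

# e.g. a CI check: fail the build if the endpoint blows its budget
assert within_budget([5.0] * 100, budget_ms=10.0)
assert not within_budget([50.0] * 100, budget_ms=10.0)
```

A check like this can gate deploys, which is exactly how a latency budget stays honest over time.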
> it's obvious to anyone that writes code that we're very far from the standards that we're used to
This is true, but I also think there's a backlash now and therefore some really nice mostly dev-focused software that is reeaaaaly fast. Just to name a few:
- Helix editor
- Ripgrep
- Astral Python tools (ruff, uv, ty)
That's a tiny subsection of the mostly bloated software that exists. But it makes me happy when I come across something like that!
Also, browsers seem to be really responsive despite being one of the most feature-bloated products on earth, thanks to expanding web standards. I'm not really counting this though, because while Firefox and Chrome might rarely lag, the websites I view with them often do, so it's not really a fast experience.
I’m the senior developer on a feature-bloated civil engineering web app that has 2 back-end servers (one just proxies to the other), 8k lines of stored procedures as the data layer, and many multi-thousand-line React components that intentionally break React best practices.
I loathe working on it but don’t have the time to refactor legacy code.
———————-
I have another project where I am principal engineer; it uses Django, Next.js, docker compose for dev, and Ansible to deploy, and it's a dream to build in and push features to prod. Maybe I'm more invested so it's more interesting to me, but also not waiting 10 seconds to register and hot-reload a React change is much more enjoyable.
> Instagram usually works pretty well—Facebook knows how important it is to be fast.
Fast is indeed magical, that's why I exclusively browse Instagram from the website; it's so slow I dip out before they get me with their slot machine.
>> Rarely in software does anyone ask for “fast.”
I don't know, there's a sizeable subset of folks who value fast, and it's a big subset, it's not niche.
Search for topics like turning off animations or replacing core user space tools with various go and rust replacements, you'll find us easily enough.
I'm generally a pretty happy MacOS user, especially since M1 came along. But I am seriously considering going back to linux again. I maintain a parallel laptop with nixos and i'm finding more and more niggles on the mac side where i can prioritise lower friction on linux.
How did over a thousand people upvote this hollow article? Am I the only one who was looking for substance in vain?
Highly Agree.
Speed of all kinds is incredibly important. Give me all of it.
- Fast developers
- Fast test suites
- Fast feedback loops
- Fast experimentation
Someone (Stalin, supposedly) is credited with saying "quantity has a quality all its own"; in software it is "velocity has a quality all its own".
As long as there is some rigor and you aren't shipping complete slop, consistently moving very quickly fixes almost every other deficiency.
- It makes engineering mistakes cheaper (just fix them fast)
- It make product experimentation easy (we can test this fast and revert if needed)
- It makes developers ramp up quickly (shipping code increases confidence and knowledge)
- It actually makes rigor more feasible as the most effective rigorous processes are light weight and built-in.
Every line of code is a liability, the system that enables it to change rapidly is the asset.
Side note: every time I encounter JVM test startup lag I think someday I am going to die and will have spent time doing _this_.
I feel like this should have some kind of "promotional" or "ad" label. I agree wholeheartedly with the words here, but I also note that the author is selling the fast developer tools she laments the dearth of: https://www.catherinejue.com/kernel
Again, no ill will intended at all, but I think it straddles the promotional angle here a bit and maybe people weren't aware
The owner of this site is involved in a scraping business[0]. How can she justify that with fast-ness?
>Performant stealth mode
>Scale browsers with bot anti-detection. Access high performant residential proxies and built-in auto-CAPTCHA solvers
[0] https://www.onkernel.com/#:~:text=Performant%20stealth%20mod...
My sites. In order of increasing complexity. Are they fast?
Here is some extensive advice for making complex websites load extremely quickly
https://community.qbix.com/t/qbix-websites-loading-quickly/2...
Here is also how to speed up APIs:
https://community.qbix.com/t/building-efficient-apis-with-qb...
One of the biggest things our framework does as opposed to React, Angular, Vue etc. is we lazyload all components as you need them. No need for tree-shaking or bundling files. Just render (static, cached) HTML and CSS, then start to activate JS on top of it. Also helps massively with time to first contentful paint.
https://community.qbix.com/t/designing-tools-in-qbix-platfor...
All this evolved from 2021 when I gave this talk:
> Superhuman's sub-100ms rule—plus their focus on keyboard shortcuts—changed the email game in a way that no one's been able to replicate, let alone beat.
https://blog.superhuman.com/superhuman-is-being-acquired-by-...
Being fast helps, but is rarely a product.
> When was the last time you used airplane WiFi and actually got a lot done?
The greatest day of productivity in my life was a flight from Melbourne to New York via LAX. No wifi on either flight, but a bit in transit. Downloaded everything I needed in advance. Coded like a mofo for like 16 hours.
Fast internet is great for distractions.
> Rarely in software does anyone ask for “fast.”
As someone working on embedded audio DSP code, I just had to laugh a little.
Yes, there is a ton of code that has a strict deadline. For audio that may be determined by your buffer size — don't write your samples to that buffer fast enough and you will hear it in potentially destructively loud fashion.
This changes the equation, since faster code now just means you are able to do more within that timeframe on the same hardware. Or you could do the same on cheaper hardware. Either way, it matters.
Similar things apply to shader coding, game engines, control code for electromechanical systems (there, missing the deadline can be even worse).
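The deadline arithmetic is worth spelling out; e.g. a 256-sample buffer at 48 kHz leaves about 5.33 ms to produce the next buffer, every single time (the helper names below are mine, just to illustrate the math):

```python
def buffer_deadline_ms(buffer_size, sample_rate):
    """Hard deadline: the time one audio buffer covers, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

def meets_deadline(render_ms, buffer_size, sample_rate):
    """True if the DSP render time fits inside the buffer's deadline.
    The headroom (deadline minus render time) is the budget for doing more."""
    return render_ms < buffer_deadline_ms(buffer_size, sample_rate)

# 256 samples at 48 kHz: miss ~5.33 ms and the listener hears a glitch.
print(round(buffer_deadline_ms(256, 48_000), 2))  # → 5.33
```

This is why faster code directly becomes either more features in the same buffer or the same features on cheaper hardware.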
Speed is the most fundamental feature. Otherwise we could do everything by hand and need no computers.
Back in the 90's I ran a dev team building Windows applications in VB, and had the rule that the dev machines had to be lower-specced than the user machines they were programming for.
It was unpopular, because devs love the shiny. But it worked - we had nice quick applications. Which was really important for user acceptance.
I didn't make this rule because I hated devs (though self-hatred is a thing ofc), or didn't want to spend the money on shiny dev machines. I made it because if a process worked acceptably quickly on a dev machine then it never got faster than that. If the users complained that a process was slow, but it worked fine on the dev's machine, then it proved almost impossible to get that process faster. But if the dev experience of a process when first coding it up was slow, then we'd work at making it faster while building it.
I often think of this rule when staring at some web app that's taking 5 minutes to do something that appears to be quite simple. Like maybe we should have dev servers that are deliberately throttled back, or introduce random delays into the network for dev machines, or whatever. Yes, it'll be annoying for devs, but the product will actually work.
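The throttling idea can be sketched as a wrapper that injects random latency into calls during development (illustrative only; real setups might throttle at the network layer with tc/netem or a slow proxy instead):

```python
import random
import time

def with_dev_latency(fn, min_ms=50, max_ms=250, enabled=True):
    """Wrap a call with a random artificial delay so devs feel
    production-like latency while building a feature."""
    def wrapper(*args, **kwargs):
        if enabled:
            time.sleep(random.uniform(min_ms, max_ms) / 1000.0)
        return fn(*args, **kwargs)
    return wrapper

# A hypothetical fetch, slowed down only in dev builds:
fetch = with_dev_latency(lambda url: f"response from {url}",
                         min_ms=5, max_ms=10)
```

Same spirit as the underpowered dev machines: if the slow path hurts while you're writing the code, you fix it while you're writing the code.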
I work on optimization a large fraction of my time. It is not something learned in a week, month or even a year.
At least in B2B applications that rely heavily on relational data, the best developers are the ones who can optimize at the database level. Algorithmic complexity pretty much screams at me these days and is quickly addressed, but getting the damned query plan into the correct shape for a variety of queries remains a challenge.
Of course, knowing the correct storage medium to use in this space is just as important as writing good queries.
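For the query-plan point, SQLite (bundled with Python) makes it easy to watch an index turn a full scan into an index search; the table and index names here are made up for the example:

```python
import sqlite3

# Watch SQLite's planner switch from a full scan to an index search.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " customer_id INT, total REAL)")

def plan(sql):
    """Return the EXPLAIN QUERY PLAN detail text for a statement."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(query)   # e.g. "SCAN orders"
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
print(before)
print(after)
```

Bigger databases have the same facility (`EXPLAIN` / `EXPLAIN ANALYZE`), and getting that plan into the right shape is exactly the skill described above.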
What's amazing to me is that often all it takes to go fast is to keep things simple. JBlow once said that software should be treated like a rocket ship: every thing you add contributes weight.
Linus Torvalds said exactly that in a talk about git years ago. It's crazy to think back how people used to use version control before git. Git totally changed how you can work by being fast.
Kinda funny but I think LLM-assisted workflows are frequently slow -- that is, if I use the "refactor" features in my IDE it is done in a second, if I ask the faster kind of assistant it comes back in 30 seconds, if I ask the "agentic" kind of assistant it comes back in 15 minutes.
I asked an agent to write an http endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours worth of work". The next day I looked at it and found the logic was convoluted, it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of stuff manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own and probably better error handling.
See https://www.reddit.com/r/programming/comments/1lxh8ip/study_...