What I don't get is why Node is so dog slow here. Seriously, it seems borderline unusable.
In terms of perf, Node has a pretty snappy JIT (the same V8 that does fine in the browser), but something's clearly not right here.
~200ms requests, even in the light case, are right on the edge of acceptable.
On the other hand, Python performs very well, though it's alarming to see it get significantly slower with every release.
It would be interesting to add a cold start + "import boto3" benchmark for Python, as importing boto3 takes forever on Lambdas with little memory. For this scenario the only benchmark I know of is from 2021: https://github.com/MauriceBrg/aws-blog.de-projects/tree/mast...
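A minimal sketch of how one might measure that (the handler and payload shape are illustrative, not from the linked benchmark):

```python
import time

# Module-level code runs once per cold start, so timing the import here
# captures exactly the cost being complained about.
_t0 = time.perf_counter()
import boto3  # noqa: E402
BOTO3_IMPORT_MS = (time.perf_counter() - _t0) * 1000


def handler(event, context):
    # Warm invocations reuse the already-imported module, so this value
    # only reflects cold starts; compare it across Lambda memory sizes.
    return {"boto3_import_ms": round(BOTO3_IMPORT_MS, 1)}
```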
It’s interesting that the author chose SHA-256 hashing for the CPU-intensive workload. Given that it's typically hardware-accelerated (the SHA extensions on x86, the ARMv8 crypto extensions on Graviton), I wonder how generally applicable it is. Still interesting either way, especially since there were reports of earlier Graviton (pre-v3) instances having mediocre crypto-instruction performance.
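As a rough sanity check (an illustrative sketch, not from the article): hardware-backed SHA-256 typically sustains throughput in the GB/s range, so a one-shot measurement hints at whether the accelerated path is in play:

```python
import hashlib
import time

# Hardware-backed SHA-256 usually lands well above 1 GB/s here;
# a generic software path is much slower.
data = b"\x00" * (64 * 1024 * 1024)  # 64 MiB of input
t0 = time.perf_counter()
hashlib.sha256(data).hexdigest()
elapsed = time.perf_counter() - t0
print(f"SHA-256 throughput: {len(data) / elapsed / 1e9:.2f} GB/s")
```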
Yikes, Node.js did really badly. If this holds up, my takeaway would be:
If I want to use TypeScript for functions, I should target the V8 runtimes (Deno, Cloudflare, Supabase, etc.), which are much faster because they're far more lightweight.
It would be interesting to see a benchmark of the Rust binary with successively more "bloat" added, to separate how much of the cold start is app start time vs. app transfer time. It would also be useful to have the C++ Lambda runtime in there; I'd expect it to perform similarly to Rust.
Tangent: when you have a Lambda returning binary data, it's pretty painful to have to encode it as base64 just so it can be serialized to JSON for the runtime. To add insult to injury, the base64 encoding is much more likely to put you over the response size limits (6 MB normally, 1 MB via ALB). The C++ Lambda runtime (and maybe Rust's?) lets you return non-JSON and do whatever you want, as it's just POSTing to an endpoint within the Lambda. So you can return a binary payload and just have your client handle the blob.
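For reference, a minimal sketch of the JSON path being complained about, using the standard API Gateway/ALB proxy response format (the payload is a stand-in):

```python
import base64
import os


def handler(event, context):
    blob = os.urandom(1024)  # stand-in for real binary output
    # The binary body has to ride inside a JSON response, base64-encoded,
    # which inflates it by ~33% against the 6 MB (or 1 MB ALB) limit.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/octet-stream"},
        "isBase64Encoded": True,
        "body": base64.b64encode(blob).decode("ascii"),
    }
```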
One of the easiest hacks to reduce your AWS bill is to migrate from x86 to arm64. The performance difference is negligible, and the cost can be up to 50% lower on ARM machines. This applies to both RDS and general compute (EC2, ECS). Would recommend to all.
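For EC2 the migration is often just an AMI and instance-type swap. A hypothetical sketch with boto3 (the SSM parameter path and instance type are illustrative; check what's current in your region):

```python
import boto3

# Resolve the latest arm64 Amazon Linux 2023 AMI from the public SSM
# parameter, then launch on a Graviton instance type (e.g. m7g in place
# of the x86 m7i equivalent).
ssm = boto3.client("ssm")
ami_id = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64"
)["Parameter"]["Value"]

ec2 = boto3.client("ec2")
ec2.run_instances(ImageId=ami_id, InstanceType="m7g.large", MinCount=1, MaxCount=1)
```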
Can someone tell me why there are almost no laptops with Linux and ARM? And is ARM actually more efficient than x86?
> Node.js: Node.js 22 on arm64 was consistently faster than Node.js 20 on x86_64. There's essentially a “free” ~15-20% speedup just by switching architectures!
Not sure why this is phrased like that in the TL;DR, when arm64 is just strictly faster running the same Node.js workload and version; the quote conflates the version bump with the architecture switch.
I wouldn't be surprised if Rust were faster than Python, but looking at the benchmark code, I'm not sure it really means anything.
For example, the "light" test makes calls to DynamoDB. Maybe you're benchmarking Python, or the AWS SDK implementation across different Python versions, or just average DynamoDB latencies at different times of day, ...
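One way to tease that apart (a hypothetical sketch; the table name and key schema are made up) is to time the DynamoDB round trip separately from the rest of the handler:

```python
import time

import boto3

table = boto3.resource("dynamodb").Table("benchmark-items")  # made-up table name


def handler(event, context):
    start = time.perf_counter()
    table.get_item(Key={"pk": "test"})  # made-up key schema
    ddb_ms = (time.perf_counter() - start) * 1000
    # If ddb_ms dominates the total request time, the "light" test mostly
    # measures DynamoDB latency, not the Python runtime.
    return {"dynamodb_ms": round(ddb_ms, 1)}
```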
And, at least for Python, the CPU and memory test code looks especially suspect. For example, if you use "sha256" or anything else from hashlib, you're not really benchmarking Python but the C implementation behind hashlib, which may depend on the crypto library or the compiler optimizations AWS used to build the Python binaries.
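To illustrate the point (an illustrative sketch, not the article's code): the hashlib call barely exercises the interpreter, while a plain Python loop does:

```python
import hashlib
import time


def bench(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {(time.perf_counter() - t0) * 1000:.1f} ms")


# Dominated by C/assembly (typically OpenSSL); nearly identical across
# Python versions.
bench("hashlib sha256", lambda: hashlib.sha256(b"x" * (50 * 1024 * 1024)).hexdigest())

# Dominated by the interpreter loop; this is what actually changes
# from one Python release to the next.
bench("pure-Python loop", lambda: sum(i * i for i in range(5_000_000)))
```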