The most interesting bit here is not the “2.4x faster than Lambda” part; it's the constraints they quietly codify to make snapshots safe. The post describes how they run your top-level Python code once at deploy, snapshot the entire Pyodide heap, then effectively forbid PRNG use during that phase and reseed after restore. That means a bunch of familiar CPython patterns at import time (reading entropy, doing I/O, starting background threads, even some “random”-driven config) are now treated as bugs and surface as deployment failures rather than “it works on my laptop.”
In practice, Workers + Pyodide is forcing a much sharper line between init-time and request-time state than most Python codebases have today. If you lean into that model, you get very cheap isolates and global deploys with fast cold starts. If your app depends on the broader CPython/C-extension ecosystem behaving like a mutable Unix process, you are still in container land for now. My hunch is the long-term story here will be less about the benchmark numbers and more about how much of “normal” Python can be nudged into these snapshot-friendly constraints.
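To make the init-time vs request-time split concrete, here's a sketch in plain Python (not Cloudflare's actual Workers API; `get_session_key` is a hypothetical helper name) of the kind of pattern the snapshot model pushes you toward:

```python
import secrets

# Anti-pattern under snapshotting: module scope runs once at deploy, so
# this token would be baked into the heap snapshot and shared by every
# restored isolate. The snapshot model treats entropy use here as an error:
# SESSION_KEY = secrets.token_hex(16)   # deploy-time failure

# Snapshot-friendly: keep import-time state deterministic, and defer
# anything entropy- or I/O-dependent until the first request, after the
# PRNG has been reseeded post-restore.
_session_key = None

def get_session_key() -> str:
    # Lazily created at request time, so each restored isolate
    # gets its own key instead of a snapshot-frozen one.
    global _session_key
    if _session_key is None:
        _session_key = secrets.token_hex(16)
    return _session_key
```

The same lazy-init shape works for DB connections, temp files, and background tasks: anything mutable or environment-dependent moves behind a function call that first runs at request time.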
I'm betting against wasm and going with containers instead.
I have a warm pool of lightweight containers that can be reused between runs, and that's the crucial detail that makes or breaks it. The good news is that you can lock it down with seccomp while still allowing normal execution. This gives you 10-30ms starts with pre-compiled Python packages inside the container; a cold start is as fast as spinning up a new container, 200-ish ms. If you run this setup close to your data, you get fast access to your files, which is huge for data-related tasks.
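The warm/cold split above can be modeled in a few lines. This is a toy sketch, not my production code: container creation is stubbed out with a dict, where in reality it would be a seccomp-confined container costing ~200ms to start vs ~10-30ms to reuse.

```python
import itertools
import queue

class WarmPool:
    """Toy model of a warm-container pool: hand out an idle container
    when one exists (warm start), spawn a new one otherwise (cold start)."""

    def __init__(self, size: int):
        self._idle = queue.Queue()
        self._ids = itertools.count()
        for _ in range(size):
            self._idle.put(self._spawn())

    def _spawn(self):
        # Stand-in for "spin up a new locked-down container" (~200ms).
        return {"id": next(self._ids), "runs": 0}

    def acquire(self):
        try:
            c = self._idle.get_nowait()   # warm path (~10-30ms)
        except queue.Empty:
            c = self._spawn()             # cold path
        c["runs"] += 1
        return c

    def release(self, c):
        # In a real pool you would reset per-run state (filesystem,
        # processes) here before returning the container for reuse.
        self._idle.put(c)
```

The whole game is keeping the pool sized so that `acquire` almost always hits the warm path; the seccomp profile is what makes reusing a container between untrusted runs defensible at all.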
But this is not suitable for the type of deployment Cloudflare is doing. The question is whether you even want that global availability, because you trade it for performance. At the end of the day, they are trying to reuse their isolates infra, which is very smart and opens the door to other wasm-based deployments.