The most interesting bit here is not the “2.4x faster than Lambda” part; it’s the constraints they quietly codify to make snapshots safe. The post describes how they run your top-level Python code once at deploy, snapshot the entire Pyodide heap, then effectively forbid PRNG use during that phase and reseed after restore. That means a bunch of familiar CPython patterns at import time (reading entropy, doing I/O, starting background threads, even some “random”-driven config) are now treated as bugs and turned into deployment failures rather than “it works on my laptop.”
In practice, Workers + Pyodide is forcing a much sharper line between init-time and request-time state than most Python codebases have today. If you lean into that model, you get very cheap isolates and global deploys with fast cold starts. If your app depends on the broader CPython/C-extension ecosystem behaving like a mutable Unix process, you are still in container land for now. My hunch is the long-term story here will be less about the benchmark numbers and more about how much of “normal” Python can be nudged into these snapshot-friendly constraints.
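A minimal sketch of that init-time vs request-time split, assuming the `on_fetch` handler shape and `workers.Response` import shown in Cloudflare's Python Workers examples (the exact entrypoint API may differ):

```python
import random

from workers import Response  # assumed import path per Cloudflare's Python Workers examples

# Init-time: this module scope runs once at deploy and is captured in the
# memory snapshot. Keep it deterministic and side-effect free: no entropy,
# no network/file I/O, no background threads.
ROUTES = {"/": "home", "/health": "ok"}

async def on_fetch(request, env):
    # Request-time: runs after the snapshot is restored (and the PRNG has been
    # reseeded), so per-request randomness, I/O, and secrets belong here.
    request_id = random.getrandbits(64)
    return Response(f"route={ROUTES['/']} id={request_id}")
```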
It’s 2025 and choosing a region for your resources is still an enterprise feature on Cloudflare.
In contrast, AWS provides this as a baseline: you choose where your services run. In a world where you can’t ship anything without hundreds of compliance requirements, many of which mandate geolocation-based access control or data retention rules, this is absurd.
```
BREAKING CHANGE

The following packages are removed from the Pyodide distribution because of the build issues. We will try to fix them in the future:

arro3-compute
arro3-core
arro3-io
Cartopy
duckdb
geopandas
...
polars
pyarrow
pygame-ce
pyproj
zarr
```
https://pyodide.org/en/stable/project/changelog.html#version...
Bummer, looks like a lot of useful geo/data tools got removed from the Pyodide distribution recently. Being able to use some of these tools in a Worker in combination with R2 would unlock some powerful server-side workflows. I hope they can get added back. I'd love to adopt CF more widely for some of my projects, and it seems like support for some of this stuff would make adoption by startups easier.
Checked out the Cloudflare post... they now support Pyodide-compatible packages through uv... so you can pull in whatever Python libs you need, not just a curated list.
ALSO the benchmarks show about a one-second cold start when importing httpx, fastapi and pydantic... that's faster than Lambda and Cloud Run, thanks to memory snapshots and isolate-based infra.
BUT the default global deployment model raises questions about compliance when you need specific regions... and I'd love to know how well packages with native extensions are supported.
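For what it's worth, the cold-start scenario they benchmark is basically "heavy imports at module scope"; here's a rough, hypothetical sketch of that shape (plain FastAPI/pydantic code, not showing how it would actually be mounted on the Worker):

```python
# Module-scope imports are what the deploy-time snapshot captures, so a warm
# isolate restore skips paying this cost per request. (Illustrative only; the
# post benchmarks importing these libraries, not this exact app.)
import httpx  # noqa: F401 - imported only to mirror the benchmarked import set
from pydantic import BaseModel
from fastapi import FastAPI

app = FastAPI()

class Ping(BaseModel):
    message: str

@app.get("/ping")
def ping() -> Ping:
    # Request-time work stays cheap; the import cost was paid at snapshot time.
    return Ping(message="pong")
```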
If anyone from Cloudflare comes here: it's not possible to create D1 databases on the fly and interact with them, because databases must be declared in the Worker's bindings.
This hampers the per-user database workflow.
Would be awesome if a fix lands.
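To make the limitation concrete, a rough sketch (assuming the `on_fetch` handler shape and that the JS D1 `prepare(...).all()` API is reachable on `env` from Python; the `USERS_DB` name is made up):

```python
from workers import Response  # assumed import path per Python Workers examples

async def on_fetch(request, env):
    # Works: USERS_DB is a D1 database declared up front in the Worker's
    # bindings, so it exists at deploy time.
    rows = await env.USERS_DB.prepare("SELECT id, name FROM users LIMIT 10").all()

    # Doesn't work today: there's no runtime API to create a brand-new D1
    # database per user/tenant and bind it on the fly; every database a request
    # touches has to be listed in the bindings ahead of time.
    return Response(str(rows))
```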
I still don’t get it: what is the use case for Cloudflare Workers or Lambda?
I used both for years. Nothing beats a VPS/bare metal. Alright, they give lower latency and are maybe cheaper, but they're also a big nightmare to manage at the same time. Hello, microservices architecture.
Pyodide is a great enabler for this kind of thing, but most of the libraries I want to use tend to be native or just weird. Still, I wonder how fast things like Pillow, Pandas and the like are these days; benchmarks would be nice.
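In the absence of published numbers, a quick, hypothetical micro-benchmark you could run under Pyodide (or in a Worker) to get rough figures for the libraries mentioned:

```python
# Minimal, illustrative timing harness; requires Pillow and pandas to be
# installed in whatever environment you run it in.
import time

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {(time.perf_counter() - start) * 1000:.1f} ms")

def bench_pillow():
    from PIL import Image
    img = Image.new("RGB", (1024, 1024))
    img.resize((256, 256))

def bench_pandas():
    import pandas as pd
    df = pd.DataFrame({"x": range(100_000)})
    df["x"].rolling(50).mean()

timed("Pillow new+resize", bench_pillow)
timed("pandas rolling mean", bench_pandas)
```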
The comparison with AWS Lambda seems to ignore the AWS memory snapshot option called "SnapStart for Python". I'd be interested in seeing the timing comparison extended to include SnapStart.
I hope Cloudflare improves Next.js support on Workers.
Currently the pagespeed.web.dev score drops by around 20 points compared to the self-hosted version. One of the best features of Next.js, image optimization, doesn't have out-of-the-box support. You need a separate image optimization service, which also did not work for me for local images (images in the bundle).
Very interesting, but the limitation on which libraries you can use is a significant one.
I wonder if they plan to invest seriously in this?
I wish they would contribute stuff like this memory snapshotting to CPython.
Anybody using it for something serious? I can't see a use case beyond "I need a quick script running that isn't worth setting up a VPS for."
nice
One of my biggest points of criticism of Python is its slow cold start time. I especially notice this when I use it as a scripting language for CLIs. The startup time of a simple .py script can easily be in the 100 to 300 ms range, whereas a C, Rust, or Go program with the same functionality can start in under 10 ms. This becomes even more frustrating when piping several scripts together, because the accumulated startup latency adds up quickly.
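A small, hedged way to put numbers on that interpreter-startup floor on your own machine (the 100 to 300 ms figure will vary with imports and filesystem cache):

```python
# Times bare interpreter startup by repeatedly running `python -c "pass"`,
# which is the minimum any .py CLI script pays before its own code runs.
import subprocess
import sys
import time

N = 20
start = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"], check=True)
total_ms = (time.perf_counter() - start) * 1000
print(f"average interpreter startup: {total_ms / N:.1f} ms over {N} runs")
```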