I'm a bit out of the loop with this, but I hope it's not like that time with Python 3.14, when a geometric mean speedup of about 9-15% over the standard interpreter was claimed when built with Clang 19. It turned out the results were inflated due to a bug in LLVM 19 that prevented proper "tail duplication" optimization in the baseline interpreter's dispatch loop. Actual gains were roughly 4%.
Edit: Read through it and have come to the conclusion that the post is 100% OK and properly framed: he explicitly says his approach is one of "sharing early and making a fool of myself," prioritizing transparency and rapid iteration over ironclad verification upfront.
One could argue that he should have done cross-compiler checks, gotten independent audits, or delayed the announcement until the results were bulletproof across all platforms. But given that he is 100% transparent about his thinking and how he works, it's all good in the hood.
After years of admonition discouraging me, I'm using Python for a Windows GUI app over my usual C#/MAUI. I'm much more familiar with Python, and the whole VS ecosystem is just so heavy for lightweight tasks. I started with tkinter but found it super clunky for interactions I needed heavily, like on-field-change handlers, and learning Qt seemed like more of a lift than I was interested in. (Maybe a skill issue on both fronts?) I grabbed wxGlade and drag-and-dropped an interface with wxPython that has only one external dependency installable with pip, is way more convenient than writing XAML by hand, and ergonomically feels pretty Pythonic compared to Qt. Glad to see more work going into the Windows runtime, because I'll probably be leaning on it more.
Two typos in the first sentence. Is this on purpose, to make it obviously not AI-generated?
"apology peice" and "tail caling"
> This has caused many issues for compilers in the past, too many to list in fact. I have a EuroPython 2025 talk about this.
Looks like it refers to this:
(wish it were a link in the article)
This seems like very low-hanging fruit. How is the core loop not already hyper-optimized?
I'd have expected it to be hand-rolled assembly for the major ISAs, with a C fallback for less common ones.
How much energy has been wasted worldwide because of a relatively unoptimized interpreter?
Matt Godbolt was saying recently that using tail calls for an interpreter suits the branch predictor inside the CPU better than a single big switch / computed jump.
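To make that concrete, here is a toy sketch (my own, not CPython's actual code) contrasting the two dispatch styles: the switch version funnels every opcode through one indirect branch at the top of the loop, while the tail-call version ends each handler with its own tail call, so the predictor gets a separate indirect-jump site per opcode.

```c
/* Toy sketch of the two dispatch styles -- not CPython's actual code. */
#include <stddef.h>
#include <stdint.h>

enum { OP_INC, OP_DEC, OP_HALT };

/* Style 1: one big switch. Every opcode funnels through the same
 * indirect branch at the top of the loop, so the branch predictor
 * sees a single, highly polymorphic jump site. */
static int run_switch(const uint8_t *code) {
    int acc = 0;
    for (size_t pc = 0;; pc++) {
        switch (code[pc]) {
        case OP_INC:  acc++; break;
        case OP_DEC:  acc--; break;
        case OP_HALT: return acc;
        }
    }
}

/* Style 2: tail-call dispatch. Each handler ends with its own tail
 * call through the table, so every opcode gets its own indirect-jump
 * site for the predictor to learn from. Needs a compiler with the
 * musttail statement attribute (e.g. recent Clang). */
typedef int (*handler_t)(const uint8_t *code, size_t pc, int acc);

static int op_inc(const uint8_t *code, size_t pc, int acc);
static int op_dec(const uint8_t *code, size_t pc, int acc);
static int op_halt(const uint8_t *code, size_t pc, int acc);

static const handler_t table[] = { op_inc, op_dec, op_halt };

#define DISPATCH() \
    __attribute__((musttail)) return table[code[pc + 1]](code, pc + 1, acc)

static int op_inc(const uint8_t *code, size_t pc, int acc)  { acc++; DISPATCH(); }
static int op_dec(const uint8_t *code, size_t pc, int acc)  { acc--; DISPATCH(); }
static int op_halt(const uint8_t *code, size_t pc, int acc) { (void)code; (void)pc; return acc; }

static int run_tailcall(const uint8_t *code) {
    return table[code[0]](code, 0, 0);
}

int main(void) {
    static const uint8_t prog[] = { OP_INC, OP_INC, OP_DEC, OP_HALT };
    return run_switch(prog) == run_tailcall(prog) ? 0 : 1;  /* both compute 1 */
}
```

Under optimization, each tail-called handler typically ends in its own short indirect jump, which is the predictor-friendly shape being described, whereas the switch version shares one dispatch branch for everything.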
I've never seen this kind of benchmark graph before, and it looks really neat! How was this generated? What tool was used for the benchmarks?
(I actually spent most of Sep/Oct working on optimizing the Immer JS immutable update library, and used a benchmarking tool called `mitata`, so I was doing a lot of this same kind of work: https://github.com/immerjs/immer/pull/1183 . Would love to add some new tools to my repertoire here!)
I have a question - slightly off topic, but related. I was wondering why the Python interpreter is so much slower than the V8 JavaScript interpreter when both JavaScript and Python are dynamic interpreted languages.
So…if the Python team finds tail calls useful, when are we going to see them in Python?
Python's recent developments have been monumental; new versions now easily beat PyPy on the performance charts on an M4 MacBook Air. I don't know if this has something to do with optimizations by Apple, but coming from Linux I was surprised.
My understanding is that this tail-call-based interpretation is also kinder to the branch predictor. I wonder if this explains some of the slowdowns - they trigger specific cases that cause lots of branch mispredictions.
TLDR: The tail-calling interpreter is slightly faster than computed goto.
> I used to believe that tail-calling interpreters get their speedup from better register use. While I still believe that now, I suspect that is not the main reason for the speedups in CPython.
> My main guess now is that tail calling resets compiler heuristics to sane levels, so that compilers can do their jobs.
> Let me show an example, at the time of writing, CPython 3.15’s interpreter loop is around 12k lines of C code. That’s 12k lines in a single function for the switch-case and computed goto interpreter.
> […] In short, this overly large function breaks a lot of compiler heuristics.
> One of the most beneficial optimisations is inlining. In the past, we’ve found that compilers sometimes straight up refuse to inline even the simplest of functions in that 12k loc eval loop.
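A toy illustration of that inlining point (my own sketch, not CPython code): inlining budgets are largely per-function, so the same trivial helper is an easy inline inside a small per-opcode handler but may get skipped once the caller is a single enormous eval loop. Splitting the loop into many small tail-called handlers is what puts those heuristics back into their normal operating range.

```c
/* Toy illustration of per-function inlining budgets; not CPython code. */
#include <stdio.h>

/* A trivially inlinable helper. */
static inline long stack_adjust(long depth, long delta) {
    return depth + delta;
}

/* Small per-opcode handler (the tail-calling layout): the caller is tiny,
 * so inlining stack_adjust is an easy decision for the compiler. */
static long handle_push(long depth) {
    return stack_adjust(depth, +1);
}

/* Monolithic layout: picture this switch stretched to ~12k lines.
 * Heuristics keyed off the caller's size and block count can start
 * refusing even trivial inlines in a function that big. */
static long eval_monolithic(const int *ops, int n) {
    long depth = 0;
    for (int i = 0; i < n; i++) {
        switch (ops[i]) {
        case 0: depth = stack_adjust(depth, +1); break;   /* push */
        case 1: depth = stack_adjust(depth, -1); break;   /* pop  */
        /* ... imagine thousands more cases here ... */
        default: break;
        }
    }
    return depth;
}

int main(void) {
    const int prog[] = { 0, 0, 1 };  /* push, push, pop -> depth 1 */
    printf("%ld %ld\n", eval_monolithic(prog, 3), handle_push(0));
    return 0;
}
```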
Is there a Clang-based build for Windows? I've been slowly moving my Windows builds from MSVC to Clang, which still uses the Microsoft STL implementation.
So far I think using Clang instead of the MSVC compiler is a strict win? Not a huge difference, mind you, but a win nonetheless.
If the author of this blog reads this: can we get an RSS feed, please?
MSVC mostly generates slower code than GCC/Clang, so maybe this trick reduces the gap.
Really nice results on MSVC. The idea that tail calls effectively reset compiler heuristics and unblock inlining is pretty convincing. One thing that worries me though is the reliance on undocumented MSVC behavior — if this becomes widely shipped, CPython could end up depending on optimizer guarantees that aren’t actually stable. Curious how you’re thinking about long-term maintainability and the impact on debugging/profiling.
I don't understand this focus on micro-performance details... considering that all of this is about an interpretation approach that is always going to be relatively slow. The big speedup would be to JIT it all; then you don't need to care about how the switch loop is structured, etc.
The Python interpreter core loop sounds like the perfect problem for AlphaEvolve. Or its open-source equivalent, OpenEvolve, if DeepMind doesn't want to speed up Python for the competition.
The money shot (wish this were included in the blog post):
https://github.com/python/cpython/pull/143068/files#diff-45b...
Apparently(?) this also needs to be attached to the function declarator and does not work as a function specifier: `static void *__preserve_none slowpath();` and not `__preserve_none static void *slowpath();` (unlike GCC attribute syntax, which tends to be fairly gung-ho about this sort of thing, sometimes with confusing results).
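For anyone wanting to try it, here is a minimal sketch of the placement described above, wrapped in a guard macro. The `PRESERVE_NONE` name and the fallback branch are my own, not CPython's actual headers, and the keyword itself is undocumented MSVC behavior as discussed in the thread.

```c
#include <stdio.h>

/* Sketch only: the PRESERVE_NONE macro and the fallback branch are hypothetical. */
#if defined(_MSC_VER) && !defined(__clang__)
#  define PRESERVE_NONE __preserve_none  /* the undocumented MSVC keyword */
#else
#  define PRESERVE_NONE  /* no-op elsewhere; Clang's spelling is __attribute__((preserve_none)) */
#endif

/* Placement that reportedly works on MSVC: attached to the declarator,
 * right next to the function name ... */
static void *PRESERVE_NONE slowpath(void) { return NULL; }

/* ... whereas the specifier-style placement reportedly does not:
 *
 *   PRESERVE_NONE static void *slowpath(void);
 */

int main(void) {
    printf("%p\n", slowpath());
    return 0;
}
```

On MSVC the macro expands to the undocumented keyword in the declarator position; on other compilers it vanishes, so the declaration stays portable.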
Yay to getting undocumented MSVC features disclosed if Microsoft thinks you’re important enough :/