Hacker News

Python 3.15’s interpreter for Windows x86-64 should hopefully be 15% faster

282 points · by lumpa · today at 1:02 PM · 87 comments

Comments

mananaysiempre · today at 2:32 PM

The money shot (wish this were included in the blog post):

  #   if defined(_MSC_VER) && !defined(__clang__)
  #      define Py_MUSTTAIL [[msvc::musttail]]
  #      define Py_PRESERVE_NONE_CC __preserve_none
  #   else
  #       define Py_MUSTTAIL __attribute__((musttail))
  #       define Py_PRESERVE_NONE_CC __attribute__((preserve_none))
  #   endif
https://github.com/python/cpython/pull/143068/files#diff-45b...

Apparently(?) this also needs to be attached to the function declarator and does not work as a function specifier: `static void *__preserve_none slowpath();` and not `__preserve_none static void *slowpath();` (unlike GCC attribute syntax, which tends to be fairly gung-ho about this sort of thing, sometimes with confusing results).

Yay to getting undocumented MSVC features disclosed if Microsoft thinks you’re important enough :/

jtrn · today at 3:52 PM

I'm a bit out of the loop on this, but I hope it's not like that time with Python 3.14, when a geometric mean speedup of about 9-15% over the standard interpreter was claimed when built with Clang 19. It turned out the results were inflated by a bug in LLVM 19 that prevented the proper "tail duplication" optimization in the baseline interpreter's dispatch loop. Actual gains were approximately 4%.

Edit: Read through it and have come to the conclusion that the post is 100% OK and properly framed: he explicitly says his approach is "sharing early and making a fool of myself," prioritizing transparency and rapid iteration over ironclad verification upfront.

One could make an argument that he should have cross-compiler checks, independent audits, or delayed announcements until results are bulletproof across all platforms. But given that he is 100% transparent with his thinking and how he works, it's all good in the hood.

DrewADesign · today at 7:00 PM

After years of admonition discouraging me, I'm using Python for a Windows GUI app over my usual C#/MAUI. I'm much more familiar with Python, and the whole VS ecosystem is just so heavy for lightweight tasks. I started with tkinter but found it super clunky for interactions I needed heavily, like on-field-change events, and learning Qt seemed like more of a lift than I was interested in. (Maybe a skill issue on both fronts?) I grabbed wxGlade and drag-and-dropped an interface with wxPython that has only one external dependency installable with pip, is way more convenient than writing XAML by hand, and ergonomically feels pretty Pythonic compared to Qt. Glad to see more work going into the Windows runtime because I'll probably be leaning on it more.

croemer · today at 9:39 PM

2 typos in first sentence. Is this on purpose to make it obviously not-AI generated?

"apology peice" and "tail caling"

g947o · today at 2:06 PM

> This has caused many issues for compilers in the past, too many to list in fact. I have a EuroPython 2025 talk about this.

Looks like it refers to this:

https://youtu.be/pUj32SF94Zw

(wish it were a link in the article)

redox99 · today at 2:23 PM

This seems like very low hanging fruit. How is the core loop not already hyper optimized?

I'd have expected it to be hand rolled assembly for the major ISAs, with a C backup for less common ones.

How much energy has been wasted worldwide because of a relatively unoptimized interpreter?

bboreham · today at 9:23 PM

Matt Godbolt was saying recently that using tail calls for an interpreter suits the branch predictor inside the CPU, compared to a single big switch / computed jump.

acemarke · today at 4:13 PM

I've never seen this kind of benchmark graph before, and it looks really neat! How was this generated? What tool was used for the benchmarks?

(I actually spent most of Sep/Oct working on optimizing the Immer JS immutable update library, and used a benchmarking tool called `mitata`, so I was doing a lot of this same kind of work: https://github.com/immerjs/immer/pull/1183 . Would love to add some new tools to my repertoire here!)

gozzoo · today at 5:17 PM

I have a question - slightly off topic, but related. I was wondering why the Python interpreter is so much slower than the V8 JavaScript interpreter when both JavaScript and Python are dynamic interpreted languages.

wk_end · today at 10:05 PM

So…if the Python team finds tail calls useful, when are we going to see them in Python?

vednig · today at 7:23 PM

Python's recent developments have been monumental; new versions now easily beat PyPy on the performance charts on an M4 MacBook Air. I don't know if this has something to do with optimizations by Apple, but coming from Linux I was surprised.

eab- · today at 6:42 PM

My understanding is that this tail-call-based interpretation is also kinder to the branch predictor. I wonder if that explains some of the slowdowns - they trigger specific cases that cause lots of branch mispredictions.

Hendrikto · today at 2:12 PM

TLDR: The tail-calling interpreter is slightly faster than computed goto.

> I used to believe that tail-calling interpreters get their speedup from better register use. While I still believe that now, I suspect that is not the main reason for speedups in CPython.

> My main guess now is that tail calling resets compiler heuristics to sane levels, so that compilers can do their jobs.

> Let me show an example, at the time of writing, CPython 3.15’s interpreter loop is around 12k lines of C code. That’s 12k lines in a single function for the switch-case and computed goto interpreter.

> […] In short, this overly large function breaks a lot of compiler heuristics.

> One of the most beneficial optimisations is inlining. In the past, we’ve found that compilers sometimes straight up refuse to inline even the simplest of functions in that 12k loc eval loop.

Quitschquat · today at 5:36 PM

Tbh, 15% faster than slow AF is still slow AF

forrestthewoods · today at 5:12 PM

Is there a Clang-based build for Windows? I've been slowly moving my Windows builds from MSVC to Clang, which still uses the Microsoft STL implementation.

So far I think using clang instead of MSVC compiler is a strict win? Not a huge difference mind you. But a win nonetheless.

develatio · today at 2:18 PM

if the author of this blog reads this: can we get an RSS feed, please?

bgwalter · today at 3:47 PM

MSVC mostly generates slower code than gcc/clang, so maybe this trick reduces the gap.

Rakshath_1 · today at 3:28 PM

[dead]

maximgeorge · today at 8:00 PM

[dead]

mishrapravin441 · today at 2:12 PM

Really nice results on MSVC. The idea that tail calls effectively reset compiler heuristics and unblock inlining is pretty convincing. One thing that worries me though is the reliance on undocumented MSVC behavior — if this becomes widely shipped, CPython could end up depending on optimizer guarantees that aren’t actually stable. Curious how you’re thinking about long-term maintainability and the impact on debugging/profiling.

horizion2025 · today at 6:29 PM

I don't understand this focus on micro performance details, considering that all of this is about an interpretation approach which is always going to be relatively slow. The big speedup would be to JIT it all; then you don't need to care about the structuring of switch loops etc.

machinationu · today at 1:59 PM

The Python interpreter core loop sounds like the perfect problem for AlphaEvolve, or its open source equivalent OpenEvolve if DeepMind doesn't want to speed up Python for the competition.