Hacker News

Unlocking Python's Cores: Energy Implications of Removing the GIL

52 points by runningmike last Friday at 8:41 AM | 25 comments

Comments

Tiberium today at 1:56 PM

I have a suspicion that this paper is basically a summary with some benchmarks, done with LLMs.

devrimozcay today at 10:36 AM

One thing I'm curious about here is the operational impact.

In production systems we often see Python services scaling horizontally because of the GIL limitations. If true parallelism becomes common, it might actually reduce the number of containers/services needed for some workloads.

But that also changes failure patterns — concurrency bugs, race conditions, and deadlocks might become more common in systems that were previously "protected" by the GIL.

It will be interesting to see whether observability and incident tooling evolve alongside this shift.
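As an illustration of the failure pattern mentioned above, here is a minimal sketch (names hypothetical, not from the article) of the explicit locking that becomes non-optional once threads truly run in parallel. Note that `counter += 1` is a read-modify-write that can lose updates even under the GIL; free threading just makes the interleavings far more frequent:

```python
import threading

# Hypothetical example: a shared counter updated by several threads.
# Without the lock, concurrent read-modify-write operations can lose
# updates; the explicit Lock makes it correct on any CPython build.
counter = 0
lock = threading.Lock()

def safe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 4 * 100_000
```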

carlsborg today at 12:51 PM

Should have funded the entire GIL-removal effort by selling carbon credits. Here's an industry waiting to happen: issue carbon credits for optimizing CPU and GPU resource usage in established libraries.

bob1029 today at 1:17 PM

> Across all workloads, energy consumption is proportional to execution time

Race-to-idle used to be the best path before multicore. Now it's trickier to determine how to clock the device, especially in battery-powered cases. This is why all modern CPU manufacturers are looking into heterogeneous compute (efficiency vs. performance cores).

Put differently, I don't think we should be killing ourselves over this at the software level. If you are actually concerned about the impact on raw energy consumption, you should move your workloads from AMD/Intel to ARM/Apple. Everything else would be noise compared to this.

philipallstar today at 10:28 AM

Might be worth noting that this seems to be just running some tests using the current implementation, and these are not necessarily general implications of removing the GIL.

chillitom today at 11:43 AM

Our experience on memory usage, in comparison, has been generally positive.

Previously we had to use ProcessPoolExecutor, which meant maintaining multiple copies of the runtime and shared data in memory and paying high IPC costs. Being able to switch to ThreadPoolExecutor was hugely beneficial in terms of both speed and memory.

It almost feels like programming in a modern (circa 1996) environment like Java.
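A minimal sketch of that switch, assuming a stand-in CPU-bound function (hypothetical, not the commenter's actual workload). The executor API is identical to `ProcessPoolExecutor`, but everything runs in one process and one heap: no pickling of arguments and results, no duplicated interpreter state. On a free-threaded build the calls can actually use multiple cores:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical CPU-bound task standing in for the real workload.
def cpu_bound(n: int) -> int:
    return sum(i * i for i in range(n))

# Drop-in replacement for ProcessPoolExecutor: same map() interface,
# shared memory, no IPC. Under the GIL this would serialize; on a
# free-threaded build it can run in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_bound, [100_000] * 4))
```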

flowerthoughts today at 11:00 AM

Sections 5.4 and 5.5 are the interesting ones.

5.4: Energy consumption going down because of parallelism over multiple cores seems odd. What were those cores doing before? Better utilization causing some spinlocks to be used less or something?

5.5: Fine-grained lock contention significantly hurts energy consumption.
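A hypothetical sketch (not from the paper) of the contention effect 5.5 describes: taking a shared lock per operation keeps all threads fighting over one lock, whereas accumulating thread-locally and merging once touches the lock once per thread. Both produce the same result, but the contended version burns time and energy on synchronization:

```python
import threading

N_THREADS, N_OPS = 4, 50_000
lock = threading.Lock()
total = 0

def contended() -> None:
    """Every increment takes the shared lock (fine-grained contention)."""
    global total
    for _ in range(N_OPS):
        with lock:
            total += 1

def batched() -> None:
    """Accumulate thread-locally, take the lock once (low contention)."""
    global total
    local = 0
    for _ in range(N_OPS):
        local += 1
    with lock:
        total += local

for worker in (contended, batched):
    total = 0
    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert total == N_THREADS * N_OPS  # same answer either way
```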

runningmike last Friday at 8:41 AM

Title shortened - Original title:

Unlocking Python’s Cores: Hardware Usage and Energy Implications of Removing the GIL

I am curious about the choice of the NumPy workload, given its more limited impact on CPython performance.
