Hacker News

tptacek · last Wednesday at 6:22 PM

There was never a proposal to automate arenas in Go code, and that wouldn't even make sense: the point of arenas is that you bump-allocate until some program-specific point where you free it all at once (that's why they're so great for compiler code, where you do passes over translation units and can just do the memory accounting at each major step).

(Yes: I used arenas a lot when I was shipping C code; they're a very easy way to get big speed boosts out of code that does a lot of malloc).
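A minimal hand-rolled sketch of that shape, in C++ for concreteness (the Arena type and its methods here are illustrative, not any real library's API; real code would grow or chain blocks rather than just failing when the buffer fills):

    #include <cstddef>
    #include <cstdlib>

    struct Arena {
        char  *buf;
        size_t cap;
        size_t used = 0;

        explicit Arena(size_t n) : buf(static_cast<char *>(std::malloc(n))), cap(n) {}
        ~Arena() { std::free(buf); }

        // Bump-allocate: no per-object bookkeeping, no per-object free.
        void *alloc(size_t n, size_t align = alignof(std::max_align_t)) {
            size_t start = (used + align - 1) & ~(align - 1);
            if (start + n > cap) return nullptr;
            used = start + n;
            return buf + start;
        }

        // The program-specific point where everything is freed at once,
        // e.g. the end of a compiler pass over a translation unit.
        void reset() { used = 0; }
    };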


Replies

btown · last Wednesday at 7:50 PM

I'm not sure what the parent posters were referring to, but there's an interesting way in which "automation" might make sense in some languages: implicit arena utilization based on the current call stack, without needing to pass/thread an explicit `arena` parameter through the ecosystem.

One could imagine a language that allows syntax like CallWithArena(functionPointer, someSetupInfo), where any standard-library allocation made inside the call would use the arena, which is released on completion or error.

Languages like Python and modern Java would typically use a thread/virtualthread/greenlet-local variable to track the state for this kind of pattern. The fact that Go explicitly avoids this pattern is a philosophical choice, and arguably a good one for Go to stick to, given its emphasis on avoiding the types of implicit "spooky action at a distance" that often plague hand-rolled distributed systems!

But the concept of arenas could still apply in an AlternateLowLevelLanguage where a notion of scoped/threaded context is implicit and language-supported, and arena choice is tracked in that context and treated as a first-class citizen by standard libraries.
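As a rough illustration of the thread-local flavor of this, in C++ (CallWithArena is the hypothetical name from above; arena_alloc and the use of std::pmr for the arena are just one way to sketch it):

    #include <cstddef>
    #include <memory_resource>
    #include <utility>

    // Implicit per-thread context: library code reads this instead of
    // taking an explicit arena parameter.
    thread_local std::pmr::memory_resource *current_arena = std::pmr::get_default_resource();

    void *arena_alloc(std::size_t n) { return current_arena->allocate(n); }

    template <class Fn, class... Args>
    auto CallWithArena(Fn &&fn, Args &&...args) {
        char buf[16 * 1024];
        std::pmr::monotonic_buffer_resource arena(buf, sizeof buf);
        auto *prev = std::exchange(current_arena, &arena);
        // Restore the previous arena whether fn returns normally or throws.
        struct Restore {
            std::pmr::memory_resource *&slot;
            std::pmr::memory_resource *prev;
            ~Restore() { slot = prev; }
        } restore{current_arena, prev};
        return std::forward<Fn>(fn)(std::forward<Args>(args)...);
    }

The whole arena goes away when CallWithArena returns, so anything allocated through arena_alloc inside fn must not outlive the call — which is exactly the kind of implicit lifetime contract Go's designers are wary of.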

UncleEntity · last Wednesday at 9:17 PM

> Yes: I used arenas a lot when I was shipping C code; they're a very easy way to get big speed boosts out of code that does a lot of malloc

This is something I look forward to exploring later in my current pet project. Right now it has possibly the stupidest GC (it just tracks C++ 'new'-allocated objects) but is set up for drop-in arena allocation with placement new, so we'll see how much that matters later on. There are two allocation patterns: statements and whatnot get compiled to static continuation graphs, which push and pop secondary continuations and Value objects to do the deed, so I believe the second part, with its rapid temporary object creation, will see the most benefit.

Anyhoo, it's a slightly different pattern, where the main benefit will most likely come from cache locality or whatever, assuming I can even make a placement-new arena allocator that performs better than regular C++ new. You never know, it might even add more overhead than just tracking a bunch of raw C++ pointers, as I can't imagine there's even a drop of performance that C++ new left on the table?
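For what it's worth, C++17's std::pmr::monotonic_buffer_resource is already roughly this kind of bump arena, so a placement-new sketch of the temporary-object case might look like the following (Value and run_pass are just stand-ins for the real types in the project):

    #include <memory_resource>
    #include <new>

    struct Value { double d; };   // stand-in for the real Value type

    void run_pass() {
        char buf[64 * 1024];
        std::pmr::monotonic_buffer_resource arena(buf, sizeof buf);

        // Rapid temporary-object creation: bump-allocate and placement-new each Value.
        for (int i = 0; i < 1000; ++i) {
            void *p = arena.allocate(sizeof(Value), alignof(Value));
            Value *v = new (p) Value{static_cast<double>(i)};
            (void)v;   // trivially destructible here, so skipping ~Value() is fine
        }
        // No per-object delete; the whole arena is released when it goes out of scope.
    }

Whether that actually beats plain new/delete is an empirical question, but per-object new goes through the general-purpose allocator every time, and skipping that plus packing the objects contiguously for cache locality is where arenas usually win.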