> You can do fully manual memory management in Go if you want. The runtime is full of tons of examples where they have 0-alloc, Pools, ring buffers, Assembly, and tons of other tricks.
The runtime only exposes a small subset of what it uses internally, and there's no stable ABI for runtime internals. If you're big enough and have friends on the team they might not break you (some internal linkage does get preserved), but in the general case, for a general user: nope. An update can leave your code untenable.
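For contrast, the allocation-control tools the runtime does expose with a stability promise are things like sync.Pool, which is what the "Pools" trick in practice looks like. A minimal sketch (the buffer type and handler here are my own, purely illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses scratch buffers instead of allocating a fresh one per
// request. sync.Pool is one of the few allocation knobs the runtime
// exposes and keeps stable; note the GC may still drain the pool at
// any time, so it's a cache, not an arena.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func handle(payload []byte) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)

	buf.Write(payload)
	fmt.Printf("processed %d bytes\n", buf.Len())
}

func main() {
	handle([]byte("hello"))
}
```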
> If you really want an arena like behavior you could allocate a byte slice and use unsafe to cast it to literally any type.
AIUI the prior proposals still provided automated lifetime management (though that's tied to several of the standing concerns), and you can't match that from "userspace" Go: finalizers don't run on a deterministic schedule. Put simply: that's not the same thing.
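Concretely, the kind of thing being suggested looks roughly like the sketch below: a bump allocator over a byte slice with unsafe casts. All the names and sizes are mine and it assumes the slab's backing array is suitably aligned; the point is that nothing manages lifetimes for you, freeing is all-or-nothing, and nothing stops a pointer from outliving the slab.

```go
package main

import (
	"fmt"
	"unsafe"
)

// arena is a crude bump allocator over a plain byte slice. There is no
// automatic lifetime management, no per-object free, and no safety net
// if a pointer escapes the arena's lifetime.
type arena struct {
	buf []byte
	off uintptr
}

func newArena(size int) *arena {
	return &arena{buf: make([]byte, size)}
}

// alloc hands back a pointer to n bytes, aligned to align (alignment is
// computed relative to the slab base, which is assumed well aligned).
func (a *arena) alloc(n, align uintptr) unsafe.Pointer {
	off := (a.off + align - 1) &^ (align - 1)
	if off+n > uintptr(len(a.buf)) {
		panic("arena exhausted")
	}
	a.off = off + n
	return unsafe.Pointer(&a.buf[off])
}

type point struct{ x, y int64 }

func main() {
	a := newArena(1 << 16)

	// "Cast" a chunk of the slab to a concrete type.
	p := (*point)(a.alloc(unsafe.Sizeof(point{}), unsafe.Alignof(point{})))
	p.x, p.y = 3, 4
	fmt.Println(*p)

	// Freeing is all-or-nothing: reset the offset, or drop the slab.
	a.off = 0
}
```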
As someone else points out, this is also much more fraught with error than just typing what you described. On top of the GC issue pointed out already, you'll also hit memory model considerations if you're doing any concurrency, and if you actually needed to do this, surely you are. Once you're doing that you'll run into the issue, if you're trying to compete with systems languages, that Go only exposes a subset of the platform's available memory model: sync/atomic gives you a single ordering, not the relaxed/acquire/release/seq-cst menu you get in C, C++, or Rust. It also doesn't expose any notion of which thread you're running on (which can change arbitrarily), or even which goroutine you're running on. This limits your design space quite significantly and bounds your performance for high-frequency, small-region operations. I'd hazard an educated guess that an arena written the way you casually suggest would perform extremely poorly at any meaningful scale (let's say >=32 cores, still fairly modest).
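To make that concrete, here's roughly what the obvious concurrent variant looks like: one shared slab with an atomic bump cursor (again, the names are mine and alignment handling is skipped). Every allocation from every goroutine contends on the same counter and cache line, and since Go gives you no stable thread or P identity, you can't shard it per CPU the way a systems-language arena would.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"unsafe"
)

// sharedArena is the naive concurrent version: one slab, one atomic
// bump cursor. Every goroutine on every core hammers the same counter,
// which is where this falls over at high core counts; without a
// thread/P identity there's no clean way to shard it per CPU.
type sharedArena struct {
	buf []byte
	off atomic.Uintptr
}

// alloc reserves n bytes by bumping the shared cursor (fixed-size,
// 8-byte allocations here, so alignment works out).
func (a *sharedArena) alloc(n uintptr) unsafe.Pointer {
	off := a.off.Add(n) - n
	if off+n > uintptr(len(a.buf)) {
		panic("arena exhausted")
	}
	return unsafe.Pointer(&a.buf[off])
}

func main() {
	a := &sharedArena{buf: make([]byte, 1<<26)}

	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				p := (*int64)(a.alloc(8))
				*p = int64(i)
			}
		}()
	}
	wg.Wait()
	fmt.Printf("allocated %d bytes\n", a.off.Load())
}
```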
> You could argue that C++ RAII overhead is “bounded performance” compared to C. Or that C’s stack frames are “bounded performance” compared to a full in-register assembly implementation of a hot loop.

> But that’s bloody stupid. Just use the right tool for the job and know where the tradeoffs are, because there’s always something. The tradeoff boundary for an individual project or person is just arbitrary.
Sure, reductio ad absurdum. Though I'd typically optimize against the (systems language) compiler long before I drop to assembly; it's 2025, systems compilers are great and offer plenty of optimizations, intrinsics, and hints.
> Man this person is mediocre at best.
Harsh; I think the author is fine, really. Their most significant error isn't missing or not discussing the harder things they could have done with Go, it's that they were seemingly under the misconception, prior to the Arena proposal, that Go actually cedes control for lower-level optimization. It doesn't, it never has, and it likely never will (it will gain other semi-generalized internal optimizations over time; a lot of work goes into that).
In some cases you can hack some of that in on your own, but Go is not well placed as a "systems language" if by that you mean something like "competitive efficiency at upper- or lower-bound scale tasks"; it's much better placed as a framework for writing general-purpose servers at middle scales. It's best placed on systems that don't run on batteries and that have plenty of RAM. It'll give you a decent opportunity to scale up and then out in that space, as long as you pay attention to how you're doing along the way. It'll hurt if you need to target state-of-the-art efficiency at the extreme ends, and will very likely block you wholesale.
I'm glad Go folks are still working on ideas to give applications some more control over allocations. I'm not expecting a solution that solves my deepest challenges anytime soon, though. I think they'll maybe solve some server cases first, and that's probably good; that's Go's golden market.