> How so? Garbage collection has inherent performance overhead wrt. manual memory management, and Rust now addresses this by providing the desired guarantees of managed memory without the overhead of GC.
I somewhat disagree, specifically with the implicit claim that all GC has overhead and the alternatives do not. Rust does a decent job of giving you some ergonomics to get started, but it is still quite unergonomic once you have multiple different allocation problems to deal with. Zig flips that a bit on its head: it's more painful to get started, but the pain level stays more consistent as the problems get deeper. Ideally, though, I want a better blend of both. To give a still-not-super-concrete version of what I mean: I want something that a systems-oriented developer can set up near, say, the top of a request path, which then becomes a mostly implicit dependency for downstream code, with low ceremony, allowing progressive understanding for contributors way down the call chain who in most cases don't need to care - while still offering an easy escape hatch when it matters. A rough sketch of that shape follows.
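To make that a little more concrete, here's a minimal Rust sketch. All the names here (RequestArena, ambient_alloc, with_arena) are made up for illustration, not an existing library; the toy just wraps the global allocator and counts bytes. The point is the call-site ergonomics, not the allocation strategy: set something up once at the top of the request path, let downstream code allocate through it with no ceremony, and keep an explicit handle around as the escape hatch.

```rust
use std::cell::RefCell;

// Hypothetical per-request allocator. Here it's just a byte-budget wrapper
// around the global allocator so the sketch stays std-only and runnable;
// imagine an arena or pool behind the same interface.
struct RequestArena {
    label: &'static str,
    bytes_handed_out: usize,
}

impl RequestArena {
    fn new(label: &'static str) -> Self {
        Self { label, bytes_handed_out: 0 }
    }
    // Explicit API: the escape hatch for code that actually cares.
    fn alloc_buf(&mut self, n: usize) -> Vec<u8> {
        self.bytes_handed_out += n;
        vec![0u8; n]
    }
}

thread_local! {
    // Ambient handle: installed once at the top of the request path.
    static CURRENT: RefCell<Option<RequestArena>> = RefCell::new(None);
}

// Low-ceremony call used by downstream code that doesn't need to care.
fn ambient_alloc(n: usize) -> Vec<u8> {
    CURRENT.with(|c| {
        c.borrow_mut()
            .as_mut()
            .expect("no arena installed for this request")
            .alloc_buf(n)
    })
}

// Escape hatch: borrow the arena explicitly when it matters.
fn with_arena<R>(f: impl FnOnce(&mut RequestArena) -> R) -> R {
    CURRENT.with(|c| f(c.borrow_mut().as_mut().expect("no arena installed")))
}

fn handle_request() {
    // Top of the request path: the systems-oriented developer sets this up once.
    CURRENT.with(|c| *c.borrow_mut() = Some(RequestArena::new("request")));

    // Way down the call chain, contributors just allocate; no ceremony.
    let _body = ambient_alloc(4096);
    let _header = ambient_alloc(256);

    // Somewhere that cares, the explicit API is right there.
    with_arena(|a| println!("{} handed out {} bytes", a.label, a.bytes_handed_out));

    // End of request: tear down (a real arena would free everything at once here).
    CURRENT.with(|c| *c.borrow_mut() = None);
}

fn main() {
    handle_request();
}
```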
I think people make far too much of the distinction between a GC and an allocator; the reality is that all allocators in common use in high-level OS environments are a form of GC. That's of course not what they mean by "GC", but it's a critical distinction to keep in mind.
The main difference between what people _call a GC_ and those allocators is that a typical "GC" pauses the program "badly" at malloc time, while a typical allocator pauses the program "badly" at free time (more often than not). It's a bit of an oddity really: both "GCs" and "allocators" could do things "the other way around" as a common code path. Both models otherwise pool memory, and in higher-performance tunings both have to over-allocate. There are lots of commonly used "faster" allocators in use today that also sidestep their own duty of smarter allocation by simply using mmap pools, but those scale poorly: mmap stalls can be pretty unpredictable and can have undesirable cross-thread side effects too.
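To illustrate that symmetry with a toy (entirely made up, not how any production allocator or GC is actually written): the same free-list pool can batch its expensive bookkeeping either at allocation time (GC-flavored) or at free time (malloc-flavored).

```rust
// Toy fixed-size block pool illustrating the alloc-time vs. free-time symmetry.
struct Pool {
    free: Vec<Box<[u8; 64]>>,    // recycled blocks ready to hand out
    retired: Vec<Box<[u8; 64]>>, // blocks handed back but not yet processed
}

impl Pool {
    fn new() -> Self {
        Self { free: Vec::new(), retired: Vec::new() }
    }

    // "GC-flavored" path: free is trivial; the expensive batch work
    // (here, just recycling the retired list) happens at allocation time.
    fn alloc_gc_style(&mut self) -> Box<[u8; 64]> {
        if self.free.is_empty() {
            // The "pause at malloc": process everything that was retired.
            self.free.append(&mut self.retired);
        }
        self.free.pop().unwrap_or_else(|| Box::new([0u8; 64]))
    }
    fn free_gc_style(&mut self, block: Box<[u8; 64]>) {
        self.retired.push(block); // cheap, constant-time
    }

    // "malloc-flavored" path: allocation is trivial; the expensive batch
    // work happens eagerly when blocks are freed.
    fn alloc_malloc_style(&mut self) -> Box<[u8; 64]> {
        self.free.pop().unwrap_or_else(|| Box::new([0u8; 64])) // cheap
    }
    fn free_malloc_style(&mut self, block: Box<[u8; 64]>) {
        // The "pause at free": imagine coalescing neighbors, trimming the
        // pool, or returning pages to the OS here.
        self.free.push(block);
        if self.free.len() > 8 {
            self.free.truncate(8); // drop excess blocks back to the system
        }
    }
}

fn main() {
    let mut p = Pool::new();
    let b = p.alloc_gc_style();
    p.free_gc_style(b);
    let b = p.alloc_malloc_style();
    p.free_malloc_style(b);
}
```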
The second difference, which I think is more commonly internalized, is that "the GC" is typically wired into the runtime in various ways, such as into the scheduler (Go, most dynlangs, etc.), and has significant implications at the FFI boundary.
It would be possible to be more explicit about a general-purpose allocator that has more GC-like semantics, but also provides the system-level malloc/free-style API as well as a language-assisted, more automated API with clever semantics or additional integrations. I guess fil-C has one such system (I've not studied their implementation). I'm not aware of any inherent constraints dictating that there are only two kinds of APIs: fully implicit, intertwined logarithmic GCs on one hand, or general-purpose allocators that do most of their smart work in free on the other.
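Just to sketch what such a dual-surface API could look like (the names and structure are mine, this isn't fil-C's design or any existing crate): one underlying pool, one explicit malloc/free-style surface, and one managed surface reclaimed by a collection pass.

```rust
// Hypothetical dual-surface allocator API, invented for illustration only.
use std::collections::HashMap;

struct Blended {
    next_id: u64,
    explicit: HashMap<u64, Vec<u8>>, // the caller frees these
    managed: HashMap<u64, Vec<u8>>,  // the allocator reclaims these when asked
}

#[derive(Clone, Copy)]
struct Handle(u64);

impl Blended {
    fn new() -> Self {
        Self { next_id: 0, explicit: HashMap::new(), managed: HashMap::new() }
    }

    // System-level, malloc/free-style surface.
    fn malloc(&mut self, n: usize) -> Handle {
        self.next_id += 1;
        self.explicit.insert(self.next_id, vec![0u8; n]);
        Handle(self.next_id)
    }
    fn free(&mut self, h: Handle) {
        self.explicit.remove(&h.0);
    }

    // Language-assisted, more automated surface: the caller never frees;
    // a collect() pass (driven by runtime integration in a real system)
    // reclaims whatever the liveness callback says is dead.
    fn alloc_managed(&mut self, n: usize) -> Handle {
        self.next_id += 1;
        self.managed.insert(self.next_id, vec![0u8; n]);
        Handle(self.next_id)
    }
    fn collect(&mut self, is_live: impl Fn(Handle) -> bool) {
        self.managed.retain(|&id, _| is_live(Handle(id)));
    }
}

fn main() {
    let mut a = Blended::new();

    // Explicit path: classic malloc/free discipline.
    let buf = a.malloc(1024);
    a.free(buf);

    // Managed path: allocate freely, let a collection pass reclaim.
    let kept = a.alloc_managed(64);
    let _dropped = a.alloc_managed(64);
    a.collect(|h| h.0 == kept.0); // pretend only `kept` is still reachable
    assert_eq!(a.managed.len(), 1);
}
```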
My point is I don't really like the GC vs. not-GC argument very much. I think it's one of the many over-generalizations we rally hard around as an industry, and it has been implicitly limiting how far we try to reach for new designs at this boundary. I do stand by much of the reasoning that, for systems work, the fully implicitly integrated GCs (Java, Go, various dynlangs) are generally far too opaque for scalable (either very big or very small) systems, and they're unpleasant to deal with once you're forced to. At the same time, for that same scalable work, you still don't get to ignore the GC you are actually using inside the allocator you're using. You don't get to ignore the fact that restarting a program with a 200+GB heap has huge page allocation costs, no matter what middleware set that heap up. Similarly, you don't want a logarithmic allocation strategy on most embedded or otherwise resource-constrained systems; those designs are only okay for servers, and they're bad for batteries and for other parts of total system financial cost in many deployments.
I'd like to see more work explicitly blurring these lines: logarithmically allocating GCs scale poorly in many of the same ways that more naive mmap-based allocators do. There are practical issues you run into with over-allocation, and the solution is to do something more complex than what the classical literature describes. I'd also like to see more of this work implemented as standalone modules rather than almost always being implicitly baked into the language/runtime. It's an area where we implicitly couple things together too much, and again, good on Zig for pushing the boundary on a few of these in its standard language and library model (and seemingly now also taking the same approach for IO scheduling - that's great).