The reason I really like Zig is that there's finally a language that makes it easy to gracefully handle memory exhaustion at the application level. No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly. Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space and guarantee that a stack overflow can never happen.
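In code that looks roughly like this (a minimal sketch using std.heap.page_allocator; exact std names shift a little between Zig versions):

    const std = @import("std");

    pub fn main() void {
        const allocator = std.heap.page_allocator;

        // alloc returns an error union, so the failure path has to be
        // written right here instead of the process being killed
        // somewhere far away.
        const buf = allocator.alloc(u8, 64 * 1024) catch {
            std.debug.print("allocation failed, shedding this request\n", .{});
            return;
        };
        defer allocator.free(buf);

        std.debug.print("got {d} bytes\n", .{buf.len});
    }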
This first-class treatment of memory as a resource is a must for creating robust software in embedded environments, where it's vital to frontload all fallibility by allocating everything needed at start-up, and to give the application the freedom to use whatever mechanism is appropriate (backpressure, load shedding, etc.) to handle excessive resource usage.
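For instance, a sketch of the fixed-budget style with std.heap.FixedBufferAllocator (sizes made up): grab the whole budget once at start-up, and every later allocation either fits in it or fails with error.OutOfMemory, which you handle however suits the application:

    const std = @import("std");

    pub fn main() void {
        // Grab the whole budget at start-up; after this point the program
        // never asks the OS for more memory.
        var backing: [64 * 1024]u8 = undefined;
        var fba = std.heap.FixedBufferAllocator.init(&backing);
        const allocator = fba.allocator();

        // Within the budget: succeeds.
        _ = allocator.alloc(u8, 1024) catch unreachable;

        // Over the budget: fails with error.OutOfMemory, which the caller
        // can turn into backpressure or load shedding instead of a crash.
        _ = allocator.alloc(u8, 1024 * 1024) catch {
            std.debug.print("over budget, rejecting this request\n", .{});
            return;
        };
    }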
I don't know Zig. The article says "Many people seem confused about why Zig should exist if Rust does already." But I'd ask instead: why does Zig exist when C already does? Is it just a "better" C, one that still has the drawback that makes C problematic for development, namely manual memory management? I think you're better off using a language with a garbage collector unless your use case really needs manual management, and then you can pick between C, Rust, and Zig (and C++ and a few hundred others, probably).
If you are pre-allocating, Rust would handle that decently as well, right?
Certainly I agree that allocations in your dependencies (including std) are more annoying in Rust, since the standard library aborts on OOM instead of returning an error.
The no_std set of crates is all set up to support embedded development.
> Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space and guarantee that a stack overflow can never happen.
How does that work in the presence of recursion or calls through function pointers?
> No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly.
But on operating systems with overcommit, including Linux by default, you won't ever see the act of allocation fail - and seeing the failure is the whole point. The kernel hands out address space freely and only deals with exhaustion later, when the pages are actually touched, by invoking the OOM killer. All the language-level ceremony in the world won't save you.
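A rough sketch of what that looks like in practice (assuming a Linux box with vm.overcommit_memory=1, i.e. always overcommit; exact behaviour depends on system configuration):

    const std = @import("std");

    pub fn main() void {
        const allocator = std.heap.page_allocator;

        // Ask for ~1 TiB. With overcommit the kernel hands out address
        // space without backing it, so the call itself "succeeds".
        const huge = allocator.alloc(u8, 1 << 40) catch {
            std.debug.print("allocation actually failed up front\n", .{});
            return;
        };
        std.debug.print("allocation of {d} bytes 'succeeded'\n", .{huge.len});

        // Touching the pages is what eventually gets the process killed -
        // no error value ever reaches application code.
        @memset(huge, 0xAA);
    }

Under that configuration the catch branch never runs; the process just gets SIGKILLed somewhere inside the memset.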