Hacker News

pjc50 · today at 1:30 PM · 3 replies

Reference counting in multithreaded systems is much more expensive than it sounds because of the synchronization overhead. I don't see it coming back. I don't think it saves massive amounts of memory, either, especially given my observation with vmmap upthread that in many cases the code itself is a dominant part of the (virtual) memory usage.


Replies

adrian_b · today at 2:23 PM

Incrementing or decrementing a shared counter is done with an atomic instruction, not with a locked critical section.

This has negligible overhead in most cases. For instance, if the shared counter is already in some cache, the cost is smaller than that of a normal non-atomic access to main memory. The intrinsic overhead of an atomic instruction is typically about the same as that of a simple memory access to data stored in the L3 cache, i.e. on the order of 10 nanoseconds at most.
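The point about atomics versus critical sections can be sketched in a few lines of Rust. This is a hypothetical, stripped-down refcount core (names like `RefCount`, `incref`, `decref` are illustrative, not any library's API); each operation compiles to a single atomic read-modify-write instruction (e.g. `LOCK XADD` on x86), with no lock taken:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Minimal sketch of the counter at the heart of a shared pointer.
struct RefCount {
    count: AtomicUsize,
}

impl RefCount {
    fn new() -> Self {
        // A freshly created object starts with one owning reference.
        RefCount { count: AtomicUsize::new(1) }
    }

    // Taking a new reference: one atomic add, no critical section.
    fn incref(&self) -> usize {
        self.count.fetch_add(1, Ordering::Relaxed) + 1
    }

    // Dropping a reference: one atomic sub; the caller frees the
    // object when this returns 0. (A production implementation also
    // issues an Acquire fence on the zero path before freeing.)
    fn decref(&self) -> usize {
        self.count.fetch_sub(1, Ordering::Release) - 1
    }
}

fn main() {
    let rc = RefCount::new();
    assert_eq!(rc.incref(), 2);
    assert_eq!(rc.decref(), 1);
    assert_eq!(rc.decref(), 0); // last reference gone: free here
    println!("ok");
}
```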

Moreover, many memory allocators use separate per-core memory heaps, so they avoid any accesses to shared memory that would need atomic instructions or locking, except on the rare occasions when they interact with the operating system.

zozbot234 · today at 1:45 PM

If you use an ownership/lifetime system under the hood you only pay that synchronization overhead when ownership truly changes, i.e. when a reference is added or removed that might actually impact the object's lifecycle. That's a rare case with most uses of reference counting; most of the time you're creating a "sub"-reference and its lifetime is strictly bounded by some existing owning reference.
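Rust's `Arc` illustrates this split directly: passing a borrow (`&Arc<T>`) touches no refcount at all, because the borrow's lifetime is bounded by an existing owning reference; only `Arc::clone`, the point where ownership truly changes, pays for the atomic increment. A small sketch (the `read_only` helper is hypothetical):

```rust
use std::sync::Arc;

// Borrowing an Arc incurs zero refcount traffic: the borrow cannot
// outlive the caller's owning reference, so the object cannot die
// during this call.
fn read_only(data: &Arc<Vec<u32>>) -> u32 {
    data.iter().sum()
}

fn main() {
    let data = Arc::new(vec![1, 2, 3]);
    assert_eq!(Arc::strong_count(&data), 1);

    // The common case: a "sub"-reference via borrowing. The count
    // is untouched.
    assert_eq!(read_only(&data), 6);
    assert_eq!(Arc::strong_count(&data), 1);

    // The rare case: ownership actually changes, so an atomic
    // increment (and later decrement) happens.
    let shared = Arc::clone(&data);
    assert_eq!(Arc::strong_count(&data), 2);
    drop(shared);
    assert_eq!(Arc::strong_count(&data), 1);
}
```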

gwbas1c · today at 4:46 PM

That's why Rust has Rc<> (a plain, non-atomic counter) for single-threaded code, and Arc<> (an atomic counter) for data shared across threads.
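The distinction is enforced by the compiler, not by convention: `Rc` is not `Send`, so moving one into another thread is a compile error, while `Arc` works across threads. A minimal sketch:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc<T>: non-atomic refcount, cheap to clone, single-threaded
    // only. (Trying to move `local` into thread::spawn would fail
    // to compile, because Rc is !Send.)
    let local = Rc::new(42);
    let also_local = Rc::clone(&local); // plain increment, no atomic
    assert_eq!(*also_local, 42);

    // Arc<T>: atomic refcount, safe to share across threads.
    let shared = Arc::new(42);
    let handle = {
        let shared = Arc::clone(&shared); // atomic increment
        thread::spawn(move || *shared)
    };
    assert_eq!(handle.join().unwrap(), 42);
}
```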