Because I had to look it up:
SPSC = Single Producer Single Consumer
MPMC = Multiple Producer Multiple Consumer
(I'll let you deduce the other abbreviations)
This project's wiki [0] mentions that Kanal (another channel implementation) uses an optimization that "makes [the] async API not cancellation-safe". I wonder whether that is the same issue as, or related to, the one in the recent HN thread on "future lock" [1]. I hadn't heard of this cancellation-safety issue before that thread.
[0]: https://github.com/frostyplanet/crossfire-rs/wiki#kanal
[1]: https://news.ycombinator.com/item?id=45774086
Can I select over multiple receivers concurrently, similar to select() on Linux?
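To illustrate what I mean (sketched with tokio::sync::mpsc and tokio::select!, since I haven't checked whether crossfire's own async receivers can be used the same way):

    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        let (tx_a, mut rx_a) = mpsc::channel::<u32>(8);
        let (tx_b, mut rx_b) = mpsc::channel::<u32>(8);

        tokio::spawn(async move { tx_a.send(1).await.unwrap() });
        tokio::spawn(async move { tx_b.send(2).await.unwrap() });

        // Wait on whichever receiver yields a message first.
        tokio::select! {
            Some(v) = rx_a.recv() => println!("from a: {v}"),
            Some(v) = rx_b.recv() => println!("from b: {v}"),
        }
    }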
Using something like this, does it make sense to use Rust in a Go style where, instead of async and its function colouring, you spawn coroutines and synchronize over channels?
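Roughly the pattern I'm thinking of, sketched here with plain OS threads and std::sync::mpsc (a crossfire or crossbeam channel would slot in the same way):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel::<String>();

        // "Goroutine"-style workers: plain threads that only talk via the channel.
        for id in 0..4 {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(format!("hello from worker {id}")).unwrap();
            });
        }
        drop(tx); // close the sending side so the receive loop below terminates

        for msg in rx {
            println!("{msg}");
        }
    }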
I feel like testing whether it'd be faster than tokio::sync::mpsc in my project, but for a websocket client the performance of plain tokio is already pretty good: CPU usage is negligible (under a minute of CPU time over >110 hours of wall-clock time).
For Java/JVM:
I'm a little nervous about the correctness of the memory orderings in this project, e.g.
Two back-to-back acquires are unnecessary here. In general, fetch_sub and fetch_add should give enough guarantees in this file even with Relaxed ordering. https://github.com/frostyplanet/crossfire-rs/blob/master/src...
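To be clear what I mean: when an atomic is used purely as a counter and never to publish other memory, the atomicity of the RMW itself is all you need, so Relaxed is sufficient. A generic illustration (not this project's code):

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::thread;

    static COUNT: AtomicUsize = AtomicUsize::new(0);

    fn main() {
        let handles: Vec<_> = (0..8)
            .map(|_| {
                thread::spawn(|| {
                    for _ in 0..10_000 {
                        // The RMW is atomic regardless of ordering; Relaxed only
                        // drops ordering relative to *other* memory, which a pure
                        // counter doesn't need.
                        COUNT.fetch_add(1, Ordering::Relaxed);
                    }
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(COUNT.load(Ordering::Relaxed), 80_000);
    }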
Congest is never written with a Release store, so the Acquire load never pairs with anything to form a release/acquire chain: https://github.com/frostyplanet/crossfire-rs/blob/dd4a646ca9...
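For context: an Acquire load only creates a happens-before edge when it reads a value written by a Release store to the same atomic; with no Release store anywhere, the Acquire buys you nothing over Relaxed. The canonical pairing looks like this (a generic sketch, not the project's code):

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        let producer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            READY.store(true, Ordering::Release); // publishes the write to DATA
        });
        let consumer = thread::spawn(|| {
            while !READY.load(Ordering::Acquire) {} // pairs with the Release store above
            assert_eq!(DATA.load(Ordering::Relaxed), 42);
        });
        producer.join().unwrap();
        consumer.join().unwrap();
    }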
The queue appears to close the channel twice (once per rx/tx), which is discordant with the apparent care taken with the fencing. https://github.com/frostyplanet/crossfire-rs/blob/dd4a646ca9...
The author also proposed an incorrect optimization to Tokio here, which suggests a lack of understanding of exactly which guarantees are given: https://github.com/tokio-rs/tokio/pull/7622
The tests do not appear to simulate the queue in Loom, which would be a very, very good idea.
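For anyone unfamiliar with Loom: a test wraps the concurrent scenario in loom::model, which then explores every interleaving (and the reorderings the memory model permits). A minimal sketch, not this project's code, with loom added under [dev-dependencies]:

    #[cfg(test)]
    mod tests {
        use loom::sync::atomic::{AtomicUsize, Ordering};
        use loom::sync::Arc;
        use loom::thread;

        #[test]
        fn counter_is_exhaustively_checked() {
            // loom::model re-runs the closure once per possible execution.
            loom::model(|| {
                let counter = Arc::new(AtomicUsize::new(0));
                let c2 = counter.clone();
                let handle = thread::spawn(move || {
                    c2.fetch_add(1, Ordering::Relaxed);
                });
                counter.fetch_add(1, Ordering::Relaxed);
                handle.join().unwrap();
                assert_eq!(counter.load(Ordering::Relaxed), 2);
            });
        }
    }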
This stuff is hard. I almost certainly made a mistake in what I've written above (edit: I did!). In practice the queue is probably fine to use, but I wouldn't be shocked if there's a heisenbug lurking in this codebase that manifests something like this: everything works fine now, then the next LLVM version adds an optimization pass that breaks it on ARM in release mode, and after that the queue yields duplicate values in a busy loop every few million reads, triggered only on Graviton processors.
Or something. Like I said, this stuff is hard. I wrote a very detailed simulator for the Rust/C++ memory model, have implemented dozens of lockless algorithms, and I still make a mistake every time I go to write code. You need to simulate it with something like Loom to have any hope of a robust implementation.
For anyone interested in learning about Rust's memory model, I can't recommend enough Rust Atomics and Locks:
https://marabos.nl/atomics/