Hacker News

Asynchrony is not concurrency

206 points by kristoff_it · yesterday at 7:21 PM · 139 comments

Comments

dvt · yesterday at 10:44 PM

"Asynchrony" is a very bad word for this and we already have a very well-defined mathematical one: commutativity. Some operations are commutative (order does not matter: addition, multiplication, etc.), while others are non-commutative (order does matter: subtraction, division, etc.).

    try io.asyncConcurrent(Server.accept, .{server, io});
    io.async(Client.connect, .{client, io});
Usually, ordering of operations in code is indicated by the line number (first line happens before the second line, and so on), but I understand that this might fly out the window in async code. So, my gut tells me this would be better achieved with the (shudder) `.then(...)` paradigm. It sucks, but better the devil you know than the devil you don't.

As written, `asyncConcurrent(...)` is confusing as shit, and unless you memorize this blog post, you'll have no idea what this code means. I get that Zig (like Rust, which I really like fwiw) is trying all kinds of new hipster things, but half the time they just end up being unintuitive and confusing. Either implement (async-based) commutativity/operation ordering somehow (like Rust's lifetimes maybe?) or just use what people are already used to.

jayd16 · yesterday at 9:55 PM

I kind of think the author simply pulled the concept of yielding execution out of the definition of concurrency and into this new "asynchrony" term. Then they argued that the term is needed because without it the entire concept of concurrency is broken.

Indeed so, but I would argue that concurrency makes little sense without the ability to yield, which is therefore intrinsic to it. It's a very important concept, but breaking it out into a new term adds confusion instead of reducing it.

threatofrain · yesterday at 7:57 PM

IMO the author is mixed up on his definitions for concurrency.

https://lamport.azurewebsites.net/pubs/time-clocks.pdf

kazinator · yesterday at 10:10 PM

Asynchrony, in this context, is an abstraction which separates the preparation and submission of a request from the collection of the result.

The abstraction makes it possible to submit multiple requests and only then begin to inquire about their results.

The abstraction allows for, but does not require, a concurrent implementation.

However, the intent behind the abstraction is that there be concurrency. The motivation is to obtain certain benefits which will not be realized without concurrency.

Some asynchronous abstractions cannot be implemented without some concurrency. Suppose the manner by which the requestor is informed about the completion of a request is not a blocking request on a completion queue, but a callback.

Now, yes, a callback can be issued in the context of the requesting thread, so everything is single-threaded. But if the requesting thread holds a non-recursive mutex, that ruse will reveal itself by causing a deadlock.

In other words, we can have an asynchronous request abstraction that positively will not work single-threaded:

1. caller locks a mutex
2. caller submits request
3. caller unlocks mutex
4. completion callback occurs

If step 2 generates a callback in the same thread, then step 3 is never reached.

The implementation must use some minimal concurrency so that it has a thread waiting for step 3 while allowing the requestor to reach that step.
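A minimal sketch of that scenario, using only Zig's std.Thread (the submitSync/submitThreaded names are invented for illustration): the completion callback needs the very mutex the caller holds, so a same-thread callback never gets past step 2, while a minimally concurrent implementation lets the caller reach step 3.

    const std = @import("std");

    var mutex: std.Thread.Mutex = .{};

    // The completion callback needs the same non-recursive mutex the caller holds.
    fn onComplete() void {
        mutex.lock();
        defer mutex.unlock();
        std.debug.print("completion observed\n", .{});
    }

    // "Single-threaded" implementation: the callback runs inside the submit call
    // itself, while the caller still holds the mutex, so step 3 is never reached.
    fn submitSync() void {
        onComplete();
    }

    // Minimally concurrent implementation: the callback waits on another thread,
    // which lets the caller proceed to the unlock.
    fn submitThreaded() !std.Thread {
        return std.Thread.spawn(.{}, onComplete, .{});
    }

    pub fn main() !void {
        mutex.lock();                   // 1. caller locks a mutex
        const t = try submitThreaded(); // 2. caller submits request
        mutex.unlock();                 // 3. caller unlocks mutex
        t.join();                       // 4. completion callback occurs
        // Swapping in submitSync() at step 2 (and dropping the join) never gets past step 2.
    }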

kobzol · yesterday at 8:53 PM

The new Zig I/O idea seems pretty ingenious, if you write mostly applications and don't need stackless coroutines. I suspect that writing libraries in this style will be quite error-prone, because library authors will not know whether the provided I/O is single- or multi-threaded, whether it uses evented I/O or not... Writing concurrent/async/parallel/whatever code is difficult enough on its own even when you have perfect knowledge of the I/O stack that you're using. Here the library author will be at the mercy of the IO implementation provided from the outside.

And since it looks like the IO interface will be a proper kitchen sink, essentially an implementation of a "small OS", it might be very hard to test all the potential interactions and combinations of behavior. I'm not sure if a few async primitives offered by the interface will be enough to deal with all the funny edge cases you can encounter in practice. To support a wide range of IO implementations, I think the code would have to be quite defensive and essentially assume the most parallel/concurrent version of IO will be used.

It will IMO also be quite difficult to combine stackless coroutines with this approach, especially if you'd want to avoid needless spawning of the coroutines, because the offered primitives don't seem to allow expressing explicit polling of the coroutines (and even if they did, most people probably wouldn't bother to write code like that, as it would essentially boil down to the code looking like "normal" async/await code, not like Go with implicit yield points). Combined with the dynamic dispatch, it seems like Zig is going a bit higher-level with its language design. Might be a good fit in the end.

It's quite courageous calling this approach "without any compromise" when it has not been tried in the wild yet - you can claim this maybe after 1-2 years of usage in a wider ecosystem. Time will tell :)

tossandthrow · yesterday at 9:15 PM

The author does not seem to have built any non-trivial projects involving asynchronicity.

All the pitfalls of concurrency are still there; in particular, when you execute non-idempotent functions multiple times before previous executions finish, you need mutexes!
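For instance, a minimal thread-based sketch (plain std.Thread, not the article's proposed io.async API): two executions of a non-idempotent update overlap, and only the mutex keeps the read-modify-write from interleaving.

    const std = @import("std");

    const Account = struct {
        mutex: std.Thread.Mutex = .{},
        balance: i64 = 0,

        // Non-idempotent: each call changes state, and overlapping calls must not
        // interleave the read and the write below.
        fn deposit(self: *Account, amount: i64) void {
            self.mutex.lock();
            defer self.mutex.unlock();
            const old = self.balance;
            self.balance = old + amount;
        }
    };

    pub fn main() !void {
        var account = Account{};
        // Start a second execution before the previous one finishes.
        const t1 = try std.Thread.spawn(.{}, Account.deposit, .{ &account, @as(i64, 50) });
        const t2 = try std.Thread.spawn(.{}, Account.deposit, .{ &account, @as(i64, 70) });
        t1.join();
        t2.join();
        std.debug.print("balance: {d}\n", .{account.balance}); // 120, thanks to the mutex
    }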

danaugrs · yesterday at 8:11 PM

Excellent article. I'm looking forward to Zig's upcoming async I/O.

Salgat · today at 4:16 AM

It's more accurate to say that callbacks and async/await can facilitate concurrency.

nemothekid · today at 1:02 AM

I think I'm missing something, but the most interesting piece here is how stackless coroutines would work in Zig.

Since any function can be turned into a coroutine, is the red/blue problem being moved into the compiler? If I call:

     io.async(saveFileA, .{io});
Is that a function call? Or is that some "struct" that gets allocated on the stack and passed into an event loop?

Furthermore, I guess if you are dealing with pure Zig, then it's fine, but if you use any FFI, you can potentially end up issuing a blocking syscall anyway.

butterisgood · yesterday at 8:14 PM

It's kind of true...

I can do a lot of things asynchronously. Like, I'm running the dishwasher AND the washing machine for laundry at the same time. I consider those things not occurring at "the same time" as they're independent of one another. If I stood and watched one finish before starting the other, they'd be a kind of synchronous situation.

But, I also "don't care". I think of things being organized concurrently by the fact that I've got an outermost orchestration of asynchronous tasks. There's a kind of governance of independent processes, and my outermost thread is what turns the asynchronous into the concurrent.

Put another way. I don't give a hoot what's going on with your appliances in your house. In a sense they're not synchronized with my schedule, so they're asynchronous, but not so much "concurrent".

So I think of "concurrency" as "organized asynchronous processes".

Does that make sense?

Ah, also neither asynchronous nor concurrent mean they're happening at the same time... That's parallelism, and not the same thing as either one.

Ok, now I'll read the article lol

skybrian · yesterday at 10:51 PM

The way I like to think about it is that libraries vary in which environments they support. Writing portable libraries that work in any environment is nice, but often unnecessary. Sometimes you don't care if your code works on Windows, or whether it works without green threads, or (in Rust) whether it works without the standard library.

So I think it's nice when type systems let you declare the environments a function supports. This would catch mistakes where you call a less-portable function in a portable library; you'd get a compile error, indicating that you need to detect that situation and call the function conditionally, with a fallback.
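A hypothetical Zig-flavored sketch of that idea (the Env type and every name below are invented, not any real API): the supported environment is part of the signature, so an unconditional call to a less-portable function fails to compile, and the portable function has to branch to a fallback instead.

    // All names invented for illustration.
    const Env = struct {
        has_green_threads: bool,
    };

    // Declares that it only supports environments with green threads.
    fn spawnGreenThread(comptime env: Env) void {
        if (!env.has_green_threads) @compileError("requires green threads");
        // ... spawn a green thread ...
    }

    // Portable: must detect the unsupported case and fall back.
    fn portableWork(comptime env: Env) void {
        if (env.has_green_threads) {
            spawnGreenThread(env); // only analyzed when the environment supports it
        } else {
            // blocking fallback
        }
    }

    pub fn main() void {
        portableWork(.{ .has_green_threads = false });
    }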

tekbog · yesterday at 11:38 PM

There's a great old book on this if someone wants to check it out: Communicating Sequential Processes, by Hoare. Go's channels and its approach to concurrency were inspired by it.

I also wrote a blog post a while back when I did a talk at work; it's Go-focused but still worth the read, I think.

[0] https://bognov.tech/communicating-sequential-processes-in-go...

Retr0id · yesterday at 8:08 PM

I don't get it - specifically, the "problem" with the client/server example (which seems pivotal to the explanation). But I am also unfamiliar with Zig; maybe that's a prerequisite. (I am, however, familiar with async, concurrency, and parallelism.)

raluk · yesterday at 8:46 PM

One thing that most languages are lacking is a way to express lazy return values, e.g. `await f1() + await f2()`; expressing this concurrently requires manual handling of futures.
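For comparison, here's roughly what that manual handling looks like with plain threads in Zig (a sketch only; none of this is the article's proposed API): to evaluate `f1() + f2()` concurrently, you spawn the work, park each result somewhere, join, and only then combine.

    const std = @import("std");

    fn f1() i64 {
        return 20;
    }

    fn f2() i64 {
        return 22;
    }

    // A hand-rolled "future": run f and store its result in a caller-provided slot.
    fn runInto(slot: *i64, f: *const fn () i64) void {
        slot.* = f();
    }

    pub fn main() !void {
        var r1: i64 = 0;
        var r2: i64 = 0;
        const t1 = try std.Thread.spawn(.{}, runInto, .{ &r1, &f1 });
        const t2 = try std.Thread.spawn(.{}, runInto, .{ &r2, &f2 });
        t1.join();
        t2.join();
        std.debug.print("{d}\n", .{r1 + r2}); // 42
    }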

riwsky · today at 4:07 AM

“Permission for concurrency is not an obligation for concurrency. Zig lets you explicitly permit-without-obligation, to support the design of libraries that are polymorphic over a/sync ‘function color’.”

sedatk · yesterday at 9:08 PM

Blocking async code is not async. In order for something to execute "out of order", you must have an escape mechanism from that task, and that mechanism essentially dictates a form of concurrency. Async must be concurrent, otherwise it stops being async. It becomes synchronous.

ioasuncvinvaer · yesterday at 8:01 PM

Is there anything new in this article?

butterisgood · yesterday at 8:29 PM

> Concurrency refers to the ability of a system to execute multiple tasks through simultaneous execution or time-sharing (context switching)

Wikipedia had the wrong idea about microkernels for about a decade too, so ... here we are I guess.

It's not a _wrong_ description but it's incomplete...

Consider something like non-strict evaluation, in a language like Haskell. One can be evaluating thunks from an _infinite_ computation, terminate early, and resume something else just due to the evaluation patterns.

That is something that could be simulated via generators with "yield" in other languages, and semantically would be pretty similar.

Also consider continuations in lisp-family languages... or exceptions for error handling.

To wrangle with it, you have to assume all of these things could occur simultaneously relative to each other, in what "feels like" interrupted control flow. From the outside looking in, concurrency is no different when it comes to sequencing things.

Is it evaluated in parallel? Who knows... that's a strategy that can be applied to concurrent computation, but it's not required. Nor is "context switching" unless you mean switched control flow.

The article is very good, but if we're going by the "dictionary definition" (something programming environments tend to get only "partially correct" anyway), then I think we're kind of missing the point.

The stuff we call "asynchronous" is usually a subset of asynchronous things in the real world. The stuff we treat as task switching is a single form of concurrency. But we seem to all agree on parallelism!

bentleya · yesterday at 11:57 PM

Non-blocking I/O isn't asynchrony, and the author should know better. Non-blocking I/O is a building block of asynchronous systems -- it is not asynchrony itself. Today's asynchronous programming did not exist when non-blocking I/O was implemented in Unix in the '80s.

andrewstuart · yesterday at 11:19 PM

That’s word games.

If I launch 2 network requests from my async JavaScript and both are in flight then that’s concurrent.

Definition from the Oxford Dictionary: adjective. 1. existing, happening, or done at the same time. "there are three concurrent art fairs around the city"

User23 · today at 1:36 AM

Asynchrony, parallelism, concurrency, and even deterministic execution (albeit as a degenerate case) are all just species of nondeterminism. Dijkstra and Scholten's work on the subject is sadly underappreciated. And lest one think this was ivory tower stuff: before he was a professor, Dijkstra was a systems engineer writing operating systems on hardware that was, by our standards, hilariously bad.

ang_cire · yesterday at 10:30 PM

> Asynchrony is not concurrency

This is what I tell my boss when I miss standups.

throwawaymaths · yesterday at 8:19 PM

A core problem is that the term "async" itself is all sorts of terrible: "synchronous" usually means "happening at the same time", which is not what is happening when you don't use `async`.

It's like the whole flammable/inflammable thing.

ltbarcly3 · yesterday at 8:06 PM

The concurrency != parallelism argument, which this article mentions as being "not useful", is indeed often quoted and rarely useful or informative; it also fails to model actual systems with enough fidelity to even be true in practice.

Example: Python allows concurrency but not parallelism. Well, not really, because there are lots of examples of parallelism in Python. NumPy both releases the GIL and internally uses OpenMP and other strategies to parallelize work. There are a thousand other examples, far too many nuances and examples to cover here, which is my point.

Example: Gambit/MIT Scheme allows parallelism via parallel execution. Well, kind of, but really it's more like Python's multiprocessing pools, where it forks and then marshals the results back.

Besides this, parallel execution is often just a way to manage concurrent calls. Using threads to do HTTP requests is a simple example: while the threads are able to execute in parallel (depending on a lot of details), they don't; they spend almost 100% of their time blocking on some socket.read() call. So is this parallelism or concurrency? It's what it is: threads mostly blocking on system calls. Parallelism vs. concurrency gives literally no insight or information here, because it's a pointless distinction in practice.

What about using async calls to execute processes? Is that concurrency or parallelism? It's using concurrency to allow parallel work to be done. Again, it's both but not really; you just need to talk about it directly and not try to simplify it via some broken dichotomy that isn't even a dichotomy.

You really have to get into more detail here: concurrency vs. parallelism is the wrong way to think about it, doesn't cover the things that are actually important in an implementation, and is generally quoted by people who are trying to avoid details or seem smart in some online debate rather than genuinely problem-solving.
