Hacker News

EGreg · yesterday at 6:53 PM · 2 replies

well, that's partially true

the caller is itself a task / actor

the thing is that the caller might want to roll back what they're doing based on whether the subtask was rolled back... and so on, backtracking as far as needed

ideally, all the side effects should be queued up and executed only at the end, after your caller has successfully heard back from all the subtasks

for example... don't commit DB transactions, send out emails or post transactions onto a blockchain until you know everything went through. Exceptions mean rollback, a lot of the time.
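A minimal Node.js sketch of that queue-everything-then-commit idea (the `SideEffectQueue` name and the example effects are made up for illustration): side effects are deferred as closures and only run once every subtask has reported success, so an exception anywhere before the flush means nothing externally visible ever happened.

```javascript
// Hypothetical sketch: queue side effects, run them only after all subtasks succeed.
class SideEffectQueue {
  constructor() { this.effects = []; }
  defer(fn) { this.effects.push(fn); }   // queue a closure; don't run it yet
  async commit() {                       // run everything, in order, at the very end
    for (const fn of this.effects) await fn();
    this.effects = [];
  }
  rollback() { this.effects = []; }      // nothing ran yet, so "rollback" is just discard
}

async function handleRequest() {
  const queue = new SideEffectQueue();
  try {
    // ... do the actual work, deferring the externally visible effects:
    queue.defer(() => console.log('send confirmation email'));
    queue.defer(() => console.log('post transaction on-chain'));
    await queue.commit();  // only now do the side effects actually happen
  } catch (err) {
    queue.rollback();      // an exception anywhere above means: do nothing external
    throw err;             // let the caller backtrack further if it wants
  }
}
```

The point of the shape is that "rollback" for a queued effect is free: discarding a closure that never ran is much easier than un-sending an email.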

on the other hand, "after" hooks are supposed to happen after a task completes fully, and their failure shouldn't make the task roll anything back. For really frequent events, you might want to debounce. (Related browser trivia: "scroll" events were never cancelable, and listeners for events like "touchmove" and "wheel" are now passive by default, so they can't preventDefault() unless you register them with {passive: false}!)
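Debouncing such a hook is only a few lines; this is a generic sketch, not tied to any framework (`updateStickyHeader` in the usage comment is a made-up handler name):

```javascript
// Generic debounce: collapse a burst of calls into one call
// after `delay` ms of quiet.
function debounce(fn, delay) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);                         // cancel the pending call, if any
    timer = setTimeout(() => fn(...args), delay); // reschedule with the latest args
  };
}

// Browser usage (sketch): a passive scroll listener that only does real work
// once scrolling has paused for 100 ms.
// window.addEventListener('scroll', debounce(updateStickyHeader, 100), { passive: true });
```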

PS: To keep things simple, consider using single-threaded applications. I especially like PHP, because it's not only single-threaded but genuinely shared-nothing: as soon as your request handling ends, the memory is released, so unlike with Node.js you don't have to worry about leaking memory or secrets between requests. But whether you use PHP or Node.js, you are essentially running on a single thread, and that means you can write code that does its tasks sequentially, one after the other. If you need to fan out and do a few things at a time, you can do it in Node.js with Promise.all(), while in PHP you queue up a bunch of closures and then explicitly batch-execute them, e.g. with the curl_multi_* functions. Either way, you'll need to explicitly write your commit logic at the end, e.g. in a PHP shutdown handler (register_shutdown_function), and your database can help you isolate your transactions with COMMIT or ROLLBACK.
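In Node.js terms, that fan-out-then-commit shape might look like the sketch below. `fetchUser`/`fetchOrders` are stand-ins for whatever subtasks you actually run, and `commit`/`rollback` are placeholders for your real transaction logic:

```javascript
// Stand-ins for real subtasks (DB reads, HTTP calls, ...).
const fetchUser = async () => ({ id: 1, name: 'alice' });
const fetchOrders = async () => [{ id: 7, total: 42 }];

async function handleRequest(commit, rollback) {
  try {
    // Fan out: both subtasks run concurrently on the single thread.
    const [user, orders] = await Promise.all([fetchUser(), fetchOrders()]);
    // Promise.all rejects if ANY subtask rejects, so reaching this line
    // means everything went through -- this is the moment to COMMIT.
    await commit({ user, orders });
  } catch (err) {
    await rollback(err); // a failure anywhere above: ROLLBACK instead
  }
}
```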

If you organize your entire code base around dispatching events instead of calling functions, as I did, then you can easily refactor it to do things like microservices at scale by using signed HTTPS requests as a transport (so you can isolate secrets, credentials, etc.) from the web server: https://github.com/Qbix/Platform/commit/a4885f1b94cab5d83aeb...


Replies

jpc0 · yesterday at 7:13 PM

I liked where you started.

Any async operation, whether it uses coroutines, event-based actors, or anything else, should be modelled as a network call.

You need a handle that will contain information about the async call and will own the work it performs. You can have an API that explicitly says “I don’t care what happens to this thing just that it happens” and will crash on failure. Or you can handle its errors if there are any and importantly decide how to handle those errors.

Oh, and failing to allocate/create that handle should be a breach of invariants and immediately crash.

That way you have all the control and flexibility, and async error handling becomes trivial; at that point you can use whatever async pattern you want to manage your async operations.

And you also know you have fundamentally done something expensive in latency for the benefit of performance or access to information, because if it were cheap you would have just done it on the thread you're already using.
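One way to sketch this handle idea in Node.js (`AsyncHandle`, `detach`, and the error-policy methods are all hypothetical names, not from any library): the handle owns the operation, and construction fails loudly if there is nothing to own.

```javascript
// Hypothetical AsyncHandle: owns an async operation and forces the caller
// to pick an error policy up front.
class AsyncHandle {
  constructor(promise) {
    if (!promise || typeof promise.then !== 'function') {
      // Failing to create the handle breaches an invariant: crash immediately.
      throw new Error('AsyncHandle: not given an awaitable operation');
    }
    this.promise = promise;
  }
  // "I don't care what happens to this thing, just that it happens":
  // any failure escapes the promise chain and crashes the process.
  detach() {
    this.promise.catch((err) => {
      setImmediate(() => { throw err; }); // becomes an uncaught exception
    });
  }
  // Explicitly handle errors: the caller decides what failure means.
  async await(onError) {
    try { return await this.promise; }
    catch (err) { return onError(err); }
  }
}
```

Usage is then a deliberate choice at every call site: `handle.detach()` for fire-and-crash, or `await handle.await(err => ...)` to decide how to recover.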

renox · yesterday at 8:31 PM

> for example... don't commit DB transactions, send out emails or post transactions onto a blockchain until you know everything went through. Exceptions mean rollback, a lot of the time.

But what if you need to send emails AND record it in a DB?
