It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
The design goal of structured concurrency is to have a safe way of using all available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most, which is not a crime, but a shame really.
That's where it all starts. You want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means there's a design flaw and/or a misconception about structured concurrency, not "oh I forgot @MainActor".
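To make that concrete, here's a rough sketch of the kind of code this enables (everything in it is made up, not from the article): a task group fans CPU-bound work out across the available cores, and only the parent task ever touches the shared results array, so data integrity is preserved by construction.

    func checksums(for chunks: [[UInt8]]) async -> [Int] {
        await withTaskGroup(of: (Int, Int).self, returning: [Int].self) { group in
            for (index, chunk) in chunks.enumerated() {
                group.addTask {
                    // Child tasks run in parallel across the available cores.
                    (index, chunk.reduce(0) { $0 &+ Int($1) })
                }
            }
            var results = Array(repeating: 0, count: chunks.count)
            for await (index, sum) in group {
                // Only the parent task mutates `results`, so no data races.
                results[index] = sum
            }
            return results
        }
    }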
Swift 6.2 is quite decent at its job already, though I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.
I wish it were better known outside of the Apple ecosystem, because it fully deserves to be a loved, general-purpose mainstream language alongside Python and others.
How do you actually learn concurrency without fooling yourself?
Every time I think I “get” concurrency, a real bug proves otherwise.
What finally helped wasn’t more theory, but forcing myself to answer basic questions:
What can run at the same time here?
What must be ordered?
What happens if this suspends at the worst moment?
A rough framework I use now:
First understand the shape of execution (what overlaps)
Then define ownership (who's allowed to touch what; see the sketch after this list)
Only then worry about syntax or tools
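A hypothetical sketch of that ownership step (the `ImageCache` type below is made up): in Swift, an actor is the most direct way to write down who is allowed to touch what.

    import Foundation

    // Hypothetical example: ownership made explicit with an actor.
    // Only the actor itself may touch `storage`; everyone else has to go
    // through `await`, so "who's allowed to touch what" is decided up front.
    actor ImageCache {
        private var storage: [String: Data] = [:]

        func image(for key: String) -> Data? {
            storage[key]
        }

        func insert(_ data: Data, for key: String) {
            storage[key] = data
        }
    }

    // From the outside, every access is a potential suspension point:
    //     let cached = await cache.image(for: "avatar")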
Still feels fragile.
How do you know when your mental model is actually correct? Do you rely on tests, diagrams, or just scars over time?
> Instead of callbacks, you write code that looks sequential [but isn’t]
(bracketed statement added by me to make the implied explicit)
This sums up my (personal, I guess) beef with coroutines in general. I have dabbled with them since different experiments were tried in C many moons ago.
I find that programming can be hard. Computers are very pedantic about how they get things done. And it pays for me to be explicit and intentional about how computation happens. The illusion async/await coroutines create, that code simply continues procedurally, demos well for simple cases but often grows difficult to reason about (for me).
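A small made-up Swift illustration of the kind of thing that trips me up: the statements around an `await` read like one procedural step, but other work can interleave at the suspension point (Swift actors are reentrant), and that's hard to see from the sequential-looking source.

    actor Counter {
        private var count = 0

        // Reads like one atomic step, but it isn't.
        func incrementSlowly() async {
            let current = count
            // Suspension point: the actor is reentrant, so other calls can
            // run here and change `count` before we resume.
            try? await Task.sleep(nanoseconds: 1_000_000)
            count = current + 1 // Potentially a stale write (a lost update).
        }
    }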
I don't know a ton about Swift, but it does feel like for a lot of apps (especially outside of the gaming and video encoding world), you can almost treat CPU power as infinite and exclusively focus on reducing latency.
Obviously I'm not saying you should throw out big-O notation or stop benchmarking, but it does seem like eliminating an extra network call from your pipeline is likely to have a much higher ROI than nearly any amount of CPU optimization; people forget how unbelievably slow the network actually is compared to CPU cache and even system memory. I think the advent of these async-first frameworks and languages like Node.js and Vert.x and Tokio is sort of the industry's acknowledgement of this.
We all learn all these fun CPU optimization tricks in school, and it's all for naught because anything we do in CPU land is probably going to be undone by a lazy engineer making superfluous calls to postgres.
One of the things that really took me a long time to map in my head correctly is that, in theory, async/await should NOT be the same as spinning up a new thread (across most languages). It's just suspending that closure on the current thread and coming back around to it on the next loop of that existing thread. It makes certain data reads and writes safe in a way that multithreading doesn't. However, as noted in the article, it is possible to eject a task onto a different thread and then deal with data access across those boundaries. But that is an enhancement to the model, not the default.
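In Swift terms, a rough (made-up) sketch of that distinction: a plain `Task` inherits the surrounding actor context, while `Task.detached` is the explicit opt-in to running elsewhere on the cooperative pool.

    @MainActor
    final class ViewModel {
        var title = ""

        func refresh() {
            // A plain Task inherits the MainActor context: touching `title`
            // here needs no extra synchronization.
            Task {
                self.title = "Loading…"
            }

            // Task.detached inherits nothing: it runs on the cooperative pool,
            // so main-actor state can only be touched by hopping back to it.
            Task.detached {
                let computed = Self.expensiveWork()
                await MainActor.run { self.title = computed }
            }
        }

        nonisolated static func expensiveWork() -> String {
            "done" // Stand-in for CPU-heavy work kept off the main actor.
        }
    }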
This looks like it's well-written and approachable. I'll need to spend more time reviewing it, but, at first scan, it looks like it's nicely done.
Good stuff. Would appreciate a section on bridging pre-async (system) libraries.
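(For what it's worth, the pattern I usually reach for is a checked continuation wrapped around the old callback; rough sketch below, with `legacyFetch` made up to stand in for a callback-based system API.)

    import Foundation

    // Made-up callback-based function standing in for a pre-async system API.
    func legacyFetch(_ url: URL, completion: @escaping (Data?, Error?) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, error in
            completion(data, error)
        }.resume()
    }

    // The bridge: wrap the callback in a continuation so callers can await it.
    func fetch(_ url: URL) async throws -> Data {
        try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Data, Error>) in
            legacyFetch(url) { data, error in
                if let data {
                    continuation.resume(returning: data)
                } else {
                    continuation.resume(throwing: error ?? URLError(.unknown))
                }
            }
        }
    }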
If you want a long discussion/deep dive into Swift concurrency vs GCD and threads etc., I'd recommend this thread on swift.org; it was very illuminating for me personally (and really fun to read):
https://forums.swift.org/t/is-concurrent-now-the-standard-to...
I loved the idea of Swift adopting actors; however, the implementation seems shoehorned. I wanted something more like Akka or QP/C++...
But how would you do in Swift what `spawn_blocking` does in Rust's Tokio?
I really don't know why Apple decided to substitute terms like "actor" and "task" with their own custom semantics. Was the goal to make it so complicated that devs would run out of spoons if they try to learn other languages?
And after all this "fucking approachable swift concurrency", at the end of the day, one still ends up with a program that can deadlock (because of resources waiting for each other) or exhaust available threads and deadlock.
Also, the overload of keywords and language syntax around this feature is mind-blowing... and keywords change meaning depending on compiler flags, so you can never know what a code snippet really does unless it's part of a project. None of the safeties promised by Swift 6 are worth the burnout that would come with trying to keep all this crap in one's mind.
Does it do any refcounting optimizations?
Are there any reference counting optimizations like biased counting? One big problem with Python multithreading is that atomic RCs are expensive, so you often don't get as much performance from multiple threads as you expect.
But in Swift it's possible to avoid atomics in most cases, I think?
@dang I think it's important that "fucking" remains in the title
(We don't have a problem with profanity in general but in this case I think it's distracting so I've de-fuckinged the title above. It's still in the sitename for those who care.)
Concurrency issues aside, I've been working on a greenfield iOS project recently and I've really been enjoying much of Swift's syntax.
I’ve also been experimenting with Go on a separate project and keep running into the opposite feeling — a lot of relatively common code (fetching/decoding) seems to look so visually messy.
E.g., I find this Swift example from the article to be very clean:
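(The article's actual snippet isn't reproduced here; the sketch below, with a made-up `Article` type and endpoint, is roughly the shape of it.)

    import Foundation

    struct Article: Decodable {
        let id: Int
        let title: String
    }

    // Made-up endpoint; the success path is just three lines.
    func loadArticles() async throws -> [Article] {
        let url = URL(string: "https://example.com/api/articles")!
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONDecoder().decode([Article].self, from: data)
    }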
And in Go (roughly similar semantics) I understand why it's more verbose (a lot of things are more explicit by design), but it's still hard not to prefer the cleaner Swift example. The success path is just three straightforward lines in Swift, while the verbosity of Go effectively buries the key steps in the surrounding boilerplate. This isn't to pick on Go or say Swift is a better language in practice (and certainly not in the same domains), but I do wish there were a strongly typed, compiled language with the maturity/performance of e.g. Go/Rust and a syntax a bit closer to Swift (or at least closer to how Swift feels in simple demos, or the honeymoon phase).