The concurrency != parallelism distinction that this article calls "not useful" is often quoted but rarely useful or informative, and it also fails to model actual systems with enough fidelity to even be true in practice.
Example: python allows concurrency but not parallelism. Well, not really, because there are lots of examples of parallelism in python. NumPy both releases the GIL and internally uses OpenMP and other strategies to parallelize work. There are a thousand other examples, with far too many nuances to cover here, which is my point.
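A minimal sketch of the NumPy case (assumes NumPy is installed): because NumPy releases the GIL inside large array operations, matrix multiplies launched from plain Python threads can genuinely run on multiple cores at once.

```python
import threading
import numpy as np

# NumPy releases the GIL inside matmul, so these two threads can
# run on separate cores despite CPython's GIL.
rng = np.random.default_rng(0)
a = rng.random((400, 400))
results = [None, None]

def work(i):
    results[i] = a @ a  # GIL released inside the underlying BLAS call

threads = [threading.Thread(target=work, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both threads computed the same product.
assert np.allclose(results[0], results[1])
```

Whether this actually uses two cores depends on the BLAS build and the array sizes, which is exactly the kind of nuance the slogan hides.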
Example: gambit/mit-scheme allows parallelism via parallel execution. Well, kind of, but really it's more like python's multiprocessing library's pooling, where it forks worker processes and then marshals the results back.
Besides this, parallel execution is often just a way to manage concurrent calls. Using threads to do HTTP requests is a simple example: while the threads are able to execute in parallel (depending on a lot of details), they mostly don't; they spend almost 100% of their time blocking on some socket.read() call. So is this parallelism or concurrency? It's what it is: threads mostly blocking on system calls. Parallelism vs. concurrency gives literally no insight here, because it's a pointless distinction in practice.
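A sketch of that situation, with time.sleep standing in for the blocking socket.read():

```python
import threading
import time

NUM_REQUESTS = 10
results = {}

def fake_request(i):
    # Stands in for socket.read(): the thread blocks in a system call,
    # releasing the GIL, so the other threads run while it waits.
    time.sleep(0.2)
    results[i] = i

start = time.monotonic()
threads = [threading.Thread(target=fake_request, args=(i,))
           for i in range(NUM_REQUESTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Ten "requests" overlap their blocked time: ~0.2s total, not ~2s.
print(f"{elapsed:.2f}s for {NUM_REQUESTS} requests")
```

Nothing here computes in parallel; the speedup comes entirely from overlapping waits, which is the point: "threads" doesn't tell you which effect you're getting.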
What about using async calls to execute processes? Is that concurrency or parallelism? It's using concurrency to allow parallel work to be done. Again, it's both, but not really, and you just need to talk about it directly rather than trying to simplify it via some broken dichotomy that isn't even a dichotomy.
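A sketch of that hybrid using asyncio to run OS processes: the control flow is concurrent (one event loop, one thread) while the work itself happens in parallel child processes.

```python
import asyncio
import sys

async def run(cmd_args):
    # One coroutine per child process; the event loop juggles them
    # concurrently while the processes run in parallel.
    proc = await asyncio.create_subprocess_exec(
        *cmd_args, stdout=asyncio.subprocess.PIPE)
    out, _ = await proc.communicate()
    return out.decode().strip()

async def main():
    # Illustrative workload: three interpreter processes at once.
    cmds = [[sys.executable, "-c", f"print({n} * {n})"] for n in (2, 3, 4)]
    return await asyncio.gather(*(run(c) for c in cmds))

outputs = asyncio.run(main())
print(outputs)  # ['4', '9', '16']
```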
You really have to get into the details here. Concurrency vs. parallelism is the wrong way to think about it: it doesn't cover the things that actually matter in an implementation, and it's generally quoted by people trying to avoid details or seem smart in an online debate rather than genuinely solving problems.
The difference is quite useful and informative. In fact, most places don't seem to state it strongly enough: Concurrency is a programming model. Parallelism is an execution model.
Concurrency is writing code with the appearance of multiple linear threads that can be interleaved. Notably, it's about writing code. Any concurrent system could be written as a state machine tracking everything at once. But that's really hard, so we define models that allow single-purpose chunks of linear code to interleave and then allow the language, libraries, and operating system to handle the details. Yes, even the operating system. How do you think multitasking worked before multi-core CPUs? The kernel had a fancy state machine tracking execution of multiple threads that were allowed to interleave. (It still does, really. Adding multiple cores only made it more complicated.)
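A toy illustration of that interleaving using Python's asyncio (single thread, no parallelism at all): each coroutine reads as linear code, and each await is a point where the scheduler may switch to another one.

```python
import asyncio

order = []

async def worker(name):
    # Straight-line code from this coroutine's point of view; the
    # awaits mark where the event loop may interleave another worker.
    for step in range(3):
        order.append(f"{name}{step}")
        await asyncio.sleep(0)  # yield control back to the loop

async def main():
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())
print(order)  # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
```

The event loop is exactly the "fancy state machine" described above, just living in a library instead of the kernel.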
Parallelism is running code on multiple execution units. That is execution. It doesn't matter how it was written; it matters how it executes. If what you're doing can make use of multiple execution units, it can be parallel.
Code can be concurrent without being parallel (see async/await in javascript). Code can be parallel without being concurrent (see data-parallel array programming). Code can be both, and often is intended to be. That's because they're describing entirely different things. There's no rule stating code must be one or the other.
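The parallel-without-concurrent case, sketched with NumPy (assumes NumPy is installed): there is no concurrent structure in this code at all, just one linear statement, yet the runtime is free to execute it across SIMD lanes or multiple cores.

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# One logical operation over a million elements. Nothing here is
# "concurrent" as written, but the vectorized machinery underneath
# may run it on multiple execution units in parallel.
c = a + b
print(c[:3])  # [0. 2. 4.]
```

How it was written (one statement) and how it executes (possibly many units) are independent questions, which is the whole point of keeping the two terms separate.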