For something like this, I would instinctively reach for an external queue mechanism instead of trying to work through the complexity of Go's concurrency.
Create a bunch of sequentially numbered jobs that write their output into a Postgres database, then have N workers process the jobs. Something like GCP's Cloud Tasks is perfect for this because the "workers" are just GCP Cloud Functions, so you can have a near-infinite number of them (limited by concurrent DB connections).
This approach also buys you durability of the queue for free (i.e., what happens when you need to stop your Go process mid-queue?).
Then it is just a query:
select * from finished_jobs order by job_num;
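A minimal sketch of what that could look like in Postgres. The table name `finished_jobs` comes from the query above, but the schema and the `SKIP LOCKED` claim query are my assumptions, not something Cloud Tasks gives you out of the box:

```sql
-- Hypothetical schema: job_num preserves the original input order.
CREATE TABLE finished_jobs (
    job_num bigint PRIMARY KEY,
    status  text   NOT NULL DEFAULT 'pending',  -- 'pending' | 'done'
    result  jsonb
);

-- Each worker claims one unprocessed job. SKIP LOCKED lets N workers
-- pull from the same table concurrently without blocking each other.
UPDATE finished_jobs
SET    status = 'done', result = '{"output": "..."}'::jsonb
WHERE  job_num = (
    SELECT job_num FROM finished_jobs
    WHERE  status = 'pending'
    ORDER  BY job_num
    FOR UPDATE SKIP LOCKED
    LIMIT  1
)
RETURNING job_num;
```

The `FOR UPDATE SKIP LOCKED` subquery is the standard Postgres job-queue pattern: a worker that finds every pending row locked simply gets no row back and retries, instead of waiting on another worker's transaction.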
I've just made a small but important clarification to the article. While in many cases it's easier, and even preferable, to calculate all results, accumulate them somewhere, and then sort, this article focuses on memory-bound algorithms that support infinite streams and backpressure.