> Curious how others tackle such problems.
What do you think about the order-preserving simplicity of Java?
List<Input> inputs = ...;
List<Output> results = inputs.parallelStream()
.map(this::processTask)
    .collect(Collectors.toList());
If you want more control or have more complex use cases, you can use an ExecutorService of your choice, handle the futures yourself, or get creative with Java's new structured concurrency.

I haven't used Java for about a decade, so I'm not very familiar with the Streams API.
Your snippet looks good and concise.
One thing I haven't emphasized enough in the article is that all the algorithms there are designed to work with potentially infinite streams.
Often in Go I'll create some data structure, like a map, to hold the new values keyed by the original index (basically a for loop with goroutines inside that close over the index value), then reorder them after waiting for all of them to complete.
Is this basically what Java is doing?
I think the techniques in this article are a little more complex because they allow you to optimize further: you can continue working as soon as possible instead of waiting for everything to complete and reordering after the fact. But I'd be curious to know if I've missed something.
Their planned semantics don't allow for that - there's no backpressure in that system, so it might race ahead and process up to e.g. item 100 while still working on item 1.
If everything fits in memory, that's completely fine. And then yeah, this is wildly overcomplicated, just use a waitgroup and a slice and write each result into its slice index and wait for everything to finish - that matches your Java example.
But when it doesn't fit in memory, that means you have unbounded buffer growth that might OOM.