I'd be a bit skeptical of the result given the benchmark program:
import java.util.ArrayList;
import java.util.List;

class Day06 {
    public static void main(String args[]) {
        List<String> fileTypeList = new ArrayList<>();
        for (int i = 0; i < 1000000; i++) {
            fileTypeList.add("fileType");
        }

        long beforeForLoop = System.currentTimeMillis();
        for (int i = 0; i < fileTypeList.size(); i++) {
            fileTypeList.get(i);
        }
        long afterForLoop = System.currentTimeMillis();
        System.out.println("Time took in millis for for " + (afterForLoop - beforeForLoop));

        long beforeForeachLoop = System.currentTimeMillis();
        for (String s : fileTypeList) {
        }
        long afterForeachLoop = System.currentTimeMillis();
        System.out.println("Time took in millis for foreach " + (afterForeachLoop - beforeForeachLoop));
    }
}
Empty loop bodies and no warmup (at a minimum!) make for a suboptimal benchmark, to say the least. To be honest I'm surprised the JIT didn't eliminate the loops altogether. If you want proper results you probably want to use the Java Microbenchmark Harness [0], and you'd want some actual data/work in the loop bodies as well so the JIT doesn't overspecialize on the benchmark.
Edit: Halfheartedly tried to adapt the LinkedIn benchmark to JMH. Still not a great benchmark, and I'm rusty, so I wouldn't be surprised if I messed something up, but it's hopefully better than the original:
package org.sample;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MyBenchmark {

    // Same data as the original: a million identical strings.
    List<String> fileTypeList = new ArrayList<>() {{
        for (int i = 0; i < 1000000; i++) {
            this.add("fileType");
        }
    }};

    // Empty method, to get a feel for the per-invocation overhead.
    @Benchmark
    public void baseline() {
    }

    @Benchmark
    public int measureFor() {
        int result = 0;
        for (int i = 0; i < fileTypeList.size(); i++) {
            result += (int) fileTypeList.get(i).charAt(0);
        }
        // Returning the accumulated value keeps the JIT from discarding the loop.
        return result;
    }

    @Benchmark
    public int measureForEach() {
        int result = 0;
        for (String s : fileTypeList) {
            result += (int) s.charAt(0);
        }
        return result;
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(MyBenchmark.class.getSimpleName())
                .build();
        new Runner(opt).run();
    }
}
And the result summary (run on Ubuntu via WSL2):
# JMH version: 1.37
# VM version: JDK 21.0.10, OpenJDK 64-Bit Server VM, 21.0.10+7-Ubuntu-124.04
# VM invoker: /usr/lib/jvm/java-21-openjdk-amd64/bin/java
# VM options: <none>
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Average time, time/op
Result "org.sample.MyBenchmark.baseline":
0.254 ±(99.9%) 0.022 ns/op [Average]
(min, avg, max) = (0.247, 0.254, 0.262), stdev = 0.006
CI (99.9%): [0.232, 0.276] (assumes normal distribution)
Result "org.sample.MyBenchmark.measureFor":
693178.390 ±(99.9%) 60793.583 ns/op [Average]
(min, avg, max) = (676266.480, 693178.390, 718114.554), stdev = 15787.901
CI (99.9%): [632384.806, 753971.973] (assumes normal distribution)
Result "org.sample.MyBenchmark.measureForEach":
693756.240 ±(99.9%) 7549.769 ns/op [Average]
(min, avg, max) = (691685.231, 693756.240, 696573.323), stdev = 1960.651
CI (99.9%): [686206.470, 701306.009] (assumes normal distribution)
Doesn't look like a significant difference to me, though obviously the benchmark quality leaves something to be desired.
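Incidentally, the other standard way to keep the JIT from treating the loop bodies as dead code is to sink each element into JMH's Blackhole instead of accumulating and returning a value. A rough, untested sketch of what that variant could look like (class and method names are just illustrative, and this is not what produced the numbers above):

package org.sample;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class BlackholeVariant {

    List<String> fileTypeList = new ArrayList<>();

    // Populate the list once per trial instead of via double-brace init.
    @Setup
    public void setup() {
        for (int i = 0; i < 1000000; i++) {
            fileTypeList.add("fileType");
        }
    }

    // JMH injects the Blackhole; consuming each element prevents
    // the optimizer from eliminating the loop body.
    @Benchmark
    public void measureForEachWithBlackhole(Blackhole bh) {
        for (String s : fileTypeList) {
            bh.consume(s);
        }
    }
}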
[0]: https://github.com/openjdk/jmh