It is hard to say which of the two is the reason; more likely both, i.e. lower complexity enabled more exhaustive testing.
In any case, some of the software from before 2000 was definitely better than today's: it behaved as if absolutely foolproof, in that nothing you could do would cause a crash, corrupted data, or any other kind of unpredictable behavior.
However, the computers to which most people had access at that time had only single-core CPUs. Even if you used a preemptive multitasking operating system and a heavily multi-threaded application, executing it on a single-core CPU was unlikely to expose the subtle race-condition bugs that a multi-core CPU might have exposed.
Nowadays, unlike before 2003, there exists no standard operating system that I fully trust never to fail under any circumstance, and I wonder whether this is explained by the better quality of the older programs or by the fact that it is much harder to implement software concurrency correctly on systems with hardware parallelism.