I think most of us learned this at an early age: computer systems often degrade the longer they run and need to be reset from time to time.
I remember when I had my first desktop PC at home (Windows 95) and it would need a fresh install of Windows every so often as things went off the rails.
This really only applies to Windows, and I think you're referring to desktops specifically.
Ten years ago, I think the rule of thumb was an uptime of no more than six months, but for different reasons. (Windows Server...)
On Solaris, Linux, the BSDs, etc., rebooting is only necessary for maintenance. Literally. I think my longest-uptime production system was a SPARC Postgres box under sustained high load, with an uptime of around six years.
With cloud infra, people have forgotten just how stable the Unixen are.
This has got to be a failure specific to early Windows versions. I've had systems online for 5+ years without needing a restart, updating and restarting the software running on them without service interruption. RAID storage makes hotswapping failing drives easy, and drives are the most common part needing periodic replacement.
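For anyone curious what that hotswap looks like in practice, here's a rough sketch using Linux software RAID (mdadm). The array and device names (/dev/md0, /dev/sdb1, /dev/sdc1) are just placeholders for whatever your setup uses:

    # Mark the failing disk faulty and pull it from the array
    # (/dev/md0 and the partitions below are hypothetical names)
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1

    # Physically swap the drive, partition it to match, then add it back;
    # the array rebuilds in the background while the service stays up
    mdadm --manage /dev/md0 --add /dev/sdc1

    # Watch the rebuild progress
    cat /proc/mdstat

Hardware RAID controllers make it even more hands-off, but the idea is the same: the array keeps serving I/O while the replacement resyncs, so the machine never has to go down.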