There are two things that cause this. First, Windows has a variable-size swap file, whereas Linux swap is fixed-size, so Windows can grow into free disk space instead of running out of swap. Second, the out-of-memory killer in Linux isn't very aggressive by default; the kernel prefers to over-commit memory rather than kill processes.
As far as I know, Linux still doesn't support a variable-sized swap file, but it is possible to change how aggressively it over-commits memory or kills processes to free memory.
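For reference, the over-commit behaviour is controlled by a handful of sysctls. A minimal sketch of a drop-in config (the values are illustrative, not recommendations):

```
# /etc/sysctl.d/99-overcommit.conf -- illustrative values only
# 0 = heuristic overcommit (default), 1 = always overcommit, 2 = strict accounting
vm.overcommit_memory = 2
# With strict accounting, commit limit = swap + overcommit_ratio% of RAM
vm.overcommit_ratio = 80
# How eagerly the kernel swaps pages out (default 60)
vm.swappiness = 60
```

Apply with `sysctl --system` or set individual knobs at runtime via `sysctl -w`.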
As to why these differences exist, the reasons are more historical than technical. My best guess is that Windows figured it out sooner because it has always existed in an environment where multiple programs are memory hogs, whereas that wasn't common on Linux until the proliferation of web-based everything, requiring hundreds of megabytes to gigabytes of memory for each process running in a Chrome tab or Electron instance, even if it's something as simple as a news article or chat client.
Check out this series of blog posts for more information on Linux memory management: https://dev.to/fritshooglandyugabyte/series/16577
Windows will also prioritise keeping the desktop and currently focused application running smoothly. The Linux kernel has no idea what's focused or what not to kill, so your desktop shell is right up there on the menu in OOM situations.
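You can nudge the OOM killer's priorities yourself, though. A hypothetical systemd drop-in that makes the display manager a last-resort target (`OOMScoreAdjust=` is a real directive; the unit name and value here are assumptions):

```ini
# Hypothetical drop-in: /etc/systemd/system/gdm.service.d/oom.conf
[Service]
# Range: -1000 (never kill) .. 1000 (kill first)
OOMScoreAdjust=-500
```

The same knob exists per-process as /proc/PID/oom_score_adj, so the desktop environment could in principle protect itself; most just don't.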
Windows "figured it out sooner" because it never really had to seriously deal with overcommitting memory: there is no fork(), so the memory usage figures of the processes are accurate. On Linux, however, the non-negotiable existence of fork() really leaves one with no truly good solution (and this has been debated for decades): fork() duplicates the whole address space copy-on-write, so strict accounting would have to charge double even though a child that immediately exec()s touches almost none of it.
> As far as I know, Linux still doesn't support a variable-sized swap file...
You can add (and remove) additional swap files at runtime, i.e. on demand. I'm just unaware of any mechanism doing that automagically, though.
It could probably be done with eBPF and some shell scripts, I guess?
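For the manual version, something like this (a sketch; needs root, and the size and path are illustrative assumptions):

```shell
# Create and enable a 2 GiB swap file at runtime
sudo fallocate -l 2G /swapfile2   # fallocate doesn't work on every filesystem; use dd there
sudo chmod 600 /swapfile2         # swap files must not be world-readable
sudo mkswap /swapfile2            # write the swap signature
sudo swapon /swapfile2            # start using it immediately
swapon --show                     # verify it's active

# ...and take it away again on demand:
sudo swapoff /swapfile2 && sudo rm /swapfile2
```

Add an /etc/fstab entry if you want it to survive a reboot; otherwise it's gone on shutdown, which is exactly what you want for on-demand swap.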
Huh? What does swap area size have to do with responsiveness under load? Linux has a long history of being unusable under memory pressure. systemd-oomd helps a little bit (killing processes before direct reclaim makes everything seize up), but there's still no general solution. The relevance to history is that Windows got this basically right early on and Linux never did.
Nothing to do with overcommit, either. Why would that make a difference? We're talking about interactivity under load; how we got to the loaded state doesn't matter.
There are daemons (not installed by default) that monitor memory usage and can grow swap or kill processes accordingly (you can ofc also configure the OOM killer).
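For example, earlyoom and the Debian swapspace daemon cover the killing and swap-growing cases respectively, and systemd-oomd can be pointed at a cgroup slice via resource-control directives. A minimal sketch (the directives are real; the slice and threshold are assumptions):

```ini
# Hypothetical drop-in: /etc/systemd/system/user.slice.d/oomd.conf
[Slice]
# Let systemd-oomd kill cgroups in this slice under sustained memory pressure
ManagedOOMMemoryPressure=kill
# Illustrative threshold: act when pressure stays above 50%
ManagedOOMMemoryPressureLimit=50%
```

The pressure-based approach is the interesting part: it reacts to PSI stall time rather than waiting for the kernel's last-ditch OOM killer.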