It depends on the service and how critical that website is.
Sometimes it's completely acceptable that a server will run for 10 years with, say, a week or a month of downtime spread over those 10 years, yes. That's the sort of uptime you can see with single servers that are rarely changed and over-provisioned, as many on Hetzner are. Some examples:
Small businesses where the website is not core to operations and is more of a shop-front or brochure for their business.
Hobby websites don't really matter either if they go down for short periods of time occasionally.
Many forums and blogs just aren't very important either, and downtime is no big deal.
There are a lot of these websites. They sit at the lower end of the market for obvious reasons, but they probably make up the majority of websites: the long tail of low-traffic sites.
Not everything has to be highly available, and if you do want that, these providers usually offer load balancers etc. too. I think people here sometimes forget that there is a huge range in hosting, from Squarespace to cheap shared hosting to more expensive self-provisioned clouds like AWS.
A week of downtime every decade I think still works out to a higher uptime than I've been getting from parts of GitHub lately. So I'd consider that a win.
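For scale, those downtime figures translate into availability percentages most SLAs would happily quote. A quick back-of-envelope (the week and month figures come from the comment above; nothing else is assumed):

```python
# Back-of-envelope availability for "X downtime spread over a decade".
DECADE_DAYS = 365.25 * 10

week_uptime = 1 - 7 / DECADE_DAYS    # one week down per decade
month_uptime = 1 - 30 / DECADE_DAYS  # one month down per decade

print(f"1 week/decade  -> {week_uptime:.3%} availability")   # ~99.808%
print(f"1 month/decade -> {month_uptime:.3%} availability")  # ~99.179%
```

So even the pessimistic "one month per decade" single-server scenario is still "two nines and change".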
Respectfully, this type of "high availability" strawman is a dated take.
This is a general response to it.
I have run hosting on bare metal for millions of users a day and tens of thousands of concurrent connections. It can scale way up by doing the same thing you do in a cloud: provision more resources.
For "downtime" you do the same thing with metal, as you do with digital ocean, just get a second server and have them failover.
You can run hypervisors to split and manage a metal server just like Digital Ocean does. Except you're not vulnerable to the shared-memory and CPU exploits that come with shared hosting like Digital Ocean: when Intel CPU, memory, or kernel flaws come out, as they have, one VM user can read the memory and data of processes belonging to all the other users.
Digital Ocean and other IaaS/PaaS providers are still running similar Linux technologies to do the failover. There are tools that even handle it automatically, like Proxmox. This level of production-grade failover was point-and-click simple 10 years ago; it's just that no one's kept up with it.
The cloud is convenient. Convenience can make anyone comfortable. Comfort always costs way more.
It's relatively trivial to put the same web app on a metal server, with a hypervisor/IaaS/PaaS, behind the same Cloudflare to get "scale".
Digital Ocean and Cloud providers run on metal servers just like Hetzner.
The software to manage it all is becoming more and more trivial.
I feel like 95% of the web falls into this category. Like, have you ever said "That's it, I am never gonna visit this page again!" because of temporary downtime? Unless you are Amazon and every minute costs you bazillions, you are likely gonna get the better deal not worrying about availability and scalability. That 250€/m root server is a behemoth, complete overkill for most anything. As a bonus, you won't go down with half the internet when someone at AWS or Cloudflare touches DNS.
What struck me, though, is that OP did so much work to migrate the server with zero downtime. The _single_ big server. Something's off here.