Hacker News

Ephemeral Infrastructure: Why Short-Lived Is a Good Thing

28 points by birdculture, last Sunday at 10:50 AM | 12 comments

Comments

cortesoft, today at 4:54 PM

I do appreciate the way Kubernetes forces you to plan for instance failure from the beginning, and that it creates standards on how to deal with it.

However, I feel like this article really glosses over the challenge of stateful workloads by simply handing over that responsibility to the cloud providers.

A lot of us have to run our own servers in our own datacenters for various reasons, so we have to solve that problem ourselves.

Luckily, the same principles apply to stateful workloads; it is just more challenging. You have to plan for instance failures while still preserving your data.

Even more luckily, the tools for this have gotten better and better. Various database controllers are getting much better at handling clustering and failover for you, so you can handle instances and nodes going down without losing data and without having to outsource the management to the cloud.

kennethwolters, today at 4:31 PM

For me it feels like everything is stateful by default, or by convenience. Building robust systems is in part about confining statefulness to as few parts of the system as possible; containing it buys you some time and capacity. Yet the toughest problems often arise in the stateful parts of the system, as well as in quasi-stateless parts that sometimes develop hidden statefulness (think of syncing web client and server state). So being good at handling stateful systems is valuable. Maybe one should even embrace statefulness. However, the AWS Solutions Architect will tell you otherwise.

xyzzy_plugh, today at 2:51 PM

I've written this about four times for two employers and two clients: ABC: Always Be Cycling

The basic premise is to encode behavior, whether via lifecycle rules or a cron job, such that instances are cycled after at most 7 days, and such that there is always an instance cycling (with some cool-down period, of course).

It has never not improved overall system stability, and in a few cases it has even decreased costs significantly.
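The core of the ABC idea can be sketched as a small selection function run on a schedule: pick at most one instance past the age cap and cycle it. This is a minimal illustration, not the commenter's actual implementation; the instance list and where it comes from (a cloud or inventory API) are assumptions.

```python
import datetime

# Cycle any instance older than 7 days, oldest first, one at a time.
MAX_AGE = datetime.timedelta(days=7)

def pick_instance_to_cycle(instances, now):
    """Return the name of the oldest instance past MAX_AGE, or None.

    `instances` is assumed to be a list of (name, launch_time) tuples;
    in a real setup these would come from your provider's API, and the
    caller would also enforce the cool-down between cycles.
    """
    expired = [(name, t) for name, t in instances if now - t >= MAX_AGE]
    if not expired:
        return None
    # Oldest first, so no instance lingers far past the cap.
    return min(expired, key=lambda pair: pair[1])[0]
```

Running this from a cron every few minutes, and refusing to act while a previous replacement is still in flight, gives the "always be cycling, but only one at a time" behavior described above.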

drob518, today at 3:24 PM

This seems to be rediscovering "pets vs. cattle."

N_Lens, today at 1:47 PM

I think most of us learned this from an early age - computer systems often degrade as they keep running and need to be reset from time to time.

I remember when I had my first desktop PC at home (Windows 95) and it would need a fresh install of Windows every so often as things went off the rails.

godber, today at 3:01 PM

Nice post. One more thing to keep in mind with your StatefulSets is how long the service running in the pod takes to come back up. Many will scan the on-disk state for integrity and perform recovery tasks. These can take a while and mean the overall service is in a degraded state.

Manage these things and any stateful distributed service can run easily in Kubernetes.
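One way to account for slow recovery in Kubernetes is a generous startup probe, so the pod is not killed or marked ready while it is still scanning its on-disk state. This is an illustrative fragment only; the container name, endpoint, and thresholds are made up, not taken from the comment above.

```yaml
# Fragment of a hypothetical StatefulSet pod spec.
containers:
  - name: db
    startupProbe:
      httpGet:
        path: /healthz   # assumed health endpoint
        port: 8080
      periodSeconds: 10
      failureThreshold: 180   # tolerate up to 30 minutes of recovery work
```

Until the startup probe succeeds, liveness and readiness checks are held off, which is exactly the window a recovering database needs.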

preisschild, today at 1:33 PM

Have been doing this in production for years now with Cluster-API + Talos.

When I update the Kubernetes or Talos version, new nodes are created, and after the existing pods are rescheduled onto the new nodes, the old nodes are deleted.

Works pretty well.