Hacker News

I Cannot SSH into My Server Anymore (and That's Fine)

61 points by TheWiggles, last Wednesday at 9:37 AM | 23 comments

Comments

crawshaw · yesterday at 11:33 PM

The idea that an "observability stack" is going to replace shell access on a server does not resonate with me at all. The metrics I monitor with prometheus and grafana are useful, vital even, but they are always fighting the last war. What I need are tools for when the unknown happens.

The tool that manages all my tools is the shell. It is where I attach a debugger, it is where I install iotop and use it for the first time. It is where I cat out mysterious /proc and /sys values to discover exotic things about cgroups I only learned about 5 minutes prior in obscure system documentation. Take it away and you are left with a server that is resilient against things you have seen before but lacks the tools to deal with the future.
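
To make that concrete: the kind of ad-hoc spelunking described above is just a couple of one-liners at a shell prompt. A rough sketch, assuming a cgroup-v2 host and a placeholder service name:

```sh
# Which cgroup does this process live in? (my-app is a placeholder name)
cat /proc/$(pgrep -f my-app | head -1)/cgroup

# PSI pressure stats for a slice (cgroup v2 only)
cat /sys/fs/cgroup/system.slice/memory.pressure

# Installing and running iotop for the first time (Fedora-family host assumed)
sudo dnf install -y iotop && sudo iotop -o
```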

stryan · yesterday at 10:59 PM

Quadlets are a real game changer for this type of small-to-medium-scale declarative hosting. I've been pushing for them at work over ugly `docker compose in systemd units` service management, and I've moved my home lab over to using them for everything. The latter is a similar setup to OP's, except with openSUSE MicroOS instead of Fedora CoreOS, and I'm not so brave as to destroy and rebuild my VPSes whenever I make a change :) . On the other hand, MicroOS (and I assume FCOS) reboots automatically to apply updates, with rollback if needed, so combined with `podman auto-update` you can basically just spin up a box, drop the files on, and let it take care of itself (at least until a container update requires manual intervention).
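
For anyone who hasn't seen one: a Quadlet is just a systemd-style unit file that the generator turns into a container service. A minimal sketch (file name, image, and port are placeholders; `AutoUpdate=registry` is the label that lets `podman auto-update` pull new images):

```ini
# ~/.config/containers/systemd/web.container  (placeholder path/name)
[Unit]
Description=Example web container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, this shows up as a regular `web.service`.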

A few things I noticed in the article that I think might help the author:

1. Podman 4 and newer (which FCOS should definitely have) uses netavark for networking. A lot of older tutorials and articles were written back when Podman used CNI for its networking and didn't have DNS enabled unless you specifically installed it. I think the default `podman` network still has DNS disabled by default. Either way, you don't have to use a pod if you don't want to anymore; you can just attach both containers to the same user-defined network and it should Just Work (see the sketch after this list).

2. You can run the generator manually with `/usr/lib/systemd/system-generators/podman-system-generator --dryrun` to check Quadlet validity and output. It should be faster than daemon-reload'ing all the time or scanning the logs.
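
To illustrate point 1, a rough sketch of the no-pod approach (network name, container names, and images are placeholders): containers attached to the same user-defined network can resolve each other by container name through netavark's DNS.

```sh
podman network create appnet
podman run -d --name db  --network appnet -e POSTGRES_PASSWORD=example docker.io/library/postgres:16
podman run -d --name web --network appnet -p 8080:80 docker.io/library/nginx:latest
# inside "web", the database container is now reachable simply as "db"
```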

And as a bit of self-promotion: for anyone who wants to use Quadlets like this but doesn't want to rebuild their server whenever they make a change, I've created a tool called Materia [0] that can install, remove, template, and update Quadlets and other files from a Git repository.

[0] https://github.com/stryan/materia

gucci-on-fleek · yesterday at 11:46 PM

Fedora IoT [0] is a nice intermediate solution. Despite its name, it's really good for servers, since it's essentially just the Fedora Atomic Desktops (Silverblue/Kinoite) without any of the desktop stuff. It gets you atomic updates, a container-centric workflow, and easy rollbacks; but it's otherwise a regular server, so you can install RPMs, ssh into it, create user accounts, and similar. This is what I do for my personal server, and I'm really happy with it.

[0]: https://fedoraproject.org/iot/
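
In practice, the main day-to-day difference from a regular Fedora server is that package management goes through rpm-ostree rather than dnf; a quick sketch (the package is just an example):

```sh
sudo rpm-ostree install htop   # layer an RPM onto the image (takes effect on next boot)
sudo rpm-ostree status         # show current and pending deployments
sudo rpm-ostree rollback       # boot back into the previous deployment
```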

starttoaster · today at 12:27 AM

So it's AWS Fargate with a different name? That's cool for cloud-hosted stuff. But if you're on-prem, or manage your own VPSes, then you need SSH access.

dorfsmay · today at 12:18 AM

Perfect timing for me. I've spent my side-project time over the last few weeks building the smallest possible VMs with different glibc distros exactly for this, running podman containers, and comparing results.

amluto · yesterday at 11:02 PM

> I’ve later learned that restarting a container that is part of a pod will have the (to me, unexpected) side-effect to restart all the other containers of that pod.

Anyone know why this is? Or, for that matter, why Kubernetes seems to work like this too?

I have an application for which the natural solution would be to create a pod and then, as needed, create and destroy containers within the pod. (Why? Because I have some network resources that don’t really virtualize, so they can live in one network namespace. No bridges.)

But despite containerd and Podman and Kubernetes kind-of-sort-of supporting this, they don’t seem to actually want to work this way. Why not?
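
For context, the setup in question looks roughly like this (names and images are placeholders). The pod's infra container is what holds the shared network namespace, and the other containers join it; presumably that coupling is part of why the tooling treats the pod as the unit of lifecycle:

```sh
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod --name web    docker.io/library/nginx:latest
podman run -d --pod mypod --name worker docker.io/library/alpine:latest sleep 3600
podman pod inspect mypod   # note the infra container that owns the shared namespaces
```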

lawrencegripper · yesterday at 10:59 PM

I’ve been on a similar journey with Fedora CoreOS and have loved it.

The predictability and the drop in toil are so nice.

https://blog.gripdev.xyz/2024/03/16/in-search-of-a-zero-toil...

yigalirani · today at 12:29 AM

real programmers can ssh to their servers

andrewmcwatters · yesterday at 11:17 PM

I concede that this is the state of the art in secure deployments, but I’m from a different age where people remoted into colocated hardware, or at least managed their VPSes without destroying them on every update.

As a result, I think developers are forgetting filesystem cleanliness, because if you end up destroying an entire instance, well, it’s clean, isn’t it?

It also results in people not knowing how to do basic sysadmin work, because everything becomes devops.

The bigger problem I have with this is that the logical conclusion is to use “distroless” operating system images with vmlinuz, an init, and the minimal set of binaries and filesystem structure you need for your specific deployment, and I rarely see anyone actually doing that.

Instead, people are using a hodgepodge of containers with significant management overhead that actually just sit on top of Ubuntu or something. Maybe Alpine. Or whatever Amazon distribution is used on EC2 now. Or, of course, like in this article, Fedora CoreOS.

One day, I will work with people who have a network issue and don’t know how to look up ports in use. Maybe that’s already the case, and I don’t know it.
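
(For the record, that particular lookup is a one-liner on most Linux boxes:)

```sh
sudo ss -tlnp                      # listening TCP sockets, with the owning process
sudo lsof -iTCP -sTCP:LISTEN -nP   # alternative, if lsof is installed
```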
