If you run nginx anyway, why not serve static files from nginx? No need for temporary files, no extra disk space.
Authorization can probably be handled in nginx as well.
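One common pattern for this is X-Accel-Redirect: the app checks authorization, then hands the request back to nginx, which streams the file straight from disk. A minimal sketch (paths and upstream address are placeholders):

```nginx
# Only reachable via an X-Accel-Redirect response from the app,
# never directly from clients.
location /protected/ {
    internal;
    alias /srv/files/;   # hypothetical storage path
}

# The app verifies the user, then replies with an
# "X-Accel-Redirect: /protected/<file>" header and an empty body.
location /download/ {
    proxy_pass http://127.0.0.1:8000;   # hypothetical app backend
}
```

The app never proxies the file contents itself, so there are no temporary copies and nginx can use sendfile for the actual transfer.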
> I rushed to run du -sh on everything I could, as that’s as good as I could manage.
I recently came across gdu(1) and have installed/used it on every machine since then.
Putting limits on the directories that applications write to (with separate partitions or project quotas) is a proactive way to keep a misbehaving process from filling the whole disk. Hitting that partition or quota limit may still cause problems, depending on the applications writing there, but the impact is likely smaller and easier to fix than running out of space for everything.
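On XFS this can be done with project quotas, roughly like the following sketch (requires the filesystem mounted with `pquota`, and the project name, path, and limit are all placeholders):

```shell
# Register the directory as project id 42, named "logs"
echo "42:/var/log/myapp" >> /etc/projects
echo "logs:42"           >> /etc/projid

# Initialize the project and cap it at 10 GiB
xfs_quota -x -c 'project -s logs' /var
xfs_quota -x -c 'limit -p bhard=10g logs' /var

# Inspect usage per project
xfs_quota -x -c 'report -p' /var
```

When the app hits the 10 GiB cap it gets ENOSPC for that directory only, while the rest of the filesystem keeps working.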
I appreciate the last line
> Note: this was written fully by me, human.
> Plausible Analytics, with a 8.5GB (clickhouse) database
And this is why I tried Plausible once and never looked back.
To get basic but effective analytics, use GoAccess and point it at the Caddy or nginx logs. It's written in C and thus barely uses any memory. With a few hundred visits per day, the logs are currently about 10 MB per day, and Caddy will automatically truncate them if they go above 100 MB.
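A minimal invocation might look like this (paths are placeholders; recent GoAccess versions ship a predefined CADDY format for Caddy's JSON access logs, and requires goaccess to be installed):

```shell
# One-shot HTML report from Caddy's JSON access log
goaccess /var/log/caddy/access.log \
  --log-format=CADDY \
  -o /var/www/report.html
```

For nginx's default combined log format, `--log-format=COMBINED` works the same way, and adding `--real-time-html` serves a live-updating report instead of a static one.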
I remember a story of an Oracle Database customer whose production was broken for days, until an Oracle support escalation identified the problem as a mere "no disk space left".
A neat trick I was told is to always keep ballast files on your systems: just a few GiB of zeros that you can delete in cases like this. It won't fix the problem, but it buys you time and frees up space for things like lock files so you can get the system working again.
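Creating one is a one-liner; a sketch with a placeholder path and a deliberately small size (scale `count` up to a few GiB for real use):

```shell
# Pre-create a ballast file of zeros to reserve space.
# /tmp/ballast and 100 MiB are placeholders for illustration.
dd if=/dev/zero of=/tmp/ballast bs=1M count=100
ls -lh /tmp/ballast

# In an emergency:
#   rm /tmp/ballast    # instantly frees the reserved space
```

One caveat: on filesystems with transparent compression (btrfs, ZFS), a file of zeros compresses to almost nothing and reserves no real space, so use `fallocate -l` or `/dev/urandom` as the source there instead.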