For toy apps and initial prototypes the problem is that they aren't going to get heavy use, so the real question is whether they'll still be in a good-enough state to come back to when you find the time. With Supabase, the drop-in auth completely broke at some point, probably just a deprecation I didn't keep up with.
The Postgres instance spins down when you aren't using it, which is understandable, and I'll grant that it works: it's just Postgres, and you can get a database dump if you need to move or come back a year later.
The nuance is that you get both the raw connection string and the PostgREST API, which all makes sense, but choosing full cloud/client mode is a completely different commitment from putting that raw connection string behind a server layer. I had to work through all of that learning on my own. The trade-off of full client mode is that you'll be doing everything with that pattern: migrations, security, auth. It's a whole thing. Public PostgREST with row level security is a different paradigm.
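To make the paradigm shift concrete, here's a minimal sketch of where authorization lives when the PostgREST API is public: in the database itself, as row level security policies. The `notes` table and `user_id` column are made-up examples; `auth.uid()` is Supabase's helper that returns the authenticated user's id from the request's JWT.

```sql
-- With a public PostgREST API there is no server layer to enforce
-- authorization, so it moves into the database as RLS policies.
alter table notes enable row level security;

-- Each user can only select their own rows.
create policy "read own notes" on notes
  for select
  using (auth.uid() = user_id);
```

Every table you expose needs this kind of policy thinking, which is the "whole thing" above: your auth model, your migrations, and your API surface all live in SQL.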
As a professional dev, I would have just chosen the raw connection string and managed the database from the server until I outgrew it; I'd already have the dev workflow, since it's just a Postgres db. Or SQLite to start, for the same reason: it's all the same dev workflow. The hard part is the cloud-hosting transition, which is why a fully managed cloud db accessible from an edge/client runtime is so alluring, but you're trading two very different ergonomics.
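The same-workflow point can be sketched with Python's DB-API: the query code is identical whether the driver is stdlib `sqlite3` or a Postgres driver like psycopg, so starting on SQLite costs you almost nothing later. The `notes` table here is a made-up example.

```python
# Minimal sketch of "it's all the same dev workflow": only the
# connect() call changes when you move from SQLite to Postgres
# (e.g. swap sqlite3.connect for psycopg.connect and a DSN).
import sqlite3

conn = sqlite3.connect(":memory:")  # file path or DSN in real use
conn.execute("create table notes (id integer primary key, body text)")
conn.execute("insert into notes (body) values (?)", ("hello",))
rows = conn.execute("select body from notes").fetchall()
print(rows)  # [('hello',)]
```

The ergonomics being traded away in full client mode are exactly this: one process, one connection, ordinary SQL, no per-table policies.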
For me it's not about scaling; quite the opposite. For the scaling side there are two much more involved posts on the topic: https://news.ycombinator.com/item?id=36004925, https://news.ycombinator.com/item?id=48038827
I'm thought-dumping, gotta run, hope this helps.