I'm sorry, but there's a lot of spin here. Basically you guys handled this terribly, and your reliability has tanked recently, which is why customers that need production reliability are leaving or have already migrated.
> We went deep on them, tested them prior, and then when rubber met road in production we ran into cases we didn't see in testing. The large issue, as mentioned in the blog post, is that we didn't have a mechanism to do a staged release.
Honestly for a production-grade _platform_ company, that also does compliance (SOC2/3, HIPAA etc.), not having a staged release is negligent, and how you guys are handling this is a huge red flag. I've done such changes myself in production envs, for deployments that don't have the stakes you guys have. I'm normally more sympathetic on incidents, but the lack of transparency thus far from railway leaves me doubting more than anything.
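To be concrete about what "staged release" means here: it doesn't require heavy machinery. Even a deterministic percentage gate like the sketch below (all names hypothetical, nothing to do with Railway's internals) lets you ship a risky change to 1% of customers, watch error rates, then widen to 10%, 50%, 100%:

```python
import hashlib

def in_rollout(customer_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a customer into the first `percent`%
    of a staged rollout, using a stable hash of (feature, customer).
    Hypothetical sketch, not any platform's actual implementation."""
    digest = hashlib.sha256(f"{feature}:{customer_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100  # stable value in 0..99
    return bucket < percent

# Ramp schedule: 1% -> 10% -> 50% -> 100%, pausing at each step
# to compare error rates between gated and ungated traffic.
enabled = in_rollout("cust_42", "new-edge-cache", 1)
```

Because the bucket is a stable hash rather than a random draw, a given customer stays in or out of the cohort across requests, so you can actually attribute regressions to the change.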
> Our initial post definitely could have been more clear, and we revised it the moment we got customer feedback to do so.
Please read the room: there's still a lot of confusion about the blog post in this thread (https://news.ycombinator.com/item?id=47582295). The technical detail isn't there; we only learned about the surrogate keys from the status incident (https://status.railway.com/incident/X0Q39H56), which is not linked in the post. The blog post reads like PR compared to the initial incident status report, and the resolved timestamp does not match, which is sloppy. Your little edit to the title only took it from a bad post to a slightly less bad post.
> We notified customers even before we did a wide release, as is our process for anything security related. You create space for as much disclosure area as possible, and then follow up with a public disclosure.
Emailing only affected users isn't working out, because affected people still haven't been emailed (I know one personally). Just check the post on your own forum (https://station.railway.com/questions/data-getting-cached-or... did you actually read it?) and see the list of affected people still not emailed, and left on read. You guys should email everyone; this is a security incident, not a service interruption. There's a lot of lost trust from your customers now, i.e., if you can't figure out who to email, what else are you doing wrong?
> Do you have any specifics here? We're scaling the system at 100x YoY growth right now, working 24/7 to scale the entire thing. Again, all ears on if you have specific crits as we're always open to receiving feedback on how we can do things better!
https://x.com/JustJake/status/2038806338915152350
Again, it's not an excuse if you're a _platform_ company that customers pay a lot of money to be reliable. You can't just keep saying you're open to feedback and being transparent as vanity. There's plenty of feedback here, on your Twitter, and on your forum, and the feedback is that people are telling you to focus on reliability, because Railway keeps breaking their deployments. If you don't care about reliability and prefer to scale with features, be honest about it. Railway's poor uptime does not lie.
> There are team members in that thread linked, are you certain you linked the right thread? Happy to have a look at anything you believe we're missing!
Did you read the thread? Yes, only _one_ employee commented, 5 hours after my HN comment. Almost everyone is still left on read, with unanswered questions, etc.
By the way, that's only one forum post; there are many that are just ignored, including one where a user mentioned they're reporting Railway to the ICO for a GDPR breach, rightfully so.
> Honestly for a production-grade _platform_ company, that also does compliance (SOC2/3, HIPAA etc.), not having a staged release is negligent, and how you guys are handling this is a huge red flag. I've done such changes myself in production envs, for deployments that don't have the stakes you guys have. I'm normally more sympathetic on incidents, but the lack of transparency thus far from railway leaves me doubting more than anything.
We do indeed have a staging environment, as mentioned previously. The issue arose during the rollout to production.
> The blog post reads like PR compared to the initial incident status report, and the resolved timestamp does not match, which is sloppy.
I've gone ahead and added the surrogate key mention to the post mortem. We initially got in trouble for it being too centered on technical detail and not enough on the user impact. It's a delicate balance; apologies. As I mentioned, we are open to critical feedback here.
> Emailing only affected users isn't working out, because affected people still haven't been emailed (I know one personally). Just check the post on your own forum (https://station.railway.com/questions/data-getting-cached-or... did you actually read it?) and see the list of affected people still not emailed, and left on read.
We have people working directly in that thread, and we're working directly with anybody who believes they were affected but wasn't contacted. We do take this very seriously. If you know someone affected, please have them reach out either there or directly to me at [email protected]
> Again, it's not an excuse if you're a _platform_ company that customers pay a lot of money to be reliable. You can't just keep saying you're open to feedback and being transparent as vanity.
In the directly linked tweet I mentioned that we're focusing on scaling the current system vs. adding new features. We absolutely do need to do better on reliability, and my point is: is there a specific poor engineering practice you're seeing here, or is it just based on reliability? Either is a fine crit; we just want to make sure all our bases are covered.
> Did you read the thread? Yes, only _one_ employee commented, 5 hours after my HN comment. Almost everyone is still left on read, with unanswered questions, etc.
Indeed I've read the thread, and we have people working on it (you can see activity as of 8 hours ago).