My solution to this is to have leveled alerts. Some are... recommendations, the ones which you look at with a glance to get a heads up about something being wrong. These are the ones which OP would claim cause alert fatigue, most likely.
Then I have a second level of this, the superpanic. Here is the "true" alert, which means "drop all things, fix this now". On every superpanic, there are stricter routines which intentionally cause friction, such as creating tickets about said superpanic, potentially hosting post-mortems, etc. This additional manual labour encourages tweaking the levels of the superpanic so that they are sometimes more lax, sometimes stricter, depending on the quality of the deployed services + the current load.
What signals a superpanic? Key valuable functionality being offline. Off-site uptime-checkers assuring that all primary domains resolve + serve traffic, mostly. Also crontime integration tests of core functionality. Stuff like that.
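For concreteness, a minimal sketch of what such a cron-run check might look like; the domains and the `page_oncall()` escalation hook are placeholders for whatever your stack actually uses:

```python
# Minimal sketch of a cron-run superpanic check. The domains and the
# page_oncall() hook are placeholders, not a real escalation integration.
import urllib.request

PRIMARY_DOMAINS = ["https://example.com", "https://api.example.com"]

def domain_is_up(url: str, timeout: float = 5.0) -> bool:
    """True if the domain resolves and serves a non-error response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

def page_oncall(severity: str, detail: str) -> None:
    """Stand-in for the real escalation hook (pager, ticket, post-mortem)."""
    print(f"[{severity}] {detail}")

def main() -> None:
    down = [d for d in PRIMARY_DOMAINS if not domain_is_up(d)]
    if down:
        # Key functionality is offline: this is the drop-everything level.
        page_oncall("superpanic", f"primary domains down: {down}")

if __name__ == "__main__":
    main()
```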
For prior art on how to define alert conditions, see:
https://en.wikipedia.org/wiki/Nelson_rules
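For a flavour of what those look like in practice, here is a rough sketch of two of the rules (rule 1: a point more than 3 standard deviations from the mean; rule 2: nine or more consecutive points on the same side of the mean). In a real system the mean and standard deviation would come from a stable baseline window rather than the series being tested:

```python
# Rough sketch of Nelson rules 1 and 2 over a metric series. In practice the
# mean/stddev should come from a baseline window, not the window under test.
from statistics import mean, stdev

def nelson_rule_1(samples: list[float]) -> list[int]:
    """Indices of points more than 3 standard deviations from the mean."""
    m, s = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples) if abs(x - m) > 3 * s]

def nelson_rule_2(samples: list[float], run: int = 9) -> bool:
    """True if `run` or more consecutive points sit on the same side of the mean."""
    m = mean(samples)
    streak, last_side = 0, 0
    for x in samples:
        side = 1 if x > m else -1 if x < m else 0
        streak = streak + 1 if side != 0 and side == last_side else 1
        last_side = side
        if side != 0 and streak >= run:
            return True
    return False
```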
> Alerts should be actionable. If no action can or should be taken, then the alert is not needed.
Also, the best alerts come from looking at actual failures you had and not trying to make up "good alerts" from thin air. After you have an outage, figure out what alerts would have caught it, and implement those.
In my opinion, the best way to reduce alerts is to work hard to get rid of the underlying problems or turn them into non-problems. If you do a good job, most remaining errors are third-party driven, which can indeed be hard to solve relative to company politics. But at that point you can always tell your boss how it could be solved and that you won't go on pager duty for stuff that is out of your control.
Depends on what you are monitoring, but let's assume an API endpoint. Collect and monitor the RED metrics (rate, errors, duration) with detailed dimensions, in combination with blackbox monitoring simulating client transactions as realistically as possible, and alert only on those two types.
When that happens, fire off a battery of diagnostic checks which you have collected over time to pinpoint the cause.
What if the diagnostic checks don't reveal the issue? There is still value: you now know these are not the cause, so no time is wasted re-evaluating them. Where do these diagnostic checks come from? Well, what's the first thing responding engineers do? Open the CLI and troubleshoot. Those are your diagnostic checks. Collect them, automate them, capture the domain-specific knowledge and democratize it.
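A sketch of what such a battery could look like; the two checks here (DNS resolution, free disk) are just illustrative stand-ins for whatever your responders actually type into the CLI:

```python
# Sketch of an automated diagnostic battery fired when a RED/blackbox alert
# triggers; both checks are placeholders for real CLI troubleshooting steps.
import shutil
import socket

def check_dns(host: str = "api.example.com") -> tuple[str, bool]:
    try:
        socket.getaddrinfo(host, 443)
        return ("dns resolves", True)
    except OSError:
        return ("dns resolves", False)

def check_disk(path: str = "/", min_free_gb: float = 5.0) -> tuple[str, bool]:
    free_gb = shutil.disk_usage(path).free / 1e9
    return (f"disk free > {min_free_gb} GB", free_gb > min_free_gb)

DIAGNOSTICS = [check_dns, check_disk]

def run_battery() -> None:
    """Run every check; even an all-green result narrows the search space."""
    for check in DIAGNOSTICS:
        name, ok = check()
        print(f"{'OK  ' if ok else 'FAIL'} {name}")

if __name__ == "__main__":
    run_battery()
```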
I certainly agree in spirit that the alerts are important, and should be actionable. But I wouldn't start at just "looking at the service" and then trying to define the first set of alerts.
Instead I would move up a level and start with an SLO for the various "business level" metrics you might care about. Things like "request latency", "successful requests", etc.
Then use the longer-horizon "error budget" burndowns to see where your error budget is being spent, and from there decide 1) if the SLO needs adjusting, and/or 2) if an alert is appropriate.
To cleanly answer those questions and iterate, you'll need metrics, dashboards, traces, and logs. So then you're not just making dashboards because "it's best practice", you're creating them specifically to help you measure whether you're meeting your stated service objectives.
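For the arithmetic side, a back-of-the-envelope sketch of error-budget accounting for an availability SLO; the target and request counts below are invented for illustration:

```python
# Back-of-the-envelope error-budget accounting for an availability SLO.
# The SLO target and request counts are made-up example numbers.
def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> None:
    budget_ratio = 1.0 - slo_target                    # e.g. 0.001 for a 99.9% SLO
    allowed_failures = budget_ratio * total_requests   # failures the budget permits
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    print(f"error budget consumed this window: {consumed:.1%}")
    # Escalate when the budget burns much faster than the window elapses;
    # if it never burns at all, that is a hint the SLO (or the alert) needs adjusting.

error_budget_report(slo_target=0.999, total_requests=10_000_000, failed_requests=4_200)
```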
Not all alerts are created equal. You should generally have three levels of alerts: critical (pages somebody; time-to-fix should be ASAP), warning (creates a ticket; time-to-fix should be within a few days), and suspicious (does not notify, appears only on an alert dashboard). The suspicious alerts are there to help guide your investigation when a critical or warning alert fires.
Each critical and warning alert should link to an "interactive runbook" - a dashboard that combines text instructions along with graphs showing real-time data.
Doing this at scale, correctly, requires both alerts-as-code and dashboards-as-code, which almost nobody does because nobody treats higher-level configuration languages (jsonnet, CUE...) with the attention and respect they deserve /cries-in-yaml
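To make the idea concrete, a toy alerts-as-code sketch, written in Python rather than jsonnet/CUE and using an invented rule format, service names, thresholds, and runbook URL scheme; the point is only that the severity tiers and runbook links get generated instead of hand-edited:

```python
# Toy alerts-as-code sketch (the real thing would likely be jsonnet or CUE).
# Rule format, service names, thresholds and runbook URLs are all invented.
import json

def error_rate_alert(service: str, severity: str, threshold: float) -> dict:
    return {
        "name": f"{service}-error-rate-{severity}",
        "expr": f'error_rate{{service="{service}"}} > {threshold}',
        "labels": {"severity": severity},
        # Every critical/warning alert links back to its interactive runbook.
        "annotations": {"runbook": f"https://dashboards.example.com/runbooks/{service}"},
    }

rules = [
    error_rate_alert(svc, sev, thr)
    for svc in ("checkout", "search")
    for sev, thr in (("critical", 0.05), ("warning", 0.01))
]
print(json.dumps(rules, indent=2))
```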
I used to believe in alert fatigue, because you're frequently told the line: if you have too many alerts, eventually everyone stops paying attention to them.
I have tons of alerts at work. They go to specialized Slack channels that I can look at if I need to. We have on-call escalation paths for critical ones and housekeeping duties for the ones that require engineers to perform a maintenance task. We have the hell channels that are 99.99% flapping, if you ever need that.
I find that observability in general has an extremely linear marginal reward curve, it basically always justifies the effort you put into setting it up.
> The real core of infrastructure monitoring isn’t dashboards. It’s the alerts.
“it’s not X it’s Y”
At this point, when I see this pattern in writing I assume most if not all of it is AI-generated - same with em-dashes.
This is not to discount the idea that alerts are more important than dashboards (I work directly in observability) - but just to say that I personally shut off reading anything else with these patterns because, generally speaking, the rest of the content is just not original or interesting.
I like the ideas, but either it’s entirely LLM written or the writer has internalized “LLM voice”. At this point that is more distracting than helpful.
I work writing analytics and monitoring for industrial equipment. We have hundreds of sensors sending back realtime data.
There was a period of time when people were writing alerts for the sake of it (i.e., we have this sensor, so when should we alert on it?).
Nowadays we're strictly failure-mode driven, which has meant lots of sensors aren't used in the analytics. They are, however, still available for the experts to plot for a more holistic view if required.
I have some thoughts here.
I work for a startup; we have what I think is a fairly typical setup: metrics ingested from a variety of sources, fed into industry-standard metrics/dashboard solutions, triggering escalations to humans. It's fine and I'm happy we have it, but...
The highest value source of alerting right now is one of our growth marketers who pays close attention to our CRM and product analytics tool and notices when key product funnels are underperforming.
Our next highest value signals are a handful of ad hoc alerting channels, mostly in Slack, either directly from a partner telling us that something suspicious happened on their side (think: fraud) or from in-product instrumentation sent to a channel for non-engineering visibility. Members of our business/product/operations team pay attention in these places and make decisions based on their business context.
After that, our support team is increasingly able to filter customer issues and differentiate between bugs, missing features, etc.
I know someone is going to argue that these are all a sign that we haven't instrumented the right things. Fair, but that also misses the point. The decision makers in these flows don't (and won't) live in traditional alerting systems, and without these other, ad hoc processes they wouldn't have been able to help us understand breakages.
My theory is that it's relatively easy to offer a technical product that moves alerts around or that manages escalation paths. It's quite hard to design a product that surfaces detail to a non-technical expert and that makes it easy to build systematic rules.
Good metrics and alerting systems are designed from the top down, not bottom up.
Lots of metrics are typically available, but almost all of them are noise.
Start with the business: what is important to the business? What kinds of failures are existential threats?
Then work your way down and design your metrics and alerts, instead of just throwing stuff at the wall.
I’ve had to push back so many times with teams whose manager at one point said “we need better monitoring / alerting” and they interpreted that to mean more metrics / alerts.
This is rarely the case.
I personally am really fond of just using a few alerts. The important thing is to know that something went wrong. Not necessarily where / why / how something went wrong.
And yes, inertia is real, and false / low-value alerts need to be killed immediately, without remorse. They are an SRE's cancer.