I certainly agree in spirit that alerts are important and should be actionable. But I wouldn't start by just "looking at the service" and then trying to define the first set of alerts.
Instead I would move up a level and start with an SLO for the various "business-level" metrics you might care about: things like "request latency", "successful requests", etc.
Then use the longer-lookahead "error budget" burndowns to see where your error budget is being spent, and from there decide 1) whether the SLO needs adjusting, and/or 2) whether an alert is appropriate.
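To make the burndown arithmetic concrete, here's a minimal sketch in Python. All the numbers are hypothetical (a 99.9% success-rate SLO over a 30-day window), just to show what "where the budget is being spent" means in practice:

```python
# Illustrative numbers only: a 99.9% success-rate SLO, 30-day window.
slo_target = 0.999
window_requests = 10_000_000   # total requests seen in the window
failed_requests = 4_200        # failed requests so far in the window

# The error budget is the number of requests you're *allowed* to fail
# while still meeting the SLO.
error_budget = (1 - slo_target) * window_requests   # 10,000 requests

# Fraction of the budget already spent; > 1.0 means the SLO is blown.
budget_spent = failed_requests / error_budget
print(f"Error budget spent: {budget_spent:.0%}")    # -> 42%
```

If that 42% is concentrated in one dependency or one endpoint, that's your signal for where an alert (or an SLO adjustment) is warranted.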
To cleanly answer those questions and iterate you'll need metrics, dashboards, traces, and logs. So then you're not just making dashboards because "it's best practice", you're creating them specifically to help you measure whether you're meeting your stated service objectives.
SLO windows are usually 7d, 30d, etc., no? They also often don't work that great for backend services in my experience: they can't give you the level of reactivity that defining alerts on the things you care about gives you. I'd argue for starting from those alerts and working upwards to figure out what to aggregate and define SLOs around, rather than the other way around, in those cases.
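One common way to get reactivity out of a long-window SLO is multi-window burn-rate alerting (the approach described in Google's SRE Workbook): page when the error rate over both a short and a long window is burning budget many times faster than the sustainable rate. A rough sketch, with illustrative thresholds and function names:

```python
# Multi-window burn-rate sketch (names and thresholds are illustrative).
# "Burn rate" = how many times faster than sustainable the error budget
# is being consumed; a rate of 1.0 exactly exhausts it over the window.

def burn_rate(error_ratio: float, slo_target: float) -> float:
    """Observed error ratio relative to the SLO's allowed error ratio."""
    return error_ratio / (1 - slo_target)

def should_page(err_1h: float, err_6h: float,
                slo_target: float = 0.999,
                threshold: float = 14.4) -> bool:
    # A 14.4x burn over 1h spends ~2% of a 30-day budget in that hour
    # (14.4 * 1/720 = 0.02); also requiring the 6h window to be hot
    # filters out short blips that would otherwise page someone.
    return (burn_rate(err_1h, slo_target) > threshold
            and burn_rate(err_6h, slo_target) > threshold)

print(should_page(err_1h=0.02, err_6h=0.016))  # -> True: page someone
```

That gives you alert-level reactivity while still being derived from, and budgeted against, the 30-day objective.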