You trust these stochastic text/slot machines for scheduling and follow-ups? Human intention is important for both of these. Triage and reminders I can see, but if you send me an LLM-generated follow-up, I'm just going to assume you don't care.
Yes. Other humans are generally accepting of mistakes below some frequency threshold, and frontier models are very robust in my experience.
> if you send me an LLM-generated follow-up, I'm just going to assume you don't care.
Ironically, you just replied to an automated message on a forum and didn't realise :) (hint: click on the user, go to their comment history, and you'll see the pattern)
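
If you'd rather not eyeball the history by hand, here's a minimal sketch of the same check done programmatically. It assumes the forum is Hacker News and uses the public Algolia HN search API; "some_username" is a placeholder, not the account in question, and the "repeated opener" heuristic is just one rough signal of templated output.

    import json
    import urllib.request
    from collections import Counter

    def recent_comments(username: str, n: int = 50) -> list[str]:
        # Fetch a user's most recent comments from the public Algolia HN API.
        url = (
            "https://hn.algolia.com/api/v1/search_by_date"
            f"?tags=comment,author_{username}&hitsPerPage={n}"
        )
        with urllib.request.urlopen(url) as resp:
            hits = json.load(resp)["hits"]
        return [h.get("comment_text") or "" for h in hits]

    def repeated_openers(comments: list[str], length: int = 6) -> Counter:
        # Count how often comments start with the same first few words;
        # heavily templated output tends to reuse the same openers.
        openers = [" ".join(c.split()[:length]).lower() for c in comments if c]
        return Counter(openers)

    if __name__ == "__main__":
        counts = repeated_openers(recent_comments("some_username"))
        for opener, count in counts.most_common(5):
            print(f"{count:3d}x  {opener}")

A human's comment history usually shows near-zero repetition at the top of that list; an automated account tends to stick out immediately.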