It's POSIX convention to write anything that's not strict program output to stderr. I have seen `2>&1` far too often in scripts. I don't worry about it and happily write error messages to stderr whenever my scripts exit with a nonzero status code.
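A minimal sketch of that convention (the function name, path, and message are made up for illustration):

```shell
#!/bin/sh
# Hypothetical helper: diagnostics go to stderr, the payload goes to
# stdout, and a nonzero return status signals failure.
process_file() {
    if [ ! -r "$1" ]; then
        printf 'error: cannot read %s\n' "$1" >&2  # diagnostic on stderr
        return 1                                   # nonzero = failure
    fi
    cat "$1"                                       # real output on stdout
}
```

Callers can then pipe `process_file foo | next_step` and get only the real output downstream, while `2>>some.log` captures the diagnostics separately.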
Aren't you supposed to return a 0 status code when "yea done!" and some other status code when it wasn't done?
But the whole idea is that a warning is a warning. Solving a warning can be deferred, and a warning doesn't cause execution to fail. Your warning was transmuting itself into an error. I feel like "All means are fair except solving the problem" is the wrong conclusion to draw here. If it should have been solved immediately, it should have been an error in the first place. (And then you should have politely bumped the version so that you don't immediately break the code of all your dependents.) If there is no need to solve it immediately, then "all means are fair" to convert it back to a warning as was originally intended.
> Instead, they were quick to point out that it’s hard to know where these warnings could come from, and we cannot risk all those critical workflows failing when some case of misuse surfaces in a new context.
Ah yes, Schrödinger's workflow. So important any disruption is a disaster, and simultaneously so unimportant they couldn't possibly spend a single dime on the tools critical to the workflow.
We even have options in our compilers to treat warnings as errors. As a continuation of that idea, I for one was lucky earlier in my career to work with the brilliant idea of just asserting in production. Straight up crash the software when something was wrong: wrong preconditions that could mess something up down the line. "Empty array where it isn't supposed to be empty? Sorry, crash - fix your bug." Even when it kind of worked — which is the real issue: it kind of works until it doesn't. I've quietly carried this mindset into my work ever since, and the quality of the result is so much better. Warnings are something that will be deferred when given the option, so don't warn: make it an error.
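In shell terms, that hard-assert mindset looks something like this sketch (the function name and message are invented):

```shell
#!/bin/sh
# Fail hard on a broken precondition instead of limping on.
assert_nonempty() {
    # $1 = number of items, $2 = description of what was expected
    if [ "$1" -eq 0 ]; then
        echo "FATAL: empty $2 where it must not be empty - fix your bug" >&2
        exit 1   # crash right here, in production too
    fi
}

set -- apple banana            # positional params stand in for an array
assert_nonempty $# "argument list"
echo "processing $# items"     # only reached when the precondition holds
```

The point of the design is that the failure surfaces at the moment the invariant breaks, not three layers later when the empty array has already corrupted something.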
I have seen this kind of thing go so many ways.
- sometimes you can get the status code, sometimes you can't.
- sometimes you can separate out stdout from stderr, sometimes you can't.
- sometimes the program generating the error message identifies itself, sometimes it doesn't.
- sometimes you don't know if you have a "good error" (ok to ignore) or a "bad error" (cannot ignore).
I am a fan of the HARD FAIL.
I think internal unit tests or things like that should hard fail, then get a human to either fix it, or put in a hard exception.
If it is user-facing... sigh.
Backwards compatibility is golden.
Was this FOSS or commercial?
If it's commercial software, you're paid to make it work, no matter how stupid that may be, and forced stupidity isn't your problem.
If it's FOSS, you can tell the user to deal with it and close the ticket.
Does no one know what exit codes and stderr vs. stdout are anymore?
I work with old and new code bases used by many clients in complicated setups, but adding a warning to stderr while stdout was left untouched, with proper exit codes maintained, has hardly, if ever, been a problem so far.
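A quick demonstration of why that is usually safe for callers (the tool and warning text are invented):

```shell
#!/bin/sh
# A tool that newly prints a deprecation warning - on stderr only.
tool() {
    echo "warning: --old-flag is deprecated" >&2  # new warning, stderr only
    echo "payload"                                # unchanged stdout
}

# A caller capturing stdout, exactly as it did before the warning existed:
captured=$(tool 2>/dev/null)
echo "caller saw: $captured"   # stdout is byte-for-byte what it always was
```

Any consumer that parses `$(tool)` or `tool | grep ...` keeps working; only a caller that merged the streams with `2>&1` and then parsed the result would notice the change.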
Of course, there's always some unpleasant exception, but it's rare.
And of course, I also understand that the author might have found themselves not only in one of those rare-ish instances, but also one where reasoning with the other side was fruitless.