You can express those constraints; it just turns out to be less ergonomic in practice if you do. (You can even do so in terms of the junk-valued total functions! Just define `actual_subtraction` to call straight through to `junky_subtraction`, but give `actual_subtraction` these constraints on its domain.)
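A minimal Lean sketch of that wrapper (using the names from this comment; `ℕ`'s built-in truncated `-` plays the role of `junky_subtraction`, and the final `example` is just a sanity check that nothing is lost on the restricted domain):

```lean
import Mathlib

-- Total, junk-valued subtraction: on ℕ, `a - b = 0` whenever `b > a`.
def junky_subtraction (a b : ℕ) : ℕ := a - b

-- The constrained version calls straight through, but demands a proof
-- that we're on the meaningful part of the domain.
def actual_subtraction (a b : ℕ) (_ : b ≤ a) : ℕ := junky_subtraction a b

-- On that restricted domain the wrapper agrees with honest subtraction
-- in ℤ, junk values notwithstanding.
example (a b : ℕ) (h : b ≤ a) :
    (actual_subtraction a b h : ℤ) = (a : ℤ) - (b : ℤ) := by
  unfold actual_subtraction junky_subtraction
  omega
```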
The mathlib way to do things is to push those requirements out to the one who wishes to use the theorem. If you find that you're depending on a junk value in a way that's incompatible with what you wanted to prove, then you've simply discovered that you forgot to restrict your own domain to exclude the junk. (And if your desired usage lines up with the junk, then great, you get to omit an annoying busywork hypothesis.) A sqrt function that gives 0 on the negatives isn't breaking any of sqrt's properties on the positives!
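To see the pattern against real mathlib (lemma names cited from memory, so treat them as approximate): `Real.sqrt` is exactly such a function, total with junk value 0 below zero.

```lean
import Mathlib

-- Unconditional: holds even on the junk part of the domain, because
-- the chosen junk value 0 happens to be nonnegative.
example (x : ℝ) : 0 ≤ Real.sqrt x := Real.sqrt_nonneg x

-- Conditional: the domain restriction lives on the theorem as a
-- hypothesis, not inside the function as a precondition.
example (x : ℝ) (h : 0 ≤ x) : Real.sqrt x ^ 2 = x := Real.sq_sqrt h
```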
The mathlib way means that instead of every function having to express these constraints and pass proofs down the line, only the places that genuinely need them do.
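Concretely: intermediate expressions compose without any proof-threading, and only a statement that actually depends on the restriction has to mention it (a sketch; `f` is a made-up name):

```lean
import Mathlib

-- No proof obligation at the subtraction, even though the result is
-- junk whenever b > a.
def f (a b c : ℕ) : ℕ := (a - b) + c

-- Only the theorem that depends on the restriction states it.
example (a b c : ℕ) (h : b ≤ a) : f a b c = a + c - b := by
  unfold f
  omega
```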
Thanks.
> If you find that you're depending on a junk value in a way that's incompatible with what you wanted to prove
This is the part I'm struggling with. How would you actually know/realise that you were doing this? It seems like "the mathlib way" you describe is choosing to rely on programmer discipline for something that could be enforced automatically.
My fear is that relying on the junk values of functions (values where their "proper" partial counterparts are not defined) is somehow unsound (could lead to proving something untrue). But perhaps my intuition is off here? If so, I think the specific junk values chosen must not matter at all -- e.g., having sqrt return 42 for negative x values should work just as well, am I right?
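To make that concrete, here's the kind of variant I have in mind (a Lean sketch; `sqrt42` is my hypothetical name, not anything in mathlib):

```lean
import Mathlib

-- Same function, different junk: 42 instead of 0 on the negatives.
noncomputable def sqrt42 (x : ℝ) : ℝ := if x < 0 then 42 else Real.sqrt x

-- On the intended domain the two agree, so any theorem guarded by a
-- `0 ≤ x` hypothesis would seem to transfer unchanged.
example (x : ℝ) (h : 0 ≤ x) : sqrt42 x = Real.sqrt x := by
  unfold sqrt42
  rw [if_neg (not_lt.mpr h)]
```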