Why shouldn't they be? It's not the '00s anymore, and Unicode support is universal. You'd have to dust off some truly ancient tech to find something incapable of rendering it.
Source code is for humans, and thus should be written in whatever way makes it easiest for humans to read, write, and understand. If your language doesn't map onto ASCII, then Unicode support serves that goal. If your code is meant to directly implement some physics formula, then using the appropriate Unicode characters might make it easier to read (and thus to spot transcription errors, something I find far too often in physics simulations).
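For a concrete sketch of what that can look like (Python allows Unicode identifiers per PEP 3131; the formula and names here are purely illustrative, not from any particular codebase):

```python
import math

# Damped harmonic oscillator: x(t) = A·exp(−ζ·ω0·t)·cos(ωd·t + φ)
# The variable names mirror the symbols as a textbook would write them.
def displacement(t, A, ζ, ω0, φ):
    ωd = ω0 * math.sqrt(1 - ζ**2)  # damped angular frequency
    return A * math.exp(-ζ * ω0 * t) * math.cos(ωd * t + φ)

print(displacement(t=0.5, A=1.0, ζ=0.1, ω0=2 * math.pi, φ=0.0))
```

Checking that line-by-line against the formula in a paper is a lot less error-prone than mentally mapping zeta, omega0, and phi back onto their symbols.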
Hot take, but I've always felt the world would be better served if mathematicians and physicists stopped using terrible short variable names and used longCamelCaseDescriptiveNames like the rest of us, because paper is cheap and abbreviations are confusing. I know short names are nicer when you're writing by hand, but when you clean up a proof or formula for publication, would it really be so hard to switch to descriptive names?
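To make the contrast concrete (a toy example, not taken from any real paper):

```python
# Paper-style terseness:
def f(m, c):
    return m * c**2

# Descriptive names, as I'd rather read it:
def restEnergy(massKg, speedOfLightMps):
    return massKg * speedOfLightMps**2
```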
I'm a practitioner of neither, though, so as an outsider I can't condemn the practice wholeheartedly, but it does make me groan.
> using the appropriate unicode characters might make it easier to read
It's probably also a great way to introduce almost undetectable security vulnerabilities via homoglyphs: Unicode characters that look identical but are in fact different code points.
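For example (a deliberately contrived Python sketch; the first variable uses Cyrillic U+0430, which most fonts render identically to Latin "a" U+0061, and Python treats the two as distinct identifiers):

```python
а = "attacker-controlled"  # Cyrillic а (U+0430)
a = "expected value"       # Latin a (U+0061)

# Both lines below appear to print the same variable,
# but the first one prints the attacker's value.
print(а)  # -> attacker-controlled
print(a)  # -> expected value
```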
They shouldn't be, precisely because including non-ASCII characters makes the code harder to read and write.