The thing is, you're gaining a bunch of knowledge about compiler internals and optimisations, but none of that behaviour is specified or guaranteed to be preserved, so it's questionable how valuable that experience actually is. The next release of the compiler might rewrite the optimiser or introduce a new pass, and your knowledge goes out of date. And even if you had perfect knowledge of the optimiser and could write code that's UB according to the standard but happens to be optimised the way you want by this specific compiler... would that actually be a good idea?
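For a concrete example of the kind of code in question (a minimal sketch; the function name is made up, and the behaviour described assumes a typical GCC or Clang at -O2):

    /* Signed overflow is undefined behavior, so the optimiser is
     * allowed to assume "x + 1 < x" can never be true and fold the
     * whole function to "return 0". It "works" at -O0, or under
     * GCC/Clang's -fwrapv, which makes signed overflow wrap. */
    int will_overflow(int x) {
        return x + 1 < x;   /* UB at INT_MAX; often compiled to 0 */
    }

Betting that your particular compiler version keeps that check is exactly the kind of fragile knowledge being described.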
All of that is less true in the microcontroller world, where compilers change more slowly and your product will likely be locked to a specific compiler version for its entire lifecycle anyway (and you certainly don't have to worry about end users compiling with a different compiler). In that case, maybe getting deeply involved in your compiler's internals makes more sense.
Learning about how compilers optimize code isn't really knowledge that goes out of date. Yes, things get reshuffled or new ideas appear, but everything builds on what's already there.
You'd never want (except in extreme desperation) to use this knowledge to justify undefined behavior in your code. You use it to make sure you don't have any UB around! Noticing things like "I wrote a null pointer check, so why isn't it showing up anywhere in the assembly?" can be really helpful for tracking down problems.
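For instance, here's a minimal sketch of how that plays out (function name made up; this is typical GCC/Clang behavior at -O2):

    #include <stddef.h>

    int first_byte(const char *s) {
        int c = *s;        /* dereference happens before the check... */
        if (s == NULL)     /* ...so the compiler assumes s != NULL */
            return -1;     /* and deletes this branch entirely */
        return c;
    }

The check vanishing from the disassembly is the clue: the dereference above it is the real bug, and knowing how the optimizer reasons is what lets you spot it.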