Niklaus Wirth died in 2024, and yet I hope he is having a major I-told-you-so moment about all the people who dismissed Pascal's bounds checking as unneeded overhead that just made things slow.
To this day, FPC uses less RAM than any C compiler, a good thing in today's increasingly RAM-constrained world, and they've managed this with far fewer developers than any equivalent C compiler has. I can't even imagine what it would look like if they had the same number of people working on it. C optimization tricks are hacks; the fact that Godbolt exists is proof that C was never meant to be optimizable at all. It is brute-force witchcraft.
At a certain point, though, something's gotta give: the compiler can do guesswork, but it should do no more. If you have to add more metadata, then so be it; it's certainly less tedious than putting pragmas and _____ everywhere. Some C code just looks like the writings of an insane person.
There's a blog post from Google on this topic as well, where they found that inserting bounds checking into standard library functions (in this case C++) had a mere 0.3% negative performance impact on their services: https://security.googleblog.com/2024/11/retrofitting-spatial...
For people using Clang, you can read more about libc++ hardening at https://libcxx.llvm.org/Hardening.html