This is not just a problem with formal verification systems. It's a problem intrinsic to math, science and reality.
We call these things axioms. The verifier itself must simply be assumed to be correct.
For all the axioms in mathematics, we don't know if any of them are actually, fully true. They seem to be true because we haven't witnessed a case where they fail. This is similar to how we verify programs with unit tests. The verifier is the same: you treat it like an axiom, and you "verify" it with tests.
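To make that concrete, here is a minimal sketch of the idea, using a deliberately toy "verifier" (a hypothetical `is_sorted` check, not any real verification system): we can't prove it correct from inside the same framework, so we gain confidence the same way we do with ordinary programs, by testing it against cases and an independent oracle.

```python
import random

def is_sorted(xs):
    """Toy 'verifier': claims xs is in non-decreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

# "Verifying" the verifier with tests, just as we test programs:
assert is_sorted([]) is True
assert is_sorted([1]) is True
assert is_sorted([1, 2, 2, 3]) is True
assert is_sorted([3, 1]) is False

# Randomized cross-check against an independent oracle (sorted()).
# Passing tests never proves correctness; it only fails to refute it.
for _ in range(1000):
    xs = [random.randint(0, 9) for _ in range(random.randint(0, 8))]
    assert is_sorted(xs) == (xs == sorted(xs))
```

The point of the sketch is the epistemic status: after all tests pass we *trust* `is_sorted`, but that trust rests on not having witnessed a counterexample, exactly like trusting an axiom.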
I believe there is a distinction between those concepts.
Axioms are true by their own definition. Therefore, discovering an axiom to be false is a concept that is inherently illogical.
Discovering that a formal verification system produced an incorrect output due to a bug in its implementation is a perfectly well-defined concept and doesn't lead to any logical contradictions; unless you axiomatically define the output of formal verification systems to be necessarily correct.
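To illustrate the distinction with a minimal sketch (again using a hypothetical toy checker, not a real verification system): an implementation bug in a "verifier" is an observable, well-defined event that an external oracle can expose, with no logical contradiction anywhere.

```python
def is_sorted_buggy(xs):
    """Buggy 'verifier': an off-by-one means the last element is
    never compared, so some unsorted lists are wrongly accepted."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 2))

# The incorrect verdict is a plain empirical fact about the program:
assert is_sorted_buggy([1, 2, 0]) is True    # bug: accepts unsorted input
assert [1, 2, 0] != sorted([1, 2, 0])        # an independent oracle refutes it
```

Nothing paradoxical happened here; we only reach a contradiction if we had *defined* the checker's output to be correct by fiat, which is the move being questioned.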
I believe that definition doesn't make sense in the general case, given the number of vectors that can introduce errors into software.