There is a reason why such a pattern is frequent in LLM-generated text.
Any good human-written text that conveys useful information is likely to highlight, in this or some equivalent way, the contrast between what the reader is expected to incorrectly believe and the reality.
When the reader already knows what the text has to say, that text is superfluous.
Therefore a text that provides new and unexpected information, and is thus a useful text, must use some means of showing readers the error of their ways.
It may use simple juxtaposition, as in "it is not ... it is ...", or it may be more verbose and add "but", "however", "nonetheless", and so on.
I believe it is counterproductive to use this kind of pattern as a method for detecting AI-written text, because it is normal for it to exist in useful human-written texts.
What should be examined instead is whether the claim is true, i.e. whether the second part with "it is ..." actually holds, or whether the whole pattern is superfluous because every expected reader is already aware that the first part with "it is not ..." is true.