FWIW, simply because the model claims it's "not very familiar" with something doesn't mean it's actually able to probe its own knowledge and gauge familiarity in any way at all. That it's correct about not knowing much about a fairly obscure data structure from advanced computer science has more to do with what the average person in its training data would likely say than with any such reflection actually occurring.
That said, I agree it happens to (likely) be right in this instance, and the output is in some ways refreshing compared to other models, which appear (!!) overconfident and plow right ahead.
With my optimistic hat on, maybe it realized that "wavelet tree or similar structure that can efficiently represent and query for subset membership" doesn't actually describe a wavelet tree, and this was its way of backtracking. I.e., it might have learned to respond like this to a prior inconsistent series of tokens.
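For anyone who hasn't run into them: a wavelet tree stores a *sequence* over an alphabet and answers positional queries like access(i) and rank(c, i) ("how many c's occur in the first i positions"), which isn't the same thing as subset membership. Here's a rough, naive sketch of my own (not from the thread, and real implementations use rank-enabled bitmaps instead of summing lists) just to illustrate what the structure actually answers:

```python
class WaveletTree:
    """Naive wavelet tree over a sequence; supports rank(c, i) queries."""

    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        if len(self.alphabet) == 1:
            # Leaf: every remaining symbol is the same, so no bitmap needed.
            self.bits = None
            self.left = self.right = None
            return
        mid = len(self.alphabet) // 2
        lo, hi = set(self.alphabet[:mid]), set(self.alphabet[mid:])
        # bit = 1 if the symbol falls in the upper half of the alphabet
        self.bits = [1 if s in hi else 0 for s in seq]
        self.left = WaveletTree([s for s in seq if s in lo], self.alphabet[:mid])
        self.right = WaveletTree([s for s in seq if s in hi], self.alphabet[mid:])

    def rank(self, c, i):
        """Number of occurrences of symbol c in seq[:i]."""
        if self.bits is None:
            return i
        ones = sum(self.bits[:i])  # naive O(i) rank; real trees do this in O(1)
        if c in set(self.right.alphabet):
            return self.right.rank(c, ones)
        return self.left.rank(c, i - ones)


wt = WaveletTree("abracadabra")
print(wt.rank('a', 11))  # 5: occurrences of 'a' in the whole string
print(wt.rank('b', 5))   # 1: occurrences of 'b' in "abrac"
```

If you actually wanted cheap subset membership you'd reach for a plain bitmap, hash set, or Bloom filter, so the model's original phrasing was a bit off even before it backed away from it.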
But ya, I'm aware of the issue with them saying they don't know things they do know.