The abstract is over-egged: the language obscures what the paper actually finds, which boils down to "some prompts return null results."
"Ontologically null concepts" could just be a fancy way of saying "the model doesn't know what to do with nonsense". Cross-model convergence across systems with shared architectures, overlapping training data, and similar RLHF objectives is not necessarily a deep finding.
There's a high ratio of jargon-heavy interpretive superstructure to empirical foundation here.