Title for the back of the class:
"Prompts sometimes return null"
I would be very cautious about attributing any of this to black-box LLM weight matrices. Products like GPT and Opus are more than a single model: they rake your prompt over the coals a few times before responding. Telling the model to return "nothing" very likely behaves as expected because of these extra layers, not because of the weights themselves.
Thanks, I was already distracted after the first sentence, hoping there would be a good explanation.
Out of curiosity, are there any sources for there being a significant number of other steps before the prompt is fed into the weights?
Security guards / ... are the obvious ones, but do you mean they branch early on to short-circuit certain prompts?