But that's a reason you should expect it to stop working soon, just like older tricks such as "my grandmother will die". If you have a universal 'blind' prompt which can increase performance a little bit... the AI labs can just toss that into the training loop to teach the model to do it automatically, whatever 'it' was, like 'trying harder' or 'writing down a useful idea'. And then the prompt stops working, because the next generations do it by default.
(This also suggests that you should expect them to generally be bad at judging novel self-generated prompts/skills - if they could judge those, they would already be using them! There is a generator-verifier gap, but it is already exploited heavily during post-training, and there is not much low-hanging fruit left there.)
> But that's a reason you should expect it to stop working soon
I agree. (And it seems like it already stopped working, if I understood others here correctly.)
But again, if I understood others here correctly, an academic paper like this would necessarily be studying models that are well behind the leading edge at the time of publication. My argument is that the study authors shouldn't be faulted for investigating something that currently seems unlikely to work, because at the time of investigation it would have seemed much more likely to work.