Yeah, there seem to be two axes here:

1. Esolang vs. mainstream paradigm.
2. Popular vs. scarce training data.
So you'd want to control for training data (e.g. Brainfuck vs. Odin?).
And ideally you'd control it all the way down to zero, i.e. by inventing new programming languages with various properties and testing the LLMs on those.
I think that would be a useful benchmark for other reasons, too: it would measure the LLMs' ability to "learn" a language on the spot, purely in context. From what I understand, this remains an underdeveloped area of their intelligence (and may not be solvable with current architectures).
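To make the idea concrete, here's a minimal sketch of what such a benchmark could look like. Everything here is a hypothetical illustration, not an existing benchmark: "Zilch" is an invented stack language with arbitrary syntax (so the model has zero training data on it), `run_zilch` is a reference interpreter, and `score` grades programs a model writes after being shown only the spec in its prompt.

```python
def run_zilch(program: str) -> int:
    """Interpret a program in "Zilch", a hypothetical invented stack language:
    'pN' pushes the integer N, 'a' adds the top two stack values,
    'm' multiplies them; the result is the final top of stack."""
    stack = []
    for tok in program.split():
        if tok == "a":
            stack.append(stack.pop() + stack.pop())
        elif tok == "m":
            stack.append(stack.pop() * stack.pop())
        elif tok.startswith("p"):
            stack.append(int(tok[1:]))
        else:
            raise ValueError(f"unknown token: {tok}")
    return stack[-1]


def score(model_outputs: dict[str, str], tasks: dict[str, int]) -> float:
    """Fraction of tasks where the model's program computes the target value.
    A crashing or malformed program simply scores zero for that task."""
    correct = 0
    for task, target in tasks.items():
        try:
            correct += run_zilch(model_outputs[task]) == target
        except Exception:
            pass
    return correct / len(tasks)


# Example task: "write a Zilch program that evaluates to 42".
print(run_zilch("p6 p7 m"))                      # 42
print(score({"task1": "p6 p7 m"}, {"task1": 42}))  # 1.0
```

Because the syntax and semantics are freshly invented, any success has to come from in-context learning rather than memorized training data, which is exactly the capability the benchmark would isolate.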