I was thinking the same. Maybe if he had tried to think through the problem instead of just asking the model. The premise is interesting: "We optimize languages for humans, maybe we can do something similar for LLMs." But then he just asks the model to do the thing instead of reasoning about the problem himself; rather than prompting "Hey, make this," a more granular, guided approach might have worked better.
For me this is just wasted potential on the topic, and an interesting read that got boring pretty fast.
I don't disagree at all. :)
This was mainly an exercise in exploration with some LLMs, and I think I achieved my goal of exploring.
Like I said, if this topic is interesting to you and you'd like to explore another way to push on the problem, I highly recommend it. You may come up with better results than I did if you start with a clearer idea of what you're looking for as output.