It's become increasingly obvious that people on Hacker News literally do not run these supposed prompts through LLMs. I bet you could run that prompt 10 times and it would produce a (probably fine) sh command every single time, never refusing.
Read the replies. Many folks have called gpt-4.1 through Copilot and gotten (seemingly) valid responses.
What is becoming more obvious is that people on Hacker News apparently do not understand the concept of non-determinism. Acting as if an LLM's output is deterministic, as though it returns the same result for the same prompt every time, is foolish.
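To make the non-determinism point concrete, here is a minimal sketch of the "run it 10 times" test: send the identical prompt repeatedly and count how many distinct completions come back. It assumes the OpenAI Python client with an API key in the environment; the model name and prompt are illustrative placeholders, not taken from this thread.

```python
# Minimal sketch: same prompt, many calls, count distinct responses.
# Assumes the OpenAI Python client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
prompt = "Write a shell command that lists files modified in the last 24 hours."

outputs = []
for _ in range(10):
    # With the default (nonzero) sampling temperature, each call can return
    # a different completion for the identical prompt.
    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
    )
    outputs.append(resp.choices[0].message.content.strip())

# How many distinct answers did ten identical requests produce?
print(f"{len(set(outputs))} distinct responses out of {len(outputs)} runs")
```

A single run (in either direction) tells you almost nothing; it's the distribution over repeated runs that settles whether the model "would never" refuse or "always" produces a usable command.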