A human can easily struggle to solve a poorly communicated puzzle, especially without pencil and paper to convert it into a better format. LLMs can look back at what they wrote, but a chat transcript seems like a poor medium for working out a better representation.
I found some papers about this [1][2], and I think the answer is yes: the format, and hence the representation, matters.
I wonder if the author would be willing to try another representation.
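To make the "another representation" idea concrete, here is a minimal sketch (the toy puzzle and field names are made up, not taken from either paper) of re-serializing the same facts as free prose versus JSON, which is the kind of format variation [1] benchmarks:

    import json

    # Hypothetical toy puzzle: the same three seating constraints,
    # rendered two ways before being sent as a prompt.
    constraints = [
        {"person": "Alice", "relation": "sits_left_of", "target": "Bob"},
        {"person": "Bob", "relation": "sits_left_of", "target": "Carol"},
        {"person": "Carol", "relation": "not_adjacent_to", "target": "Alice"},
    ]

    # Representation 1: free-flowing prose, how puzzles are often stated.
    prose = " ".join(
        f"{c['person']} {c['relation'].replace('_', ' ')} {c['target']}."
        for c in constraints
    )

    # Representation 2: structured JSON, one of the formats tested in [1].
    structured = json.dumps({"constraints": constraints}, indent=2)

    print("Prose prompt:\n", prose)
    print("\nJSON prompt:\n", structured)

If the papers' findings hold here, feeding the structured version to the model could shift accuracy even though the underlying facts are identical.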
[1]: Does Prompt Formatting Have Any Impact on LLM Performance? https://arxiv.org/html/2411.10541v1
[2]: Large Language Models (LLMs) on Tabular Data: Prediction, Generation, and Understanding - A Survey https://arxiv.org/html/2402.17944v2