Because LLMs are trained on human-written text and code, and so effectively learn to emulate human reasoning, it wouldn't surprise me if we discover that the qualities that make software easy for humans to reason about also make it easier for LLMs to reason about.