It's kind of an aside in the post, but connecting LLMs to Searle's Chinese Room argument is a brilliant observation. Although some people believe LLMs are really thinking, to me it mostly confirms that the Turing test was never the right way to test for thought.
Is it? The observation seems patently obvious to anyone with even a superficial understanding of how LLMs work. Why is this not common knowledge?