I wrote a book a while back where I argued that coding involves choosing what to work on, writing it, and then debugging it, and that we tend to master these steps in reverse chronological order.
It's weird to look at something that recent and realize how dated it reads today. I also wrote about the Turing test as a major milestone of AI development, when in fact the general response to programs passing the Turing test was to shrug and minimize the achievement.
I would argue that chatbots still barely pass the Turing test. They have such obvious patterns and tells that humans have already picked up on them, and people can eventually suss out that they're talking to an LLM.
For instance, I recently heard about someone talking (verbally) with an AI-voiced customer support agent. The voice was convincing, so to test it they asked the agent to calculate the product of two large numbers, and it replied with the correct result instantly.
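That probe works because the task is trivial for a program and nearly impossible for a human to do instantly. A minimal sketch of such a "calculator tell" (the probe format and digit sizes here are my own illustration, not from the anecdote):

```python
import random

# Generate a probe question: the product of two random 6-digit numbers.
# A human support agent would stall or refuse; any program built on a
# computer answers immediately and exactly.
a = random.randint(100_000, 999_999)
b = random.randint(100_000, 999_999)

print(f"Probe: what is {a} x {b}?")
print(f"Machine answer (instant): {a * b}")
```

The tell isn't the correct answer itself, it's the latency: an exact 12-digit product with zero hesitation is behavior no human produces.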
I would argue that fails the Chinese Room: an instant, flawless answer to that kind of arithmetic is exactly the sort of tell that reveals there's no human inside.