Yes, it's an example of domain-specific thinking: "The tool helps me write code, and my job is hard, so I believe this tool is a genius!"
The Roomba vacuumed the room. Maybe it vacuumed the whole apartment. This is good and useful. Let us not diminish the value of the tool. But it's a tool.
The tool may have other features, such as being self-documenting/self-announcing. Maybe it will frighten the cats less. This is also good and useful. But it's a tool.
Humans are credulous. A tool is not a human. Meaningful thinking and ideation are not just "a series of steps" that I declaim as I go merrily thinking. We draw not only on a vast training set ("Reality") but also on a complex adaptability that enables us to test our hypotheses.
We should consider what it is in human ideation that leads people to claim that a Roomba, a chess programme, Weizenbaum's Eliza script, IBM's Jeopardy system Watson, or an LLM trained on human-vetted data is thinking.
Train such a system on the erroneous statements of a madman, and suddenly the Roomba, Eliza, Watson, and the rest lose our confidence.
As it stands today, the confidence we place in these systems is highly conditional. It doesn't matter terribly if the code is wrong... until it does.
Computers are not humans. Computers can do things that humans cannot do. Computers can do these things fast and consistently. But fundamentally, algorithms are tools.