That's the point, though: it's not sufficient. Not even slightly. It constantly makes obvious mistakes and cannot keep things coherent.
I was almost going to mention your point explicitly, but deleted it because I thought people would be able to understand.
This is not a philosophy/theology seminar, sitting around handwringing about "oh, but would a sufficiently powerful LLM be able to dance on the head of a pin?" We're talking about a thing that actually exists, that you can actually test. In a whole lot of real-world scenarios you throw at it, it fails in strange and unpredictable ways. Ways it will swear up and down it did not do. It will lie to your face. It's convincing. But then it will lose at chess, it will fuck up running a vending machine business, it will get lost coding and reinvent the same functions over and over, it will give completely nonsensical answers to crossword puzzles.
This is not an unlimited intelligence; it is a deeply flawed two-year-old that just so happens to have read the entire output of human writing. It's a fundamentally different mind from ours, and it makes different mistakes. It sounds convincing and yet fails, constantly. It will give you a four-step explanation of how it's going to do something, then fail to execute four simple steps.
Which is exactly why it's insane that the industry is hell-bent on creating autonomous automation through LLMs. Rube Goldberg machines are what will be created, and if civilization survives that insanity, it will be looked back upon as one grand, stupid era.