Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years? Surely all resources should go towards that goal, as it is supposed to usher humanity into a new prosperous age (somehow).
> it is supposed to usher humanity into a new prosperous age (somehow).
More like usher in climate catastrophe way ahead of schedule. AI-driven data center build-outs are a major source of new energy demand, and this trend is only intensifying. Dangerously irresponsible marketing cloaks the impact these companies are having on our future.
Related to your point: if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?
This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.
The constant anthropomorphization of this technology is dishonest at best, and harmful and dangerous at worst.
I don't think it's as simple as that. Chatbots can be used to harvest data, and sales are still important before and after you achieve AGI.
The people who made the money in gold rushes sold shovels; they didn't mine the gold. Sure, some random people found gold and made a lot of money, but many others didn't strike it rich.
As such, even if there is a lot of money to be made from AI, it can still be the right decision to sell tools to others who will figure out how to use it. And of course, if it turns out to be another pointless fad with no real value, you still make money. (I'd predict the answer is somewhere in between: we are not going to get some AGI that takes over the world, but there will be niches where it is a big help, and those niches will be worth selling tools into.)
Exactly. For example, Microsoft was building data centers all over the world because, according to them, "AGI" was "around the corner".
Now they are cancelling those plans. For them, "AGI" was cancelled.
OpenAI claims to be closer and closer to "AGI" even as more top scientists leave or get poached by other labs that are behind.
So why would you leave if achieving "AGI" was going to produce "$100B in profits", as per the definition in OpenAI's and Microsoft's deal?
Their actions tell more than any of their statements or claims.
> Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?
To fund yourself while building AGI? To hedge risk that AGI takes longer? Not saying you're wrong, just saying that even if they did believe it, this behavior could be justified.
Continuing in the same vein: why would they force their super valuable, highly desirable, profit-maximizing chatbots down your throat?
Observation of reality is more consistent with company FOMO than with actual usefulness.
> If they would expect to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years?
What if chatbots and user interactions ARE the path to AGI? Two reasons they could be: (1) Reinforcement learning in AI has proven to be very powerful. Humans get to GI through learning too - they aren’t born with much intelligence. Interactions between AI and humans may be the fastest way to get to AGI. (2) The classic Silicon Valley startup model is to push to customers as soon as possible (MVP). You don’t develop the perfect solution in isolation, and then deploy it once it is polished. You get users to try it and give feedback as soon as you have something they can try.
I don’t have any special insight into AI or AGI, but I don’t think OpenAI selling useful and profitable products is proof that there won’t be AGI.
> Why bother developing chatbots
Maybe it is the reverse? It is not them offering a product; it is the users offering their interaction data. Data which might be harvested for further training of the real deal, which is not the product. Think about it: they (companies like OpenAI) have created a broad and diverse user base which, without a second thought, feeds them up-to-date info about everything happening in the world, down to individual lives and even people's inner thoughts. No one in the history of mankind ever had such a holistic view, almost a god's-eye view. That is certainly something a superintelligence would be interested in. They may have achieved it already, and we are seeing one of its strategies playing out. Not saying they have, but this observation would not be incompatible with it, nor would it indicate they haven't.