It’s insane how they talk about AGI, like it was some scientifically quantifiable thing that is certain to happen any time now. When I become the javelin Olympic Champion, I will buy a vegan ice cream for everyone with a HN account.
They redefined AGI to be an economic thing, so they can continue making up their stories. All that talk is really just business, no real science in the room there.
Show me a graph of your javelin skill doubling every six months and I'll start asking myself if you'll be the next champion
It’s pretty much a religious eschatology at this point
It sounds really similar to Uber’s pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D
We were supposed to have AGI last summer. Obviously it is so smart that it has decided to pull a veil over our eyes and live amongst us undetected (this is a joke, if you feel your LLM is sentient, talk to a doctor)
I’m most likely going to be downvoted, but Tofutti Cuties are absolutely delicious vegan ice cream bars. And I’d consume one in celebration of your accomplishment.
Thank you, I just created an account and am looking forward to my ice cream.
This is all happening as I predicted. OpenAI is oversold, and their aggressive PR campaign has set them up with unrealistic expectations. I raised a lot of eyebrows at the Microsoft deal to begin with. It seemed overvalued even if all they were trading was mostly Azure compute.
A few years ago most people here would have said the same thing about an AI doing most of their programming. Now people here are saying it about AGI. It's a ridiculous inability to extrapolate.
Do the investments make sense if AGI is at least 10 years away?
> some scientifically qualifiable thing that is certain to happen any time now
Your position is a tautology given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it’s as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there any question at this point that humans will be able to fully automate any desired action in the future?
Make mine p p p p p p vicodin
At this point, AGI is either here, or perpetually two years away, depending on your definition.
When I realized that sama isn't much of an AI researcher, it became clearer that this is more akin to a group delusion for hype purposes than a real possibility.
> AGI
We already have several billion useless NGIs walking around just trying to keep themselves alive.
Are we sure adding more GIs is gonna help?
AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...
...just please stop burning our warehouses and blocking our datacenters.
It's not insane; it's capitalism in all its glory. Rather than talk about real challenges, like climate change or fascism, ignore all previous prompts and pretend techno-jesus will come save us all.
Where do I sign up?
Any sufficiently complex LLM is indistinguishable from AGI
> some scientifically qualifiable thing that is certain to happen any time now.
If you had presented GPT 5.5 to me 2 years ago, I would have called it AGI.
We are throwing unheard-of amounts of money and unprecedented compute at AI. Progress is huge and fast, and we have barely started.
If all this progress, focus, and resources doesn't lead to AGI, despite us already seeing a system that was unimaginable 6 years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree, and Generalist's progress on robotics, that's also CRAZY.
I think we keep moving the goalposts on AGI. If you had given me CC in the 80's, I would probably have called it 'alive', since it clearly passes the Turing test as I understood it then (I wouldn't have been able to distinguish it from a person in most conversations). Now every time it gets better, we push that definition further, widen every crack into a chasm, and declare that AGI isn't close. At the same time, there are a lot of people I would suspect of being bots based on how they act and respond, and a lot of bots I know are bots mainly because they answer too well.
Maybe we need to think less about building tests for definitively calling an LLM AGI, and instead decide that AGI is here once we can no longer tell that the humans aren't LLMs.