Am I the only one who expects an S-curve for progress rather than an eternal exponential?
People having moved away from prideful principle to leverage new tech in the past doesn't guarantee that the same idea will pan out in the current context.
But as you say... we'll see.
Oh, you mean an S-curve on the progress of AI?
Most of the discussion in the thread is about LLMs as they are right now. There's only one odd answer that throws "AGI" around as if those things could think.
Anyway, IMO, it's all way overblown. People will learn to second-guess the LLMs as soon as they are hit by a couple of bad answers.
Even if progress stops:
1. Current reasoning models can do a -lot- more than skeptics give them credit for. Typical human performance, even among people who do something for a living, is not always that high.
2. In areas where AI has mediocre performance, it may not appear that way to a novice. It often looks more like expert-level performance, which robs novices of the desire to practice the associated skills.
Lest you think I contradict myself: I can get good output from GPT4 for many tasks because I know what to ask for and I know what good output looks like. But someone who thinks the first poorly-prompted dreck is great will never develop the critical skills to do this.
Information technology has grown exponentially since the first life form created a self-sustaining, growing loop.
You can see evolution speeding up rapidly: the jumbled information inherent in chemical metabolisms came to be centralized in DNA, and then DNA evolved to componentize body plans.
RATE: over billions of years.
Nerves, nervous systems, brains, all exponentially drove individual information capabilities forward.
RATE: over hundreds of millions, tens of millions, millions, hundreds of thousands of years.
Then human brains enabled information to be externalized. Language allowed whole cultures to "think".
RATE: over tens of thousands, then thousands of years.
Then we developed writing: a massive improvement in recording and sharing information, which made a culture's ability to remember explode. Progress sped up again.
RATE: over hundreds of years.
We learned to understand information itself, as math. We learned to print. We learned to understand and use nature much more effectively, i.e. science, which in turn informed engineering.
RATE: over decades.
Then the processing of information got externalized: into transistors, computers, the Internet, the web.
RATE: every few years.
At every point, useful information accumulated and spread faster. And enabled both general technology and information technology to progress faster.
Now we have primitive AI.
We are in the process of finally externalizing the processing of all information. Getting to this point was easier than expected, even for people who were very knowledgeable and positive about the field.
RATE: every year, every few months.
We are rapidly approaching complete externalization of information processing. Into machines that can understand the purpose of their every line of code, every transistor, and the manufacturing and resource extraction processes supporting all that.
And can redesign themselves, across all those levels.
RATE: It will take logistical time for machine-centric design to take over from humans. For the economy to adapt. For the need for humans as intermediaries and cheap physical labor to fade. But progress will accelerate many more times this century: from years, to time scales much smaller.
Because today we are seeing the first sparks of a Cambrian explosion of self-designed, self-scalable intelligence.
Will it eventually hit the top of an "S" curve? Will machines get so smart that getting smarter no longer helps them survive better, use our solar system's or the stars' resources, create new materials, or advance and leverage science any further?
Maybe? But if so, that would be an unprecedented end to life's run: to the acceleration of the information loop, from some self-reinforcing chemical metabolism to the compounding progress of completely self-designed life, far smarter than us.
But back to today's forecast: no, the current advances in AI we are seeing are not going to slow down; they are going to speed up, and keep accelerating on timescales we can watch.
First, because humans have insatiable needs and desires; every advance will raise the bar of our needs and provide more money for more advancement. Second, because their advances in general capability will also accelerate their own further advances, just like every other information breakthrough that has happened before.
Useful information is ultimately the currency of life. Selfish genes were just one embodiment of that. Their ability to contribute new innovations, on time scales that matter, has already been rendered obsolete.
> Am I the only one who expects an S-curve for progress rather than an eternal exponential?
For LLMs specifically, as they are right now? Sure.
For LLMs in general, or generative AI in general? Eventually, in some distant future, yes.
Sure, progress can't ride the exponent forever - the observable universe is finite, and as far as we can tell right now, we're fundamentally limited by the size of our light cone. And while progress in any sufficiently narrow field does follow an S-curve, new discoveries spin off new avenues with their own S-curves. Zoom out a little and those S-curves neatly add up to an exponential.
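That "stacked S-curves look exponential" claim is easy to sanity-check numerically. Here's a back-of-the-envelope sketch (my own toy numbers, nothing fitted to real data): sum a series of logistic curves with staggered midpoints and geometrically growing ceilings, and the total is close to a straight line on a log scale, i.e. an exponential.

    import numpy as np

    # Toy check: a stack of S-curves (logistics) with staggered midpoints
    # and geometrically growing ceilings looks exponential when you zoom
    # out. All parameters below are made up for illustration.
    t = np.linspace(0, 50, 501)              # "time"
    stacked = np.zeros_like(t)
    for k in range(10):                      # ten successive technologies
        ceiling = 2.0 ** k                   # each wave tops out ~2x the last
        midpoint = 5.0 * k                   # each wave arrives on schedule
        stacked += ceiling / (1.0 + np.exp(-(t - midpoint)))

    # If the stack is roughly exponential, log(stacked) is roughly linear.
    window = slice(50, 400)                  # skip the ramp-up and the tail
    slope = np.polyfit(t[window], np.log(stacked[window]), 1)[0]
    print(f"log-slope ~ {slope:.3f} per unit time (constant => exponential)")

The fitted log-slope comes out roughly constant over the middle of the run, which is exactly the "zoomed out, it's an exponential" picture; the envelope only flattens when the series of new S-curves stops arriving.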
So no, for the time being, I don't expect LLMs or generative AIs to slow down - there's plenty of tangential improvements that people are barely beginning to explore. There's more than enough to sustain exponential advancement for some time.