The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.
If we're nearing the top of a sigmoid curve and have 10-ish years at least to adapt, we probably can. Advancements in applying AI will continue, but we'll also develop a clearer understanding of what current AI can't do.
If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which, I would remind people, in its original (and generally better) formulation is simply the observation that there comes a point beyond which you can't predict anything at all. ("Rapture of the Nerds" is one very particular possible instance of that unpredictable future; it is not the concept of the "singularity" itself.) Who knows what will happen.
Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity. We already have more than enough advancement to change the economy dramatically; the adoption just isn't there yet.
I've said it before, but it would be a mistake to just focus on the models, and ignore everything else that is changing in the ecosystem -- tools, harnesses, agents, skills, availability of compute, etc. -- things are changing very quickly overall.
The thing that is changing most rapidly, however, is the understanding of how to harness this insanely powerful, versatile, and unpredictable new technology.
Like, those who experimented deeply with LLMs could tell that even if all model development had completely frozen in 2024, humanity had decades' worth of unrealized applications and optimizations to explore, even accounting for AI recursively accelerating that process of exploration. As a trivial example, way back in 2023, anyone who got broken code from ChatGPT, fed it the error message, and got back working code knew agents were going to wreck things up very quickly. It wasn't clear that this would look like MD files, Claude Code, skills, GasTown, and YOLO vibe-coding, but those were "mere implementation details."
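To make that example concrete, the 2023 loop was essentially the sketch below. This is an assumption-laden toy, not anyone's actual product: ask_llm is a hypothetical stub for whatever chat-completion call you have access to, and fix_loop is a made-up name.

```python
import subprocess, sys, tempfile

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stub: wire up your model of choice here")

def fix_loop(task: str, max_tries: int = 5) -> str:
    """Generate code, run it, and feed any traceback back until it runs."""
    code = ask_llm(f"Write a Python script that does: {task}")
    for _ in range(max_tries):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run([sys.executable, f.name],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code  # it ran cleanly -- the "working code" outcome
        # The entire trick: hand the error message straight back to the model.
        code = ask_llm(f"This script:\n{code}\nfailed with:\n{result.stderr}\nFix it.")
    return code
```

Everything an agent framework adds on top of this is scaffolding around that one feedback step.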
I'm half-convinced that an ulterior motive for these AI companies giving away so many cheap tokens (beyond the lack of a better business model) is to encourage experimentation and overcome this "capability overhang."
Given all this, it's very hard to judge where we are on the curve, because there isn't just one curve; there are multiple interacting curves.
Anyone who believes in materialism should recognize that there is still a lot of room for improvement.
Neither! A logistic curve is just an exponential with a carrying capacity - in its early phase it is still an exponential! There is no reason to believe that AI capability - which grows logarithmically with the hand-waved resources poured into it (roughly, compute and training data) - grows, has grown, or is growing exponentially!
I know this sounds like "the moderate position" to people, but you are accepting, based on no evidence or argument, that something logarithmic is somehow in fact exponential (these are inverse functions of one another).
Here is Sam Altman, the one man in the world with the most incentive to overstate AI capability, accepting the extremely well-known logarithmic growth: https://blog.samaltman.com/three-observations
What we see in reality is a basically linear growth pattern, produced by pushing exponentially more resources into this logarithm.
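To make the shape concrete, here is a toy sketch (all constants invented for illustration): if capability scales with log(resources) and resources grow exponentially over time, then capability over time is linear.

```python
import math

K = 1.0       # hypothetical scaling constant for "capability"
GROWTH = 1.5  # hypothetical yearly multiplier on compute + training data

for year in range(1, 7):
    resources = GROWTH ** year             # exponential resource growth
    capability = K * math.log(resources)   # logarithmic returns on resources
    # log(GROWTH**year) = year * log(GROWTH), i.e. linear in time
    print(f"year {year}: resources {resources:7.2f}, capability {capability:5.2f}")
```

Exponential input through a logarithm comes out the other side as a straight line, which is exactly the "basically linear" pattern described above.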
Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."
I think we're at a similar point with LLMs. The technical stuff is largely "done" - LLMs have closer to 10% than 10x headroom in how much they will technologically improve. We'll find ways to make them more efficient and burn fewer GPU cycles, and the cost will come down as more entrants mature.
But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments: mass media was a key element in the formation of the modern nation state, and mass cheap fake media will likely lead to its fragmentation, as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.
> The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve.
Even using the models we have today, we have revolutionized VFX, video production, and graphic design.
Similarly, many senior software engineers are reporting 2-10x productivity increases.
These tools are some of the most useful of my career. I don't even think the general consumer public needs "AI" in their products. If we just build control surfaces that let experts harness the speedup and shape and control the outcomes, we're going to be in a very good spot.
These gains alone will have ripple effects throughout the economy and innovation. We've barely begun to tap into the benefits we already have.
We don't even need new models.
"given 10-ish years at least to adapt, we probably can"
Social media would like a word...
We aren't anywhere near AGI. These models have consumed the entirety of human knowledge and poisoned the well, and they still can't help but tell you to walk to the car wash.
A peasant villager was sentient without a single book, film, or song. You don't need this much data to be sentient. They're using a stupid method, and a better one will be discovered someday.
We are at the bottom. It's just the start.
In AI terms, we are in the pre-Pentium 4 era.
I model this as "stacked sigmoid curves". I have no reason to believe that any specific technological implementation will be exponential in impact vs sigmoidal.
However, if we throw enough money and smart people at the problems and get enough value from the early sigmoid curves, the effective impact of a large number of stacked sigmoids could theoretically average out to linear. But if the sigmoids stay of similar magnitude (on average) and appear at higher velocity over time, you end up with an exponential made up of sigmoids.*
* To be fair, it has been so long since I've done math that this may be completely incorrect - I'm not sure how to model it. (One toy attempt is sketched below.) However, I think in practice more and more sigmoids coming faster and faster with a similar median amplitude is gonna feel very fast to humans very soon, whether or not it's a true exponential.
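Here is one toy way to model it (every parameter is made up): give each innovation a logistic curve of similar amplitude, and let new curves arrive at an exponentially increasing rate. The stacked sum then tracks an exponential envelope, up to a constant factor.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Center the k-th sigmoid at t_k = log2(k): the number of curves past their
# midpoint by time t is then about 2**t, i.e. arrivals double per unit time.
centers = [math.log2(k) for k in range(1, 1025)]

for t in range(1, 9):
    stacked = sum(sigmoid(4.0 * (t - c)) for c in centers)  # steepness 4 is arbitrary
    print(f"t={t}: stacked impact {stacked:7.1f}  (2**t = {2 ** t})")
```

So a sum of same-sized sigmoids arriving faster and faster really can look exponential overall, which matches the intuition above.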
I'm honestly having a very hard time thinking through the likely implications of what's currently happening over the next 2-10 years. Anyone who has the answers, please do share. I'm assuming from Cynefin that it's a perturbed complex adaptive system, so I can just OODA, or experiment, sense, and respond to what happens - not what I think might happen.