
lukev · yesterday at 10:59 PM

This clarifies an important point for me.

The derivative of an LLM agent's capabilities (on its own) is negative. It's not that agents can't do useful work -- it's that (for now) they require some level of input or steering.

If that were to change -- if an agent could consistently get better at what it does without intervention -- that would represent a true paradigm shift: an accelerating curve, rather than one trending back towards linearity.

Such unassisted self-improvement is a necessary inflection point for any sort of AI "takeoff" scenario.

So this study is actually kind of important, even though it's a null result, because the contrary finding would be immensely significant.


Replies

Exoristos · yesterday at 11:01 PM

Just to save us all some time and trouble, I'll point out that that's never really going to happen.

jibal · today at 12:41 AM

Doesn't anyone learn from Malthus? In the real world, accelerating curves inevitably stop accelerating.
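
A minimal sketch of that Malthus point (Python, purely illustrative; the carrying capacity K, rate r, and starting value x0 are made-up parameters): logistic growth looks exponential at first, then flattens as it approaches its ceiling.

    import math

    def logistic(t, K=100.0, r=1.0, x0=1.0):
        # Logistic growth: approximately exponential while x << K,
        # then saturating as x approaches the carrying capacity K.
        return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

    for t in range(0, 12, 2):
        print(t, round(logistic(t), 2))

Early samples roughly double every step; later ones barely move. The curve accelerates only until the constraint binds.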

Others here have suggested that AIs should be able to self-generate skills by doing web searches. What happens when all of the information from web searches (of knowledge generated by ordinary human intelligence) has been extracted?

On another post (about crackpot Nick Bostrom claiming that an ASI would "imminently" lead to scientific breakthroughs like curing Alzheimer's, so that a 3% chance of developing an ASI would be worth a 97% chance of annihilating humanity), I noted that an ASI isn't a genie or a magic wand: it can't find the greatest prime or solve the halting problem. Another person noted that an ASI can't figure out how to do a linear search in O(1) time. (We already know how to do a table lookup in amortized O(1) time -- build a hash table.) Science is like animal breeding and many other processes ... there's a limit to how much it can be sped up.
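
To make the complexity aside concrete, a minimal sketch (Python; the collection size is an arbitrary illustration): membership testing in a list is a linear search, O(n) per query, while a hash-based set answers the same query in amortized O(1).

    import time

    n = 1_000_000
    haystack_list = list(range(n))     # linear search: O(n) per membership test
    haystack_set = set(haystack_list)  # hash table: amortized O(1) per lookup

    needle = n - 1  # worst case for the linear scan

    t0 = time.perf_counter()
    _ = needle in haystack_list  # scans up to every element
    t1 = time.perf_counter()
    _ = needle in haystack_set   # one hash probe (amortized)
    t2 = time.perf_counter()

    print(f"list: {t1 - t0:.6f}s  set: {t2 - t1:.6f}s")

No amount of intelligence makes the list scan sublinear; the speedup comes from changing the data structure, not from thinking harder.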