Hacker News

LtWorf · last Tuesday at 2:00 PM

I think making one would help you understand that they're not intelligent.


Replies

Davidzheng · last Tuesday at 3:43 PM

OK--like the other commenter, I also feel stupid replying to a zinger--but here goes.

First of all, I think a lot of the issue here is the baggage carried by the word "intelligence"--I suspect because believing machines can be intelligent goes against the core belief many people hold that humans are special. This isn't meant as a personal attack--I just think that belief clouds thinking.

Intelligence of an agent is a spectrum, not a yes/no. I suspect most people would not balk at me saying that ants and bees exhibit intelligent behavior when they look for food and communicate with one another. We infer this from the complexity of their route planning, survival strategies, and ability to adapt to new situations. Now, I assert that those same strategies can not only be learned by modern ML but are often even hard-codable! Since I view intelligence as a measure of an agent's behavior in a system, such a measure should not distinguish between the bee and my hard-wired agent. For me, this means hard-coded things can be intelligent, since they can mimic bees (and, with enough code, humans).
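To make the "hard-coded forager" point concrete, here is a minimal sketch: a fixed rule (greedily follow the strongest food scent) produces behavior we might casually call intelligent, with zero learning involved. The grid, scent field, and move rule are all invented for this illustration, not taken from any real model of insect behavior.

```python
def forage(scent, start):
    """Greedy hill-climb on a 2D scent grid until a local maximum."""
    rows, cols = len(scent), len(scent[0])
    r, c = start
    path = [(r, c)]
    while True:
        # Look at the four in-bounds neighbors and pick the strongest scent.
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
        best = max(neighbors, key=lambda p: scent[p[0]][p[1]])
        if scent[best[0]][best[1]] <= scent[r][c]:
            return path  # no neighbor smells better: food found (or a dead end)
        r, c = best
        path.append((r, c))

# Scent increases toward food placed at (2, 3).
scent = [[-(abs(r - 2) + abs(c - 3)) for c in range(5)] for r in range(4)]
print(forage(scent, (0, 0)))  # path ends at the food cell (2, 3)
```

Judged purely by behavior in this environment, the greedy rule and a trained policy are indistinguishable--which is the point being made above.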

However, the distribution of behaviors that humans inhabit is prohibitively difficult to code by hand. So we rely on data-driven techniques to search for such distributions in a space rich enough to support complexity at the level of the human brain. As such, I have no reason to believe that an agent must be less intelligent than humans just because I can train one. On the contrary, I believe that in every verifiable domain, RL must drive the agent to be the most intelligent (relative to the RL reward) it can be under its constraints--and often it must become more intelligent than humans in that environment.
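The RL claim above, in miniature: in a tiny verifiable environment, plain tabular Q-learning is pushed toward the reward-maximizing policy with no hand-coding of the solution. The 5-state chain, the rewards, and the hyperparameters here are all made up for the sketch.

```python
import random

random.seed(0)
N = 5                               # states 0..4; reaching state 4 pays reward 1
Q = [[0.0, 0.0] for _ in range(N)]  # Q-values for actions: 0 = left, 1 = right

def step(s, a):
    """Deterministic chain: move left or right, reward on reaching the end."""
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
print(policy)  # learned greedy policy: always move right, toward the reward
```

Nothing in the loop mentions "go right"; the reward signal alone pins down the optimal behavior, which is the sense in which RL drives the agent toward maximal (reward-relative) intelligence in a verifiable domain.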

benreesman · last Tuesday at 2:46 PM

Your reply is enough of a zinger that I'll chuckle and not pile on, but there is a very real and very important point here: it is strictly bad to get mystical about this.

There are interesting emergent behaviors in computationally feasible scale regimes, but it is not magic. The people who work at OpenAI and Anthropic worked at Google and Meta and Jump before, they didn't draw a pentagram and light candles during onboarding.

And LLMs aren't even the "magic. Got it." ones anymore; the zero-shot robotics JEPA stuff is like, wtf, but LLM scaling is back to looking like a sigmoid plus a zillion special cases. Half of the magic factor in a modern frontier company's web chat these days is an uncorrupted search index.