Hacker News

fc417fc802 · yesterday at 3:19 PM

I appreciate the insightful reply. In typical HN style I'd like to nitpick a few things.

> so if process described in 1) is going to lead towards a working general intelligence, there's a good chance it'll stumble on the same architecture evolution did.

I wouldn't be so sure of that. Consider that a biased random walk carried out by agents is highly dependent on the environment (including the other agents). Perhaps the best way to convey my objection is this: there can be a great many paths through the gradient landscape and a great many local minima. We certainly see examples of convergent evolution in the natural environment, but distinct solutions to the same problem are also common.
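To make that concrete, here's a toy sketch (my own illustration, not a model of real evolutionary dynamics): a noisy downhill walk on a rugged 1-D cost surface settles into different local minima depending purely on where it starts, even though every run follows the same biased-random-walk rule.

    # Toy sketch: a biased (noisy) downhill walk on a rugged 1-D landscape.
    # Different starting points settle into different local minima.
    import math
    import random

    def landscape(x):
        # A rugged cost surface: one broad bowl with many ripples in it.
        return 0.05 * x * x + math.sin(3 * x)

    def biased_walk(x, steps=2000, step=0.05, seed=0):
        rng = random.Random(seed)
        for _ in range(steps):
            candidate = x + rng.gauss(0, step)
            # Bias toward downhill moves, with a small chance of accepting uphill ones.
            if landscape(candidate) < landscape(x) or rng.random() < 0.01:
                x = candidate
        return x

    for start in (-6, -2, 0, 3, 7):
        end = biased_walk(start, seed=start)
        print(f"start {start:+d} -> settles near x = {end:+.2f} (cost {landscape(end):+.2f})")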

For example, you can't go fiddling with certain low-level foundational stuff like the nature of DNA itself once significant structure is sitting on top of it. Yet there are obviously a great many other possibilities in that space: we can synthesize amino acids with very interesting properties in the lab, but continued evolution of existing lifeforms isn't about to stumble upon them.

> the symbolic approach to modeling the world is fundamentally misguided.

It's likely I'm simply ignorant of your reasoning here, but how did you arrive at this conclusion? Why are you certain that symbolic modeling (of some sort, some subset thereof, etc) isn't what ML models are approximating?

> the meaning of words and concepts is not an intrinsic property, but is derived entirely from relationships between concepts.

Possibly I'm not understanding you here. Supposing that certain meanings were intrinsic properties, would the relationships between those concepts not also carry meaning? Can't intrinsic things also be used as building blocks? And why would we expect an ML model to be incapable of learning both of those things? Why should encoding semantics through spatial adjacency be mutually exclusive with processing intrinsic concepts? (Hopefully I'm not betraying some sort of great ignorance here.)
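For what it's worth, here's the kind of thing I have in mind (a contrived toy of my own, not a claim about how real models work): a vector where a couple of dimensions are hand-assigned "intrinsic" features and the rest stand in for learned relational structure. Similarity by spatial adjacency happily uses both at once.

    # Contrived toy: a few hand-assigned "intrinsic" dimensions plus stand-in
    # relational coordinates, combined in one vector. Adjacency-based similarity
    # doesn't care which kind of dimension the signal comes from.
    import numpy as np

    rng = np.random.default_rng(0)

    def embed(intrinsic, relational_dim=8):
        relational = rng.normal(size=relational_dim)  # stand-in for learned coords
        return np.concatenate([np.asarray(intrinsic, dtype=float), relational])

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Intrinsic features: [is_animal, is_artifact]
    cat, dog, chair = embed([3.0, 0.0]), embed([3.0, 0.0]), embed([0.0, 3.0])

    print("cat vs dog:  ", round(cosine(cat, dog), 2))    # pulled together by shared intrinsic features
    print("cat vs chair:", round(cosine(cat, chair), 2))  # left entirely to the relational part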


Replies

sfink · today at 3:39 AM

>> the symbolic approach to modeling the world is fundamentally misguided.

> but how did you arrive at this conclusion? Why are you certain that symbolic modeling (of some sort, some subset thereof, etc) isn't what ML models are approximating?

I'm not the poster, but my answer would be that symbolic manipulation is way too expensive. Parallelizing it helps, but long dependency chains are inherent to formal logic. And if a long chain is required, it will always be under attack by a cheaper approximation that only gets 90% of the cases right, so such chains are always going to be brittle.
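To put a rough number on the brittleness (my back-of-envelope, treating that 90% as a per-step success rate rather than anything precise): errors compound along a dependency chain, so reliability is roughly a coin flip after a handful of steps and near zero after a few dozen.

    # Back-of-envelope: per-step success of 0.9 compounds along a dependency chain.
    for n in (1, 5, 10, 20, 50):
        print(f"{n:2d}-step chain: whole chain correct with p = {0.9 ** n:.3f}")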

(Separately, I think that the evidence against humans using symbolic manipulation in everyday life, and the evidence for error-prone but efficient approximations and sloppy methods, is mounting and already overwhelming. But that's probably a controversial take, and the above argument doesn't depend on it.)