Hacker News

ryukafalz · last Wednesday at 2:12 PM

I'm not convinced symbolic processing has no place in AI, though. My feeling about language models is that, while they can be eerily good at solving problems, they're still not as capable of maintaining logical consistency as a symbolic program would be.
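To make the contrast concrete, here's roughly the kind of guarantee I mean: a toy forward-chaining rule engine (facts and rules made up for illustration) that cannot be talked into holding both a fact and its negation. A minimal sketch in Python:

    # Toy forward-chaining rule engine (illustrative names only): it
    # derives what the rules entail and hard-rejects any update that
    # would assert both a fact and its negation.
    facts = {"bird(tweety)"}
    rules = [({"bird(tweety)"}, "flies(tweety)")]  # premises -> conclusion

    def assert_fact(fact):
        negation = fact[4:] if fact.startswith("not ") else "not " + fact
        if negation in facts:
            raise ValueError(f"inconsistent: {fact!r} contradicts {negation!r}")
        facts.add(fact)

    changed = True
    while changed:  # forward-chain until nothing new is derivable
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                assert_fact(conclusion)
                changed = True

    assert_fact("not flies(tweety)")  # raises ValueError, every time, by construction

An LLM will happily assert both sides of a contradiction a few paragraphs apart; this thing raises an error instead.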

Sure, we obviously weren't going to get to this point with symbolic processing alone, but it doesn't have to be either/or. I think combining neural nets with symbolic approaches could lead to some interesting results (and indeed I see some people are trying this, e.g. https://arxiv.org/abs/2409.11589).
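The common shape of these hybrids is a propose-and-verify loop: the neural side suggests answers, the symbolic side checks them. To be clear, this is the generic pattern, not the specific method of the linked paper, and the proposer below is a deliberately flaky stub standing in for an LLM call:

    # Generic propose-and-verify loop. propose() is a stub standing in
    # for an LLM call (deliberately wrong on the first tries);
    # check() is the symbolic side, verifying the answer exactly.
    def propose(question, attempt):
        a, b = question
        return a + b + [1, -1, 0][attempt % 3]  # flaky "neural" proposer

    def check(question, answer):
        a, b = question
        return answer == a + b  # exact symbolic verification

    def solve(question, rounds=5):
        for attempt in range(rounds):
            answer = propose(question, attempt)
            if check(question, answer):
                return answer  # only verified answers get out
        return None  # proposer never produced anything checkable

    print(solve((17, 25)))  # -> 42, accepted on the third proposal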


Replies

TeMPOraL · last Wednesday at 9:58 PM

I agree that symbolic processing still has a role - but I think it's the same role it has for us: formal reasoning, i.e. a specialized tool.

"Logical consistency" is exactly the kind of red herring that got us stuck with symbolic approach longer than it should. Humans aren't logically consistent either - except in some special situations, such as solving logic problems in school.

Nothing in how we think, how we perceive the world, categorize it, and communicate about it has sharp boundaries. Everything gets fuzzy or ill-defined once you focus on it, and that's not by accident. It should've been apparent even then that we think stochastically, not via formal logic. Or maybe the Bayesian interpretation of probability was too new back then?
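Concretely, the stochastic picture is degrees of belief updated by evidence, not booleans flipped by deduction. A tiny Bayes update, with made-up numbers:

    # Degrees of belief, not booleans. Made-up numbers: prior
    # P(rain) = 0.3, P(wet | rain) = 0.9, P(wet | no rain) = 0.2.
    prior = 0.3
    p_wet_given_rain = 0.9
    p_wet_given_dry = 0.2

    # Observe "the grass is wet" and update via Bayes' rule.
    p_wet = prior * p_wet_given_rain + (1 - prior) * p_wet_given_dry
    posterior = prior * p_wet_given_rain / p_wet
    print(round(posterior, 3))  # 0.659: more confident, still not certain

The belief moves; it never snaps to True.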

A related blind alley we got stuck in for way longer than we should've (many people are still stuck there) is trying to model natural language with formal grammars - or worse, arguing that our minds must be processing language this way. That's not how language works, and LLMs are arguably conclusive empirical proof of that.
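For anyone who didn't fight through that era: the program was to pin language down with grammars like the toy below (my own toy fragment, not from any real system). It recognizes its tidy fragment and nothing else - including perfectly grammatical English it has no rules for:

    # Toy context-free grammar: S -> NP VP, NP -> Det N, VP -> V NP.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N":   [["dog"], ["cat"]],
        "V":   [["chased"], ["saw"]],
    }

    def parses(symbol, words):
        """Yield how many leading words `symbol` can consume."""
        if symbol not in GRAMMAR:  # terminal: match one literal word
            if words and words[0] == symbol:
                yield 1
            return
        for production in GRAMMAR[symbol]:
            spans = {0}
            for part in production:  # consume the production left to right
                spans = {n + m for n in spans for m in parses(part, words[n:])}
            yield from spans

    def accepts(sentence):
        words = sentence.split()
        return len(words) in parses("S", words)

    print(accepts("the dog chased a cat"))                   # True
    print(accepts("dog the chased cat a"))                   # False: word salad, rightly rejected
    print(accepts("colorless green ideas sleep furiously"))  # False: fine English, no rules for it

And no amount of piling on more rules ever closed that last gap.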
