
From a survey of Deep Learning published in Nature[1]:

"The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning."

Pithy version:

"As of 2015, I pity the fool who prefers Modus Ponens over Gradient Descent." - Tomasz Malisiewicz [1]

Superlong version: https://plato.stanford.edu/entries/logic-ai/

[1] https://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pd...

[2] http://www.computervisionblog.com/2015/04/deep-learning-vs-p...



The first quote is simply not true. Symbols, even in Lisp, have properties (as well as values and possibly function values); that is, they have internal structure. In general, symbols are explicitly linked to other symbols or to algorithms. These links are the symbolic analogue of the vectors in word2vec, except that they are explicit, so you can see what they mean; the cost is that you have to enter them manually or derive them with a more complex machine learning algorithm. A sketch of the contrast is below.
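
To make that concrete, here is a minimal Python sketch (the symbols and properties are made up for illustration; in Lisp you would use property lists): a symbol carries explicit, inspectable properties and links to other symbols, whereas a word2vec vector is an opaque list of numbers.

    # Illustrative only: a "symbol" with explicit, inspectable properties
    # and links to other symbols, versus an opaque dense vector.
    symbols = {
        "canary": {"isa": "bird", "color": "yellow", "can": ["fly", "sing"]},
        "bird":   {"isa": "animal", "has": ["wings", "feathers"]},
    }

    def isa_chain(sym):
        # Follow explicit 'isa' links; every step is human-readable.
        while sym in symbols and "isa" in symbols[sym]:
            sym = symbols[sym]["isa"]
            yield sym

    print(list(isa_chain("canary")))  # ['bird', 'animal']

    # The word2vec analogue of "canary": similarity is computable, but no
    # coordinate of the vector "means" anything you can point at.
    canary_vec = [0.12, -0.48, 0.91]  # toy stand-in for a 300-d embedding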

The author of the computer vision blog post doesn't seem to know much about symbolic AI. Some of the comments point this out.


The various benchmarks are a great opportunity to demonstrate the superiority of the symbolic approach over neural-network-based models. I encourage you to try; there's no better way to dispel the doubters.


I don't think they're suitable for the same tasks, so I don't consider the two approaches to be in competition with each other. It's also very difficult if you're one person with limited time up against companies the size of Google.

I'm busy on other things at the moment, but I intend to develop a rule-based system some time soon. I can rule out a neural network straight away: there is no training data available; the rules are explicit, well documented, and have to be followed; and the system has to justify its reasoning (a toy sketch of what I mean is below).

That is not to say I wouldn't consider using a neural network for a perception task.
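
For what it's worth, here is a toy Python sketch of the kind of system I mean (the rules are hypothetical, not from the actual application): a forward-chaining engine in which every derived fact records the rule that produced it.

    # Toy sketch with hypothetical rules: (name, preconditions, conclusion).
    RULES = [
        ("R1", {"income_verified", "credit_ok"}, "eligible"),
        ("R2", {"eligible", "resident"}, "approved"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        why = {}  # derived fact -> (rule name, premises used)
        changed = True
        while changed:
            changed = False
            for name, pre, concl in RULES:
                if pre <= facts and concl not in facts:
                    facts.add(concl)
                    why[concl] = (name, sorted(pre))
                    changed = True
        return facts, why

    _, why = forward_chain({"income_verified", "credit_ok", "resident"})
    for fact, (rule, premises) in why.items():
        print(f"{fact}: by {rule} from {premises}")
    # eligible: by R1 from ['credit_ok', 'income_verified']
    # approved: by R2 from ['eligible', 'resident']

The why table is the audit trail: every conclusion traces back to a named rule and its premises, which is exactly the kind of justification a neural network cannot provide.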


Is anybody working on using a neural network to build and update a symbolic graph, and vice versa? Or at least using a symbolic graph as an input to an NLP neural network, so that the network could learn to rely on the symbolic graph when it is useful?


Yes. ConceptNet [1] and distributional word embeddings go really well together, and can compare word meanings better than either one alone. Here's the preprint of the AAAI 2017 paper [2]; a toy sketch of the underlying idea follows the references.

[1] http://www.conceptnet.io/

[2] https://arxiv.org/pdf/1612.03975v1.pdf
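
The combination builds on retrofitting (Faruqui et al., 2015): each distributional vector is pulled toward the average of its neighbors in the knowledge graph while staying close to its original value. A toy Python sketch of that update, with made-up words, edges, and vectors:

    import numpy as np

    # Illustrative vectors and ConceptNet-style edges, not data from the paper.
    emb = {
        "cup": np.array([1.0, 0.0, 0.0]),
        "mug": np.array([0.0, 1.0, 0.0]),
        "tea": np.array([0.0, 0.0, 1.0]),
    }
    edges = [("cup", "mug"), ("mug", "tea")]

    neighbors = {w: set() for w in emb}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)

    q = {w: v.copy() for w, v in emb.items()}  # vectors being retrofitted
    for _ in range(10):  # a few iterations are enough to converge
        for w in q:
            ns = neighbors[w]
            if ns:
                # alpha = beta = 1: equal pull from the original vector
                # and from each graph neighbor
                q[w] = (emb[w] + sum(q[n] for n in ns)) / (1 + len(ns))

    # "cup" and "mug" started out orthogonal; the graph edge pulls them together.
    print(float(np.dot(q["cup"], q["mug"])))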



