
Neural Nets and AI -- what's the difference?

Artificial Intelligence is "notoriously difficult" to define. To compare Neural Networks and Artificial Intelligence, we will define AI as follows: the development, analysis, and simulation of computationally detailed, efficient systems for performing complex tasks, where the tasks are broadly defined, involve considerable flexibility and variety, and are typically similar to aspects of human cognition or perception. (This definition requires that we include fields such as natural language understanding and generation, expert problem solving, common-sense reasoning, visual scene analysis, action planning, and learning under the AI umbrella.)

Note that nothing in our description of AI prevents the computational systems from being neural networks. Indeed, the bulk of AI can be called "traditional" or "symbolic", meaning it relies on computation over symbolic structures. Let's contrast the properties of Symbolic Artificial Intelligence and Neural Networks.

Neural Networks vs. Symbolic Artificial Intelligence

Graceful vs. non-graceful degradation. Neural networks degrade gracefully: a small change to the input produces only a small change to the output. (A number of other properties stem from this; the "pattern completion" property and automatic similarity-based generalization result from the same phenomenon.) In symbolic AI, by contrast, a small change in the inputs often results in significant changes to the output. (This isn't a clear-cut win for Neural Networks: symbolic AI can be made to degrade gracefully.)

Emergent vs. explicit rules. Neural networks exhibit emergent rule-like behaviour; in symbolic AI, behaviour is typically governed by explicit, non-emergent rules.

Memory access. Neural networks offer content-based access to long-term memory in two senses: (1) the long-term memory is in the weights, so manipulating the input vector can evoke memories, and (2) an output is a particular long-term memory recalled on the basis of the content of the input. Their encodings (activations) are highly temporary. Symbolic AI, by contrast, has rigid long-term memory.

Constraint satisfaction. Neural networks perform soft constraint satisfaction; symbolic AI performs rigid constraint satisfaction.

Nested structure. It is difficult to encode multiply-nested structures in a neural network, whereas symbolic encoded structures can be multiply nested (as in nested language understanding).

Structural complexity. In a neural network, all information, of any complexity, must be built from lowest-common-denominator structures; symbolic encodings allow encoded information to be of widely varying structural complexity.

Links and programs. In neural networks it is difficult to encode labelling of links or to understand how a decision is made, and programs can't be stored in any conventional sense; symbolic AI can create links, pointers, and stored programs.

Resolution. The resolution of neural network activation values is not fine enough to allow them individually to encode complex symbolic structures; for symbolic AI, resolution is not an issue.

Learning. Neural network learning proceeds by lengthy, gradual weight modification; symbolic AI can perform certain types of rapid learning, proceeding in large steps.
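Several of the neural-network properties listed above -- long-term memory stored in the weights, content-based recall, pattern completion, and graceful degradation -- can be seen together in a toy associative memory. The following sketch (an illustration under assumptions, not from the text) implements a minimal Hopfield-style network in NumPy; the network size, the single stored pattern, and the noise level are arbitrary choices for demonstration.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: the long-term memory ends up in the weights."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, x, steps=10):
    """Settle the state: the *content* of the cue selects the memory."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1               # arbitrary tie-break
    return x

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=64)     # one stored +/-1 memory
W = train(pattern[None, :])

# Corrupt 10 of the 64 bits: a degraded cue should still evoke the memory.
noisy = pattern.copy()
flip = rng.choice(64, size=10, replace=False)
noisy[flip] *= -1

restored = recall(W, noisy.astype(float))
print(np.array_equal(restored, pattern))   # -> True: pattern completed
```

Note how the corrupted cue is not "looked up" by address: the weights transform whatever input they are given, and an input sufficiently similar to a stored memory settles onto it. Flipping a few more bits degrades recall gradually rather than failing outright, which is the graceful-degradation contrast with a symbolic key lookup.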

Open Questions

Is it necessary to go beyond symbolic AI to account for complex cognition? If so, should we throw away symbolic AI entirely, or is some amount of complex symbol-processing unavoidable? Moreover, how can different styles of systems be gracefully combined into hybrid systems? The truth is out there...


References

Barnden, John A., "Artificial Intelligence and Neural Networks," in The Handbook of Brain Theory and Neural Networks (Michael A. Arbib, ed.), MIT Press, 1995, pp. 98-102.

Haykin, Simon, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, 1999.