
Notes on the paper Learning, Invariance, and Generalization in High-order Neural Networks

Amir Hesam Salavati E-mail: saloot@gmail.com July 25, 2011

Summary

In [1], the authors discuss various applications of higher-order neural networks. A higher-order neural network is composed of neurons that rely on statistics of order two or more when computing connection weights and updating neural states. Higher-order neurons are equivalent to networks of first-order neurons with hidden layers, and they can solve problems that are not solvable with a single layer of ordinary neurons. Rate: 6/10

Model and Method


Definition 1. A higher-order neuron is a neuron whose output depends not only on the states of its neighbors but also on their correlations. More specifically, the state of neuron i, denoted by s_i, is given by:

$$ s_i = f\Big( w_i + \sum_j w_{ij} s_j + \sum_{j,k} w_{ijk} s_j s_k + \cdots \Big) \qquad (1) $$

where f(.) is a non-linear function. A higher-order neuron is equivalent to a hidden ordinary neuron plus a visible one, and such neurons are capable of learning the XOR problem.
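To make Eq. (1) and the XOR claim concrete, below is a minimal sketch of a single second-order neuron. The hard-threshold activation, the {0, 1} inputs, and the hand-picked weights are illustrative assumptions, not values from [1].

```python
# A single second-order neuron, per Eq. (1), truncated at second order.
# Assumptions: {0,1} inputs, hard-threshold f(.), hand-picked weights.

def second_order_neuron(x, w0, w, W):
    """s = f(w0 + sum_j w[j]*x[j] + sum_{j,k} W[j][k]*x[j]*x[k])."""
    z = w0
    z += sum(w[j] * x[j] for j in range(len(x)))
    z += sum(W[j][k] * x[j] * x[k]
             for j in range(len(x)) for k in range(len(x)))
    return 1 if z > 0 else 0  # f(.) as a threshold non-linearity

# XOR with one unit: the cross-term x1*x2 does the work that a hidden
# layer would otherwise have to learn.
w0 = -0.5
w = [1.0, 1.0]
W = [[0.0, -2.0],  # only the x1*x2 cross-term is needed
     [0.0, 0.0]]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", second_order_neuron(x, w0, w, W))
# Prints 0, 1, 1, 0: XOR, which no single first-order unit can compute.
```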

Ideas and Facts That Might Help Our Research


[question]Are higher-order neurons biologically realistic? [idea][important]Both the slow training of first-order neurons in a multi-layer network and the inability of a single-layer network to solve non-linearly-separable problems can be addressed by using neurons of higher order, i.e., neurons whose output depends not only on their neighbors but also on how their neighbors correlate.

[important]These higher-order neurons are equivalent to adding hidden layers of first-order neurons. [disadvantage][important]One disadvantage of higher-order neural networks is the complexity of the learning rule, in the sense that we need to keep track of O(n^k) weights, where k is the highest degree of correlation we are interested in.

3.1 Learning Rules for Higher-Order Neurons

One possible learning rule for a network of higher-order neurons is to extend the perceptron rule. An example is given below for second-order neurons:

$$ \Delta w_{ijk} = [o_i - s_i] s_j s_k \qquad (2) $$

where o_i is the desired output and s_i is the actual output of node i. Another learning rule comes from extending the outer-product rule, also known as the Hebbian learning rule, to higher-order neurons. An example for second-order neurons is given below:

$$ w_{ijk} = \sum_{\mu=1}^{M} [x_i^\mu - y_i][x_j^\mu - y_j][x_k^\mu - y_k] \qquad (3) $$

where

$$ y_i = \frac{1}{M} \sum_{\mu=1}^{M} x_i^\mu $$

Here, M is the number of patterns in the training set and x_i^\mu is the i-th bit of pattern \mu.
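To make the two rules concrete, below is a minimal sketch of both updates for the second-order weights w_ijk. The autoassociative setting, the bipolar (+1/-1) patterns, the sign() activation, and the learning rate eta are assumptions for illustration, not details from [1]. Note that the weight tensor alone takes O(n^3) storage, which is exactly the O(n^k) cost mentioned above.

```python
# Sketch of the two learning rules for second-order weights w_ijk.
# Assumptions: autoassociative network, bipolar (+1/-1) states, sign()
# activation, learning rate eta; none of these come from the notes.

def state(i, s, W):
    """s_i = sign(sum_{j,k} W[i][j][k] * s_j * s_k), Eq. (1) at second order."""
    n = len(s)
    z = sum(W[i][j][k] * s[j] * s[k] for j in range(n) for k in range(n))
    return 1 if z >= 0 else -1

def perceptron_step(W, s, o, eta=0.1):
    """Eq. (2): delta w_ijk = eta * [o_i - s_i] * s_j * s_k."""
    n = len(s)
    for i in range(n):
        err = o[i] - state(i, s, W)
        for j in range(n):
            for k in range(n):
                W[i][j][k] += eta * err * s[j] * s[k]

def hebbian_weights(patterns):
    """Eq. (3): w_ijk = sum_mu [x_i - y_i][x_j - y_j][x_k - y_k], with
    y_i = (1/M) sum_mu x_i^mu the mean of bit i over the training set."""
    M, n = len(patterns), len(patterns[0])
    y = [sum(p[i] for p in patterns) / M for i in range(n)]
    return [[[sum((p[i] - y[i]) * (p[j] - y[j]) * (p[k] - y[k])
                  for p in patterns)
              for k in range(n)]
             for j in range(n)]
            for i in range(n)]

# Example: make one bipolar pattern a fixed point with the perceptron rule.
p = [1, -1, 1, -1]
n = len(p)
W = [[[0.0] * n for _ in range(n)] for _ in range(n)]
for _ in range(5):                 # repeated passes; converges immediately here
    perceptron_step(W, p, o=p)
print([state(i, p, W) for i in range(n)])  # -> [1, -1, 1, -1]
```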

Introductory Parts and Related Works


[disadvantage]It is known that single-layer neural networks with first-order threshold logic units can learn only linearly separable training sets. [disadvantage]Although one can use a multi-layer network of first-order threshold logic units, training would be very slow.

References
[1] C. L. Giles and T. Maxwell, "Learning, invariance, and generalization in high-order neural networks," Applied Optics, vol. 26, no. 23, pp. 4972-4978, 1987.
