PHYSIOLOGY 317 -- COMPUTATIONAL NEUROBIOLOGY
PART II Prof. T. Anastasio

Homework Assignment #3
Due: November 15, 1995


Hopfield Networks:

Hopfield networks are members of a broader class known as auto-associative neural networks. These are networks that can be trained to return to desired states from initial states that are close to, but different from, those desired states. They associate network states with themselves; hence their name.

Auto-associative networks consist of one layer of neural elements that are all interconnected, except that self-connections are disallowed. The neural elements are nonlinear, with states limited to zero or one -- so-called two-state neurons. (Any state, whether initial or desired, will be represented by some pattern of zeros and ones over the units.) Recurrence and nonlinearity lie at the heart of auto-associative network behavior. Recurrence allows desired states to emerge through interactions among the units, and the nonlinearity prevents network activity from growing without bound, instead encouraging the network to settle into a stable state. In most cases, the auto-associative network will relax from any initial state into the desired state closest to it.

The interconnections between units can be trained according to Hebbian rules (Hebb, The Organization of Behavior, Wiley, 1949). Hebbian rules are local, in the sense that they depend only upon the activity of the neurons pre- and post-synaptic to any particular synapse (i.e., connection). The original rule proposed by Hebb specified an increase in synaptic strength whenever the pre- and post-synaptic neurons were active together. It provided no mechanism for inhibition (i.e., for decreasing synaptic strength), but other Hebbian rules that include decreases were proposed later. In addition to the original Hebb increase, the post-synaptic rule specifies a decrease in synaptic strength if the post-synaptic neuron is active when the pre-synaptic neuron is not, and vice versa for the pre-synaptic rule. Both of these rules have experimental support (post-synaptic: Stent, PNAS 70:997-1001, 1973; pre-synaptic: Stanton and Sejnowski, Nature 339:215-218, 1989). The covariance rule proposed by Hopfield (PNAS 79:2554-2558, 1982) incorporates all three of the above mechanisms, with the addition of a specified increase in synaptic strength if the pre- and post-synaptic neurons are inactive together. This last mechanism may not be supported by experimental evidence. These mechanisms are summarized in the table below, where all specified increases and decreases have magnitude one.


Hebbian Learning Rules

    post-synaptic activity     1     1     0     0
    pre-synaptic activity      1     0     1     0
    -----------------------------------------------
    Hebb                      +1     0     0     0
    post-synaptic             +1    -1     0     0
    pre-synaptic              +1     0    -1     0
    Hopfield                  +1    -1    -1    +1

Hopfield's covariance rule for training is:

        Tij = SUM over patterns s of (2Vi(s) - 1)(2Vj(s) - 1)

where the Tij are the weights (strengths) of the connections (synapses) between any pre-synaptic unit j and post-synaptic unit i; Tii = 0. The V(s) are the desired activation patterns: V(s) for any pattern s is a vector of ones and zeros of length n, where n is the number of neural elements in the network, and Vi(s) is its ith element.
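
For reference, this rule can be computed as a sum of outer products over the stored patterns. The following is a minimal MATLAB sketch, assuming the patterns are stored one per row in a matrix P (the names P, T, and npat are illustrative, not required):

        % P is an npat-by-n matrix of stored patterns, one pattern per row.
        [npat, n] = size(P);
        T = zeros(n, n);
        for s = 1:npat
            V = 2*P(s,:) - 1;          % map the 0/1 pattern to -1/+1
            T = T + V' * V;            % adds (2Vi-1)(2Vj-1) for every pair i,j
        end
        T = T - diag(diag(T));         % disallow self-connections: Tii = 0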

1) Verify that the Hopfield formula will yield the corresponding weight change specifications for the Hopfield rule as given in the table. Then write formulas for each of the other three learning rules given in the table (these can be just appropriately simplified versions of the Hopfield covariance rule).

2) Construct auto-associative networks having ten units. To do this, generate the following matrix of two patterns, each specifying the states of the ten units:

        P = [ 1 1 1 1 0 0 0 0 0 0
              0 0 0 0 0 0 1 1 1 1 ]

Use it to train four auto-associative networks, one for each of the four Hebbian learning rules given in the table. For example, to construct the Hopfield connectivity matrix, use Hopfield's formula and compute the sum over the two patterns for each connection weight (Tij) between every pair of units. Repeat this using the other three formulas you defined in (1). Don't forget to set the diagonals to zero. Since each network has ten units, the connectivity matrices will be 10-by-10. Print out the matrices for the Hebb, post-synaptic, pre-synaptic, and Hopfield networks. How do they differ?

3) The state of each network is represented by an n-vector, where n is the number of units in the network (in these cases, ten). One way to update a network is simply to multiply its state vector by its connectivity matrix (CM): g(k) = CMv(k). Then set:

        vi(k+1) = 1   if gi(k) > 0
        vi(k+1) = 0   otherwise

where gi(k) is the ith element of g(k).

This type of update is called synchronous, because all of the unit states are updated at the same time. Initialize the Hebb network with: v = [ 1 1 1 1 0 0 0 0 0 0 ], and synchronously update the network ten times. Does the network state change over time? Try it also for the other pattern: v = [ 0 0 0 0 0 0 1 1 1 1 ]. Also, try both patterns for the other three networks. Are the learned patterns stable?
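
As a rough sketch of what one run of synchronous updating might look like in MATLAB (assuming T holds the connectivity matrix and v is a column state vector; the thresholding follows the update rule above):

        v = [1 1 1 1 0 0 0 0 0 0]';    % initial state as a column vector
        for k = 1:10                   % ten synchronous cycles
            g = T * v;                 % weighted input to every unit at once
            v = double(g > 0);         % all units thresholded together
        end
        disp(v')                       % final state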

4) Set the initial state to: v = [ 1 1 0 0 0 0 0 0 0 0 ], and synchronously update the Hebb network several times. Repeat this for the other three networks. What is the result? Now try the initial state: v = [ 0.1 0 0 0 0 0 0 0 0 0 ] and update each of the four networks several times. What happens? How would you characterize this behavior? Is the result the same for all the networks? What is it about the way the network units interact that allows them to produce this behavior?

5) Now set the initial state to: v = [ 1 1 1 1 0 0 0 0 0 1 ], and iterate it several times for each of the four networks. What do they do? How would you characterize this behavior? Do they all perform the same? Which one performs differently? Why?

6) Now reconstruct (i.e. retrain) the four networks using the following patterns:

        P = [ 1 0 1 0 1 0 1 0 1 0
              1 1 1 1 1 0 0 0 0 0 ].

Initialize each of the networks with the vector: v = [ 1 1 1 1 1 0 0 0 0 0 ] and synchronously iterate several times. For which networks is this pattern not a stable state? What is it about these patterns that makes them confusing to some networks? What is it about some networks that prevents them from fully differentiating the two patterns?

7) Use the Hopfield rule only for the remainder of the assignment. Construct a Hopfield network from the following patterns:

        P = [ 1 0 1 0 1 0 1 0 1 0
              1 0 0 1 1 1 1 0 0 1 ].

Initialize the network with the vector: v = [ 0 0 1 0 1 0 1 0 1 0 ] and synchronously iterate it for ten cycles. Does it complete the pattern? Now repeat the experiment, but initialize with: v = [ 0 0 0 0 1 0 1 0 1 0 ] and be sure to observe network state over all ten cycles. What happens?

8) Change to an asynchronous update algorithm. Do this by choosing one of the ten units at random and updating it alone. For example, if unit i is chosen, it would be updated by multiplying row i of the connectivity matrix by the state vector. Only element i will change in the new state vector. Then choose another unit at random and repeat the process. For a rough equivalent to ten synchronous cycles, try using 100 asynchronous updates. Again initialize with: v = [ 0 0 0 0 1 0 1 0 1 0 ] and use the asynchronous algorithm to iterate the network, again being sure to observe network state over time. Re-initialize and repeat this procedure a few times. What are the results? Has the dynamic behavior changed? Which form of updating (synchronous or asynchronous) is more biologically plausible? Why?
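
A minimal MATLAB sketch of one way to implement this, under the same assumptions and threshold convention as the synchronous case:

        v = [0 0 0 0 1 0 1 0 1 0]';    % initial state as a column vector
        n = length(v);
        for step = 1:100               % roughly ten units' worth of updates per cycle
            i = floor(n*rand) + 1;     % pick one unit at random
            g = T(i,:) * v;            % weighted input to unit i only
            v(i) = double(g > 0);      % update just that one element
        end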

9) Now construct a ten-element Hopfield network using simpler patterns such as:

        P = [ 1 1 1 1 1 0 0 0 0 0
              0 0 0 0 0 1 1 1 1 1 ].

Play around with it. See if it can recover, using asynchronous updates, from such corrupted versions of the patterns as: v = [ 0.4 0.3 0.5 0.3 0.4 0.2 0.1 0.1 0.2 0.1 ]. Chances are it'll do pretty well. How would you characterize this ability? Of what advantage would this be in sensory processing? Motor control? Cognitive function?

10) Generate a 30-element Hopfield network using two random patterns, each 30 elements long, where each element has a 50/50 chance of being a one. Then verify that the network can recall both patterns. Do this by initializing with each pattern at half strength (i.e., multiply it by 0.5), updating asynchronously for about 100 steps, comparing the final state with the tested pattern, and recording whether recall is correct. Then repeat the procedure from the beginning, but construct the Hopfield network from three random patterns, then four, five, six, and so on until verification begins to fail. How many patterns can a 30-unit Hopfield network reliably store? Is the transition to failure abrupt? What would you estimate is the capacity of a Hopfield network (express your answer as the ratio of the number of reliably stored patterns to the number of network elements)? Would you expect the capacity of a Hebb rule network (see table) to be greater or less than this? Why?
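
One possible shape for the verification loop, sketched in MATLAB under the assumption that T has been built from the rows of P with the Hopfield rule (npat and ok are illustrative names):

        n = 30;  npat = 2;                    % start with two patterns, then increase npat
        P = double(rand(npat, n) > 0.5);      % each element is 1 with probability 0.5
        % ... build T from P with the Hopfield rule, as in (2) ...
        ok = zeros(npat, 1);
        for s = 1:npat
            v = 0.5 * P(s,:)';                % initialize with pattern s at half strength
            for step = 1:100                  % asynchronous updates, as in (8);
                i = floor(n*rand) + 1;        % with ~100 steps a unit may occasionally go untouched
                v(i) = double(T(i,:) * v > 0);
            end
            ok(s) = isequal(v, P(s,:)');      % 1 if pattern s was recalled exactly
        end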

11) Go back and train the 30-unit Hopfield network with the number of patterns it could reliably store (again use 50/50 patterns). Save the resulting connectivity matrix in a separate array. Now lesion 10% of the connections by giving every element in the connectivity matrix a 10% chance of being set to zero. Verify recall for each pattern as in (10). How did the lesioned network perform? Starting with the unlesioned connectivity matrix that you saved, repeat the test by lesioning 20%, 30%, 40%, and so on. At what level of lesioning do you really begin to see a decrement in recall performance? Is this level reached abruptly? What does this tell you about distributed representations? Would this property be an advantage for the nervous system? Why?
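
The lesioning step itself can be as simple as the following MATLAB sketch, where Tsave is an illustrative name for the saved, unlesioned matrix:

        Tsave = T;                                 % keep the intact connectivity matrix
        frac = 0.1;                                % lesion level: 10%, then 20%, and so on
        T = Tsave .* (rand(size(Tsave)) > frac);   % each connection zeroed with probability frac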

Turn in answers to all questions, as well as print-outs of your m-files and results.