
11.5. How does the neural network work as an associative memory?


(Translation: Marcin Ptak, [email protected])


Program Example12b contains a model of the Hopfield network; its task will be to store and reproduce simple images. The whole “flavor” of this memory, however, lies in the fact that it is able to reproduce a message (an image) on the basis of a strongly distorted or disturbed input signal, so it works in a mode that is usually called autoassociation1. Thanks to autoassociation, a Hopfield network can automatically fill in incomplete information. Figure 11.13 shows an image, reprinted many times in various books and reproduced on the Internet (here downloaded from the www.cs.pomona.edu website and also available at eduai.hacker.lt), demonstrating how efficient a Hopfield network can be at removing interference from the input signal and at recovering complete data in cases where only some excerpts were provided.


Fig. 11.13. Examples of a Hopfield network working as an associative memory. Description in the text.


Each of the three rows of this figure shows, on the left, the picture that was provided as input to the network; the middle column shows the transition state as the network searches its memory for the proper pattern; and on the right the end result, that is, the result of “recalling” the proper pattern. Of course, before the images presented here of the network “remembering” the relevant information (images) could be obtained, that information first had to be fixed in the learning process. During learning, the network was shown exemplary images of a spider, of two bottles and of a dog's “face”, and the network memorized these patterns and prepared to reproduce them. However, once the network had been trained, it could do really impressive things. When shown a very “noisy” picture of a spider (the top row of Figure 11.13), it reconstructed the image of the spider without noise. When shown a picture of one bottle, it remembered that during the learning process a picture of two bottles had been presented. Finally, it was sufficient to show the network only the dog's ear for it to reconstitute the whole picture.


The pictures shown in Figure 11.13 are nice, but reproducing them is not associated with any important practical task. However, autoassociative memory can be used for many useful and practical purposes. The network, for example, can reproduce the full silhouette of an incoming aircraft at a moment when the camera has recorded only an incomplete picture of the machine, because clouds obscured its full outline. This plays an important role in defense systems, where the most crucial issue is the fast resolution of the “friend or enemy” question. A network employed as an autoassociative memory can also fill out an incomplete query addressed to some type of information system, such as a database. As is commonly known, such a database can provide a great deal of useful information, on the condition, however, that the question is asked correctly. If the question is posed by an untrained or careless user and does not correspond to the expected templates, the database does not know how to answer it. Autoassociative memory can be helpful in “working things out” with the database management program, so that the searched-for information is found correctly even in the case of an imprecise (but clear) user query. The network itself will be able to figure out all the missing details that the absent-minded (or untrained) user did not enter, although he should have. An autoassociative Hopfield network then mediates between the user and the database, like a wise librarian in a school library who is able to understand which book a student is asking for, when the student wishes to borrow a set text and has forgotten the author, the title and all other details, but knows what the book is about.


An autoassociative network may be used in numerous other ways; in particular, it can remove noise and distortion from various signals, even when the level of “noisiness” of the input makes the practical use of any other method of signal filtration impossible. The extraordinary effectiveness of Hopfield networks in such cases comes from the fact that the network in fact reproduces the form of the signal (usually a picture) from its storage resources, and the distorted input image provided is only a starting point, “a hint on the trail of” the proper image among all the images that may come into question.


But I think that's enough theory, and it is time to move on to practical exercises using the program Example12b. For the sake of clarity and readability, the program will show you the Hopfield network's performance using pictures, but remember that this is by no means the only possibility: these networks can memorize and reproduce any other information, on the condition that we arrange the way that information is mapped and represented in the network.

[Figure 11.14 is an annotated 12 x 8 pixel image; callouts point to the output signals of neurons No. 1, 2, 8, 9, 16 and 96, to a pixel where the given signal has the value +1 (black) and to a pixel where it has the value -1 (white).]

Fig. 11.14. A sample image presenting the distribution of the output signals of the network's neurons.


So let us first explain the relationship between the modeled Hopfield network and the pictures presented by the program. We connect each neuron of the network with a single point (pixel) of an image. If the neuron's output signal is +1, the corresponding pixel is black; if the output signal is -1, the corresponding pixel is white. We do not allow for possibilities other than +1 and -1, because the neurons the Hopfield network is built from are highly nonlinear and can settle only in these two states (+1 or -1); they cannot take any other values. The considered network contains 96 neurons, which, purely for the presentation of results, I arranged in the form of a matrix with 12 rows and 8 items in each row. Therefore, each specific network state (understood as the set of output signals produced by the network) can be seen as a monochrome image with dimensions of 12 x 8 pixels, such as the one shown in Figure 11.14.
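To make this mapping concrete, here is a minimal sketch (in Python; my own illustration, not code from Example12b) that prints a 96-element bipolar state vector as a 12 x 8 image:

```python
import numpy as np

ROWS, COLS = 12, 8   # the 96 neurons arranged as in Figure 11.14

def render_state(state):
    """Print a bipolar (+1/-1) state vector as a 12 x 8 image:
    +1 -> black pixel ('#'), -1 -> white pixel ('.')."""
    for row in state.reshape(ROWS, COLS):
        print(''.join('#' if s == 1 else '.' for s in row))

# Example: a random bipolar state of the 96-neuron network
state = np.random.default_rng().choice([-1, 1], size=ROWS * COLS)
render_state(state)
```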


The considered pictures could be chosen entirely arbitrarily, but for the convenience of creating a set of tasks for the network, I decided that they will be images of letters (because it is easy to type them on the keyboard) or completely abstract pictures produced by the program itself according to certain mathematical criteria (I'll describe this more extensively further in the text).


The program will remember a certain number of these images (provided by you or generated automatically) and will then list them itself (without your participation!) as patterns for later reproduction. In Figure 11.15 you can see what such a ready-to-remember set of patterns may look like. Of course, a set of patterns memorized by the network may include any other letters or numbers that you can generate with the same keyboard, so the set given in Figure 11.15 should be considered one of many possible examples; a sketch of how such a letter pattern can be encoded is given below the figure.


Fig. 11.15. A set of patterns prepared for memorization in the Hopfield network.
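For illustration, here is how a letter pattern like those in Figure 11.15 can be encoded as a bipolar vector. The sketch is my own (in Python), and the 12 x 8 bitmap of the letter 'C' below is hypothetical, not one of the program's built-in patterns:

```python
import numpy as np

# A hypothetical 12 x 8 bitmap ('#' = +1 = black, '.' = -1 = white)
LETTER_C = """\
.######.
##....##
##......
##......
##......
##......
##......
##......
##......
##......
##....##
.######."""

def bitmap_to_pattern(bitmap):
    """Turn an ASCII bitmap into the bipolar vector the network stores."""
    return np.array([1 if ch == '#' else -1
                     for row in bitmap.splitlines() for ch in row])

pattern = bitmap_to_pattern(LETTER_C)   # shape (96,), values in {-1, +1}
```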


In a moment I will explain how you can introduce more patterns into the program, but for now I would like you to focus on what this program does and what the consequences are; the time for technical details will come shortly.


After the introduction (or generation) of all the patterns, program Example12b sets the parameters (weight factors) of all the neurons in the network so that these images become points of equilibrium (attractors) for the network. The rule behind that process and how the setting of the weights is done you can read about in my book titled “Neural Networks”. I cannot describe it here, because the theory of the Hopfield network learning process is quite difficult and full of mathematics, and I promised not to use such difficult mathematical considerations in this book. You do not need to know the details: after introducing new patterns into the memory, you simply click the Hebbian learning button shown in Figure 11.16. This will automatically launch the learning process, after which the network will be able to recall the patterns stored in it. As is clear from the inscription on the button, the learning of this network is realized by Hebb's method, which you are already familiar with, but this time we won't be looking at the details of the learning process. For now, just remember that during this learning process the values of the weights in the whole network are produced so that the network can reach an equilibrium state in which the images appearing at its outputs match a stored pattern.




Fig. 11.16. Patterns to be memorized by the network are entered using Add pattern.
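For those curious about what happens when the Hebbian learning button is pressed, the classical rule can be sketched in a few lines. This is the standard outer-product construction and purely my own illustration in Python, not the actual code of Example12b:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian (outer-product) rule for a Hopfield network: each stored
    bipolar pattern x contributes x x^T to the weight matrix; the diagonal
    is zeroed because neurons have no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    return W

# patterns: one bipolar image per row, e.g. an array of shape (4, 96)
```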


After the learning process, the network is ready for a “test”, which you can perform yourself. For this purpose, first point to (click with the mouse) the pattern whose degree of memorization you want to check. The patterns and their numbers are visible in the Input pattern(s) for teaching or recalling window, so you can choose the one the network is to recall. Selecting (i.e. clicking with the mouse) any of these patterns will make its enlarged image appear in the window titled Enlarged selected input, and the chosen measure of its similarity to all the other patterns will be visible directly under the miniature images of all the patterns in the Input pattern(s) for teaching or recalling window (see Figure 11.16). The level of similarity between images can be measured in two ways, which is why beneath the Input pattern(s) for teaching or recalling window there are two boxes to choose from, described respectively as DP, the dot product, and H, the Hamming distance. What exactly this means I will describe a little further on; for now just remember that DP measures the level of similarity between two pictures, so a high value of DP at some pattern indicates that this particular pattern can easily be confused with the one you have currently selected. The Hamming distance, by contrast, is (as its name suggests) a measure of the difference between two images. So if some pattern produces a high value of the H measure, that pattern is safely distant from the one you have currently selected, while a low value of the H measure indicates that it could be confused with the image of your current choice.
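Both measures are easy to state for bipolar images; the sketch below is my own illustration (the function names are assumptions, not the program's API). Note that for images of n pixels the two measures are tied together: DP = n - 2H, so a high dot product means exactly a low Hamming distance.

```python
import numpy as np

def dp(a, b):
    """DP, the dot product: matching pixels count +1 and differing pixels
    count -1, so a high value means the two bipolar images are similar."""
    return int(np.dot(a, b))

def hamming(a, b):
    """H, the Hamming distance: the number of pixels on which the two
    images differ. For n-pixel bipolar images, dp(a, b) == n - 2 * hamming(a, b)."""
    return int(np.sum(a != b))
```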


Once you have selected the image whose memorization by the network you want to examine, you can “torture” its pattern, because it is not difficult to recall a picture on the basis of its ideal version; it is quite another thing when the picture is randomly distorted! That is when the network must demonstrate its associative skills, and that is what it's all about.


Simply type, in the box on the right marked with the symbol %, the percentage of points the program is to change in the pattern before it is shown to the network, which will then try to recall it. In the window marked % you can specify any number from 0 to 99, and that will be the percentage of the pattern's points that the testing program will (randomly) change before it starts the test. To actually apply this change to the pattern, you have to press the Noise button (i.e. noise or distortion). To make things even more difficult, you can additionally produce a reversed (negative) image by pressing the Inverse button.
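What the Noise and Inverse buttons do can be mimicked as follows (a sketch under my own assumptions; the function names are mine):

```python
import numpy as np

def noise(pattern, percent, rng=None):
    """Randomly flip the given percentage of pixels (the Noise button):
    a flipped pixel changes sign in the bipolar image."""
    rng = rng or np.random.default_rng()
    noisy = pattern.copy()
    n_flip = round(len(pattern) * percent / 100)
    idx = rng.choice(len(pattern), size=n_flip, replace=False)
    noisy[idx] *= -1
    return noisy

def inverse(pattern):
    """Reverse the image (the Inverse button): negate every pixel."""
    return -pattern
```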


The selected and intentionally distorted pattern appears in the Enlarged selected input window. Look at it: would you be able to guess which pattern this picture was made from? Additionally, check whether your modified pattern has not, after all these changes, become more similar to one of the “competing” images. In assessing the situation you will find useful the measures of its “affinity” with the different patterns, presented as numbers located below the thumbnail images of all the patterns in the Input pattern(s) for teaching or recalling window.


A word of advice: do not set too large a deformation of the image at the beginning, because it will be difficult to recognize the original shape in the massacred pattern, not just for the network, but also for you. From my experience I can say that the network does well with distortions not exceeding 10%. Good results are also achieved, seemingly paradoxically, when a very large number of points is changed. This is because when a large number of points is changed, the image retains its shape, but the colors of the points are exchanged: white for black and vice versa. For example, if you choose 99 as the percentage of points to change, the image changes into its perfectly accurate negative. Meanwhile, a negative is actually the same information, and it can easily be traced in the modeled network. When the number of changed points is a bit less than 99%, the resulting image is just as recognizable for the network: it is a negative with minor changes, and the network also “remembers” the familiar image without any difficulty. However, very poor results are achieved when attempting to reproduce the original pattern with distortions ranging from 30% to 70%: the network recalls something, but usually the pattern is reproduced only after a long time (the network needs many iterations before the image is fully recovered), and the reconstruction of the pattern is deficient (a lot of distortion remains).

So at the beginning of the exam you create an image of a slightly distorted pattern (for example, look at Fig. 11.17, on the left) or a strongly distorted one (Fig. 11.17, on the right), which becomes the starting point of the process of recalling the pattern by the network.




Fig. 11.17. Patterns from which the process of recalling messages stored in the network begins: a slightly distorted pattern of the letter “B” (on the left) and a strongly distorted pattern of the letter “B” (on the right).


The process of recalling the memorized pattern consists in the network's output signals being fed back to its inputs, where they are transformed (by the neurons) into new outputs, which in turn are directed through the feedback connections to the network's inputs again, and so on. This process is automatically stopped when, in one of the successive iterations, no change in the output signals occurs any longer (which means the network has “recalled” the image). Usually this recalled picture is the image of the perfect pattern whose distorted version you showed to the network, but unfortunately it does not always go this way.
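The recall loop itself can be sketched as below. I assume a synchronous update (all neurons recomputed at once), which is one common choice; Example12b may update the neurons in a different order, and synchronous updates can in principle end in a two-state cycle rather than a fixed point:

```python
import numpy as np

def recall(W, state, max_iters=100):
    """Feed the outputs back to the inputs until nothing changes any
    longer, i.e. until the network reaches a fixed point (an attractor)."""
    state = state.copy()
    for _ in range(max_iters):
        new_state = np.where(W @ state >= 0, 1, -1)   # threshold neurons
        if np.array_equal(new_state, state):          # no change: stop
            return new_state
        state = new_state
    return state   # did not settle within max_iters (possibly a cycle)
```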


If the image from which the process of “recalling” starts differs only slightly from the pattern, recollection may occur almost immediately. This is illustrated in Figure 11.18, which shows how the network recalled the correct shape of the letter B, starting from the slightly distorted pattern shown in Figure 11.17 on the left. The subsequent images in Figure 11.18 (and in all further figures in this chapter) show sequentially (in order from left to right) the sets of output signals from the tested network, calculated and read out from the model. Figure 11.18 shows that the output of the network reached the desired state (the ideal reproduction of the pattern) after only one iteration.




Fig. 11.18. Fast reproduction of a distorted pattern in a Hopfield network working as an associative memory.


Slightly more complex processes took place in the tested network while it reproduced the highly distorted pattern shown in Figure 11.17 on the right. In this case the network needed two iterations to achieve success (Figure 11.19).




Fig. 11.19. Reproduction of a highly distorted pattern in a Hopfield network.


One of the interesting characteristics of the Hopfield network, which you will discover when you do some experiments with the program yourself, is its ability to store both the original signals of the successive patterns and the signals that are the negatives of the stored patterns. It can be proved mathematically that this always happens: each stage of the learning process that leads to memorizing a pattern automatically creates an attractor corresponding to the negative of the same pattern. Therefore, the pattern recovery process can end either with finding the original signal or with finding its negative. Since the negative contains exactly the same information as the original signal (the only difference being that wherever the original signal has the value +1, the negative has -1, and vice versa), the network finding the negative of a distorted pattern is also considered a success. For example, in Figure 11.20 you can see the process of reproducing a heavily distorted pattern of the letter B, which ended with finding the negative of this letter.




Fig. 11.20. Associative memory reproducing the negative of a remembered pattern.
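The symmetry behind this property is simple: the network's update rule commutes with a global sign flip, so if x is a fixed point, -x is one too. A self-contained check (my own illustration, using the same conventions as the earlier sketches):

```python
import numpy as np

# Store one bipolar pattern x with the Hebbian rule, then verify that
# both x and its negative -x are fixed points of the update sign(W @ s):
# sign(W @ (-x)) = -sign(W @ x) = -x.
x = np.random.default_rng(0).choice([-1, 1], size=96)
W = np.outer(x, x).astype(float)
np.fill_diagonal(W, 0.0)

step = lambda s: np.where(W @ s >= 0, 1, -1)
assert np.array_equal(step(x), x)      # the stored pattern is an attractor
assert np.array_equal(step(-x), -x)    # ...and so is its negative
```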

1 Autoassociation denotes a working mode of the network in which a given message is associated with itself. Thanks to autoassociation, presenting even small fragments of stored information makes that information (for example, an image) be reproduced in the memory in its entirety and with all its details. An alternative to autoassociation is heteroassociation, which consists in one message (for example, a photograph of Grandma) bringing back memories of other information (for example, the taste of the jam Grandma used to make). A Hopfield network can work either as an autoassociative memory or as a heteroassociative one, but the former is simpler, and therefore we'll focus on it now.

