A word on Neural Network learning


Some time ago, while researching something unrelated, I came across this funny but important note about neural networks and thought it was worth sharing.

The choice of the dimensionality and domain of the input set is crucial to the success of any connectionist model. A common example of a poor choice of input set and test data is the Pentagon’s foray into the field of object recognition. This story is probably apocryphal and many different versions exist on-line, but the story describes a true difficulty with neural nets.

As the story goes, a network was set up with the input being the pixels in a picture, and the output was a single bit, yes or no, for the existence of an enemy tank hidden somewhere in the picture. When the training was complete, the network performed beautifully, but when applied to new data, it failed miserably. The problem was that in the training data, all of the pictures that had tanks in them were taken on cloudy days, and all of the pictures without tanks were taken on sunny days. The neural net was identifying the existence or non-existence of sunshine, not tanks.

– David Gerhard, Pitch Extraction and Fundamental Frequency: History and Current Techniques, Technical Report TR-CS 2003-06, 2003.
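The failure the story describes, a model latching onto a confound in the training data rather than the intended signal, is easy to reproduce in a toy experiment. The sketch below is my own illustration (not from Gerhard's report): it builds synthetic 8×8 "images" where a tank is a bright patch and cloudiness lowers overall brightness, trains a simple linear classifier as a stand-in for the network, and shows near-perfect accuracy on the biased training set collapsing once the brightness/label correlation is reversed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, tank, cloudy):
    """Hypothetical 8x8 images: 'cloudy' lowers overall brightness
    (the confound); a 'tank' is a bright 2x2 patch (the real signal)."""
    base = 0.3 if cloudy else 0.7                     # sky brightness
    imgs = base + 0.05 * rng.standard_normal((n, 8, 8))
    if tank:
        imgs[:, 3:5, 3:5] += 0.2                      # the tank patch
    return imgs.reshape(n, -1)

# Training set reproduces the story's bias:
# tanks only on cloudy days, no tanks only on sunny days.
X_train = np.vstack([make_images(100, tank=True,  cloudy=True),
                     make_images(100, tank=False, cloudy=False)])
y_train = np.array([1] * 100 + [0] * 100)

# New data breaks the correlation:
# tanks on sunny days, no tanks on cloudy days.
X_test = np.vstack([make_images(100, tank=True,  cloudy=False),
                    make_images(100, tank=False, cloudy=True)])
y_test = np.array([1] * 100 + [0] * 100)

# A least-squares linear classifier stands in for the neural net.
A = np.c_[X_train, np.ones(len(X_train))]             # add a bias column
w, *_ = np.linalg.lstsq(A, 2 * y_train - 1, rcond=None)

def predict(X):
    return (np.c_[X, np.ones(len(X))] @ w > 0).astype(int)

train_acc = (predict(X_train) == y_train).mean()
test_acc = (predict(X_test) == y_test).mean()
print(f"train accuracy: {train_acc:.2f}, new-data accuracy: {test_acc:.2f}")
# near-perfect on the training set, poor on the new data:
# the classifier learned overall brightness, not the tank patch
```

Because brightness separates the training classes by a far wider margin than the small tank patch does, the least-squares fit weights every pixel toward "darker means tank", exactly the shortcut the story attributes to the network.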


