Learning Weights in Perceptrons:
We will look at the method for learning weights in multi-layer networks next lecture, so the following description of learning in perceptrons will help to clarify what is going on in the multi-layer case. Since we are in a machine learning setting, we can expect the task to be to learn a target function which categorises examples, given (at least) a set of training examples supplied with their correct categorisations. A little thought is required to choose the right way of representing the examples as input to a set of input units, but given the simple nature of a perceptron, there isn't much choice for the rest of the architecture.
To produce a perceptron able to perform our categorisation task, we need to use the examples to train the weights between the input units and the output unit, and also to train the threshold. To simplify the routine, we think of the threshold as a special weight coming from a special input unit which always outputs 1. That is, we think of our perceptron as follows:
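The trick can be written out explicitly. Writing the inputs as x_{1}, ..., x_{n}, the weights as w_{1}, ..., w_{n} and the threshold as θ (the threshold symbol is not named in the text above, so θ here is a labelling choice), we have:

```latex
\sum_{i=1}^{n} w_i x_i > \theta
\;\Longleftrightarrow\;
w_0 x_0 + \sum_{i=1}^{n} w_i x_i > 0,
\qquad \text{where } x_0 = 1 \text{ and } w_0 = -\theta .
```

So comparing the weighted sum against a threshold is the same as comparing the augmented weighted sum against zero.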
The output from the perceptron is +1 if the weighted sum from all the input units (including the special one) is greater than zero, and -1 otherwise. Looking at it this way, the special weight w_{0} simply encodes the threshold value (it equals minus the threshold). Thinking of the network like this means we can train w_{0} in the same way as we train all the other weights.
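The output rule just described can be sketched in a few lines. This is a minimal illustration, not a full training routine; the names `perceptron_output`, `weights` and `inputs` are illustrative, and the threshold is assumed to be folded in as w_{0} on a constant input of 1, as above.

```python
def perceptron_output(weights, inputs):
    """Return +1 if the weighted sum (special input included) exceeds zero, else -1.

    weights[0] plays the role of the threshold weight w_0; the matching
    input x_0 is the special input unit which always outputs 1.
    """
    augmented = [1.0] + list(inputs)  # prepend the special always-on input
    weighted_sum = sum(w * x for w, x in zip(weights, augmented))
    return 1 if weighted_sum > 0 else -1

# With w_0 = -0.5 the unit fires when the other inputs sum past 0.5;
# with w_0 = -1.5 they must sum past 1.5.
print(perceptron_output([-0.5, 1.0, 1.0], [1, 0]))  # sum = 0.5 > 0, so +1
print(perceptron_output([-1.5, 1.0, 1.0], [1, 0]))  # sum = -0.5, so -1
```

Because the threshold is just `weights[0]`, any update rule applied uniformly over `weights` trains the threshold along with everything else.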