Neural networks as perceptrons:
In the general case, however, ANNs look like this:
Here w, x, y and z represent real-valued weights, so every edge in this graph has a weight associated with it, although it would be difficult to draw them all in. Note that more complicated ANNs are certainly possible. In particular, many ANNs have multiple hidden layers, with the output from one hidden layer forming the input to another. ANNs with no hidden layer at all, where the input units are connected directly to the output units, are also possible.
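To make the structure concrete, here is a minimal sketch of a forward pass through a network with one hidden layer. The layer sizes, the tanh activation, and the random weights are all illustrative assumptions, not taken from the text; the point is simply that each layer's output is a weighted combination of the previous layer's values.

```python
import numpy as np

def forward(x, W_hidden, W_output):
    """Propagate input x through one hidden layer to the output layer.
    Each row of a weight matrix holds the edge weights feeding one unit
    of the next layer."""
    h = np.tanh(W_hidden @ x)      # hidden-layer activations
    return np.tanh(W_output @ h)   # output-layer activations

# Hypothetical sizes: 4 input units, 3 hidden units, 2 output units.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))  # one real-valued weight per edge
W_output = rng.normal(size=(2, 3))

x = np.array([0.5, -0.2, 0.1, 0.9])
y = forward(x, W_hidden, W_output)
print(y.shape)
```

A network with multiple hidden layers would simply repeat the hidden-layer step, feeding each layer's activations into the next weight matrix.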
In our vehicle example, the images will presumably all be normalised to have the same number of pixels. There might then be an input unit for each of the red, green and blue intensities of each pixel. Alternatively, greyscale images could be used, in which case there need only be one input unit per pixel, taking in the brightness of that pixel. The hidden layer is likely to contain far fewer units, probably between 3 and 10, than the number of input units. The output layer will contain three units, one for each of the possible categories: car, bus or tank. When the pixel data for an image is supplied as the initial values of the input units, that information propagates through the network and each of the three output units produces a real value. The output unit that produces the highest value is taken as the categorisation of the input image.
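The categorisation step described above can be sketched as follows. The image size, hidden-layer width, activation function and random weights are hypothetical choices for illustration; what the sketch shows is feeding per-pixel brightnesses into the input units and taking the highest-valued output unit as the category.

```python
import numpy as np

CATEGORIES = ["car", "bus", "tank"]  # one output unit per category

def categorise(pixels, W_hidden, W_output):
    """Forward-propagate greyscale pixel brightnesses through the
    network and return the category whose output unit produced the
    highest real value."""
    h = np.tanh(W_hidden @ pixels)   # hidden layer: far fewer units
    out = np.tanh(W_output @ h)      # three real-valued outputs
    return CATEGORIES[int(np.argmax(out))]

# Hypothetical 4x4 greyscale image: 16 input units, 5 hidden, 3 outputs.
rng = np.random.default_rng(1)
W_hidden = rng.normal(size=(5, 16))
W_output = rng.normal(size=(3, 5))
pixels = rng.uniform(size=16)        # one brightness per pixel, in [0, 1]

label = categorise(pixels, W_hidden, W_output)
print(label)
```

With RGB images the only change would be three input units per pixel (48 inputs for the 4x4 image), one for each colour intensity.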