Units of artificial neural networks:
The input units simply output the value that was presented to them by the example being propagated. Every other unit in a network normally has the same internal calculation function, which takes the weighted sum of its inputs and calculates an output from it. Because there are different possibilities for this unit function, the choice dictates to some extent how learning over networks of that type is performed. Firstly, there is the simple linear unit, which performs no further calculation: it just outputs the weighted sum that was input to it.
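As a minimal sketch of the above, the following shows a weighted sum and a linear unit that outputs it unchanged (the function names are illustrative, not from any particular library):

```python
def weighted_sum(inputs, weights):
    """Compute the weighted sum of a unit's inputs: sum of w_i * x_i."""
    return sum(w * x for w, x in zip(weights, inputs))

def linear_unit(inputs, weights):
    # A linear unit performs no further calculation:
    # its output is simply the weighted sum of its inputs.
    return weighted_sum(inputs, weights)

# Three inputs with illustrative weights:
# 0.5*1.0 + (-1.0)*2.0 + 0.25*3.0 = -0.75
print(linear_unit([1.0, 2.0, 3.0], [0.5, -1.0, 0.25]))  # -0.75
```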
Secondly, there are unit functions called threshold functions: they are set up to produce low values until the weighted sum reaches a particular threshold, and to produce high values after that threshold. The simplest type of threshold function produces a 1 if the weighted sum of the inputs is over a threshold value T, and a -1 otherwise. We call such functions step functions, because when drawn as a graph they look like a step. Another type of threshold function, known as the sigmoid function, has similarities to the step function but advantages over it, notably that it is smooth and differentiable.
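The two threshold functions described above can be sketched as follows; the step function uses a threshold T as described, and the sigmoid shown is the standard logistic form 1 / (1 + e^(-x)) (the function names are illustrative):

```python
import math

def step(weighted_sum, threshold):
    """Step function: outputs 1 if the weighted sum exceeds the
    threshold T, and -1 otherwise."""
    return 1 if weighted_sum > threshold else -1

def sigmoid(weighted_sum):
    """Sigmoid (logistic) function: a smooth, differentiable
    approximation to a step, rising from 0 toward 1."""
    return 1.0 / (1.0 + math.exp(-weighted_sum))

print(step(0.8, 0.5))   # 1  (above the threshold)
print(step(0.2, 0.5))   # -1 (below the threshold)
print(sigmoid(0.0))     # 0.5 (the midpoint of the sigmoid)
```

Unlike the step function, the sigmoid's output changes gradually around the threshold, which is what makes gradient-based learning possible.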