This result extends to perceptrons:
The dotted lines can thus be seen as the threshold in a perceptron: if the weighted sum, S, falls below it, the perceptron outputs one value; if S falls above it, the alternative output is produced. In fact, no matter how the weights are chosen, the threshold will still be a line on the graph. Therefore, functions that are not linearly separable cannot be represented by perceptrons.
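This decision rule can be sketched in a few lines of Python. The weights and threshold below are illustrative assumptions, not values from the text; they happen to make the perceptron compute Boolean AND, which is linearly separable.

```python
# A minimal sketch of the perceptron decision rule described above.
# The weights and threshold here are illustrative assumptions.

def perceptron(weights, threshold, inputs):
    """Return 1 if the weighted sum S exceeds the threshold, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else 0

# With weights (1, 1) and threshold 1.5 this computes Boolean AND:
for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "->", perceptron((1, 1), 1.5, (x1, x2)))
```

Geometrically, the threshold corresponds to the line x1 + x2 = 1.5 separating the input (1, 1) from the other three inputs.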
Notice that this result extends to functions over any number of variables that take inputs over a range of values and produce a Boolean output, and which could hence, in principle, be learned by a perceptron. For instance, in the following two graphs, the function takes two inputs, each over a range of values, and produces a Boolean output. The concept on the left can be learned by a perceptron, whereas the concept on the right cannot:
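The contrast between learnable and unlearnable concepts can be demonstrated with the standard perceptron learning rule. The sketch below is one common formulation (a bias weight and a threshold at zero) applied to two illustrative Boolean datasets: AND is linearly separable, while XOR is not, so training converges on the first and never on the second.

```python
# A sketch of the perceptron learning rule, assuming a formulation with
# a bias weight and a threshold at zero. The datasets are illustrative:
# AND is linearly separable, XOR is not.

def train_perceptron(data, epochs=100, lr=0.1):
    w = [0.0, 0.0, 0.0]  # bias weight plus one weight per input
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in data:
            s = w[0] + w[1] * x1 + w[2] * x2  # weighted sum S
            out = 1 if s > 0 else 0
            if out != target:
                errors += 1
                delta = lr * (target - out)
                w[0] += delta
                w[1] += delta * x1
                w[2] += delta * x2
        if errors == 0:
            return w, True   # converged: every point classified correctly
    return w, False          # no separating line found within the budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND converged:", train_perceptron(AND)[1])  # True
print("XOR converged:", train_perceptron(XOR)[1])  # False
```

By the perceptron convergence theorem, training is guaranteed to terminate for any linearly separable dataset; for XOR no line separates the two output classes, so the weights cycle indefinitely.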
As an exercise, draw in the separating (threshold) line in the left-hand plot.
Regrettably, the disclosure in Minsky and Papert's book that perceptrons cannot learn even simple functions was taken the wrong way: people believed it represented a fundamental flaw in the use of ANNs for learning tasks. This led to a winter of ANN research within AI that lasted over a decade. In reality, perceptrons were being studied in order to gain insights into more complicated architectures with hidden layers, which do not share the perceptron's limitations; no one had suggested that perceptrons themselves would be required to solve real-world learning problems. Fortunately, people continued to study ANNs within other sciences, notably neuroscience, and this eventually revived interest in ANN research.