The term "perceptron" was introduced in the 1950s to designate a simple mechanism to achieve "perception."
See video: Definition of perceptron.
It is basically a Feedforward neural network with zero hidden layers and the Heaviside step function as its activation function. It can be seen as a simpler version of Logistic regression, with a hard threshold in place of the sigmoid.
https://www.wikiwand.com/en/Perceptron
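A minimal sketch of this definition, assuming Python with NumPy (the function names, learning rate, and epoch cap are illustrative choices, not a reference implementation):

```python
import numpy as np

def heaviside(z):
    """Heaviside step activation: 1 where z >= 0, else 0."""
    return (z >= 0).astype(int)

def predict(w, b, X):
    """Single-layer perceptron: a linear map followed by a hard threshold."""
    return heaviside(X @ w + b)

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Classic perceptron learning rule: nudge the weights on every
    misclassified point; stop early once an epoch makes no errors."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            update = lr * (yi - heaviside(xi @ w + b))
            if update != 0:
                w += update * xi
                b += update
                errors += 1
        if errors == 0:  # converged (possible only if the data is separable)
            break
    return w, b
```

Swapping `heaviside` for a sigmoid and the update rule for gradient descent on the cross-entropy loss turns this into Logistic regression, which is the sense in which the perceptron is the "simpler version."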
Like most older models, the original perceptrons had threshold activation functions (see Hopfield network).
Hopes of building "seeing machines" faded when Minsky and Papert published their book Perceptrons [1969], in which they rigorously demonstrated that perceptrons are quite limited in their ability to extract global features from local information: a single-layer perceptron can only implement linearly separable classifications (it cannot compute XOR, for example).
Simple storage problem: is a given training set linearly separable?
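Continuing the sketch above, one hedged way to probe this question is simply to run the learning rule: the perceptron convergence theorem guarantees it finds a separating hyperplane whenever one exists, while failure after many epochs suggests (though does not prove) non-separability. AND versus XOR makes the contrast concrete:

```python
# Reuses heaviside/predict/train_perceptron from the sketch above.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable
y_xor = np.array([0, 1, 1, 0])  # not linearly separable

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y, epochs=1000)
    acc = (predict(w, b, X) == y).mean()
    print(f"{name}: training accuracy = {acc:.2f}")
# Expected: AND reaches 1.00; XOR never does, since no single line
# separates {(0,1), (1,0)} from {(0,0), (1,1)}.
```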
However, the reaction was too drastic: adding more layers to the Feedforward neural network (giving so-called "multilayer perceptrons") avoids the issues pointed out by Minsky and Papert.
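A minimal sketch of why one hidden layer is enough for XOR, with hand-picked weights rather than trained ones (an illustrative assumption): one hidden step unit computes OR, another computes AND, and the output unit fires for "OR and not AND", which is exactly XOR.

```python
# Reuses heaviside and X from the sketches above.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])        # each column feeds one hidden unit
b1 = np.array([-0.5, -1.5])        # thresholds: unit 1 = OR, unit 2 = AND
w2 = np.array([1.0, -1.0])         # output weights: +OR, -AND
b2 = -0.5

def mlp_xor(X):
    h = heaviside(X @ W1 + b1)     # hidden layer of step units
    return heaviside(h @ w2 + b2)  # "OR and not AND" = XOR

print(mlp_xor(X))  # -> [0 1 1 0]
```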
Multilayer perceptrons work in a manner that parallels the prevailing neurophysiological view: on arrival at the cortex, sensory information is subjected to a hierarchy of feature extractions. A further comparison with the Cortex is made in the Corticonics book (pages 200-203).