Formally, the number of "parameters" grows with the training set; here "number of parameters" roughly means the amount of information in the learned function. For example, in nearest-neighbour models the parameters are the positions of the neighbourhoods and the categories to which those regions map, and the number of these parameters clearly grows as more data points are added. A more intuitive way of looking at it is that the algorithm doesn't fit parameters at all; instead it uses some other method to generate the function it is trying to learn. See Nonparametric statistics. An example is nearest-neighbour classification (note that the model has parameters, but they are not fitting parameters: they are chosen before running the algorithm and then held fixed).
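The point about model size growing with data can be sketched with a minimal 1-nearest-neighbour classifier (an illustrative sketch, not a reference implementation; the function names are made up here): "fitting" is just memorising the training set, so the stored "parameters" are exactly the data points.

```python
# Minimal 1-nearest-neighbour classifier: the "model" is just the stored
# training set, so its size grows with the amount of training data.
from math import dist

def fit(points, labels):
    # "Fitting" is simply memorising the data; no fixed-size parameter
    # vector is estimated.
    return list(zip(points, labels))

def predict(model, query):
    # Return the label of the stored point closest to the query.
    _, label = min(model, key=lambda pl: dist(pl[0], query))
    return label

model = fit([(0.0, 0.0), (1.0, 1.0)], ["a", "b"])
print(predict(model, (0.9, 1.2)))  # nearest stored point is (1.0, 1.0)
print(len(model))                  # model size equals the training-set size
```

Adding more training points makes `model` larger, which is exactly the sense in which the number of "parameters" grows with the data.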
https://en.wikipedia.org/wiki/Nonparametric_statistics
Nonparametric statistics are not based on parameterized families of probability distributions.
Nonparametric models
They are both based on looking at the local neighbours of the point for which we are going to make the prediction.
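The local-neighbour idea can be sketched as a k-nearest-neighbour prediction (a sketch under the assumption of Euclidean distance and majority voting; `knn_predict` is a made-up name): the prediction for a query point comes only from the training points nearest to it.

```python
# k-nearest-neighbour prediction: vote over the labels of the k training
# points closest to the query point.
from math import dist
from collections import Counter

def knn_predict(points, labels, query, k=3):
    # Sort training points by distance to the query, keep the k closest,
    # and return the most common label among them.
    nearest = sorted(zip(points, labels), key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

points = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.1, 0.9)]
labels = ["a", "a", "b", "b"]
print(knn_predict(points, labels, (0.1, 0.0), k=3))  # two of the three neighbours are "a"
```

Note that `k` is fixed before the algorithm runs; only the stored neighbours, not `k`, grow with the data.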