A type of Unsupervised learning where we describe the data using fewer features (called latent factors) than the data was originally described with.
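As a concrete instance, a minimal PCA sketch: the k projection coefficients become the latent factors that replace the original d features. Plain numpy; the function name and shapes are illustrative, not from any particular library.

```python
import numpy as np

def pca(X, k):
    """Describe n points in d dimensions with k latent factors by
    projecting onto the top-k principal directions."""
    Xc = X - X.mean(axis=0)                            # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions in Vt
    return Xc @ Vt[:k].T                               # (n, k) latent representation

# e.g. Z = pca(np.random.randn(100, 10), k=2)
```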
Graph embedding and extensions: A general framework for dimensionality reduction. Basically, minimize $\sum_{i \ne j} \lVert y_i - y_j \rVert^2 W_{ij}$ subject to a scale constraint $y^\top B y = c$, where $W$ encodes pairwise similarities; this reduces to a generalized eigenvalue problem on the graph Laplacian $L = D - W$.
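A minimal sketch of that objective solved as a generalized eigenproblem. Choosing a k-NN similarity graph for $W$ and $B = D$ recovers Laplacian eigenmaps (linked below); the 0/1 k-NN weights and the `n_neighbors` default are assumptions here, with heat-kernel weights being the other common choice.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def graph_embedding(X, dim=2, n_neighbors=10):
    """Minimize sum_ij ||y_i - y_j||^2 W_ij subject to Y^T D Y = I,
    i.e. solve the generalized eigenproblem L y = lambda D y, L = D - W."""
    # symmetric 0/1 k-NN adjacency as the similarity matrix W (an assumption)
    W = kneighbors_graph(X, n_neighbors, mode='connectivity').toarray()
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)        # generalized eigenvalues, ascending
    return vecs[:, 1:dim + 1]      # drop the trivial constant eigenvector
```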
See also Feature learning, which is very similar.
https://en.wikipedia.org/wiki/Multidimensional_scaling
https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Laplacian_eigenmaps
Non-parametric models suit the scenario where all data points in the source space are known up front and the embedding is computed for that fixed data set, with no need to extend to unseen points during learning. This is the salient characteristic distinguishing parametric from non-parametric subspace learning. A typical non-parametric framework is multi-dimensional scaling (MDS) (Cox and Cox 2000), a family of algorithms that embed a set of high-dimensional data points into a low-dimensional subspace while preserving the pairwise distances from the high-dimensional space. Sammon mapping (Sammon 1969) is an effective non-linear MDS algorithm.
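A minimal sketch of classical (metric) MDS, assuming a precomputed pairwise distance matrix `D` as input. Sammon mapping instead minimizes a stress that up-weights errors on small distances, so this is the linear starting point of the family rather than Sammon's algorithm itself.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: double-center the squared distance matrix to recover
    a Gram matrix, then factor it with its top eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]        # keep the top `dim` components
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
```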
The fact that it works is related to Sloppy systems, the Manifold hypothesis, and Simplicity bias.
Papers on incremental and out-of-sample manifold learning:
Incremental Laplacian eigenmaps by preserving adjacent information between data points
Incremental manifold learning by spectral embedding methods
Embedding new observations via sparse-coding for non-linear manifold learning
Incremental Construction of Low-Dimensional Data Representations
A New Manifold Learning Algorithm Based on Incremental Spectral Decomposition
Learning to detect concepts with Approximate Laplacian Eigenmaps in large-scale and online settings