Monday, March 28, 2016

[AMMAI] [Lecture 05] - "Nonlinear dimensionality reduction by locally linear embedding"

Paper Information:
  "Nonlinear dimensionality reduction by locally linear embedding," Roweis & Saul, Science, 2000.

Motivation:
  The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction.

Contributions:
   They introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Moreover, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima.

Technical summarization:
  The LLE algorithm can be summarized in the following three steps:

1. Select neighbors (for example, the K nearest neighbors of each data point)
2. Reconstruct with linear weights
Minimize the reconstruction cost ε(W) = Σ_i |Xi - Σ_j Wij Xj|^2 subject to two constraints:
  first, each point Xi is reconstructed only from its own neighbors (Wij = 0 unless Xj is a neighbor of Xi);
  second, the rows of the weight matrix sum to one (Σ_j Wij = 1).
The optimal weights can be found by solving a small constrained least-squares problem for each point.
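
As a concrete illustration, here is a minimal NumPy sketch of steps 1 and 2. The function name lle_weights, the brute-force neighbor search, and the regularization constant reg are my own illustrative choices, not from the paper:

import numpy as np

def lle_weights(X, k=12, reg=1e-3):
    # Steps 1-2 of LLE: for each point, find its K nearest neighbors and
    # solve a small constrained least-squares problem for its weights.
    n = X.shape[0]
    W = np.zeros((n, n))
    # Brute-force pairwise squared distances (fine for small N).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # step 1: skip the point itself
        Z = X[nbrs] - X[i]                  # neighbors centered on Xi
        C = Z @ Z.T                         # local Gram matrix
        C += reg * np.trace(C) * np.eye(k)  # regularize in case k > dim
        w = np.linalg.solve(C, np.ones(k))  # solve C w = 1
        W[i, nbrs] = w / w.sum()            # enforce the sum-to-one constraint
    return W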

3. Map to embedded coordinates
With the weights Wij held fixed, minimize the embedding cost Φ(Y) = Σ_i |Yi - Σ_j Wij Yj|^2; this reduces to a sparse NxN eigenvalue problem whose bottom nonzero eigenvectors give the embedding coordinates.
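
A matching sketch of step 3, continuing the code above (lle_embed and the dense eigensolver are again my own illustrative choices; the paper exploits the sparsity of the NxN problem):

import numpy as np  # continues the sketch above

def lle_embed(W, d=2):
    # Step 3 of LLE: the embedding cost is minimized by the bottom
    # eigenvectors of M = (I - W)^T (I - W).
    n = W.shape[0]
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)  # ascending eigenvalues for symmetric M
    # Discard the constant eigenvector (eigenvalue ~ 0); the next d
    # eigenvectors give the d-dimensional embedding coordinates.
    return vecs[:, 1:d + 1]

For example, Y = lle_embed(lle_weights(X), d=2) maps an N-by-D data matrix X to N two-dimensional coordinates.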


My comment:

LLE is able to identify the underlying structure of the manifold, something that PCA and MDS cannot achieve.

The straightforward visualizations make the advantage of the method easy to understand. Moreover, applying LLE to various domains shows that the coordinates of the embedding spaces correspond to meaningful attributes, such as the pose and expression of human faces.

