Mairal, Julien, et al. "Online Dictionary Learning for Sparse Coding." Proceedings of the 26th Annual International Conference on Machine Learning (ICML). ACM, 2009.
Motivation:
While learning the dictionary has proven critical for achieving (or improving upon) state-of-the-art results, solving the corresponding optimization problem efficiently is a significant computational challenge, particularly for the large-scale datasets involved in image processing tasks, which may include millions of training samples.
To address these issues, this paper proposes an online approach that processes one element (or a small subset) of the training set at a time.
Contributions:
A new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples.
Technical summary:
Classical dictionary learning techniques:
The main idea is to model each data vector as a linear combination of a few basis elements (dictionary atoms). Therefore, the loss function should be small if the dictionary D is "good" at representing the signals X.
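Concretely, the empirical cost minimized over the dictionary D in the batch formulation of the paper is the following (λ is the sparsity-inducing regularization parameter, and the columns of D are constrained to have unit ℓ2 norm to avoid degenerate scalings):

```latex
% Classical (batch) dictionary learning objective, following the paper's formulation:
f_n(D) \;\triangleq\; \frac{1}{n}\sum_{i=1}^{n} \ell(x_i, D),
\qquad
\ell(x, D) \;\triangleq\; \min_{\alpha \in \mathbb{R}^{k}}
    \tfrac{1}{2}\,\lVert x - D\alpha \rVert_2^{2} \;+\; \lambda\,\lVert \alpha \rVert_1 .
```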

Online Dictionary Learning:
The algorithm alternates between the two variables, minimizing over one while keeping the other fixed: at each iteration it draws one training sample x_t, computes its sparse code α_t with the current dictionary held fixed (a Lasso/LARS step), accumulates the sufficient statistics A and B, and then updates the dictionary by block-coordinate descent on a quadratic surrogate of the past cost. A sketch is given below.
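A minimal NumPy sketch of this loop (Algorithms 1 and 2 in the paper), assuming the data matrix X stores one signal per column; the variable names A, B, D follow the paper, while the LARS solver used by the authors is replaced here by scikit-learn's coordinate-descent Lasso, and the hyper-parameter values are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def online_dictionary_learning(X, n_atoms, lam=0.1, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape                       # m: signal dimension, n: number of samples
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)       # unit-norm columns (constraint set in the paper)
    A = np.zeros((n_atoms, n_atoms))     # accumulates alpha alpha^T
    B = np.zeros((m, n_atoms))           # accumulates x alpha^T
    # alpha = lam / m matches the (1/2)||x - D a||^2 + lam*||a||_1 objective
    # under sklearn's 1/(2 * n_samples) scaling of the squared error.
    lasso = Lasso(alpha=lam / m, fit_intercept=False, max_iter=2000)

    for _ in range(n_iter):
        x = X[:, rng.integers(n)]                    # draw one sample
        alpha = lasso.fit(D, x).coef_                # sparse coding step
        A += np.outer(alpha, alpha)                  # update sufficient statistics
        B += np.outer(x, alpha)
        # Dictionary update: block-coordinate descent over the columns of D.
        for j in range(n_atoms):
            if A[j, j] < 1e-12:
                continue                             # atom not used yet
            u = (B[:, j] - D @ A[:, j]) / A[j, j] + D[:, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)  # project onto the unit ball
    return D
```

The dictionary update needs no learning rate because A and B summarize all past information, which is what makes the procedure essentially parameter-free.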

In practice, the algorithm can be sped up further by drawing a mini-batch of samples at each iteration, i.e. by replacing lines 5 and 6 of Algorithm 1 with updates that accumulate the statistics of the whole mini-batch, as sketched below.
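A self-contained sketch of that replacement, assuming the sparse codes of the mini-batch have already been computed; the averaging by the batch size is a harmless joint rescaling of A and B (the dictionary update is invariant to it), and the paper's additional rescaling of past information is omitted here:

```python
import numpy as np

def minibatch_update(A, B, X_batch, alpha_batch):
    """Mini-batch replacement for lines 5 and 6 of Algorithm 1 (illustrative).

    A: (k, k) and B: (m, k) are the accumulated sufficient statistics,
    X_batch: (m, eta) holds eta signals, alpha_batch: (k, eta) their sparse codes.
    """
    eta = X_batch.shape[1]
    A = A + (alpha_batch @ alpha_batch.T) / eta   # averaged sum of alpha alpha^T
    B = B + (X_batch @ alpha_batch.T) / eta       # averaged sum of x alpha^T
    return A, B
```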

My comment:
Compared to the batch setting, the online setting is more realistic and scalable;
moreover, the parameter-free nature of the updates (no explicit learning rate to tune) makes the experiments stable and objective.
Besides, the application to removing overlaid text from a damaged image (inpainting) is impressive.

