Sparse Metric Learning via Smooth Optimization

Yiming Ying, Kaizhu Huang, Colin Campbell

Research output: Chapter in Book/Report/Conference proceeding - Conference Contribution (Conference Proceeding)

83 Citations (Scopus)


In this paper we study the problem of learning a low-rank (sparse) distance matrix. We propose a novel metric learning model that simultaneously performs dimensionality reduction and learns a distance matrix. The sparse representation involves a mixed-norm regularization which is non-convex. We then show that it can be equivalently formulated as a convex saddle (min-max) problem. From this saddle representation, we develop an efficient smooth optimization approach [17] for sparse metric learning, although the learning model is based on a nondifferentiable loss function. Finally, we run experiments to validate the effectiveness and efficiency of our sparse metric learning model on various datasets.
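To make the two ingredients of the abstract concrete, the sketch below shows (a) the squared Mahalanobis distance induced by a learned matrix M and (b) an l(2,1)-style mixed norm, whose penalization drives whole rows of M to zero and hence yields a sparse/low-rank metric. This is a minimal illustration of the objects involved, not the paper's optimization algorithm; the function names and the toy data are ours.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def mixed_21_norm(M):
    """l(2,1) mixed norm: the sum of the Euclidean norms of the rows of M.
    Used as a regularizer, it zeroes entire rows, i.e. discards whole
    feature directions, which is the sparsity the model targets."""
    return float(np.sum(np.linalg.norm(M, axis=1)))

# Toy example: a rank-1 metric that keeps only the first feature direction.
M = np.diag([1.0, 0.0, 0.0])
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 5.0, 1.0])
print(mahalanobis_sq(x, y, M))   # 1.0: only the first coordinate contributes
print(mixed_21_norm(M))          # 1.0: a single nonzero row
```

The paper's actual method smooths the resulting nondifferentiable min-max objective (following Nesterov's smoothing technique) rather than optimizing these quantities directly.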
Translated title of the contribution: Sparse Metric Learning via Smooth Optimization
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Proceedings of a meeting held 7-10 December 2009, Vancouver, British Columbia, Canada
Subtitle of host publication: NIPS 22
Editors: Yoshua Bengio, Dale Schuurmans, John D. Lafferty, Christopher K.I. Williams, Aron Culotta
Publisher: Curran Associates, Inc.
Pages: 2214-2222
Number of pages: 8
ISBN (Electronic): 9781615679119
Publication status: Published - 2009

Publication series

Name: Neural Information Processing Systems (NIPS)

Bibliographical note

Name and Venue of Event: Vancouver, Canada
Conference Proceedings/Title of Journal: Advances in Neural Information Processing Systems
Conference Organiser: NIPS Foundation


  • metric
  • learning
  • sparse

