Modular Autoencoders for Ensemble Feature Extraction

Henry W J Reeve, Gavin Brown

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

We introduce the concept of a Modular Autoencoder (MAE), capable of learning a set of diverse but complementary representations from unlabelled data that can later be used for supervised tasks. The learning of the representations is controlled by a trade-off parameter, and we show on six benchmark datasets that the optimum lies between two extremes: a set of smaller, independent autoencoders, each with low capacity, and a single monolithic encoding; at this optimum the MAE outperforms an appropriate baseline. In the present paper we explore the special case of linear MAEs and derive an SVD-based training algorithm that converges several orders of magnitude faster than gradient descent.
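To illustrate why an SVD-based procedure can train linear autoencoders so much faster than gradient descent, the following is a minimal sketch (not the paper's exact MAE algorithm) of the classical closed-form solution for a single linear autoencoder: the optimal rank-k encoder/decoder pair is given by the top-k right singular vectors of the centred data matrix. The data, code size `k`, and variable names are illustrative assumptions.

```python
import numpy as np

# Toy sketch: closed-form linear autoencoder via SVD (not the paper's
# exact modular algorithm). For a linear model, one SVD replaces many
# gradient-descent iterations, which is the source of the speed-up.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))  # synthetic data, rows = samples
Xc = X - X.mean(axis=0)                      # centre the data

k = 5                                        # code (bottleneck) size, chosen for illustration
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:k].T                                 # top-k right singular vectors

def encode(Z):                               # linear encoder: project onto the code subspace
    return Z @ V

def decode(C):                               # linear decoder: map codes back to input space
    return C @ V.T

recon = decode(encode(Xc))
err = np.linalg.norm(Xc - recon) ** 2
# By the Eckart-Young theorem this error equals the sum of the
# discarded squared singular values, the minimum achievable by any
# rank-k linear reconstruction.
assert np.isclose(err, np.sum(s[k:] ** 2))
```

A gradient-descent-trained linear autoencoder can only approach this same error asymptotically, which is consistent with the orders-of-magnitude convergence gap reported in the abstract.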
Original language: English
Title of host publication: Journal of Machine Learning Research (Workshop & Conference Proceedings)
Publisher: Proceedings of Machine Learning Research
Pages: 242-259
Number of pages: 18
Volume: 44
Publication status: Published - 11 Dec 2015
Event: The 1st International Workshop “Feature Extraction: Modern Questions and Challenges” - Montreal, Canada
Duration: 11 Dec 2015 - 12 Dec 2015
https://nips.cc/virtual/2015/workshop/4915

Conference

Conference: The 1st International Workshop “Feature Extraction: Modern Questions and Challenges”
Country/Territory: Canada
City: Montreal
Period: 11/12/15 - 12/12/15
Internet address: https://nips.cc/virtual/2015/workshop/4915

Bibliographical note

Neural Information Processing Systems: Workshop on Feature Extraction; Conference dates: 11-12 Dec 2015
