Abstract
We introduce the concept of a Modular Autoencoder (MAE), capable of learning a set of diverse but complementary representations from unlabelled data that can later be used for supervised tasks. The learning of the representations is controlled by a trade-off parameter, and we show on six benchmark datasets that the optimum lies between two extremes: a set of smaller, independent autoencoders, each with low capacity, and a single monolithic encoding, outperforming an appropriate baseline. In the present paper we explore the special case of linear MAEs and derive an SVD-based training algorithm that converges several orders of magnitude faster than gradient descent.
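The abstract's claim that linear autoencoders admit an SVD-based solution can be illustrated with a minimal sketch. For a single linear autoencoder (the "monolithic" extreme of the trade-off described above), the optimal rank-k encoder/decoder is given in closed form by the truncated SVD of the centred data, with no iterative gradient descent required. Note this sketch covers only that classical special case; the paper's full MAE objective and trade-off parameter are not reproduced here.

```python
import numpy as np

# Synthetic unlabelled data: 200 samples, 10 features (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Xc = X - X.mean(axis=0)                  # centre the data

k = 3                                    # code (bottleneck) dimension
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
V_k = Vt[:k].T                           # top-k right singular vectors

def encode(Z):
    return Z @ V_k                       # linear encoder

def decode(H):
    return H @ V_k.T                     # linear decoder (tied weights)

X_hat = decode(encode(Xc))

# The squared reconstruction error equals the energy in the
# discarded singular values -- the closed-form optimum.
err = np.sum((Xc - X_hat) ** 2)
tail = np.sum(S[k:] ** 2)
print(np.isclose(err, tail))
```

This closed-form optimum is why an SVD-based algorithm can be several orders of magnitude faster than gradient descent in the linear setting.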
| Original language | English |
|---|---|
| Title of host publication | Journal of Machine Learning Research (Workshop & Conference Proceedings) |
| Publisher | Proceedings of Machine Learning Research |
| Pages | 242-259 |
| Number of pages | 18 |
| Volume | 44 |
| Publication status | Published - 11 Dec 2015 |
| Event | The 1st International Workshop “Feature Extraction: Modern Questions and Challenges” - Montreal, Canada<br>Duration: 11 Dec 2015 → 12 Dec 2015<br>https://nips.cc/virtual/2015/workshop/4915 |
Conference
| Conference | The 1st International Workshop “Feature Extraction: Modern Questions and Challenges” |
|---|---|
| Country/Territory | Canada |
| City | Montreal |
| Period | 11/12/15 → 12/12/15 |
| Internet address | https://nips.cc/virtual/2015/workshop/4915 |