The Loss Surfaces of Neural Networks with General Activation Functions

Nick P Baskerville, Jon Keating*, Francesco Mezzadri, Joseph Najnudel*

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review


Abstract

The loss surfaces of deep neural networks have been the subject of several studies, theoretical and experimental, over the last few years. One strand of work considers the complexity, in the sense of local optima, of high dimensional random functions with the aim of informing how local optimisation methods may perform in such complicated settings. Prior work of Choromanska et al. (2015) established a direct link between the training loss surfaces of deep multi-layer perceptron networks and spherical multi-spin glass models under some very strong assumptions on the network and its data. In this work, we test the validity of this approach by removing the undesirable restriction to ReLU activation functions. In doing so, we chart a new path through the spin glass complexity calculations using supersymmetric methods in Random Matrix Theory which may prove useful in other contexts. Our results shed new light on both the strengths and the weaknesses of spin-glass models in this context.
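For background (this definition is not given on this page and follows standard spin-glass conventions rather than the paper's own notation): the spherical multi-spin glass referenced in the abstract is, for p interacting spins on N sites, conventionally defined by the Hamiltonian

H_{N,p}(\sigma) \;=\; \frac{1}{N^{(p-1)/2}} \sum_{i_1,\dots,i_p=1}^{N} J_{i_1 \dots i_p}\, \sigma_{i_1} \cdots \sigma_{i_p}, \qquad \sum_{i=1}^{N} \sigma_i^2 = N,

where the couplings J_{i_1 \dots i_p} are i.i.d. standard Gaussian random variables and \sigma is constrained to the sphere of radius \sqrt{N}. It is the training loss of a deep ReLU multi-layer perceptron that Choromanska et al. (2015), under their strong assumptions, identify with such a Hamiltonian.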
Original language: English
Journal: Journal of Statistical Mechanics: Theory and Experiment
Volume: 2021
Issue number: 6
Early online date: 1 Jun 2021
DOIs
Publication status: Published - Jun 2021

Bibliographical note

Publisher Copyright:
© 2021 Institute of Physics Publishing. All rights reserved.
