Learning Translation Invariance in CNNs

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

When seeing a new object, humans can immediately recognize it across different
retinal locations: we say that the internal object representation is invariant to
translation. It is commonly believed that Convolutional Neural Networks (CNNs)
are architecturally invariant to translation thanks to the convolution and/or pooling operations they are endowed with. In fact, several works have found that these networks systematically fail to recognize new objects at untrained locations. In this work we show how, even though CNNs are not ‘architecturally invariant’ to translation, they can indeed ‘learn’ to be invariant to translation. We verified that this can be achieved by pretraining on ImageNet, and we found that it is also possible with much simpler datasets in which the items are fully translated across the input canvas. We investigated how this pretraining affected the internal network representations, finding that the invariance was almost always acquired, even though it was sometimes disrupted by further training due to catastrophic forgetting/interference. These experiments show how pretraining a network on an environment with the right ‘latent’ characteristics (a more naturalistic environment) can result in the network learning deep perceptual rules which dramatically improve subsequent generalization.
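The distinction the abstract draws can be illustrated with a minimal numpy sketch (not from the paper; the kernel, canvas size, and object positions are arbitrary assumptions). Convolution is translation *equivariant*: a shifted input yields a correspondingly shifted feature map. Exact invariance only appears if position is explicitly discarded, e.g. by global pooling; the flattened feature map that typically feeds a fully connected layer is not invariant, which is consistent with networks failing at untrained locations.

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Naive 'valid' cross-correlation, as in a single CNN channel.
    H, W = img.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * kernel)
    return out

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))
patch = rng.standard_normal((4, 4))  # a stand-in "object"

a = np.zeros((16, 16)); a[2:6, 2:6] = patch     # object near top-left
b = np.zeros((16, 16)); b[9:13, 9:13] = patch   # same object, translated

fa = conv2d_valid(a, kernel)
fb = conv2d_valid(b, kernel)

# Global max pooling discards position: the pooled response is identical.
print(np.isclose(fa.max(), fb.max()))        # invariant readout

# Flattened features (what a dense layer would see) differ across positions.
print(np.allclose(fa.ravel(), fb.ravel()))   # position-dependent readout
```

The first comparison returns True and the second False, showing that the architecture by itself only guarantees invariance for readouts that pool over all positions, not for the usual flatten-plus-dense head.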
Original language: English
Title of host publication: Neural Information Processing Systems 2020
Subtitle of host publication: Shared Visual Representations in Human and Machine Intelligence
Publication status: Accepted/In press - 3 Nov 2020

