Freudian Slips: Analysing the Internal Representations of a Neural Network from Its Mistakes

Sen Jia, Tom Lansdall-Welfare, Nello Cristianini

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

The use of deep networks has improved the state of the art in various domains of AI, making practical applications possible. At the same time, there are increasing calls to make learning systems more transparent and explainable, due to concerns that they may develop biases in their internal representations that could lead to unintended discrimination when applied to sensitive personal decisions. The use of vast subsymbolic distributed representations has made this task very difficult. We suggest that we can learn a lot about the biases and internal representations of a deep network without having to unravel its connections, by adopting the old psychological approach of analysing its “slips of the tongue”. We demonstrate in a practical example that an analysis of the confusion matrix can reveal that a CNN has represented a biological task in a way that reflects our understanding of taxonomy, inferring more structure than the training algorithm asked of it. In particular, we show that a CNN trained to recognise animal families also contains higher-order information about taxa such as the superfamily, parvorder, suborder and order. We speculate that various forms of psychometric testing for neural networks might give us insight into their inner workings.
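A minimal sketch of the kind of confusion-matrix analysis the abstract describes, not the paper's own code: given a confusion matrix over family labels and a lookup from family to a higher taxon, check how often mistakes stay inside the true label's higher taxon and compare against a uniform-error baseline. The matrix values, family names and helper functions below are all illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative confusion matrix over four animal families
# (rows = true family, columns = predicted family); counts are made up.
families = ["Canidae", "Felidae", "Cervidae", "Bovidae"]
confusion = np.array([
    [90,  6,  2,  2],
    [ 7, 88,  2,  3],
    [ 1,  2, 85, 12],
    [ 2,  1, 14, 83],
])

# Hypothetical mapping from family to a higher taxon (here, the order).
order_of = {
    "Canidae": "Carnivora", "Felidae": "Carnivora",
    "Cervidae": "Artiodactyla", "Bovidae": "Artiodactyla",
}

def within_taxon_error_rate(confusion, labels, taxon_of):
    """Fraction of misclassifications whose predicted label lies in the
    same higher taxon as the true label. A rate well above chance hints
    that the network has implicitly learned the higher-order grouping."""
    within = total = 0
    for i, true_label in enumerate(labels):
        for j, pred_label in enumerate(labels):
            if i == j:
                continue  # only off-diagonal entries are mistakes
            total += confusion[i, j]
            if taxon_of[true_label] == taxon_of[pred_label]:
                within += confusion[i, j]
    return within / total

def chance_baseline(labels, taxon_of):
    """Expected within-taxon rate if errors were spread uniformly
    over all incorrect labels."""
    rates = []
    for true_label in labels:
        same = sum(1 for l in labels
                   if l != true_label and taxon_of[l] == taxon_of[true_label])
        rates.append(same / (len(labels) - 1))
    return sum(rates) / len(labels)

print(f"within-taxon error rate: {within_taxon_error_rate(confusion, families, order_of):.2f}")
print(f"chance baseline:         {chance_baseline(families, order_of):.2f}")
```

On this toy matrix the within-taxon error rate is about 0.72 against a chance baseline of 0.33, i.e. the network's "slips" cluster within orders, which is the signature of inferred taxonomic structure.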
Original language: English
Title of host publication: Advances in Intelligent Data Analysis XVI
Subtitle of host publication: 16th International Symposium, IDA 2017, London, UK, October 26–28, 2017, Proceedings
Editors: N. Adams, A. Tucker, D. Weston
Publisher: Springer
Pages: 138–148
Number of pages: 11
ISBN (Electronic): 9783319687650
ISBN (Print): 9783319687643
DOIs
Publication status: Published - 4 Oct 2017

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 10584
ISSN (Print): 0302-9743

Keywords

  • deep learning
  • taxonomy
  • computer vision
  • explainable AI
  • black-box testing
