
Abstract

Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any data type (images, text and tabular data), are model-agnostic and are post-hoc (i.e., can be retrofitted). The Local Interpretable Model-agnostic Explanations (LIME) algorithm is often mistakenly conflated with the more general framework of surrogate explainers, which may lead to the belief that it is the definitive solution to surrogate explainability. In this paper we empower the community to "build LIME yourself" (bLIMEy) by proposing a principled algorithmic framework for building custom local surrogate explainers of black-box model predictions, including LIME itself. To this end, we demonstrate how to decompose the family of surrogate explainers into algorithmically independent and interoperable modules, and we discuss how the choice of each component influences the functional capabilities of the resulting explainer, using LIME as an example.
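The modular decomposition described in the abstract can be illustrated with a short sketch. The Python code below is a minimal illustration, not the paper's reference implementation: it separates a tabular surrogate explainer into the three kinds of interchangeable components the abstract alludes to, namely an interpretable representation, a local data sampler and an explanation generator (here a weighted linear surrogate, as in LIME). All function names, the Gaussian sampler and the kernel width are illustrative assumptions.

    # Minimal sketch of a modular, LIME-style tabular surrogate explainer.
    # Every design choice below is one option among many; the point is that
    # each module can be swapped independently of the others.
    import numpy as np
    from sklearn.linear_model import Ridge

    def interpretable_representation(samples, instance):
        """Binary representation: 1 if a sampled feature value stays close
        to the explained instance, 0 otherwise (one simple choice)."""
        return (np.abs(samples - instance) < samples.std(axis=0)).astype(int)

    def sample_locally(instance, n_samples=1000, scale=1.0, seed=0):
        """Gaussian sampling around the explained instance."""
        rng = np.random.default_rng(seed)
        return instance + rng.normal(0.0, scale, size=(n_samples, instance.size))

    def explain(black_box_predict, instance, n_samples=1000, kernel_width=0.75):
        """Fit a distance-weighted linear surrogate; its coefficients
        serve as per-feature importance scores."""
        samples = sample_locally(instance, n_samples)
        targets = black_box_predict(samples)          # query the black box
        binary = interpretable_representation(samples, instance)
        distances = np.linalg.norm(samples - instance, axis=1)
        weights = np.exp(-(distances ** 2) / kernel_width ** 2)  # RBF kernel
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(binary, targets, sample_weight=weights)
        return surrogate.coef_

Swapping any single module, for example a quartile-based discretisation in place of the thresholded representation, or a shallow decision tree in place of the linear surrogate, yields a different explainer within the same framework, which is the decomposition the paper formalises.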
Original language: English
Number of pages: 10
Journal: arXiv
Publication status: Unpublished - 29 Oct 2019
Event: Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 8 Dec 2019 – 14 Dec 2019
Conference number: 33
https://nips.cc/Conferences/2019

Structured keywords

  • Digital Health

Keywords

  • cs.LG
  • stat.ML


  • Projects

    SPHERE2

    Craddock, I. J.

1/10/18 – 30/09/21

    Project: Research, Parent
