Towards Intelligible and Robust Surrogate Explainers: A Decision Tree Perspective

  • Kacper Sokol

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)

Abstract

Artificial intelligence explainability and machine learning interpretability are relatively young and fast-growing research fields that may seem chaotic and difficult to navigate at times. Despite these immense endeavours, universally agreed terminology and evaluation criteria remain elusive, with many methods introduced to solve a commonly acknowledged yet undefined problem and their success judged on ad hoc measures. To address this challenge and lay a foundation for our research, we formalise explainability (our preferred term) as a technology providing insights that lead to understanding, which both defines such techniques and fixes their evaluation criterion. While the premise is clear, understanding largely depends upon the explanation recipients, who come with a diverse range of background knowledge, mental models and expectations. Therefore, in addition to technical requirements, explainability tools should also embody various social traits, as their output is predominantly aimed at humans. To tackle this duality and organise a comprehensive collection of relevant properties, we introduce a unified explainable artificial intelligence taxonomy: a principled framework for reasoning about explainers. While most of our contributions are strictly technical, this formalisation allows us to develop them with a human component in mind, which leads us to consider explainability as a social, bi-directional process based on contrastive statements. Stemming from this research direction is Glass-Box – a conversational explainer that empowers its users to customise and personalise explanations in a natural language dialogue.
Date of Award: 23 Mar 2021
Original language: English
Awarding Institution
  • University of Bristol
Supervisor: Peter A Flach
