
Abstract

Recent efforts to learn reward functions from human feedback have tended to use deep neural networks, whose lack of transparency hampers our ability to explain agent behaviour or verify alignment. We explore the merits of learning intrinsically interpretable tree models instead. We develop a recently proposed method for learning reward trees from preference labels, and show it to be broadly competitive with neural networks on challenging high-dimensional tasks, with good robustness to limited or corrupted data. Having found that reward tree learning can be done effectively in complex settings, we then consider why it should be used, demonstrating that the interpretable reward structure gives significant scope for traceability, verification and explanation.
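The page does not describe the paper's actual algorithm, but as a rough, hypothetical illustration of the general idea of fitting an interpretable tree-structured reward model from pairwise trajectory preferences, the following sketch converts preference labels into crude per-state targets and fits a small regression tree over state features. All names, shapes, and the win-rate proxy heuristic are assumptions for illustration only, not the authors' method.

```python
# Illustrative sketch only: a tree-structured reward model fitted from
# pairwise trajectory preferences. This is NOT the paper's algorithm;
# the feature layout and the win-rate proxy targets are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Toy data: each trajectory is a sequence of 2-D state feature vectors.
def random_trajectory(length=10):
    return rng.normal(size=(length, 2))

# Hidden "true" return, used here only to generate synthetic preference labels.
def true_return(traj):
    return float(np.sum(traj[:, 0] - 0.5 * np.abs(traj[:, 1])))

trajectories = [random_trajectory() for _ in range(200)]

# Pairwise preference labels: 1 if trajectory i is preferred over trajectory j.
pairs = [(i, j) for i, j in rng.integers(0, len(trajectories), size=(500, 2)) if i != j]
prefs = [int(true_return(trajectories[i]) > true_return(trajectories[j])) for i, j in pairs]

# Crude proxy: score each trajectory by how often it wins its comparisons,
# then spread that score evenly over its timesteps as a per-state target.
wins = np.zeros(len(trajectories))
counts = np.zeros(len(trajectories))
for (i, j), p in zip(pairs, prefs):
    wins[i] += p
    wins[j] += 1 - p
    counts[i] += 1
    counts[j] += 1
scores = np.divide(wins, counts, out=np.zeros_like(wins), where=counts > 0)

X = np.concatenate(trajectories)                       # all visited states
y = np.concatenate([np.full(len(t), s / len(t)) for t, s in zip(trajectories, scores)])

# Fit a shallow, human-readable reward tree over the state features.
reward_tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(reward_tree, feature_names=["f0", "f1"]))
```

The printed tree makes the learned reward structure directly inspectable, which is the interpretability property the abstract highlights; the paper's own procedure for learning reward trees from preference labels should be consulted for the actual method.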
Original language: English
Number of pages: 22
DOIs
Publication status: Published - 3 Oct 2022

Bibliographical note

22 pages (9 main body). Preprint, under review

Keywords

  • cs.LG
  • cs.AI
