Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions

Tom Bewley, Freddy Lecue

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

8 Citations (Scopus)
18 Downloads (Pure)

Abstract

The potential of reinforcement learning (RL) to deliver aligned and performant agents is partially bottlenecked by the reward engineering problem. One alternative to heuristic trial-and-error is preference-based RL (PbRL), where a reward function is inferred from sparse human feedback. However, prior PbRL methods lack interpretability of the learned reward structure, which hampers the ability to assess robustness and alignment. We propose an online, active preference learning algorithm that constructs reward functions with the intrinsically interpretable, compositional structure of a tree. Using both synthetic and human-provided feedback, we demonstrate sample-efficient learning of tree-structured reward functions in several environments, then harness the enhanced interpretability to explore and debug for alignment.
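The sketch below is a rough, hypothetical illustration of what a tree-structured reward function and a preference-based learning objective can look like; it is not the authors' implementation, and the names (RewardTreeNode, preference_probability) and the Bradley-Terry preference model are our own assumptions for the example.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): a reward function represented as an
# axis-aligned binary tree over state-action features, with a constant reward at each leaf.
class RewardTreeNode:
    def __init__(self, feature=None, threshold=None, left=None, right=None, value=0.0):
        self.feature = feature      # index of the split feature (None at a leaf)
        self.threshold = threshold  # split threshold
        self.left = left            # subtree for feature <= threshold
        self.right = right          # subtree for feature > threshold
        self.value = value          # reward predicted at a leaf

    def predict(self, x):
        """Return the reward assigned to a single feature vector x."""
        if self.feature is None:
            return self.value
        branch = self.left if x[self.feature] <= self.threshold else self.right
        return branch.predict(x)


def trajectory_return(tree, trajectory):
    """Sum of tree-predicted rewards over a trajectory (sequence of feature vectors)."""
    return sum(tree.predict(x) for x in trajectory)


def preference_probability(tree, traj_a, traj_b):
    """Bradley-Terry style probability that traj_a is preferred to traj_b,
    based on the difference in predicted returns (a common PbRL modelling choice)."""
    diff = trajectory_return(tree, traj_a) - trajectory_return(tree, traj_b)
    return 1.0 / (1.0 + np.exp(-diff))


if __name__ == "__main__":
    # Toy two-leaf tree: reward +1 if feature 0 exceeds 0.5, else 0.
    tree = RewardTreeNode(
        feature=0, threshold=0.5,
        left=RewardTreeNode(value=0.0),
        right=RewardTreeNode(value=1.0),
    )
    traj_a = np.array([[0.9, 0.2], [0.7, 0.1]])  # mostly high feature 0
    traj_b = np.array([[0.1, 0.3], [0.2, 0.4]])  # mostly low feature 0
    print(preference_probability(tree, traj_a, traj_b))  # > 0.5
```

The interpretability claim in the abstract rests on exactly this kind of structure: each leaf corresponds to a human-readable rule (a conjunction of feature thresholds along the path from the root), so the inferred reward can be inspected and debugged directly.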
Original language: English
Title of host publication: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2022)
Publisher: The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 118-126
Number of pages: 9
ISBN (Electronic): 9781450392136
Publication status: Published - 13 May 2022
Event: AAMAS '22: International Conference on Autonomous Agents and Multi-Agent Systems - New Zealand
Duration: 9 May 2022 - 13 May 2022

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Conference

Conference: AAMAS '22: International Conference on Autonomous Agents and Multi-Agent Systems
Country/Territory: New Zealand
Period: 9/05/22 - 13/05/22

Keywords

  • cs.LG
  • cs.AI
