Abstract
We generalise the problem of reward modelling (RM) for reinforcement learning (RL) to handle non-Markovian rewards. Existing work assumes that human evaluators observe each step in a trajectory independently when providing feedback on agent behaviour. In this work, we remove this assumption, extending RM to capture temporal dependencies in human assessment of trajectories. We show how RM can be approached as a multiple instance learning (MIL) problem, in which trajectories are treated as bags with return labels, and steps within the trajectories are instances with unseen reward labels. We go on to develop new MIL models that capture the temporal dependencies in labelled trajectories. We demonstrate on a range of RL tasks that our novel MIL models can reconstruct reward functions with high accuracy, and can be used to train high-performing agent policies.
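To make the MIL framing concrete, below is a minimal sketch in PyTorch, not the paper's actual architecture: an LSTM reads the trajectory so that each step's predicted reward can depend on the history so far (the non-Markovian case), and the per-step predictions are summed into a bag-level return that is supervised against the trajectory label. The class name `LSTMRewardModel`, the `reward_head` layer, and all dimensions and data here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMRewardModel(nn.Module):
    """Hypothetical MIL reward model: the trajectory is a bag, the steps
    are instances, and only the bag label (the return) is observed.
    An LSTM gives each step a history-dependent hidden state, so the
    predicted step reward need not be Markovian."""

    def __init__(self, obs_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.reward_head = nn.Linear(hidden_dim, 1)

    def forward(self, trajectories):
        # trajectories: (batch, time, obs_dim)
        hidden, _ = self.lstm(trajectories)                   # history-dependent states
        step_rewards = self.reward_head(hidden).squeeze(-1)   # (batch, time) instance predictions
        returns = step_rewards.sum(dim=1)                     # bag-level prediction
        return step_rewards, returns

# Training sketch on dummy data: only the return label is supervised;
# the per-step rewards are latent and recovered as a by-product.
model = LSTMRewardModel(obs_dim=8)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
trajs = torch.randn(32, 50, 8)   # 32 trajectories of 50 steps each
labels = torch.randn(32)         # observed trajectory returns

for _ in range(100):
    _, pred_returns = model(trajs)
    loss = nn.functional.mse_loss(pred_returns, labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Trained on the bag-level loss alone, the per-step outputs become an estimate of the unseen instance (reward) labels, which is exactly the MIL structure the abstract describes.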
Original language | English |
---|---|
Title of host publication | Advances in Neural Information Processing Systems 35 (NeurIPS 2022) |
Editors | S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh |
Number of pages | 27 |
ISBN (Electronic) | 9781713871088 |
Publication status | Published - 9 Dec 2022 |
Event | The Thirty-sixth Annual Conference on Neural Information Processing Systems: NeurIPS 2022, New Orleans, United States
Duration | 23 Nov 2022 → 9 Dec 2022
Conference
Conference | The Thirty-sixth Annual Conference on Neural Information Processing Systems |
---|---|
Country/Territory | United States |
City | New Orleans |
Period | 23/11/22 → 9/12/22 |
Internet address | https://neurips.cc/Conferences/2022
Keywords
- cs.LG
- cs.AI