Abstract
A promising approach to improving robustness and exploration in Reinforcement Learning is to collect human feedback and thereby incorporate prior knowledge of the target environment. It is, however, often too expensive to obtain enough feedback of good quality. To mitigate this issue, we aim to rely on a group of multiple experts (and non-experts) with different skill levels to generate enough feedback. Such feedback can therefore be inconsistent and infrequent. In this paper, we build upon prior work -- Advise, a Bayesian approach attempting to maximise the information gained from human feedback -- extending the algorithm to accept feedback from this larger group of humans, the trainers, while also estimating each trainer's reliability. We show how aggregating feedback from multiple trainers improves the total feedback's accuracy and makes the collection process easier in two ways. Firstly, this approach addresses the case where some of the trainers are adversarial. Secondly, having access to information about each trainer's reliability provides a second layer of robustness and offers valuable information to the people managing the whole system, improving overall trust in it. It is an actionable tool for improving the feedback collection process or modifying the reward function design if needed. We empirically show that our approach can accurately learn the reliability of each trainer and use it to maximise the information gained from the multiple trainers' feedback, even when some of the sources are adversarial.
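For context, Advise (Griffith et al., 2013) shapes the policy with a Bayesian estimate of action optimality computed from binary "right"/"wrong" feedback and an assumed feedback consistency C. Below is a minimal, illustrative Python sketch of that idea extended to several trainers with individual reliability estimates, in the spirit of the abstract. The function names, the fixed reliability values, and the toy feedback counts are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def feedback_policy(delta, consistency):
    """Advise-style policy shaping (Griffith et al., 2013).

    delta[a] is the number of 'right' minus 'wrong' labels a trainer
    has given for action a in the current state; `consistency` is the
    estimated probability C that this trainer's feedback is correct.
    Returns P(a is optimal | feedback) for each action.
    """
    # C^delta / (C^delta + (1-C)^delta), normalised over actions.
    num = consistency ** delta
    probs = num / (num + (1.0 - consistency) ** delta)
    return probs / probs.sum()

def combined_policy(deltas, consistencies):
    """Aggregate feedback from several trainers by multiplying their
    (assumed independent) per-action probabilities and renormalising.

    deltas: array of shape (n_trainers, n_actions)
    consistencies: array of shape (n_trainers,), per-trainer
        reliability estimates; a trainer with C below 0.5 is treated
        as adversarial and its feedback is effectively inverted.
    """
    combined = np.ones(deltas.shape[1])
    for d, c in zip(deltas, consistencies):
        combined *= feedback_policy(d, c)
    return combined / combined.sum()

# Hypothetical example: 3 trainers, 2 actions; trainer 3 is adversarial.
deltas = np.array([[ 3, -1],
                   [ 2,  0],
                   [-4,  2]])
consistencies = np.array([0.9, 0.7, 0.2])  # estimated reliabilities
print(combined_policy(deltas, consistencies))
```

With these toy numbers, the third trainer's estimated reliability of 0.2 causes its adversarial feedback to be inverted, so all three trainers end up pushing probability mass toward the same action.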
Original language | English |
---|---|
Publication status | Published - 16 Nov 2021 |
Event | NeurIPS 2021 Workshop on Safe and Robust Control of Uncertain Systems, 13 Dec 2021 → 13 Dec 2021, https://sites.google.com/view/safe-robust-control/home |
Workshop
Workshop | NeurIPS 2021 Workshop on Safe and Robust Control of Uncertain Systems |
---|---|
Abbreviated title | SafeRL 2021 |
Period | 13/12/21 → 13/12/21 |
Internet address | https://sites.google.com/view/safe-robust-control/home |
Research Groups and Themes
- SPHERE
Projects
- SPHERE2 (Finished)
Craddock, I. J. (Principal Investigator), Mirmehdi, M. (Co-Investigator), Piechocki, R. J. (Co-Investigator), Flach, P. A. (Co-Investigator), Oikonomou, G. (Co-Investigator), Burghardt, T. (Co-Investigator), Damen, D. (Co-Investigator), Santos-Rodriguez, R. (Co-Investigator), O'Kane, A. A. (Co-Investigator), McConville, R. (Co-Investigator), Masullo, A. (Co-Investigator) & Gooberman-Hill, R. (Co-Investigator)
1/10/18 → 31/01/23
Project: Research, Parent
Student theses
- Towards Safe and Robust Reinforcement Learning: Leveraging Multiple Sources of Information
  Yamagata, T. (Author), Santos-Rodriguez, R. (Supervisor), 10 Dec 2024
  Student thesis: Doctoral Thesis › Doctor of Philosophy (PhD)