Abstract
This paper develops a navigation algorithm for vehicles in complex environments, combining a hard guarantee of constraint satisfaction with the ability to learn from successive missions. The method uses receding horizon control for constrained short-term path planning and control, with the cost-to-go developed by reinforcement learning. Simulation results show that the algorithm learns to reproduce the shortest path through an environment with obstacles and can determine good behaviours for multiple surveillance tasks.
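As a rough illustration of how the two components fit together (a sketch, not the paper's exact formulation), the receding horizon problem solved at each step can be written with the learned cost-to-go as its terminal term; the symbols below (state x_k, input u_k, dynamics f, stage cost ℓ, constraint sets 𝒳 and 𝒰, horizon N, learned approximation Ĵ) are assumed notation for this sketch:

```latex
\[
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \ell(x_k, u_k) \;+\; \hat{J}(x_N)
\qquad \text{s.t.} \quad x_{k+1} = f(x_k, u_k), \quad x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}
\]
```

Under this reading, the constrained short-horizon optimisation enforces the hard constraints, the first input of the optimised sequence is applied before the horizon recedes, and the terminal cost-to-go Ĵ is refined by reinforcement learning across successive missions.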
| Translated title of the contribution | Combining Planning and Learning for Autonomous Vehicle Navigation |
|---|---|
| Original language | English |
| Title of host publication | AIAA Guidance Navigation and Control Conference, Toronto |
| Publication status | Published - Aug 2010 |
Bibliographical note
Conference Organiser: AIAA
Other identifier: AIAA-2010-7866