Abstract
The BDI architecture, where agents are modelled based on their beliefs, desires and intentions, provides a practical approach to developing large-scale systems. However, it is not well suited to modelling complex Supervisory Control And Data Acquisition (SCADA) systems pervaded by uncertainty. In this paper we address this issue by extending the operational semantics of CAN(PLAN) into CAN(PLAN)+. We start by modelling the beliefs of an agent as a set of epistemic states where each state, possibly using a different representation, models part of the agent's beliefs. These epistemic states are stratified to make them commensurable and to reason about the agent's uncertain beliefs. The syntax and semantics of a BDI agent are extended accordingly, and we identify fragments with computationally efficient semantics. Finally, we examine how primitive actions are affected by uncertainty and define an appropriate form of lookahead planning.
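To make the abstract's central idea concrete, the sketch below illustrates one possible reading of "beliefs as a set of stratified epistemic states": each component maps literals to an integer stratum (lower = more certain), and stratification makes differently-represented components commensurable when the agent queries what it believes. This is only an illustrative sketch, not the paper's formal semantics; the names `EpistemicState`, `StratifiedBeliefs`, `degree` and `believes` are hypothetical.

```python
# Illustrative sketch only; the class and method names are assumptions,
# not constructs from the CAN(PLAN)+ paper.
from dataclasses import dataclass, field

Literal = str  # e.g. "pump_on", or "-valve_open" for its negation


@dataclass
class EpistemicState:
    """One component of the agent's beliefs.

    Each literal is mapped to an integer stratum: 0 is most certain,
    higher numbers are increasingly defeasible. Components produced by
    different underlying representations (probabilistic, possibilistic, ...)
    become commensurable by exporting this common stratification.
    """
    strata: dict[Literal, int] = field(default_factory=dict)

    def rank(self, literal: Literal) -> int | None:
        return self.strata.get(literal)


@dataclass
class StratifiedBeliefs:
    """The agent's overall beliefs: a set of stratified epistemic states."""
    states: list[EpistemicState] = field(default_factory=list)

    def degree(self, literal: Literal) -> int | None:
        """Most-certain (lowest) stratum at which any component asserts the literal."""
        ranks = [s.rank(literal) for s in self.states if s.rank(literal) is not None]
        return min(ranks) if ranks else None

    def believes(self, literal: Literal, threshold: int = 0) -> bool:
        """A literal is believed if some component asserts it at or below the
        threshold stratum and no component asserts its negation at a strictly
        more certain (lower) stratum."""
        neg = literal[1:] if literal.startswith("-") else "-" + literal
        pos_rank = self.degree(literal)
        neg_rank = self.degree(neg)
        if pos_rank is None or pos_rank > threshold:
            return False
        return neg_rank is None or neg_rank > pos_rank


if __name__ == "__main__":
    # Two components: sensor readings (quite certain) and operator reports (less so).
    sensors = EpistemicState({"pump_on": 0, "-valve_open": 1})
    reports = EpistemicState({"valve_open": 2})

    beliefs = StratifiedBeliefs([sensors, reports])
    print(beliefs.believes("pump_on"))        # True: asserted at stratum 0
    print(beliefs.believes("valve_open", 2))  # False: contradicted at a more certain stratum
```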
| Original language | English |
|---|---|
| Title of host publication | Uncertainty in Artificial Intelligence |
| Subtitle of host publication | Proceedings of the Thirtieth Conference (UAI 2014) |
| Editors | Nevin L. Zhang, Jin Tian |
| Publisher | AUAI Press |
| Pages | 52-61 |
| Number of pages | 10 |
| ISBN (Print) | 9780974903910 |
| Publication status | Published - 23 Jul 2014 |
Keywords
- belief revision
- uncertainty
- BDI
- AgentSpeak
- multi-agent programming