The BDI architecture, in which agents are modelled in terms of their beliefs, desires and intentions, provides a practical approach to developing large-scale systems. However, it is not well suited to modelling complex Supervisory Control And Data Acquisition (SCADA) systems pervaded by uncertainty. In this paper we address this issue by extending the operational semantics of CAN(PLAN) into CAN(PLAN)+. We start by modelling the beliefs of an agent as a set of epistemic states where each state, possibly using a different representation, models part of the agent's beliefs. These epistemic states are stratified to make them commensurable and to reason about the uncertain beliefs of the agent. The syntax and semantics of a BDI agent are extended accordingly, and we identify fragments with computationally efficient semantics. Finally, we examine how primitive actions are affected by uncertainty and we define an appropriate form of lookahead planning.
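The stratification idea above can be illustrated with a toy sketch. This is not the paper's formalism: the class name, the use of numeric degrees in [0, 1], and the first-stratum-wins query rule are all illustrative assumptions; the paper's epistemic states may use different representations entirely. The sketch only shows how ordered strata make heterogeneous beliefs commensurable for a single query interface.

```python
from typing import Optional


class StratifiedBeliefs:
    """Toy model: an agent's beliefs as an ordered list of epistemic
    states (strata), index 0 being the most certain stratum. Each
    stratum is a dict mapping a proposition to a degree of belief in
    [0, 1]. A query answers from the first stratum holding an opinion,
    which is one simple way to make heterogeneous strata commensurable.
    """

    def __init__(self, strata: list[dict[str, float]]) -> None:
        self.strata = strata

    def degree(self, proposition: str) -> Optional[float]:
        # Consult strata in order of decreasing certainty.
        for stratum in self.strata:
            if proposition in stratum:
                return stratum[proposition]
        return None  # the agent holds no belief either way

    def believes(self, proposition: str, threshold: float = 0.5) -> bool:
        d = self.degree(proposition)
        return d is not None and d > threshold


# Hypothetical SCADA-flavoured example: certain sensor readings sit in
# a higher stratum than uncertain diagnostic inferences.
beliefs = StratifiedBeliefs([
    {"valve_open": 1.0},      # stratum 0: direct sensor readings
    {"pump_failing": 0.7},    # stratum 1: diagnostic inference
])
print(beliefs.believes("pump_failing"))  # True at the default threshold
print(beliefs.degree("tank_leaking"))    # None: no stratum has an opinion
```

In a fuller treatment, an agent's plan-selection rules would query such a structure instead of a flat belief base, which is where the extended CAN(PLAN)+ semantics would come in.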
Title of host publication: Uncertainty in Artificial Intelligence
Subtitle of host publication: Proceedings of the Thirtieth Conference (UAI 2014)
Editors: Nevin L. Zhang, Jin Tian
Number of pages: 10
Publication status: Published - 23 Jul 2014
Keywords:
- belief revision
- multi-agent programming