Energy Efficiency in Reinforcement Learning for Wireless Sensor Networks

Research output: Contribution to conference › Conference Paper



As sensor networks for health monitoring become more prevalent, so will the need to control their usage and consumption of energy. This paper presents a method which balances the algorithm's performance against its energy consumption. By utilising Reinforcement Learning (RL) techniques, we provide an adaptive framework, which continuously performs weak training in an energy-aware system. We motivate this using a realistic example of residential localisation based on Received Signal Strength (RSS). The method is cheap in terms of work-hours, calibration and energy usage. It achieves this by utilising other sensors available in the environment. These other sensors provide weak labels, which are then used by the State-Action-Reward-State-Action (SARSA) algorithm to train the model over time. Our approach is evaluated on a simulated localisation environment and validated on a widely available pervasive health dataset which facilitates realistic residential localisation using RSS. We show that our method is cheaper to implement and requires less effort, whilst at the same time providing a performance enhancement and energy savings over time.
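The core loop described above can be sketched as a tabular SARSA update. The environment below is a toy stand-in, not the paper's actual setup: states are assumed rooms, actions choose which sensor to sample, the weak-label agreement signal and per-sensor energy costs are illustrative assumptions, and the resident's movement is modelled as random.

```python
import random

# Minimal tabular SARSA sketch for an energy-aware sensing policy.
# All environment details (rooms, sensors, costs, weak labels) are
# illustrative assumptions, not the paper's experimental setup.

N_STATES, N_ACTIONS = 4, 2       # assumed: 4 rooms, 2 candidate sensors
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ENERGY_COST = [0.1, 0.5]         # assumed per-sensor energy cost

def weak_reward(state, action):
    """Assumed weak supervision: sensor 0 agrees with the weak label
    in even-numbered rooms, sensor 1 in odd-numbered rooms; the reward
    trades label agreement off against the sensor's energy cost."""
    agrees = (state % 2 == action)
    return (1.0 if agrees else 0.0) - ENERGY_COST[action]

def epsilon_greedy(Q, state, rng):
    """Explore with probability EPSILON, otherwise act greedily."""
    if rng.random() < EPSILON:
        return rng.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def train(steps=2000, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = rng.randrange(N_STATES)
    action = epsilon_greedy(Q, state, rng)
    for _ in range(steps):
        reward = weak_reward(state, action)
        next_state = rng.randrange(N_STATES)          # resident moves
        next_action = epsilon_greedy(Q, next_state, rng)
        # SARSA: on-policy TD target uses the action actually taken next
        Q[state][action] += ALPHA * (
            reward + GAMMA * Q[next_state][next_action] - Q[state][action]
        )
        state, action = next_state, next_action
    return Q

if __name__ == "__main__":
    Q = train()
    for s in range(N_STATES):
        best = max(range(N_ACTIONS), key=lambda a: Q[s][a])
        print(f"room {s}: prefer sensor {best}")
```

Because the update bootstraps from the action the policy actually takes next (rather than the maximising action, as Q-learning would), the learned policy accounts for the exploration behaviour of the energy-aware system itself.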
Original language: English
Publication status: Published - 14 Sept 2018
Event: ECML-PKDD Workshop Green Data Mining 2018: First International Workshop on Energy Efficient Data Mining and Knowledge Discovery - Dublin, Ireland
Duration: 10 Sept 2018 - 14 Sept 2018


Conference: ECML-PKDD Workshop Green Data Mining 2018

Structured keywords

  • Digital Health


  • Reinforcement Learning
  • Indoor Localisation
  • Energy Efficiency
  • Pervasive Health

