Energy Efficiency in Reinforcement Learning for Wireless Sensor Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

As sensor networks for health monitoring become more prevalent, so will the need to control their usage and consumption of energy. This paper presents a method which balances the algorithm's performance against its energy consumption. By utilising Reinforcement Learning (RL) techniques, we provide an adaptive framework which continuously performs weakly supervised training in an energy-aware system. We motivate this using a realistic example of residential localisation based on Received Signal Strength (RSS). The method is cheap in terms of work-hours, calibration and energy usage. It achieves this by utilising other sensors available in the environment, which provide weak labels; these labels are then used by the State-Action-Reward-State-Action (SARSA) algorithm to train the model over time. Our approach is evaluated on a simulated localisation environment and validated on a widely available pervasive health dataset which facilitates realistic residential localisation using RSS. We show that our method is cheaper to implement and requires less effort, whilst at the same time providing a performance enhancement and energy savings over time.
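The abstract describes on-policy SARSA updates driven by weak labels from ambient sensors, with a reward that trades localisation accuracy against energy use. As a rough illustration only, the sketch below shows a generic tabular SARSA update in Python; the state, action and reward definitions (e.g. the assumed sensing actions and the epsilon-greedy parameters) are hypothetical placeholders and not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Minimal tabular SARSA sketch (assumed, illustrative only).
# States could be discretised RSS readings; actions could be sensing
# strategies with different energy costs -- both are placeholders here.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["sample_low_power", "sample_full"]   # hypothetical action set
Q = defaultdict(float)                          # Q[(state, action)] -> value


def choose_action(state):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def sarsa_update(state, action, reward, next_state, next_action):
    """On-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    td_target = reward + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])
```

In an energy-aware setting such as the one described, the reward term would typically combine a weak-label agreement signal with a penalty for energy spent on sensing; the exact reward shaping used in the paper is not specified in this abstract.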
Original language: English
Title of host publication: Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2018)
Publisher: CEUR Workshop Proceedings
Publication status: Accepted/In press - 15 Jun 2018

Structured keywords

  • Digital Health

Keywords

  • Reinforcement Learning
  • Indoor Localisation
  • SARSA
  • Energy Efficiency
  • Pervasive Health
