Energy-efficient neural networks with near-threshold processors and hardware accelerators

Jose Nunez-Yanez*, Neil Howard

*Corresponding author for this work

Research output: Contribution to journal, Article (Academic Journal), peer-reviewed

Abstract

Hardware for energy-efficient AI has received significant attention in recent years, with both start-ups and large corporations creating products that compete at different levels of performance and power consumption. The main objective of this hardware is to offer levels of efficiency and performance that cannot be obtained with general-purpose processors or graphics processing units. In parallel, innovative hardware techniques such as near- and sub-threshold voltage processing have been revisited, capitalizing on the low-power requirements of deploying AI at the network edge. In this paper, we evaluate recent developments in hardware for energy-efficient AI, focusing on inference in embedded systems at the network edge. We then explore a heterogeneous configuration that deploys a neural network combining convolutional and LSTM (Long Short-Term Memory) layers to process multiple independent inputs. This heterogeneous configuration uses two devices with different performance/power characteristics connected by a feedback loop, and it achieves a measured energy reduction of 75% while maintaining the level of inference accuracy.
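The feedback-loop idea in the abstract can be illustrated with a minimal sketch: a low-power (e.g. near-threshold) device runs a cheap model on every input and escalates to the high-performance accelerator only when its confidence is low. The power figures, stub models, and confidence threshold below are invented for illustration and are not the paper's measured values or implementation.

```python
# Hypothetical duty-cycling between a low-power device and a
# high-performance accelerator. Energy figures are assumptions.

E_SMALL = 1.0   # energy (mJ) per inference on the low-power device (assumed)
E_LARGE = 10.0  # energy (mJ) per inference on the accelerator (assumed)

def run_small(x):
    """Cheap model: returns (prediction, confidence). Stub for illustration."""
    return x > 0.5, abs(x - 0.5) * 2  # confidence in [0, 1]

def run_large(x):
    """Accurate model (e.g. the CNN+LSTM network). Stub for illustration."""
    return x > 0.5, 1.0

def heterogeneous_infer(inputs, conf_threshold=0.6):
    """Run the cheap model first; invoke the accelerator only when the
    cheap model is not confident (the feedback loop)."""
    energy = 0.0
    preds = []
    for x in inputs:
        pred, conf = run_small(x)
        energy += E_SMALL
        if conf < conf_threshold:   # low confidence -> escalate
            pred, _ = run_large(x)
            energy += E_LARGE
        preds.append(pred)
    return preds, energy

inputs = [0.1, 0.45, 0.55, 0.9]    # two easy, two borderline samples
preds, energy = heterogeneous_infer(inputs)
baseline = len(inputs) * E_LARGE   # always using the accelerator
print(f"energy: {energy} mJ vs baseline: {baseline} mJ")
```

With these assumed figures, the two easy samples never leave the low-power device, so the total energy (24 mJ) is well below the accelerator-only baseline (40 mJ); the savings depend entirely on how often the cheap model is confident.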
Original language: English
Article number: 102062
Number of pages: 12
Journal: Journal of Systems Architecture
Volume: 116
Early online date: 17 Feb 2021
DOIs
Publication status: E-pub ahead of print - 17 Feb 2021

Keywords

  • sub-threshold processor
  • energy efficient
  • edge computing
  • neural network
  • heterogeneous computing
