Abstract
Hardware for energy-efficient AI has received significant attention in recent years, with both start-ups and large corporations creating products that compete at different levels of performance and power consumption. The main objective of this hardware is to offer levels of efficiency and performance that cannot be obtained with general-purpose processors or graphics processing units. In parallel, innovative hardware techniques such as near- and sub-threshold voltage processing have been revisited, capitalizing on the low-power requirements of deploying AI at the network edge. In this paper, we evaluate recent developments in hardware for energy-efficient AI, focusing on inference in embedded systems at the network edge. We then explore a heterogeneous configuration that deploys a neural network which processes multiple independent inputs and combines convolutional and LSTM (Long Short-Term Memory) layers. This heterogeneous configuration uses two devices with different performance/power characteristics connected by a feedback loop, and it achieves measured energy reductions of 75% while maintaining the level of inference accuracy.
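The abstract does not specify how the two-device feedback loop operates; the minimal Python sketch below illustrates one plausible reading, in which a low-power (e.g. sub-threshold) device runs the convolutional/LSTM model by default and hands a sample to the higher-performance device only when its prediction confidence is low. The `DummyDevice` class, the `CONFIDENCE_THRESHOLD` value, and the confidence-based hand-off criterion are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical sketch of a heterogeneous two-device inference loop.
# The energy-efficient device answers most queries; a feedback signal
# (here, low softmax confidence) wakes the higher-performance device.

CONFIDENCE_THRESHOLD = 0.8  # assumed hand-off threshold


class DummyDevice:
    """Stand-in for a device running the conv+LSTM model (assumed API)."""

    def __init__(self, n_classes=4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_classes = n_classes

    def predict(self, sample):
        # Placeholder: returns random logits instead of real model output.
        return self.rng.normal(size=self.n_classes)


def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()


def heterogeneous_inference(low_power_dev, high_perf_dev, sample):
    # First pass on the energy-efficient device.
    probs = softmax(low_power_dev.predict(sample))
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return int(probs.argmax())  # cheap path: no wake-up needed
    # Feedback loop: low confidence wakes the faster device, trading
    # energy for accuracy only on the harder samples.
    probs = softmax(high_perf_dev.predict(sample))
    return int(probs.argmax())


if __name__ == "__main__":
    sample = np.zeros((1, 128, 3))  # e.g. a window of sensor readings
    label = heterogeneous_inference(DummyDevice(seed=1), DummyDevice(seed=2), sample)
    print("predicted class:", label)
```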
| Field | Value |
|---|---|
| Original language | English |
| Article number | 102062 |
| Number of pages | 12 |
| Journal | Journal of Systems Architecture |
| Volume | 116 |
| Early online date | 17 Feb 2021 |
| DOIs | |
| Publication status | Published - 1 Jun 2021 |
Bibliographical note
Funding Information: This research was funded by the Royal Society Industry Fellowship INF\R2\192044, Machine Intelligence at the Network Edge (MINET).
Publisher Copyright:
© 2021 Elsevier B.V.
Keywords
- sub-threshold processor
- energy efficient
- edge computing
- neural network
- heterogeneous computing