Energy proportional neural network inference with adaptive voltage and frequency scaling

Jose Nunez-Yanez*

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal)

3 Citations (Scopus)
205 Downloads (Pure)

Abstract

This research presents the extension and application of a voltage and frequency scaling framework called Elongate to a high-performance, reconfigurable binarized neural network. The neural network is implemented in the FPGA reconfigurable fabric and coupled to a multiprocessor host that controls the operating point to obtain energy proportionality. Elongate instruments a design netlist by inserting timing detectors so that the operating margins of a device can be exploited reliably. The elongated neural network is re-targeted to devices with different nominal operating voltages, fabricated at 28 nm (i.e., Zynq) and 16 nm (i.e., Zynq UltraScale) feature sizes, demonstrating the portability of the framework to advanced process nodes. New hardware and software components are created to support the 16 nm fabric microarchitecture, and a comparison in terms of power, energy and performance with the older 28 nm process is performed. The results show that Elongate can obtain new performance and energy points that are up to 86 percent better than nominal at the same level of classification accuracy. Trade-offs between energy and performance are also possible, with a large dynamic range of valid operating points available. The results also indicate that the neural network's built-in robustness allows operation beyond the first point of error while leaving the classification accuracy largely unaffected.
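The closed-loop idea the abstract describes — lowering the operating point until inserted timing detectors report that margins are exhausted, then settling on the lowest safe point — can be illustrated with a minimal sketch. This is not the paper's Elongate implementation; the detector model, voltage values, and function names below are all hypothetical, chosen only to show the control loop's shape.

```python
# Hypothetical sketch of timing-detector-driven voltage scaling.
# The host lowers the supply voltage in small steps and stops at the
# last step where the (modelled) timing detectors raise no warning.

def timing_warnings(voltage_mv, critical_path_mv=760):
    """Toy detector model: warn when the voltage drops below the
    minimum the critical path needs at the current frequency.
    The 760 mV threshold is an assumption for illustration."""
    return voltage_mv < critical_path_mv

def find_minimum_voltage(nominal_mv=1000, step_mv=10):
    """Scale down from nominal until the next step would trigger a
    warning; return the last warning-free voltage in millivolts."""
    voltage = nominal_mv
    while voltage > step_mv and not timing_warnings(voltage - step_mv):
        voltage -= step_mv
    return voltage

v_min = find_minimum_voltage()          # settles at 760 mV here
# Dynamic power scales roughly with V^2, so at fixed frequency the
# saving relative to nominal is about 1 - (v_min / nominal)^2.
saving = 1 - (v_min / 1000) ** 2
```

In the real framework the "detector" is hardware inserted into the netlist and the loop runs on the multiprocessor host; the paper additionally exploits the network's robustness to push beyond the first point of error, which a hard stop like the one above deliberately does not do.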

Original language: English
Article number: 8531784
Pages (from-to): 676-687
Number of pages: 12
Journal: IEEE Transactions on Computers
Volume: 68
Issue number: 5
Early online date: 11 Nov 2018
DOIs
Publication status: Published - 1 May 2019

Keywords

  • convolutional neural network
  • DVFS
  • energy efficiency
  • FPGA