High-speed Light-weight CNN Inference via Strided Convolutions on a Pixel Processor Array

Yanan Liu, Laurie N Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek, Walterio W Mayol-Cuevas

Research output: Contribution to conference › Conference Paper › peer-review


Abstract

Performance, storage, and power consumption are three major factors that restrict the use of machine learning algorithms on embedded systems. However, new hardware architectures designed with visual computation in mind may hold the key to overcoming these bottlenecks. This work makes use of a novel visual device, the pixel processor array (PPA), to embed a convolutional neural network (CNN) onto the focal plane. We present a new high-speed implementation of strided convolutions using binary weights for the CNN on PPA devices, allowing all multiplications to be replaced by more efficient addition/subtraction operations. Image convolutions, ReLU activation functions, max-pooling and a fully-connected layer are all performed directly on the PPA’s imaging plane, exploiting its massively parallel computing capabilities. We demonstrate CNN inference across 4 different applications, running at between 2,000 and 17,500 fps with power consumption below 1.5 W. These tasks include plankton classification (8 classes), hand gesture classification and digit recognition.
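The record contains no code, but the central arithmetic idea in the abstract, a binary-weight strided convolution in which every multiplication collapses to an addition or subtraction, can be sketched compactly. The NumPy snippet below is an illustrative sketch of that idea only, not the authors' focal-plane PPA implementation; the function names, the 4x4 kernel, stride 2 and 2x2 pooling are assumptions chosen for the example.

import numpy as np

def binary_strided_conv2d(image, weights, stride=2):
    # Strided 2-D convolution with binary (+1/-1) weights: because every
    # weight is +1 or -1, each multiply-accumulate reduces to adding or
    # subtracting the corresponding pixel value.
    img = image.astype(np.int32)
    kh, kw = weights.shape
    out_h = (img.shape[0] - kh) // stride + 1
    out_w = (img.shape[1] - kw) // stride + 1
    out = np.zeros((out_h, out_w), dtype=np.int32)
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            # Add pixels under +1 weights, subtract pixels under -1 weights.
            out[i, j] = patch[weights > 0].sum() - patch[weights < 0].sum()
    return out

def relu(x):
    # Element-wise ReLU activation.
    return np.maximum(x, 0)

def max_pool2d(x, size=2):
    # Non-overlapping max-pooling over size x size windows.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Example: one binary conv + ReLU + max-pool stage on a random 64x64 image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64))
kernel = rng.choice([-1, 1], (4, 4))
feature_map = max_pool2d(relu(binary_strided_conv2d(image, kernel, stride=2)))
print(feature_map.shape)  # (15, 15) for a 64x64 input, 4x4 kernel, stride 2

On the PPA described in the paper, these per-pixel additions and subtractions are carried out in parallel across the imaging plane rather than in the nested loops shown here; the sketch only demonstrates why binary weights remove the need for multipliers.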
Original language: English
Publication status: Published - 10 Sept 2020
Event: British Machine Vision Virtual Conference - Virtual Event
Duration: 7 Sept 2020 to 10 Sept 2020
Conference number: 31
https://www.bmvc2020-conference.com/


