Performance, storage, and power consumption are three major factors that restrict the use of machine learning algorithms on embedded systems. However, new hardware architectures designed with visual computation in mind may hold the key to solving these bottlenecks. This work makes use of a novel visual device, the pixel processor array (PPA), to embed a convolutional neural network (CNN) onto the focal plane. We present a new high-speed implementation of strided convolutions using binary weights for the CNN on PPA devices, allowing all multiplications to be replaced by more efficient addition/subtraction operations. Image convolutions, ReLU activation functions, max-pooling and a fully-connected layer are all performed directly on the PPA's imaging plane, exploiting its massively parallel computing capabilities. We demonstrate CNN inference across 4 different applications, running between 2,000 and 17,500 fps with power consumption lower than 1.5W. These tasks include identifying 8 classes of plankton, hand gesture classification and digit recognition.
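The key idea in the abstract, that binary (+1/−1) weights let every multiply-accumulate in a strided convolution be replaced by an addition or subtraction, can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual PPA implementation (which runs in parallel on the focal plane); the function name, stride default, and array shapes are illustrative assumptions.

```python
import numpy as np

def binary_strided_conv(image, weights, stride=2):
    """Strided 2D convolution with binary (+1/-1) weights.

    Because each weight is +1 or -1, every multiply-accumulate
    reduces to adding or subtracting a pixel value, so no
    multiplications are needed.
    """
    kh, kw = weights.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            # add pixels under +1 weights, subtract pixels under -1 weights
            out[i, j] = patch[weights > 0].sum() - patch[weights < 0].sum()
    return out
```

On a PPA each processing element holds one pixel, so these additions and subtractions happen in parallel across the whole image plane rather than in the nested loops shown here.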
Title of host publication: BMVC 2020 Programme
Publisher: British Machine Vision Association
Publication status: Accepted/In press - 29 Jul 2020
Event: British Machine Vision Virtual Conference - Virtual Event
Duration: 7 Sep 2020 → 10 Sep 2020
Conference number: 31
Liu, Y., Bose, L. N., Chen, J., Carey, S. J., Dudek, P., & Mayol-Cuevas, W. W. (Accepted/In press). High-speed Light-weight CNN Inference via Strided Convolutions on a Pixel Processor Array. In BMVC 2020 Programme. British Machine Vision Association.