Abstract
In this paper we present a novel, low size, weight and power, close-to-sensor computational camera design. The hardware can be configured for a wide range of autonomous applications such as industrial inspection, binocular/stereo robotic vision, UAV navigation/control and biological vision analogues. Close coupling of the image sensor with computation, motor control and motion sensors enables low-latency responses to changes in the visual field. We introduce an image processing pipeline that detects and processes regions containing space-time structural coherence, reducing the transmission of redundant pixel data and stabilising selective imaging. The pipeline is designed to exploit close-to-sensor processing of regions of interest (ROI) adaptively captured at high temporal rates (up to 1000 ROI/s) and at multiple spatial and temporal resolutions. Space-time structurally coherent macro blocks are detected using a novel temporal block matching approach; the high temporal sampling rate allows a monotonicity constraint to be enforced to efficiently assess the confidence of matches. The robustness of the sparse motion estimation approach is demonstrated in comparison to a state-of-the-art optical flow algorithm and optimal Bayesian grid-based filtering. Finally, we describe how the system can generate unsupervised training data for higher-level multiple-instance or deep learning systems.
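The temporal block matching and monotonicity test are only summarised in the abstract; as a rough illustration of the idea (not the authors' implementation), the sketch below matches a macro block between two consecutive high-rate captures using a sum-of-absolute-differences search and flags the match as confident only if the cost rises monotonically when stepping away from the minimum along each axis. The block size, search radius and SAD cost are assumptions made for illustration.

```python
import numpy as np

def sad_cost(block, frame, top, left):
    """Sum of absolute differences between `block` and a same-size patch of `frame`."""
    h, w = block.shape
    patch = frame[top:top + h, left:left + w]
    return int(np.abs(block.astype(np.int32) - patch.astype(np.int32)).sum())

def match_block(block, frame, origin, radius=3):
    """Exhaustive block match within +/- `radius` pixels of `origin` (row, col).

    Returns the best displacement and a confidence flag from a monotonicity
    check: the cost surface should rise monotonically when stepping away from
    the best match along each axis, which is plausible when frames are sampled
    fast enough that inter-frame displacements stay small.
    """
    r0, c0 = origin
    size = 2 * radius + 1
    costs = np.full((size, size), np.inf)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            top, left = r0 + dr, c0 + dc
            if (0 <= top and 0 <= left
                    and top + block.shape[0] <= frame.shape[0]
                    and left + block.shape[1] <= frame.shape[1]):
                costs[dr + radius, dc + radius] = sad_cost(block, frame, top, left)

    br, bc = np.unravel_index(np.argmin(costs), costs.shape)

    def monotone_away_from(vals, idx):
        # Cost must be non-decreasing walking outwards from the minimum in both directions.
        outward_left = vals[:idx + 1][::-1]
        outward_right = vals[idx:]
        return bool(np.all(np.diff(outward_left) >= 0) and
                    np.all(np.diff(outward_right) >= 0))

    confident = (monotone_away_from(costs[br, :], bc) and
                 monotone_away_from(costs[:, bc], br))
    return (br - radius, bc - radius), confident

# Illustrative usage: track a 16x16 block between two consecutive ROI captures
# of a smooth synthetic texture shifted 1 pixel down and 2 pixels right.
yy, xx = np.mgrid[0:64, 0:64]
prev = (127 + 80 * np.sin(yy * 0.7) * np.cos(xx * 0.5)).astype(np.uint8)
curr = np.roll(prev, shift=(1, 2), axis=(0, 1))
block = prev[24:40, 24:40]
displacement, ok = match_block(block, curr, origin=(24, 24))
print(displacement, ok)  # expected displacement: (1, 2)
```

Because consecutive ROI captures arrive at up to 1000 per second, displacements between frames are typically sub-block-sized, which is what makes a small search radius and a simple monotone cost check a reasonable confidence test in this setting.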
Original language | English |
---|---|
DOIs | |
Publication status | Published - 27 Sept 2017 |
Event | Design and Architectures for Signal and Image Processing (DASIP2017), Dresden, Germany. Duration: 27 Sept 2017 → 29 Sept 2017. http://dasip2017.esit.rub.de/index.html |
Conference
Conference | Design and Architectures for Signal and Image Processing (DASIP2017) |
---|---|
Country/Territory | Germany |
Period | 27/09/17 → 29/09/17 |
Internet address | http://dasip2017.esit.rub.de/index.html |
Keywords
- Close-to-sensor processing
- feature analysis
- low latency processing
- sub-pixel tracking