Abstract

Motion blur is present in many images and has many causes: shaky hand-held photographs, the panning of 24 frames-per-second feature film cameras, a broadcast camera following a sprinter, or a camera on an autonomous robot. Judicious choice of camera parameters, illumination, and object speed can mitigate motion blur in some circumstances, but often it is unavoidable, or even desirable. In feature film and broadcast video, for example, some amount of motion blur is desired, as it aids the illusion of a moving object created by a rapid succession of still images.
For video analysis, however, motion blur remains an obstacle. Much of the work to date in visual analysis, and particularly in image matching, has not addressed motion blur. When both images are similarly blurred, this is not problematic: such images appear similar and can readily be identified as such. When the motion blur differs between frames, however, many existing approaches fail or perform significantly worse.
This thesis presents experiments that verify the model of motion blur, which relates un-blurred images to blurred ones, as a rectangular filter. It then proposes a modification to phase correlation based on this rectangular-filter model, which is shown to perform as well as the best existing methods from the literature. Finally, modifications to SIFT descriptor matching are proposed and tested; one of these increases the rate of correct SIFT feature matches by up to 60% when matching a non-blurred image region to a blurred one.
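The rectangular-filter model referred to above can be illustrated in one dimension: uniform linear motion over the exposure averages each pixel with its neighbours along the motion direction, i.e. convolution with a box (rectangular) kernel. The sketch below is not taken from the thesis; it is a minimal illustration of that model, with the function name `motion_blur_1d` and the blur length chosen here for demonstration.

```python
import numpy as np

def motion_blur_1d(signal, blur_length):
    """Blur a 1-D signal with a rectangular (box) filter of the given
    length, modelling uniform linear motion during the exposure."""
    kernel = np.ones(blur_length) / blur_length  # rectangular filter
    return np.convolve(signal, kernel, mode="same")

# A sharp step edge in the un-blurred signal...
edge = np.concatenate([np.zeros(8), np.ones(8)])
# ...becomes a linear ramp spread over blur_length samples.
blurred = motion_blur_1d(edge, 4)
```

Because the kernel is a constant over its support, a step edge blurs into a ramp whose width equals the blur length; estimating that width is one way the blur extent can be recovered from an image.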
|Date of Award||25 Sep 2018|
|Supervisor||David R Bull (Supervisor)|