We present a method based on statistical properties of local image neighbourhoods for locating text in real-scene images. This has applications in robot vision, and in desktop and wearable computing. The statistical measures we describe extract properties of the image that characterise text, largely invariant to the orientation, scale, or colour of the text in the scene. The measures are employed by a neural network to classify regions of an image as text or non-text. We thus avoid the use of different thresholds for the various situations we expect, including when text is too small to read, or when the text plane is not fronto-parallel to the camera. We briefly discuss applications and the possibility of recovering the text for optical character recognition.
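The abstract does not specify the exact neighbourhood statistics or network architecture, but the pipeline it describes (local statistical features per region, fed to a learned classifier) can be sketched as follows. The features here (patch mean, standard deviation, mean gradient magnitude) and the hand-set linear unit standing in for the trained neural network are illustrative assumptions, not the paper's actual measures or model.

```python
import numpy as np

def patch_features(patch):
    # Illustrative local statistics (assumed, not the paper's measures):
    # mean intensity, standard deviation, and mean gradient magnitude.
    gy, gx = np.gradient(patch.astype(float))
    grad_mag = np.hypot(gx, gy)
    return np.array([patch.mean(), patch.std(), grad_mag.mean()])

def classify_patches(image, patch_size=8, weights=None, bias=-2.0):
    # Slide a non-overlapping window over the image, extract features,
    # and score each patch with a single linear unit -- a stand-in for
    # the trained neural network described in the abstract.
    if weights is None:
        # Hypothetical weights favouring high-variance, high-gradient
        # patches, since text regions tend to have strong local contrast.
        weights = np.array([0.0, 0.05, 0.1])
    h, w = image.shape
    labels = np.zeros((h // patch_size, w // patch_size), dtype=bool)
    for i in range(labels.shape[0]):
        for j in range(labels.shape[1]):
            patch = image[i * patch_size:(i + 1) * patch_size,
                          j * patch_size:(j + 1) * patch_size]
            score = weights @ patch_features(patch) + bias
            labels[i, j] = 1.0 / (1.0 + np.exp(-score)) > 0.5
    return labels

# Usage: a flat background next to a striped, high-contrast region
# (the stripes mimic the high local variance of rendered text).
img = np.zeros((16, 16))
img[8:, ::2] = 255
mask = classify_patches(img, patch_size=8)
```

A real implementation would learn the classifier weights from labelled text/non-text patches; the per-patch feature extraction and region map are the parts that mirror the described approach.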
Translated title of the contribution: Finding Text Regions Using Localised Measures
Journal: Proceedings of the 11th British Machine Vision Conference
Publication status: Published - 2000
Bibliographical note: ISBN 1901725138
Publisher: BMVA Press
Name and venue of conference: Proceedings of the 11th British Machine Vision Conference
Other identifier: 1000507