Finding Text Regions Using Localised Measures

Paul Clark, Majid Mirmehdi, Barry Thomas

Research output: Contribution to journal › Article (Academic Journal) › peer-review


We present a method based on statistical properties of local image neighbourhoods for locating text in real-scene images. This has applications in robot vision, and in desktop and wearable computing. The statistical measures we describe extract properties of the image that characterise text, and are largely invariant to the orientation, scale, or colour of the text in the scene. The measures are employed by a neural network to classify regions of an image as text or non-text. We thus avoid the use of different thresholds for the various situations we expect, including when text is too small to read, or when the text plane is not fronto-parallel to the camera. We briefly discuss applications and the possibility of recovering the text for optical character recognition.
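The abstract does not specify which localised measures are used, but the overall pipeline it describes — compute per-neighbourhood statistics, then feed them to a classifier — can be sketched as follows. This is a minimal illustration using stand-in features (patch mean, variance, and gradient energy), not the paper's actual measures; the window size `win` and the feature set are assumptions.

```python
import numpy as np

def local_measures(gray, win=16):
    """Compute simple statistics over non-overlapping windows of a
    greyscale image.

    These features (mean, variance, gradient energy) are illustrative
    stand-ins for the paper's localised measures; in the described
    method, such feature vectors would be passed to a neural network
    that classifies each region as text or non-text.
    """
    h, w = gray.shape
    feats, coords = [], []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = gray[y:y + win, x:x + win].astype(float)
            gy, gx = np.gradient(patch)          # local intensity gradients
            feats.append([patch.mean(),          # brightness
                          patch.var(),           # contrast / texture
                          (gx**2 + gy**2).mean()])  # edge energy
            coords.append((y, x))
    return np.array(feats), coords
```

Because the features are statistics of the neighbourhood rather than absolute thresholds on intensity, a classifier trained on them can behave consistently across changes in scale, orientation, and colour, which is the invariance the abstract emphasises.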
Translated title of the contribution: Finding Text Regions Using Localised Measures
Original language: English
Pages (from-to): 675-684
Journal: Proceedings of the 11th British Machine Vision Conference
Publication status: Published - 2000

Bibliographical note

ISBN: 1901725138
Publisher: BMVA Press
Name and Venue of Conference: Proceedings of the 11th British Machine Vision Conference
Other identifier: 1000507
