A comparison between human and machine labelling of image regions

AA Clark, TS Troscianko, NW Campbell, BT Thomas

Research output: Contribution to journal › Article (Academic Journal) › peer-review

3 Citations (Scopus)

Abstract

In previous work, a vision system was developed which is capable of classifying objects in outdoor scenes. The approach involves segmenting the image into regions, obtaining a feature-based description of each region, and then passing this description on to an artificial neural network (ANN) which has been trained to label the region with one of 11 possible object types. The question which we are now addressing is: how important is each of these features to overall performance, both in human and machine vision? A set of experiments was conducted in which human subjects were trained in the same labelling task as the ANN. The stimuli, each depicting a single image region, were generated from a large database of urban and rural images. The subjects were then tested on both intact and degraded stimuli. The results suggest that certain features are particularly influential in mediating overall labelling performance. An equivalent experiment was carried out with the ANN. A method is presented which allows individual features to be corrupted in such a way as to simulate the loss of certain forms of visual information. The results, which are broadly similar to those found in the previous experiment, imply that the ANN can provide a useful model of human image region labelling. It is anticipated that the methodology, which draws on both computational and psychophysical techniques, will be of use to other areas of investigation.
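
The following Python fragment is a minimal sketch, not the authors' implementation, of the kind of pipeline the abstract describes: a feed-forward network labelling region feature vectors into 11 classes, followed by a feature-corruption test. The data, feature count, network size, and the specific corruption strategy (resampling a feature from its marginal distribution) are all illustrative assumptions.

# Sketch of region labelling with an ANN plus per-feature corruption,
# under the assumptions stated above.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical data: one feature vector per segmented image region.
n_regions, n_features, n_classes = 2000, 28, 11
X = rng.normal(size=(n_regions, n_features))    # region descriptors
y = rng.integers(0, n_classes, size=n_regions)  # object labels (e.g. sky, road, ...)

# Train an ANN to label regions with one of the 11 object types.
ann = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0)
ann.fit(X, y)

def corrupt_feature(X, idx, rng):
    """Simulate the loss of one form of visual information by shuffling a
    single feature across regions (an assumed stand-in for the paper's
    corruption method), leaving all other features intact."""
    Xc = X.copy()
    Xc[:, idx] = rng.permutation(Xc[:, idx])
    return Xc

# Compare intact vs. degraded labelling performance, feature by feature.
baseline = ann.score(X, y)
for idx in range(n_features):
    degraded = ann.score(corrupt_feature(X, idx, rng), y)
    print(f"feature {idx}: drop in accuracy = {baseline - degraded:.3f}")

Features whose corruption produces the largest drop in accuracy would, under this sketch, be the ones most influential in mediating labelling performance, which is the comparison the abstract draws between the ANN and the human subjects.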
Translated title of the contribution: A comparison between human and machine labelling of image regions
Original language: English
Pages (from-to): 1127-1138
Journal: Perception
Volume: 29
Issue: 9
Publication status: Published - 2000

Bibliographical note

Other identifier: 1000539
