Abstract
This paper proposes a generalised framework for model-based, context-dependent video coding that exploits characteristics of the human visual system. The system uses variable-quality coding driven by an importance map constructed from context-dependent rules. The technique is demonstrated for a specific video context, namely open signed content, and a model for gaze prediction in open signed content is accordingly developed, based on motion and shot changes. The framework is shown to achieve a considerable improvement in coding efficiency for the given context.
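The abstract does not detail how the importance map or the gaze model are constructed. As a rough illustration only, the Python sketch below shows one plausible pipeline consistent with the description: a toy gaze predictor driven by motion energy and shot changes, a Gaussian foveal importance map, and an H.264-style per-macroblock quantisation-parameter (QP) map. All function names and parameters (`alpha`, `sigma`, `base_qp`, `qp_swing`) and the Gaussian fall-off are assumptions made for illustration, not the authors' method.

```python
import numpy as np

def predict_gaze(motion_energy, prev_gaze=None, shot_change=False, alpha=0.7):
    """Toy gaze predictor: follow the centroid of per-block motion energy,
    smoothed over time; on a shot change, snap straight to the new centroid.
    (Illustrative only -- the paper's actual model is not specified here.)"""
    ys, xs = np.indices(motion_energy.shape)
    total = motion_energy.sum()
    if total == 0:  # no motion: hold the previous gaze, or default to centre
        if prev_gaze is not None:
            return prev_gaze
        return (motion_energy.shape[0] / 2.0, motion_energy.shape[1] / 2.0)
    centroid = ((ys * motion_energy).sum() / total,
                (xs * motion_energy).sum() / total)
    if shot_change or prev_gaze is None:
        return centroid
    # exponential smoothing keeps the gaze estimate stable within a shot
    return (alpha * prev_gaze[0] + (1 - alpha) * centroid[0],
            alpha * prev_gaze[1] + (1 - alpha) * centroid[1])

def importance_map(shape, gaze, sigma=3.0):
    """Gaussian fall-off in importance around the predicted gaze point,
    loosely mimicking the drop in visual acuity away from the fovea."""
    ys, xs = np.indices(shape)
    d2 = (ys - gaze[0]) ** 2 + (xs - gaze[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def qp_map(importance, base_qp=32, qp_swing=10):
    """Variable-quality coding: lower QP (finer quantisation) where the
    importance map is high, higher QP elsewhere; clipped to H.264's 0-51."""
    qp = base_qp + qp_swing * (1.0 - 2.0 * np.clip(importance, 0.0, 1.0))
    return np.clip(np.rint(qp), 0, 51).astype(int)

# Example: one frame of a 9x11 macroblock grid where motion is concentrated
# where the signer's hands would be; the resulting QP is finest there and
# rises towards the frame edges.
motion = np.zeros((9, 11))
motion[4, 5] = 1.0
gaze = predict_gaze(motion, prev_gaze=None, shot_change=True)
qp = qp_map(importance_map(motion.shape, gaze))
```

In a real encoder the QP map would be handed to the rate controller per macroblock; the Gaussian fall-off is just one convenient choice of acuity model.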
Translated title of the contribution | Towards a model based paradigm for efficient coding of context dependent video material
---|---
Original language | English
Title of host publication | Eighth International Workshop on Image Analysis for Multimedia Interactive Services, 2007 (WIAMIS '07), Santorini, Greece
Publisher | Institute of Electrical and Electronics Engineers (IEEE)
Pages | 52
Number of pages | 1
ISBN (Print) | 076952818X
Publication status | Published - Jun 2007 |
Event | 8th International Workshop on Image Analysis for Multimedia Interactive Services, Santorini, Greece. Duration: 1 Jun 2007 → …
Conference
Conference | 8th International Workshop on Image Analysis for Multimedia Interactive Services
---|---
Country/Territory | Greece
City | Santorini
Period | 1/06/07 → …
Bibliographical note
Rose publication type: Conference contribution

Terms of use: Copyright © 2007 IEEE. Reprinted from Eighth International Workshop on Image Analysis for Multimedia Interactive Services, 2007 (WIAMIS '07).
This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Bristol's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected].
By choosing to view this document, you agree to all provisions of the copyright laws protecting it.