Scalable video fusion

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

2 Citations (Scopus)

Abstract

A novel system is introduced that can fuse two or more sets of multimodal videos in the transform domain. This is achieved without drift and produces an embedded bitstream offering fine-grain scalability. Transform-domain fusion has previously not been possible for video compression systems because of the complications introduced by the predictive loops in conventional video encoding. The compression system is based on an optimised spatiotemporal codec that combines the 3D Discrete Dual-tree Wavelet Transform (DDWT) with bit-plane encoding (SPIHT) and a coefficient sparsification process (noise shaping). Together, these methods can efficiently encode a video sequence without motion compensation, owing to the directional (in space and time) selectivity of the transform. The system offers highly flexible video fusion in dynamic-bandwidth environments with variable client receiving capabilities.
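
The sketch below illustrates the general idea of transform-domain fusion described in the abstract: both sequences are transformed into a wavelet domain and fused by combining coefficients, so no pixel-domain blending or motion compensation is needed. It is a minimal illustration, not the paper's codec: it uses a separable 3D DWT from PyWavelets as a stand-in for the 3D DDWT, applies common heuristic fusion rules (average the lowpass band, pick the larger-magnitude detail coefficient), and omits SPIHT bit-plane coding and noise shaping entirely. The function name `fuse_videos` and the fusion rules are illustrative assumptions.

```python
# Minimal sketch of transform-domain video fusion, assuming two co-registered
# multimodal sequences of equal shape (frames, height, width).
# NOTE: separable 3D DWT stands in for the paper's 3D DDWT; SPIHT and
# noise shaping are not modelled here.
import numpy as np
import pywt


def fuse_videos(video_a, video_b, wavelet="haar", level=2):
    """Fuse two video volumes by combining their 3D wavelet coefficients."""
    ca = pywt.wavedecn(video_a, wavelet, level=level)
    cb = pywt.wavedecn(video_b, wavelet, level=level)

    # Average the approximation (lowpass) band.
    fused = [0.5 * (ca[0] + cb[0])]

    # For each decomposition level, keep the larger-magnitude detail
    # coefficient from either source (max-magnitude selection rule).
    for da, db in zip(ca[1:], cb[1:]):
        fused.append({
            k: np.where(np.abs(da[k]) >= np.abs(db[k]), da[k], db[k])
            for k in da
        })

    # Inverse transform back to the pixel domain.
    return pywt.waverecn(fused, wavelet)


if __name__ == "__main__":
    # Example with synthetic data: two 16-frame, 64x64 sequences.
    rng = np.random.default_rng(0)
    a = rng.standard_normal((16, 64, 64))
    b = rng.standard_normal((16, 64, 64))
    print(fuse_videos(a, b).shape)  # (16, 64, 64)
```

In the paper's scheme the fused coefficients would remain in the transform domain and be bit-plane coded (SPIHT) to produce the embedded, finely scalable bitstream; the inverse transform here is only to show that the fused result is a valid video volume.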

Original language: English
Title of host publication: 2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings
Pages: 1277-1281
Number of pages: 5
DOIs
Publication status: Published - 1 Dec 2013
Event: 2013 20th IEEE International Conference on Image Processing, ICIP 2013 - Melbourne, VIC, Australia
Duration: 15 Sept 2013 - 18 Sept 2013

Conference

Conference: 2013 20th IEEE International Conference on Image Processing, ICIP 2013
Country/Territory: Australia
City: Melbourne, VIC
Period: 15/09/13 - 18/09/13

Related project
  • Scalable Information Fusion - Full

    Bull, D. R. (Principal Investigator)

    1/01/10 - 1/01/11

    Project: Research
