
Researches On Context-Based Variable Length Coding

Posted on: 2010-04-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q Wang
Full Text: PDF
GTID: 1118360278496148
Subject: Computer application technology
Abstract/Summary:
Driven by demanding applications such as high-resolution digital broadcasting, high-density laser-digital storage media, wireless broadband multimedia communication, and broadband Internet streaming media, new-generation video coding techniques and standards have become one of the most active research areas in academia and industry in recent years. This dissertation investigates variable length coding (VLC) techniques for DCT-based video coding systems, addressing the shortcomings of traditional VLC techniques in coding efficiency, implementation friendliness, and error resiliency. Three context-based variable length coding techniques are proposed to obtain high coding efficiency, high decoding throughput, and high error resiliency, and the maximal coding efficiency achievable by context-based VLC under the complexity acceptable in practical applications is also discussed. In particular, the proposed context-based 2-D variable length coding technique, C2DVLC, has been adopted by the Chinese video coding standard AVS. The techniques are described in detail as follows.

First, traditional variable length coding techniques for video DCT blocks have relatively low coding efficiency because the statistical properties of DCT blocks are not fully exploited. To resolve this problem, this dissertation analyzes these properties in depth and proposes a context modeling technique that exploits them effectively. On this basis, a context-based 2-D variable length coding technique, C2DVLC, is proposed for video DCT blocks. Specifically, to capture the variation of the (Run, Level) distribution, C2DVLC uses multiple contexts and multiple 2D-VLC tables, where each context and its associated table are designed to match one of the typical distributions observed in that variation. By adaptively switching among the contexts and tables, the distribution variation is tracked.
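The (Run, Level) representation that 2D-VLC schemes operate on can be sketched as follows: each nonzero quantized coefficient (the Level) is paired with the number of zeros preceding it (the Run) along the zigzag scan. The function name and the sample coefficient block below are illustrative, not taken from the dissertation.

```python
def run_level_pairs(coeffs):
    """Convert a zigzag-scanned list of quantized DCT coefficients
    into (Run, Level) pairs: Run is the count of zero coefficients
    preceding each nonzero coefficient (the Level)."""
    pairs = []
    run = 0
    for c in coeffs:
        if c == 0:
            run += 1          # accumulate the zero run
        else:
            pairs.append((run, c))
            run = 0           # reset after each nonzero Level
    return pairs              # trailing zeros produce no pair

# A typical block after quantization: energy concentrated at low frequencies.
print(run_level_pairs([7, -3, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0]))
# -> [(0, 7), (0, -3), (2, 2), (3, 1)]
```

Because the joint (Run, Level) statistics shift as coding proceeds through a block, a single fixed 2D-VLC table cannot match all of them, which is what motivates the multiple context-dependent tables described above.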
C2DVLC switches contexts and tables according to the maximal magnitude of the previously coded Levels, which can be realized by table lookup, so the implementation is simple. C2DVLC uses Exponential-Golomb codes, which keeps the table storage requirement low. Experimental results on the test videos show that C2DVLC achieves a maximal 16.11% and an average 8.38% bit-rate saving relative to 2D-VLC. C2DVLC has been adopted by the AVS1-P2 video coding standard.

Second, context modeling improves the coding efficiency of variable length coding, but it also lowers decoding throughput. The reason is that existing techniques employ tightly sequential context modeling: the context dependency between successive symbols forces context modeling and variable length decoding to be executed sequentially at the decoder. Hierarchical dependency context modeling, HDCM, is therefore proposed to resolve this problem. HDCM fully exploits the statistical properties of DCT blocks, which yields high coding efficiency, while breaking the context dependency between successive symbols, which enables context modeling and variable length decoding to run concurrently. The proposed HDCM-based variable length coding technique, HDCMVLC, combines HDCM with Golomb-Rice codes to code DCT blocks efficiently. Experimental results on the test videos show that HDCMVLC achieves coding efficiency similar to CAVLC while improving decoding throughput by up to 86.12%.

Third, reversible variable length coding (RVLC) is an important error-resilient coding technique, but traditional RVLC does not support context modeling, so its coding efficiency is low, and its error resilience is limited at high error rates. A context-based reversible variable length coding technique, CRVLC, is therefore proposed to achieve high coding efficiency and high error resiliency simultaneously. CRVLC combines HDCM, reversible Golomb-Rice codes, and a data partitioning technique.
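The two mechanisms just described, Exponential-Golomb codewords (which need no stored tables) and context selection from the maximal previously coded Level magnitude, can be sketched as below. The threshold values in `context_index` are illustrative assumptions, not the thresholds of the AVS standard.

```python
def exp_golomb(value, k=0):
    """k-th order Exponential-Golomb codeword for a non-negative
    integer, returned as a bit string.  Codewords are generated
    algorithmically, so no large code tables need to be stored."""
    v = value + (1 << k)           # shift into the k-th order range
    bits = v.bit_length()
    prefix = "0" * (bits - k - 1)  # unary prefix of leading zeros
    return prefix + format(v, "b")

def context_index(max_coded_level, thresholds=(1, 2, 4, 8)):
    """Select a context (and hence a VLC table) from the maximal
    magnitude of the Levels coded so far in the block.  The
    thresholds here are hypothetical; in a real codec this step
    reduces to a table lookup."""
    for i, t in enumerate(thresholds):
        if max_coded_level <= t:
            return i
    return len(thresholds)

print(exp_golomb(3))  # -> "00100"
```

Small Levels early in a block keep the coder in low-index contexts tuned to short runs of small values; once a large Level has been seen, the coder stays in a context whose table favors large magnitudes.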
With the aid of the proposed data partitioning, HDCM allows context modeling to be reconstructed during backward decoding, making complete backward decoding of CRVLC possible. The proposed data partitioning also improves error resiliency. Experimental results on the test videos show that CRVLC achieves an average 4.21% bit-rate saving and up to a 2.49 dB error-resilience improvement compared to 2D-RVLC.

Fourth, it is worth asking how to design an optimal context-based adaptive variable length coding technique for video DCT blocks and what coding efficiency is maximally achievable. This dissertation designs a context-based adaptive variable length coding technique, CBAVLC, that aims to maximize coding efficiency under the complexity acceptable in practical video applications. CBAVLC uses HDCM, whose modeling performance our analysis shows to be near optimal. CBAVLC also uses an adaptive coding technique that estimates the probability distribution of each context on line at each time instant and then selects the optimal Golomb code, so the coding in CBAVLC is likewise near optimal. Experimental results show that CBAVLC offers only a slight coding-efficiency improvement over CAVLC, indicating that CAVLC, C2DVLC, and HDCMVLC are already close to the highest coding efficiency obtainable under acceptable complexity in practical video applications.
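The on-line adaptation described for CBAVLC, estimating each context's statistics as symbols arrive and choosing the Golomb code accordingly, can be sketched with a per-context Golomb-Rice coder. The priors and the rule "pick k near log2 of the running mean" are common choices used here for illustration; they are not the dissertation's exact adaptation scheme.

```python
class AdaptiveGR:
    """On-line Golomb-Rice coder for one context: tracks the sum
    and count of symbols seen so far, estimates their mean, and
    derives the Rice parameter k from it before each codeword."""

    def __init__(self):
        self.total, self.count = 4, 2  # small priors damp early k swings

    def encode(self, value):
        mean = self.total / self.count
        k = max(0, int(mean).bit_length() - 1)  # k ~ floor(log2(mean))
        self.total += value                     # update statistics on line
        self.count += 1
        q, r = value >> k, value & ((1 << k) - 1)
        rem = format(r, "b").zfill(k) if k else ""
        return "1" * q + "0" + rem              # unary quotient + k-bit remainder

gr = AdaptiveGR()
print(gr.encode(3))  # -> "101" (k = 1 under the initial priors)
```

A decoder running the same update rule recomputes k from the symbols it has already decoded, so no side information about the parameter needs to be transmitted.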
Keywords/Search Tags: video coding, entropy coding, variable length coding, context modeling