Efficient Mode Selection Based on Quality of Experience Estimation

DOI : 10.17577/IJERTCONV2IS13046


Aravinda Reddy P N, M.Tech Signal Processing, S.J.C. Institute of Technology, Chickballapur, India, aravindareddy.27@gmail.com

Tilak Raj, Asst. Professor, S.J.C. Institute of Technology, Chickballapur, India, tilak.1si10lsp15@gmail.com

Abstract A new video coding standard, H.264/AVC, has recently been developed and standardised. It represents a number of advances in video coding efficiency and flexibility and is replacing existing standards such as H.263 and MPEG-1/2/4. In this paper we present an efficient algorithm for selecting the best possible mode according to locally computed Quality of Experience (QoE) characteristics.

Keywords Quality of Experience, Mean Opinion Score, H.264


Introduction:

H.264 is an international video coding standard of the ITU-T, developed jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC MPEG Video Group, working together as the Joint Video Team (JVT). The goals of this standardisation effort were enhanced compression efficiency and a network-friendly video representation for interactive applications (video telephony) and non-interactive applications (broadcast, streaming, storage, video on demand). The H.264/AVC video coding standard delivers significantly improved compression efficiency compared with previous standards. Owing to this improved compression efficiency and the increased flexibility of coding and transmission, H.264 has the potential to enable new services over different networks. In this paper we present a fast mode selection algorithm based on a user-perspective metric called Quality of Experience (QoE).

Inter Prediction:

Inter prediction creates a prediction model from one or more previously encoded video frames. The model is formed by shifting samples in the reference frame(s) (motion-compensated prediction). The AVC codec uses block-based motion compensation, the same principle adopted by every major coding standard since H.261. Important differences from earlier standards include support for a range of block sizes (down to 4×4) and fine sub-pixel motion vectors (1/4 pixel in the luma component).

AVC supports motion compensation block sizes ranging from 16×16 to 4×4 luminance samples, with many options between the two. The luminance component of each macroblock (16×16 samples) may be split up in four ways, as shown in Figure 21: 16×16, 16×8, 8×16 or 8×8. Each of the sub-divided regions is a macroblock partition. If the 8×8 mode is chosen, each of the four 8×8 macroblock partitions within the macroblock may be split in a further four ways, as shown in Figure 22: 8×8, 8×4, 4×8 or 4×4 (known as macroblock sub-partitions).

These partitions and sub-partitions give rise to a large number of possible combinations within each macroblock. This method of partitioning macroblocks into motion-compensated sub-blocks of varying size is known as tree-structured motion compensation.

Fig: Macroblock partitions: 16×16, 16×8, 8×16 and 8×8

Fig: Macroblock sub-partitions: 8×8, 8×4, 4×8 and 4×4
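The tree-structured splitting described above can be sketched in code. The enumeration helper below is illustrative (it is not part of the standard or the paper); the partition sizes themselves are taken from the text.

```python
from itertools import product

# Partition sizes from the text (width, height in luma samples)
MB_PARTITIONS = [(16, 16), (16, 8), (8, 16), (8, 8)]
SUB_PARTITIONS = [(8, 8), (8, 4), (4, 8), (4, 4)]

def partition_layouts():
    """Yield each possible layout of one 16x16 macroblock as a list of block sizes."""
    for part in MB_PARTITIONS:
        if part != (8, 8):
            # 16x16 -> 1 block, 16x8 or 8x16 -> 2 blocks
            n = (16 // part[0]) * (16 // part[1])
            yield [part] * n
        else:
            # each of the four 8x8 partitions may be split independently
            for subs in product(SUB_PARTITIONS, repeat=4):
                layout = []
                for sub in subs:
                    layout.extend([sub] * ((8 // sub[0]) * (8 // sub[1])))
                yield layout

layouts = list(partition_layouts())
print(len(layouts))  # 3 fixed splits + 4**4 sub-split combinations = 259
```

Every layout covers exactly the 256 luma samples of one macroblock, which is why the mode decision below has such a large search space per macroblock.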

Quality of Experience (QoE):

QoE requirements define the overall, subjective performance at the service level from the perspective of the end user. The establishment of a consistent, baseline quality of experience for end users, and of corresponding objective engineering targets, is critical to the market success of broadband service offerings. As it is typically the viewer who judges video quality, the subjective measurement of mean opinion score (MOS) is considered to be an accurate way to determine the perceived quality of compressed video. However, evaluating MOS is expensive in terms of time and resources and cannot be calculated automatically within real-time video applications. Hence several objective assessment methods have been developed to automatically predict the subjective results based on video content and the characteristics of the human visual system. A 5-grade discrete scale ranging from 0 to 1 was used to rate the quality of the test video sequences, where 0 = bad, 0.25 = poor, 0.5 = fair, 0.75 = good and 1 = excellent.
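The 5-grade scale above is a simple discrete mapping; as a small sketch, a continuous objective score can be snapped to the nearest grade (the snapping helper itself is illustrative, not from the paper):

```python
# 5-grade scale from the text: score in [0, 1] -> quality label
SCALE = {0.0: "bad", 0.25: "poor", 0.5: "fair", 0.75: "good", 1.0: "excellent"}

def grade(score):
    """Snap a score in [0, 1] to the nearest grade on the discrete scale."""
    nearest = min(SCALE, key=lambda g: abs(g - score))
    return nearest, SCALE[nearest]

print(grade(0.8))  # (0.75, 'good')
```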

A Fast Mode Selection Algorithm based on Quality of Experience Estimation

The mode selection process in block-based video encoders involves minimising the rate-distortion cost function J = D + λ×R, where λ is the Lagrange multiplier, R is the rate and D is the SSD between the original and reconstructed video data. MOSp-based mode selection involves integrating the MOSp metric into the RD cost function and choosing the best mode as the one which minimises this cost function. A new MOSp-based mode selection model is presented here.

The rate-distortion cost function used in the reference H.264/AVC is:

J = D + λ×R

where λ is the Lagrange multiplier, R is the rate and D is the SSD between the original and reconstructed video data. Integrating the MOSp metric into this equation involves defining a new MOSp-based distortion measure and a new Lagrange multiplier. The new MOSp-based rate-distortion cost function is given as:

J = Dmosp + λmosp×R

where Dmosp is the MOSp-based distortion measure which replaces the SSD measure, λmosp is the new Lagrange multiplier which must be re-modelled, and R is the total bits for coding a macroblock using the mode under test. The Lagrange multiplier in the reference H.264/AVC is calculated as a function of the Quantisation Parameter (QP) and has been modelled for SSD as the distortion metric. Therefore, changing SSD to Dmosp requires re-modelling of the Lagrange multiplier to obtain λmosp.

A distortion measure derived from the MOSp metric must be inversely related to the MOSp measure. Therefore, the MOSp-based distortion measure Dmosp is given as:

Dmosp = 1 − MOSp
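The cost function J = Dmosp + λmosp×R with Dmosp = 1 − MOSp can be sketched directly; the per-mode MOSp and rate numbers and the λ value below are hypothetical illustrations, not measurements from the paper:

```python
def mosp_rd_cost(mosp, rate, lagrange):
    """J = Dmosp + lambda_mosp * R, with Dmosp = 1 - MOSp (from the text)."""
    return (1.0 - mosp) + lagrange * rate

# Hypothetical per-mode estimates: mode -> (MOSp in [0, 1], rate in bits)
modes = {"16x16": (0.90, 120), "8x8": (0.95, 180), "4x4": (0.97, 260)}
lam = 0.0005  # placeholder lambda_mosp

best = min(modes, key=lambda m: mosp_rd_cost(modes[m][0], modes[m][1], lam))
print(best)  # "8x8": its quality gain outweighs its extra rate at this lambda
```

Note how λmosp trades quality against rate: a larger λmosp pushes the decision towards cheaper (coarser) modes, a smaller one towards higher-MOSp modes.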

The Mean Opinion Score (MOS) can be calculated from the set of formulas shown below.

Video quality Vq is calculated using the video quality parameters and is expressed as:

Vq = 1 + Icoding × exp(−PplV / DPplV)

where Icoding represents the basic video quality affected by the coding distortion under a combination of video bit rate (BrV [kbit/s]) and video frame rate (FrV [fps]), and the packet loss robustness factor DPplV expresses the degree of video quality robustness due to packet loss, where PplV [%] represents the packet-loss rate.

The basic video quality affected by coding distortion, Icoding, is expressed as:

Icoding = IOfr × exp(−(ln(FrV) − ln(Ofr))² / (2 × DFrV²)) × Distortion

where Distortion is the distortion measured in terms of blockiness and blurriness.

IOfr represents the maximum video quality at each video bit rate (BrV) and is expressed as:

IOfr = v3 − v3 / (1 + (BrV / v4)^v5), v3, v4 and v5: constants

DFrV represents the degree of video quality robustness due to the frame rate (FrV) and is expressed as:

DFrV = v6 + v7 × BrV, v6 and v7: constants
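Putting the model equations together, the overall Vq computation can be sketched as below (omitting the blockiness/blurriness distortion factor). All constants v1..v7 and the DPplV value are arbitrary placeholder assumptions for illustration, not fitted coefficients:

```python
import math

# Placeholder constants (assumptions, not fitted values)
v1, v2, v3, v4, v5, v6, v7 = 1.0, 0.01, 4.0, 500.0, 1.2, 2.0, 0.001

def video_quality(br_v, fr_v, ppl_v, d_ppl_v=10.0):
    """Sketch of the opinion-model equations above (G.1070 form)."""
    o_fr = min(max(v1 + v2 * br_v, 1.0), 30.0)       # optimal frame rate, 1 <= Ofr <= 30
    i_ofr = v3 - v3 / (1.0 + (br_v / v4) ** v5)      # max quality at this bit rate
    d_frv = v6 + v7 * br_v                           # robustness to frame-rate change
    i_coding = i_ofr * math.exp(-((math.log(fr_v) - math.log(o_fr)) ** 2)
                                / (2.0 * d_frv ** 2))
    return 1.0 + i_coding * math.exp(-ppl_v / d_ppl_v)  # degraded by packet loss

print(video_quality(br_v=800.0, fr_v=30.0, ppl_v=0.0))
```

With these placeholder constants, quality peaks when the frame rate matches Ofr for the given bit rate, and decays exponentially with the packet-loss rate PplV.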

MOSp-based Fast Mode Selection Algorithm:

  1. For each macroblock, calculate the activity using the following equation:

Where i is the macroblock number, P is the total number of macroblocks in a video frame, j is the frame number and T is the total number of frames in the video sequence.

  2. Calculate the Lagrangian multiplier λmosp using the macroblock activity:

     A = (9.413E-009 × Activity) + 1.152E-006
     B = (-0.0003292 × Activity) + 0.2685

  3. Select a macroblock mode.

  4. Encode the macroblock and calculate Dmosp = 1 − MOSp.

  5. Compute the RD cost function:

     J = Dmosp + λmosp × R

  6. If J < Jmin, where Jmin is the minimum RD cost over all modes evaluated so far, update Jmin = J and record the current mode as the best so far.

  7. If not all modes have been evaluated, go to step 3 and test the next mode. Otherwise, the mode with the minimum RD cost is the best mode for encoding the macroblock.
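The selection loop above can be sketched as follows. The paper's activity equation is not recoverable from the text, so sample variance stands in for it here, and encode_mode is a stub returning hypothetical (MOSp, rate) numbers rather than a real encoder:

```python
import numpy as np

def activity(macroblock):
    """Stand-in activity measure (assumption): sample variance of the 16x16 luma block."""
    return float(np.var(macroblock))

def encode_mode(macroblock, mode):
    """Stub: a real encoder would return the measured MOSp and bit count for this mode."""
    table = {"16x16": (0.90, 120), "8x8": (0.95, 180), "4x4": (0.97, 260)}
    return table[mode]

def select_mode(macroblock, lagrange):
    """Steps 3-7 above: evaluate each mode and keep the minimum-cost one."""
    j_min, best = float("inf"), None
    for mode in ("16x16", "8x8", "4x4"):
        mosp, rate = encode_mode(macroblock, mode)
        j = (1.0 - mosp) + lagrange * rate   # J = Dmosp + lambda_mosp * R
        if j < j_min:
            j_min, best = j, mode
    return best

mb = np.random.default_rng(0).integers(0, 256, size=(16, 16))
print(select_mode(mb, lagrange=0.0005))  # "8x8" with these stub numbers
```

The speed-up of the "fast" variant comes from deriving λmosp once per macroblock from its activity (step 2), rather than re-deriving the distortion model inside the per-mode loop.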

Experimental Results:

To investigate whether there is a gain in MOS using the MOSp-based mode selection algorithm compared with the reference H.264 encoder, the results are presented as bit rate versus MOS graphs for each of the three test sequences. Each graph has two curves, one for each codec. These bit rate versus MOS graphs are presented in the figures below.


Ofr is the optimal frame rate that maximises the video quality at each video bit rate (BrV) and is expressed as:

Ofr = v1 + v2 × BrV, 1 ≤ Ofr ≤ 30, v1 and v2: constants

The table below compares the coding performance of the two codecs and includes the following information:


  1. Percentage gain (or loss) in visual quality for each sequence.

  2. Percentage gain (or loss) in bit rate for each sequence.





Sequence     Visual quality gain (%)   Bit rate gain (%)
Sequence 1   0.06 to 0.83              -0.015 to
Sequence 2   0.25 to 1.11              0 to 0.2
Sequence 3   0.01 to 1.09              0.03 to 0.15


Conclusion:

We propose a fast block mode selection algorithm based on the Mean Opinion Score. The algorithm uses an objective method to approximate the subjective judgement of the end user. Using this method we can rate the quality of a video according to the blockiness in the video. We have achieved good results compared with other algorithms.


