Linear Block Codes in OFDM System Performances

DOI : 10.17577/IJERTV3IS070271


Chankit Jain1, Rahul Abhishek2, Shailendra Sahu3

M.Tech (Control & Automation), School of Electrical Engineering, VIT University, Vellore1, 2, 3

Abstract–Channel coding plays a very important role in OFDM system performance. The structure of OFDM systems makes channel coding more effective in combating fading channels; coded OFDM is sometimes known as COFDM. The role of channel coding, in conjunction with frequency and time interleaving, is to provide a link between bits transmitted on separated carriers of the signal spectrum, in such a way that the information conveyed by faded carriers can be reconstructed in the receiver. Frequency selectivity, usually considered a disadvantage, is thus turned into an advantage that can be called frequency diversity. Using Channel State Information (CSI), channel coding can yield additional gain; channel state information is the frequency response of the channel, or the signal-to-noise ratio on each carrier.

Index Terms- OFDM, Digital Communication, BPSK, QAM, Cyclic Codes.


    1. Introduction

    Nowadays, wireless communication such as mobile communication poses an ecological threat to the environment. High-power carrier radiation emitted from mobile antennas is causing the extinction of some species in the environment. In our project we intend to reduce this high carrier power, which turns out to be an ecological benefit.

    A digital communication system comprises many modules, including the information source and sink, source encoder and decoder, modulator and demodulator, and channel encoder and decoder. The communication involved may be wired or wireless.

    The schemes used in the different modules of a digital communication system determine the characteristics and nature of the transmitted signal. Therefore, to meet our requirements, we have chosen efficient schemes: OFDM for modulation and demodulation, and cyclic and convolutional coding schemes for the channel coding modules. Here we have chosen ideal, AWGN, and Rayleigh channels as the wireless communication channels.

    By integrating all these modules with efficient schemes, we achieve carrier power reduction together with a lower BER for the transmitted signals.


    2.1 Introduction

    The purpose of a communication system is to transport an information bearing signal from source to a user destination via a communication channel. Basically, a communication system is of an analog or digital type. In an analog communication system, the information-bearing signal is continuously varying in both amplitude and time, and it is used directly to modify some characteristic of the sinusoidal carrier wave, such as amplitude, phase, or frequency. In a digital communication system on the other hand, the information bearing signal is processed so that it can be represented by a sequence of discrete messages.

    2.1.1 Sources and Signals

    A source of information generates a message; examples include the human voice, a television picture, and atmospheric temperature and pressure. The message is not electrical in nature, so a transducer is used to convert it into an electrical waveform called the message signal. This waveform is also referred to as the baseband signal.

    The message signal can be of analog or digital type. An analog signal can always be converted to digital form by combining three basic operations: sampling, quantizing, and encoding, as shown in Fig 2.1.

    Fig 2.1: Analog-to-digital conversion.

    In the sampling operation, only sample values of the analog signal at uniformly spaced discrete instants of time are retained. In the quantizing operation, each sample value is approximated by the nearest level in a finite set of discrete levels. In the encoding operation, the selected level is represented by a code word that consists of a prescribed number of code elements.

    As a result of the sampling and quantizing operations, errors are introduced into the digital signal. These errors are non-reversible, in that it is not possible to produce an exact replica of the original analog signal from its digital representation. With proper selection of the sampling rate and code-word length, however, the errors introduced by sampling and quantizing can be made so small that the difference between the analog signal and its digital reconstruction is not discernible by a human observer.
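As a hedged illustration (not from the paper), the three operations of Fig 2.1 can be sketched in Python; the sine-wave source, the sampling rate, and the 8-level (3-bit) uniform quantizer are all illustrative assumptions:

```python
import numpy as np

# Illustrative parameters: a 1 Hz sine sampled at fs = 16 Hz over one period,
# quantized to 8 levels (3-bit code words).
fs, levels = 16, 8
t = np.arange(fs) / fs                      # uniformly spaced sampling instants
samples = np.sin(2 * np.pi * t)             # sampling

step = 2.0 / levels                         # quantizer step over [-1, 1)
indices = np.clip(np.floor((samples + 1) / step), 0, levels - 1).astype(int)
quantized = -1 + (indices + 0.5) * step     # quantizing: nearest-level approximation

codewords = [format(i, '03b') for i in indices]  # encoding: 3-bit code words
```

The quantization error is bounded by half a step, which is the non-reversible error the text refers to.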


    The choice of modulation scheme will significantly affect the characteristics, performance and resulting physical realization of a communication system. It is important to select a modulation scheme that is appropriate for the Doppler spread and the delay spread of the channel. Consideration must be given to the required data rate, acceptable level of latency, available bandwidth, anticipated link budget and target hardware cost, size and current consumption.

      1. OFDM (orthogonal frequency division multiplexing)

        Orthogonal frequency-division multiplexing (OFDM) is a method of encoding digital data on multiple carrier frequencies. OFDM has developed into a popular scheme for wideband digital communication, whether wireless or over copper wires, used in applications such as digital television and audio broadcasting, DSL broadband internet access, wireless networks, and 4G mobile communications.

        An OFDM baseband signal is the sum of a number of orthogonal sub-carriers, each sub-carrier being independently modulated (for instance using QAM or PSK) by its own data. The orthogonality allows simultaneous transmission on many sub-carriers in a tight frequency space without mutual interference; thus, they are able to overlap without interfering. As a result, OFDM systems are able to maximize spectral efficiency without causing adjacent channel interference. The tasks of pulse forming and modulation can be performed by the Inverse Discrete Fourier Transform (IDFT), which can be implemented very efficiently as the Inverse Fast Fourier Transform (IFFT); in the receiver, only an FFT is needed to reverse this operation. By adding a guard interval between OFDM symbols and making the guard interval larger than the expected multipath delay spread induced by the channel, the inter-symbol interference (ISI), a major problem in broadband transmission over multipath fading channels, can be eliminated completely. However, adding a guard interval also increases the power wastage and decreases the bandwidth efficiency.

        Fig. 3.1: OFDM signal

        The time-frequency view of an OFDM signal is shown in Fig. 3.1, in which the subcarrier spacing and the OFDM symbol period are shown. From this figure we can see that even though the subcarrier signals overlap in the time and frequency domains, no mutual inter-carrier interference occurs when sampling is done at certain specific points in the frequency domain, called the subcarrier positions. This is one of the important properties of OFDM signals, and it leads to high spectral efficiency compared to conventional FDM.

      2. Mathematical description of OFDM

    In OFDM transmission, multiple carriers are used to send each OFDM-modulated symbol. Mathematically, each carrier can be described as a complex wave:

    S_c(t) = A_c(t) e^{j[ω_c t + φ_c(t)]}    (3.1)

    The real signal is the real part of S_c(t). Both A_c(t) and φ_c(t), the amplitude and phase of the carrier, can vary on a symbol-by-symbol basis; their values are constant over the symbol duration.

    Since OFDM consists of many carriers, the modulated signal S_s(t) can be represented as

    S_s(t) = (1/N) Σ_{n=0}^{N−1} A_n(t) e^{j[ω_n t + φ_n(t)]}    (3.2)

    where

    ω_n = ω_0 + n Δω    (3.3)

    This uniform spacing of the carrier frequencies is the condition that is required for orthogonality. Thus, one consequence of maintaining orthogonality is that the OFDM signal can be defined by using Fourier transform procedures. This is of course a continuous signal.

    If we consider the waveforms of each component of the signal over one symbol period, then the variables A_c(t) and φ_c(t) take on the fixed values A_n and φ_n, which depend on the frequency of that particular carrier, and the signal can be rewritten accordingly.

    If the signal is sampled using a sampling frequency of 1/T, then the resulting signal is represented by

    S_s(kT) = (1/N) Σ_{n=0}^{N−1} A_n e^{j[(ω_0 + n Δω) kT + φ_n]}    (3.5)

    At this point, we have restricted the time over which we analyze the signal to N samples. It is convenient to sample over the period of one data symbol, giving the relationship

    τ = N T    (3.6)

    If we now simplify Eq. (3.5), without loss of generality, by letting ω_0 = 0, the signal becomes

    S_s(kT) = (1/N) Σ_{n=0}^{N−1} A_n e^{jφ_n} e^{j n Δω kT}    (3.7)

    Equation (3.7) can now be compared with the general form of the inverse Fourier transform:

    g(kT) = (1/N) Σ_{n=0}^{N−1} G(n/(NT)) e^{j 2π n k / N}    (3.8)

    In Eq. (3.7), the function A_n e^{jφ_n} is no more than a definition of the signal in the sampled frequency domain, and S_s(kT) is the time-domain representation. Equations (3.7) and (3.8) are equivalent if

    Δf = Δω/(2π) = 1/(NT) = 1/τ    (3.9)

    that is, if the subcarrier spacing equals the reciprocal of the symbol period.

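The IFFT/FFT modulation and the cyclic-prefix guard interval described above can be sketched in Python. This is a minimal sketch: the choice of N = 64 subcarriers, a 16-sample guard interval, and QPSK mapping are illustrative assumptions, and `numpy.fft.ifft` already includes the 1/N factor of Eq. (3.7):

```python
import numpy as np

rng = np.random.default_rng(0)
N, cp = 64, 16                          # illustrative: 64 subcarriers, 16-sample guard
# QPSK symbols on each subcarrier (the amplitude/phase terms A_n e^{j phi_n})
bits = rng.integers(0, 2, 2 * N)
syms = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(syms)                  # IFFT performs the modulation of Eq. (3.7)
tx_cp = np.concatenate([tx[-cp:], tx])  # cyclic-prefix guard interval

rx = tx_cp[cp:]                         # receiver: discard the guard interval
est = np.fft.fft(rx)                    # an FFT reverses the operation
assert np.allclose(est, syms)           # subcarrier data recovered exactly
```

In a real system a channel and equalizer would sit between transmitter and receiver; here the ideal channel simply demonstrates that the FFT inverts the IFFT.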
      Channel coding refers to the class of signal transformations designed to improve communications performance by enabling the transmitted signals to better withstand the effects of various channel impairments, such as noise, interference, and fading.

      The use of large-scale integrated circuits (LSI) and high-speed digital signal processing (DSP) techniques has made it possible to provide as much as 10 dB of performance improvement through channel coding, at much less cost than through most other methods, such as higher-power transmitters or larger antennas.

      Channel coding can be partitioned into two study areas: waveform coding and structured sequences. Waveform coding deals with transforming waveforms into better waveforms, to make the detection process less subject to errors. Structured sequences deal with transforming data sequences into better sequences having structured redundancy (redundant bits). The redundant bits can then be used for the detection and correction of errors. The encoding procedure provides the coded signal (whether waveforms or structured sequences) with better distance properties than those of its uncoded counterpart.

    2. Linear block codes

Linear codes are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) on a communications channel so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The code words in a linear block code are blocks of symbols which are encoded using more symbols than the original value to be sent.

A linear code of length n transmits blocks containing n symbols. For example, the [7, 4, 3] Hamming code is a linear binary code which represents 4-bit messages using 7-bit code words. Two distinct code words differ in at least three bits. As a consequence, up to two errors per codeword can be detected and a single error can be corrected.

  • The number of code words is 2^k, since there are 2^k distinct messages.

  • The set of vectors {g_i} is linearly independent, since we must have a set of unique code words.

  • Linear independence means that no vector g_i can be expressed as a linear combination of the other vectors.

  • These vectors are called the basis vectors of the vector space C.

  • The dimension of this vector space is the number of basis vectors, which is k.

  • Since g_i ∈ C, the rows of G are all legal code words.
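As a sketch of these points (assuming, for illustration, one standard systematic generator matrix of the [7, 4] Hamming code mentioned above; other equivalent forms exist), the 2^k code words can be enumerated as the mod-2 linear combinations of the k basis rows of G:

```python
import itertools
import numpy as np

# A standard systematic generator matrix [I_k | P] of the [7, 4] Hamming code
# (an illustrative choice).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])

# All 2^k code words: every mod-2 linear combination of the basis rows of G.
codewords = {tuple(np.mod(np.array(m) @ G, 2))
             for m in itertools.product([0, 1], repeat=4)}

assert len(codewords) == 2 ** 4          # 2^k distinct code words
for row in G:                            # the rows of G are themselves code words
    assert tuple(row) in codewords
```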

      1. Matrix description of linear block codes

        Let the message block of k bits be represented as a row vector, or k-tuple, called the message vector:

        [D] = [d1, d2, …, dk]    (4.1)

        where d1, d2, …, dk are either 0s or 1s. Thus, there are 2^k distinct message vectors, and all these message vectors put together represent the k-tuples of a k-dimensional subspace of the vector space of all n-tuples over the Galois field GF(2).

        The channel encoder systematically adds (n−k) check bits to form an (n, k) linear block code. The 2^k code vectors can then be represented by

        C = {c1, c2, …}    (4.2)

        Only 2^k of the 2^n possible n-tuples are valid code vectors; the remaining (2^n − 2^k) vectors are invalid code vectors, which correspond to error patterns. The ratio (k/n) is defined as the rate efficiency of the (n, k) linear block code.

        An (n, k) linear block code is said to be systematic if the k message bits appear either at the beginning or at the end of the code word.

        Generator matrix

        A generator matrix of order (k × n) is given by

        [G] = [Ik | P]    (k × n)

        where Ik is the unit (identity) matrix of order k and [P] is an arbitrary matrix, called the parity matrix, of order k × (n−k). The parity matrix [P] is suitably selected to correct random and burst errors.

        Parity-check matrix [H]

        There is another way of expressing the relationship between the message bits and the parity-check bits of a linear block code. Let H denote an (n−k) × n matrix defined as

        [H] = [In−k | P^T]    (4.3)

        where P^T is the (n−k) × k transpose of the parity matrix P, and In−k is the (n−k) × (n−k) identity matrix.

        The block length n and the number of message bits k must satisfy the Hamming bound. A binary code for which the Hamming bound is satisfied with equality is called a perfect code.

      2. Syndrome Decoding

        The generator matrix G is used in the encoding operation at the transmitter. On the other hand, the parity-check matrix H is used in the decoding operation at the receiver. Let Y denote the 1 × n received vector that results from sending the code vector C over a noisy channel. We express the vector Y as the sum of the original code vector C and an error vector E:

        Y = C + E

        The ith element of E equals 0 if the corresponding element of Y is the same as that of C. On the other hand, the ith element of E equals 1 if the corresponding element of Y differs from that of C, in which case an error is said to have occurred in the ith location.

        The receiver has the task of decoding the code vector C from the received vector Y. The algorithm commonly used to perform this decoding operation starts with the computation of a 1 × (n−k) vector called the error-syndrome vector, or simply the syndrome. The importance of the syndrome lies in the fact that it depends only upon the error pattern.

        Given a 1 × n received vector Y, the corresponding syndrome is formally defined as

        S = Y H^T    (4.4)

        Accordingly, the syndrome has the following important properties:

        Property 1: The syndrome depends only on the error pattern, and not on the transmitted code word.

        Property 2: All error patterns that differ by at most a code word have the same syndrome.

        Property 3: The syndrome S is the sum of those columns of the matrix H corresponding to the error locations.

        Property 4: With syndrome decoding, an (n, k) linear block code can correct up to t errors per code word, provided that n and k satisfy the Hamming bound.
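A minimal Python sketch of syndrome computation, assuming the [7, 4] Hamming code and the systematic convention G = [Ik | P], H = [P^T | In−k] (chosen here so that G H^T = 0 with the message bits at the front; the parity matrix P is an illustrative standard choice):

```python
import numpy as np

# Illustrative parity matrix P for the [7, 4] Hamming code.
P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])       # G = [I_k | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])     # H = [P^T | I_{n-k}]
assert not np.mod(G @ H.T, 2).any()            # every row of G lies in H's null space

m = np.array([1, 0, 1, 1])
C = np.mod(m @ G, 2)                           # transmitted code vector
E = np.array([0, 0, 0, 0, 1, 0, 0])            # single error in location i = 4
Y = np.mod(C + E, 2)                           # received vector Y = C + E

S = np.mod(Y @ H.T, 2)                         # syndrome S = Y H^T, Eq. (4.4)
assert np.array_equal(S, np.mod(E @ H.T, 2))   # Property 1: depends only on E
assert np.array_equal(S, H[:, 4])              # Property 3: the column of H at the error
```

Since all columns of H are distinct and nonzero, matching the syndrome to a column of H locates (and hence corrects) any single error.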

      3. Minimum Distance Considerations

Consider a pair of code vectors x and y that have the same number of elements. The Hamming distance d(x, y) between such a pair of code vectors is defined as the number of locations in which their respective elements differ.

The Hamming weight w(x) of a code vector x is defined as the number of nonzero elements in the code vector. The minimum distance dmin of a linear block code is defined as the smallest Hamming distance between any pair of code vectors in the code. Equivalently, the minimum distance of a linear block code is given by the minimum number of rows of the matrix H^T that sum to 0.
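Because the code is linear, dmin equals the minimum Hamming weight over the nonzero code words. A sketch, again assuming a standard [7, 4] Hamming generator matrix (an illustrative choice):

```python
import itertools
import numpy as np

# Systematic generator matrix [I_k | P] of the [7, 4] Hamming code (illustrative).
P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

# Hamming weights of all nonzero code words.
weights = [int(np.mod(np.array(m) @ G, 2).sum())
           for m in itertools.product([0, 1], repeat=4) if any(m)]
dmin = min(weights)
assert dmin == 3   # so the code detects up to 2 errors and corrects t = (dmin - 1) // 2 = 1
```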

    1. Cyclic codes

      Binary cyclic codes form a subclass of linear block codes. Cyclic codes have two distinct advantages over general linear codes:

      • Encoding and syndrome-calculation circuits can be easily implemented with simple shift registers with feedback connections and some basic gates.

      • Cyclic codes have a fair amount of mathematical structure that makes it possible to design codes with useful error-correcting properties.

An (n, k) linear block code C is said to be a cyclic code if every cyclic shift of a code vector of C is also a code vector of C.

For example, let C1 = 0111001 be a code vector of C. For C to be cyclic, its cyclic shift C2 = 1011100 must also be a code vector of C.
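The shift in this example can be checked directly (a trivial sketch; `c1` is the example vector from the text, held as a bit string):

```python
# One cyclic shift to the right: the last bit wraps around to the front.
c1 = "0111001"
c2 = c1[-1] + c1[:-1]
assert c2 == "1011100"   # matches the C2 given in the text
```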

In general, let the n-tuple be represented by

V = (v0, v1, v2, …, vn−1)    (4.6)

If V belongs to a cyclic code, then the vectors

V(1) = (vn−1, v0, v1, …, vn−2)
V(2) = (vn−2, vn−1, v0, v1, …, vn−3)
…
V(i) = (vn−i, vn−i+1, …, v0, v1, …, vn−i−1)

obtained by shifting V cyclically, are also code vectors of C. This property of cyclic codes allows us to treat the elements of each code vector as the coefficients of a polynomial of degree (n−1).

Equation (4.6) can now be represented as a polynomial:

V(x) = v0 + v1 x + v2 x^2 + … + vn−1 x^(n−1)

and its cyclically shifted versions as

V(1)(x) = vn−1 + v0 x + v1 x^2 + … + vn−2 x^(n−1)
V(2)(x) = vn−2 + vn−1 x + v0 x^2 + … + vn−3 x^(n−1)
V(i)(x) = vn−i + vn−i+1 x + vn−i+2 x^2 + … + vn−i−1 x^(n−1)

      1. Generator polynomial

An (n, k) cyclic code is specified by the complete set of code word polynomials of degree (n−1) or less, which contains a polynomial of minimum degree (n−k) as a factor. This special factor, denoted by g(x), is called the generator polynomial of the code. The degree of g(x) is equal to the number of parity bits in the code.

The generator polynomial g(x) has the following properties:

Property 1: The generator polynomial of an (n, k) cyclic code is unique in that it is the only code word polynomial of minimum degree (n−k).

Property 2: Any multiple of the generator polynomial g(x) is a code word polynomial, as shown by

V(x) = a(x) g(x) mod (x^n + 1)    (4.7)

where a(x) is a polynomial in x. Let the polynomial a(x) be expressed as

a(x) = a0 + a1 x + … + ak−1 x^(k−1)    (4.8)

where a0, a1, …, ak−1 are equal to 0 or 1. We express the result of multiplying g(x) by a(x) in the expanded form

a(x) g(x) = a0 g(x) + a1 x g(x) + … + ak−1 x^(k−1) g(x)    (4.9)

By the cyclic property, each term x^i g(x) in this sum is itself a code word polynomial (modulo x^n + 1). Conversely, a binary polynomial of degree (n−1) or less is a code word polynomial if and only if it is a multiple of g(x).

Suppose we are given the generator polynomial g(x), and the requirement is to encode the message sequence (m0, m1, …, mk−1) into an (n, k) systematic cyclic code of the form

(b0, b1, …, bn−k−1, m0, m1, …, mk−1)    (4.10)

where b0, b1, …, bn−k−1 are the (n−k) parity bits. Let the message polynomial be defined by

m(x) = m0 + m1 x + … + mk−1 x^(k−1)    (4.11)

Let the polynomial x^(n−k) m(x) be divided by the generator polynomial g(x), resulting in a quotient a(x) and a remainder b(x):

x^(n−k) m(x) = a(x) g(x) + b(x)    (4.12)

In modulo-2 arithmetic, b(x) = −b(x). We may therefore rearrange Eq. (4.12) as

b(x) + x^(n−k) m(x) = a(x) g(x)    (4.13)

where a(x) g(x) is a code word polynomial. The desired code word polynomial is therefore

V(x) = b(x) + x^(n−k) m(x)    (4.14)

      2. Parity-check polynomial

An (n, k) cyclic code is uniquely specified by its generator polynomial g(x) of degree (n−k). Such a code is also uniquely specified by another polynomial of degree k, called the parity-check polynomial. The parity-check polynomial, denoted by h(x), is an equivalent representation of the parity-check matrix H, and it satisfies

h(x) g(x) mod (x^n + 1) = 0    (4.15)

Property 3: The generator polynomial g(x) and the parity-check polynomial h(x) are factors of the polynomial 1 + x^n, as shown by

h(x) g(x) = 1 + x^n    (4.16)

This property provides the basis for selecting the generator or parity-check polynomial of a cyclic code.
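The encoding steps of Eqs. (4.10)-(4.14) can be sketched in Python, assuming the (7, 4) cyclic Hamming code with generator polynomial g(x) = 1 + x + x^3 (a standard illustrative choice); `gf2_mod` is a helper written for this sketch:

```python
# Systematic cyclic encoding per Eqs. (4.12)-(4.14).
# Polynomials are lists of GF(2) coefficients, lowest degree first.

def gf2_mod(num, den):
    """Remainder of GF(2) polynomial division (coefficients low-to-high)."""
    num = num[:]
    for i in range(len(num) - len(den), -1, -1):
        if num[i + len(den) - 1]:              # cancel the current leading term
            for j, d in enumerate(den):
                num[i + j] ^= d
    return num[:len(den) - 1]

g = [1, 1, 0, 1]                  # g(x) = 1 + x + x^3
m = [1, 0, 1, 1]                  # message polynomial m(x) = 1 + x^2 + x^3
n, k = 7, 4

shifted = [0] * (n - k) + m       # x^(n-k) m(x)
b = gf2_mod(shifted, g)           # parity bits: b(x) = x^(n-k) m(x) mod g(x)
v = b + m                         # code word V(x) = b(x) + x^(n-k) m(x), Eq. (4.14)
assert gf2_mod(v, g) == [0] * (n - k)   # every code word is a multiple of g(x)
```

The resulting code word has the systematic form (b0, b1, b2, m0, m1, m2, m3) of Eq. (4.10).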

      3. Encoder for cyclic codes

        Encoding with a cyclic code can be done by multiplying two polynomials: a message polynomial and the generator polynomial of the code.

        Let C be an (n, k) code over a field F with the generator polynomial

        g(x) = g0 + g1 x + … + gr x^r    of degree r = n − k.

        If a message vector m is represented by a polynomial m(x) of degree less than k, and m is encoded by

        c = mG    (4.18)

        then the following relation between m(x) and c(x) holds:

        c(x) = m(x) g(x)    (4.19)

        Such an encoding can be realized by the shift register shown in Fig 4.1, where the input is the k-bit message to be encoded followed by (n − k) zeros, and the output is the encoded message. The encoder circuit can be implemented by means of a linear feedback shift register with (n − k) stages.

        Fig 4.1: Encoder for an (n, k) cyclic code

        The boxes represent flip-flops, or delay elements. A flip-flop is a device that resides in one of two possible states, denoted 0 and 1. An external clock controls the operation of all the flip-flops: every time the clock ticks, the contents of the flip-flops are shifted out in the direction of the arrows. The adders compute modulo-2 sums of their respective inputs. Finally, the multipliers multiply their respective inputs by the associated coefficients; in particular, if the coefficient gi = 1, the multiplier is just a direct connection, and if gi = 0, there is no connection.

        The operation of the encoder shown in Fig 4.1 proceeds as follows:

        1. The gate is switched on, and the k message bits are shifted into the channel. As soon as the message bits have entered the shift register, the resulting (n − k) bits in the register form the parity bits.

        2. The gate is switched off, thereby breaking the feedback connections.

        3. The contents of the shift register are shifted out into the channel.

      4. Calculation of the syndrome

        Suppose the code word (v0, v1, …, vn−1) is transmitted over a noisy channel, resulting in the received word (y0, y1, …, yn−1). Let the received word be represented by a polynomial of degree (n−1) or less:

        y(x) = y0 + y1 x + … + yn−1 x^(n−1)    (4.20)

        Let a(x) denote the quotient and s(x) the remainder that result from dividing y(x) by the generator polynomial g(x). Then y(x) is expressed as

        y(x) = a(x) g(x) + s(x)    (4.21)

        The remainder s(x) is a polynomial of degree (n−k−1) or less. It is called the syndrome polynomial, in that its coefficients make up the (n−k)-by-1 syndrome. When the syndrome polynomial s(x) is nonzero, the presence of transmission errors in the received word is detected.

        Fig 4.2: Syndrome Calculation
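A sketch of the syndrome calculation of Eq. (4.21), reusing the same illustrative generator polynomial g(x) = 1 + x + x^3 and coefficient convention (`gf2_mod` is a helper written for this sketch):

```python
# Syndrome calculation: s(x) = y(x) mod g(x).
# Polynomials are lists of GF(2) coefficients, lowest degree first.

def gf2_mod(num, den):
    """Remainder of GF(2) polynomial division (coefficients low-to-high)."""
    num = num[:]
    for i in range(len(num) - len(den), -1, -1):
        if num[i + len(den) - 1]:
            for j, d in enumerate(den):
                num[i + j] ^= d
    return num[:len(den) - 1]

g = [1, 1, 0, 1]                    # g(x) = 1 + x + x^3
v = [1, 0, 0, 1, 0, 1, 1]           # a valid (7, 4) code word: a multiple of g(x)
assert gf2_mod(v, g) == [0, 0, 0]   # error-free reception gives a zero syndrome

y = v[:]
y[2] ^= 1                           # a single transmission error in one location
s = gf2_mod(y, g)                   # nonzero syndrome polynomial detects it
assert s != [0, 0, 0]
```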


In conventional OFDM systems, modulation techniques such as BPSK and multilevel quadrature amplitude modulation (QAM) map a fixed number of information bits onto each signal-constellation symbol. The continuous demand for increased data transmission rates has driven numerous technologies that exploit new degrees of freedom for better spectral efficiency.

The SIM (sub-carrier index modulation) transmission technique employs the subcarrier indices to convey information in an on-off keying fashion. SIM OFDM aims at providing either BER performance enhancement or power-efficiency improvement over conventional OFDM.


