Digital Signal Processing

DOI : 10.17577/IJERTCONV8IS04006


Sindhoora K T, Lima K D, Sona Johny

Department of Computer Science, Carmel College, Mala

Abstract: This article presents the basics of digital signal processing and its applications. Digital Signal Processing (DSP) is the branch of engineering that, in the space of just a few decades, has enabled unprecedented levels of interpersonal communication and on-demand entertainment. By reworking the principles of electronics, telecommunications, and computer science into a unifying paradigm, DSP is at the heart of the digital revolution that brought us CDs, DVDs, MP3 players, mobile phones, and countless other devices.


This article covers the basics of digital signal processing, leading up to a series of articles on the statistics and probability used to characterize signals, analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC), and concluding with digital signal processing software. Digital signal processing is the mathematical manipulation of an information signal, such as audio, temperature, voice, or video, to modify or improve it in some manner.


When we pass a signal through a device that performs some operation on it, such as filtering, we say we have processed the signal. The operation performed may be linear or nonlinear; such operations are collectively called signal processing.

The operations can be performed with a physical device or in software; e.g., a digital computer can be programmed to perform digital filtering. When the operations are performed by digital hardware (logic circuits), we have a physical device that carries out a specified operation. In contrast, in the digital processing of signals on a digital computer, the operations performed on a signal consist of a number of mathematical operations specified by a software program. Analog signals are functions of a continuous variable such as time or space, and may be processed by analog systems such as filters, frequency analyzers, or frequency multipliers. Until about two decades ago, most signal processing was performed using specialized analog processors. As digital systems became available and digital processing algorithms were developed, digital processing became more popular.
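As a concrete illustration of digital filtering in software, the sketch below applies a simple moving-average filter to a list of samples. The filter choice and the sample values are illustrative assumptions, not taken from the article.

```python
# A toy digital filter implemented in software: a length-m
# moving-average (FIR) filter. Each output sample is the average
# of the most recent m input samples.

def moving_average(x, m=5):
    """Return y[n] = average of the last m input samples (fewer at the start)."""
    y = []
    for n in range(len(x)):
        window = x[max(0, n - m + 1): n + 1]
        y.append(sum(window) / len(window))
    return y

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(moving_average(noisy, m=2))  # smooths the alternation toward 0.5
```

The same operation could equally be realized in logic circuits; the point of the software route is that changing the operation only requires changing the program.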

Initially, digital processing was performed on general-purpose microprocessors. However, for more sophisticated signal analysis these devices were too slow and unsuitable for real-time applications. Specialized microprocessor designs have led to the development of digital signal processors, which perform a fairly limited number of functions but do so at very high speed.

Digital signal processing (popularly known as DSP) requires an interface (Figure 1.0) between the analog signal and the digital processor, which is commonly provided by an analog-to-digital converter. Once the signal is digitized, the DSP can be programmed to perform the desired operations on the input signal. The programming facility provides the flexibility to change the signal processing operations through a change in software, whereas hardwired machines are difficult to reconfigure; hence programmable signal processors are common in practice. In some applications, a hardwired implementation of the operations can be optimized so that it results in cheaper and faster signal processors. When the digital output from a processor is to be given to a user in analog form, a D/A converter is required.
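The A/D, processing, and D/A stages described above can be sketched end to end as follows. The 8-bit quantizer and the gain standing in for the programmed DSP operation are illustrative assumptions, not details from the article.

```python
import math

# Sketch of the chain in Figure 1.0: analog input -> A/D converter ->
# programmable DSP -> D/A converter -> analog output.

def adc(x, bits=8):
    """Quantize an analog sample in [-1, 1] to a signed integer code."""
    levels = 2 ** (bits - 1) - 1
    return round(max(-1.0, min(1.0, x)) * levels)

def dsp(code, gain=0.5):
    """The programmed operation on the digitized signal (here, just a gain)."""
    return round(code * gain)

def dac(code, bits=8):
    """Convert a processed code back to an analog value in [-1, 1]."""
    return code / (2 ** (bits - 1) - 1)

analog_in = [math.sin(2 * math.pi * n / 8) for n in range(8)]
analog_out = [dac(dsp(adc(x))) for x in analog_in]
print(analog_out)  # roughly the input waveform scaled by the gain
```

Swapping in a different `dsp` function changes the system's behavior without touching the converters, which is the flexibility the text attributes to programmable processors.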

Figure 1.0: Basic elements of a digital signal processing (DSP) system

Digital signal processors are commercially available as single-chip devices. The most widely used DSP family is the TMS320 from Texas Instruments; another range of processors, from Motorola, is the DSP56001. For a comparison of speed, the 16-bit Motorola 68000 microprocessor can handle 270,000 multiplications per second, while the DSP56001 is capable of 10,000,000 multiplications per second, an increase in speed of about 37 times. Because of the flexibility to reconfigure DSP operations, these processors are used in most modern biomedical instruments for signal processing tasks such as transformation to the frequency domain, averaging, and a variety of filtering techniques.


Analysis of Speech Signals Using the STFT

As can be seen from this figure, a speech segment over a small time interval can be considered a stationary signal, and as a result the DFT of the speech segment can provide a reasonable representation of the frequency-domain characteristics of the speech in this time interval.

Figure 1.1: (a) narrow-band spectrogram, (b) wide-band spectrogram

As in the case of the DFT-based spectral analysis of deterministic signals discussed earlier, the window also plays an important role in the STFT analysis of non-stationary signals such as speech. Both the length and shape of the window

are critical issues that need to be examined carefully. The function of the window w[n] is to extract a portion of the signal for analysis and to ensure that the extracted section of x[n] is approximately stationary. To this end, the window length R should be small, in particular for signals with widely varying spectral parameters. A decrease in the window length increases the time resolution of the STFT, whereas the frequency resolution of the STFT increases with an increase in the window length. A shorter window thus produces a wide-band spectrogram with better time resolution, while a longer window produces a narrow-band spectrogram with improved frequency resolution.

In order to provide a reasonably good estimate of the changes in the vocal tract and the excitation, a wide-band spectrogram is preferable. To this end, the window size is selected to be approximately one pitch period, which is adequate for resolving the formants, though not for resolving the harmonics of the pitch frequencies. To resolve the harmonics of the pitch frequencies, on the other hand, a narrow-band spectrogram with a window size of several pitch periods is desirable.

The two frequency-domain parameters characterizing the Fourier transform of a window are its main-lobe width and its relative sidelobe amplitude. The former determines the ability of the window to resolve two signal components in the vicinity of each other, while the latter controls the degree of leakage of one component into a nearby signal component. It thus follows that, in order to obtain a reasonably good estimate of the frequency spectrum of a time-varying signal, the window should be chosen to have a very small relative sidelobe amplitude, with a length chosen based on the acceptable accuracy of the frequency and time resolutions.
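The time/frequency-resolution trade-off can be demonstrated numerically: two tones 20 Hz apart merge under a short analysis window but are resolved by a longer one. This is a minimal sketch; the sampling rate, tone frequencies, Hann window, and peak-counting threshold are all illustrative assumptions.

```python
import cmath
import math

def dft_mag(frame):
    """Magnitude of the DFT of one frame."""
    N = len(frame)
    return [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

def hann_frame(x, start, R):
    """Extract one windowed segment w[n] * x[start + n] of length R."""
    return [x[start + n] * 0.5 * (1 - math.cos(2 * math.pi * n / (R - 1)))
            for n in range(R)]

fs = 1000                                    # sampling rate, Hz
x = [math.sin(2 * math.pi * 100 * n / fs) +  # tone at 100 Hz
     math.sin(2 * math.pi * 120 * n / fs)    # tone at 120 Hz
     for n in range(fs)]

def resolved_peaks(R):
    """Count spectral peaks above 20% of the maximum in one length-R frame."""
    half = dft_mag(hann_frame(x, 0, R))[:R // 2]
    top = max(half)
    return sum(1 for k in range(1, len(half) - 1)
               if half[k - 1] < half[k] > half[k + 1]
               and half[k] > 0.2 * top)

print(resolved_peaks(32))    # short window: the two tones merge
print(resolved_peaks(256))   # long window: two distinct peaks
```

With R = 32 the bin spacing (about 31 Hz) exceeds the 20 Hz tone separation, so a single broad peak appears; with R = 256 the spacing (about 3.9 Hz) is fine enough to separate the tones, at the cost of a longer, less time-localized frame.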

Spectral Analysis of Random Signals

In the case of a deterministic signal composed of sinusoidal components, a Fourier analysis of the signal can be carried out by taking the discrete Fourier transform (DFT) of a finite-length segment of the signal obtained by appropriate windowing, provided the parameters characterizing the components are time-invariant and independent of the window length.

Neither the DFT nor the STFT is applicable to the spectral analysis of naturally occurring random signals, since here the spectral parameters are also random. Such signals are usually classified either as noiselike random signals, such as the unvoiced speech generated when a letter such as "/f/" or "/s/" is spoken, or as signal-plus-noise random signals, such as seismic signals and nuclear magnetic resonance signals [6]. Spectral analysis of a noiselike random signal is usually carried out by estimating the power density spectrum using Fourier-analysis-based nonparametric methods, whereas a signal-plus-noise random signal is best analyzed using parametric-model-based methods, in which the autocovariance sequence is first estimated from the model and then the Fourier transform of the estimate is evaluated. In this section we review both of these approaches.
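A minimal sketch of the nonparametric approach mentioned above: the power density spectrum of a noiselike signal is estimated by averaging the periodograms of overlapping segments (Welch's method). The white-noise test signal, segment length, and hop size are illustrative assumptions.

```python
import cmath
import math
import random

def periodogram(seg):
    """|DFT|^2 / N for one segment: the raw spectral estimate."""
    N = len(seg)
    return [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 / N for k in range(N)]

def welch_psd(x, seg_len=64, hop=32):
    """Average the periodograms of overlapping segments of x."""
    psd = [0.0] * seg_len
    count = 0
    for start in range(0, len(x) - seg_len + 1, hop):
        for k, p in enumerate(periodogram(x[start:start + seg_len])):
            psd[k] += p
        count += 1
    return [p / count for p in psd]

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
psd = welch_psd(noise)
# For white noise the averaged estimate is roughly flat across bins,
# with mean close to the signal power (about 1 here).
```

Averaging over many segments reduces the variance of the estimate, which is exactly what a single periodogram of a random signal lacks.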

Musical Sound Processing

Almost all musical programs are produced in basically two stages. First, the sound from each individual instrument is recorded in an acoustically inert studio on a single track of a multitrack tape recorder. Then the signals from each track are manipulated by the sound engineer to add special audio effects and are combined in a mix-down system to generate the final stereo recording on a two-track tape recorder. The audio effects are artificially generated using various signal processing circuits and devices, and they are increasingly being implemented using digital signal processing techniques. Some of the special audio effects that can be implemented digitally are reviewed in this section.

Figure 1.2: Single echo filter: (a) filter structure, (b) typical impulse response, and (c) magnitude response
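A single echo of the kind shown in Figure 1.2 can be realized with one delay line and one multiplier: y[n] = x[n] + a * x[n - R]. The sketch below is a minimal Python illustration; the delay R and gain a are arbitrary choices, not values from the article.

```python
# Single echo filter of Figure 1.2: the output adds a delayed,
# attenuated copy of the input, y[n] = x[n] + a * x[n - R].

def single_echo(x, R=4, a=0.5):
    """FIR comb filter producing one echo R samples after each input sample."""
    return [x[n] + (a * x[n - R] if n >= R else 0.0)
            for n in range(len(x))]

# The impulse response is a unit sample followed by the echo R samples later.
impulse = [1.0] + [0.0] * 7
print(single_echo(impulse))  # -> [1.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0]
```

Cascading several such sections, or feeding the delayed output back instead of the input, yields the multiple-echo and reverberation effects used in mix-down systems.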


References

  1. R. Kumaresan, "Spectral analysis," in S.K. Mitra and J.F. Kaiser, editors, Handbook for Digital Signal Processing, chapter 16, pages 1143-1242. Wiley-Interscience, New York, NY, 1993.

  2. B. Blesser and J.M. Kates, "Digital processing of audio signals," in A.V. Oppenheim, editor, Applications of Digital Signal Processing, chapter 2. Prentice Hall, Englewood Cliffs, NJ, 1978.

  3. J.M. Eargle, Handbook of Recording Engineering, Van Nostrand Reinhold, New York, NY, 1986.

  4. S.J. Orfanidis, Introduction to Signal Processing, Prentice Hall, Englewood Cliffs, NJ, 1996.
