Acoustic Localization Sensor for Embedded Surveillance Systems

DOI: 10.17577/IJERTV4IS100112

Gintu George
Student
Department of Electronics & Communication Engineering
Rajagiri School of Engineering and Technology
Kochi, Kerala

Jaison Jacob
Asst. Professor
Department of Electronics & Communication Engineering
Rajagiri School of Engineering and Technology
Kochi, Kerala

Abstract — Acoustic perception is gaining great popularity in intelligent home applications, surveillance systems and autonomous robots, and many robotic devices are now equipped with embedded acoustic sensors. Sound-based localization also finds numerous applications in military and security systems.

In this paper, sound source localization is implemented in an analog circuit, and TOA (Time Of Arrival) estimation of sound is performed in Matlab Simulink. The TOA estimation is based on the GCC-PHAT algorithm.

Keywords — Acoustic localization; GCC-PHAT (Generalized Cross Correlation with Phase Transform); TOA (Time Of Arrival) estimation in Matlab Simulink; sound sensor.

  1. INTRODUCTION

    With the advent of technology, artificial intelligence uses the idea of sound source localization performed by human ears. As a result, embedded sound sensors are gaining great popularity. Such sensors can be implemented in various ways: analog circuit based implementation[3], Matlab based implementation and FPGA based implementation.

    Analog circuit based implementation amplifies the signals collected from an array of microphones and uses the signal amplitudes to determine the location of the sound source. Matlab based implementation can be done either in Matlab or in Matlab Simulink. In this paper an experiment was done to determine the TOA (Time Of Arrival) at each microphone, and hence the delay in arrival of sound between microphones, which helps to determine the position of the sound source. This experiment was based on the GCC-PHAT (Generalized Cross Correlation with Phase Transform) algorithm[1].

    In the case of FPGA based implementation, the GCC-PHAT algorithm itself can be used and localization can be done on a Spartan-3E board programmed in Verilog.

  2. ACOUSTIC SOURCE LOCALIZATION

    A. Algorithm for source localization

    GCC-PHAT is one of the most popular algorithms used for calculating the delay in arrival between different sensors. GCC stands for Generalized Cross Correlation, which is the most prominent method for TDE (time delay estimation) between multiple devices[5]. In noisy environments, weighting functions are incorporated to improve the performance of GCC; among the several weighting functions, PHAT performs best in noise. The GCC-PHAT method is also known as Cross-power Spectrum Phase (CSP)[7].

    Consider a microphone array. If the distance from microphone m (m = 1, 2, 3, 4) to the sound source is d_m, the delay taken by the signal to reach microphone m is

    τ_m = d_m / c (1)

    where c is the speed of sound.

    The equation for GCC is given by

    G[k] = X1[k] X2*[k] (2)

    where Xn is xn transformed into the frequency domain, * denotes the complex conjugate, and k is the frequency index ranging from 0 to N-1. The inverse FFT can be used to transform this back into the time domain:

    r[n] = F⁻¹{G[k]} (3)

    where F⁻¹ is the inverse FFT function; the peak of r[n] is used to calculate the delay.

    The PHAT weighting on the GCC can be represented as

    P[k] = G[k] / |G[k]| (4)

    and the delay is the lag that maximizes the weighted correlation:

    τ̂ = argmax_n p[n] (5)

    where p[n] is the inverse FFT of P[k].
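As a concrete illustration of equations (2)-(5), the following NumPy sketch (not part of the original paper; the sample rate and test signals are assumptions) computes a GCC-PHAT delay estimate between two channels:

```python
import numpy as np

def gcc_phat(x1, x2, fs):
    """Estimate the delay of x2 relative to x1 via GCC-PHAT, eqs. (2)-(5)."""
    n = len(x1) + len(x2)             # zero-pad to avoid circular wrap-around
    X1 = np.fft.rfft(x1, n=n)
    X2 = np.fft.rfft(x2, n=n)
    G = X1 * np.conj(X2)              # cross-power spectrum, eq. (2)
    P = G / (np.abs(G) + 1e-12)       # PHAT weighting, eq. (4)
    p = np.fft.irfft(P, n=n)          # back to the time domain, eq. (3)
    k = int(np.argmax(np.abs(p)))     # peak lag, eq. (5)
    lag = k if k < n // 2 else k - n  # map the upper half to negative lags
    return lag / fs                   # delay in seconds

# Example: an impulse that reaches the second microphone 25 samples later
fs = 8000
x1 = np.zeros(1024)
x1[100] = 1.0
x2 = np.roll(x1, 25)
delay = gcc_phat(x1, x2, fs)
print(round(delay * fs))              # -25: x2 lags x1 by 25 samples
```

With the sign convention of eq. (2), a negative lag indicates that the second channel receives the sound later than the first.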

  3. ANALOG CIRCUIT IMPLEMENTATION

In analog circuit based implementation, acoustic localization is done based on the variation in amplitude of the signal received at different microphones: the amplitude received by each microphone varies depending on the position of the source. The signal from each microphone is therefore amplified, and the resulting analog signal can be used to determine the pointing position for a camera as well as for robotic systems.

    1. Experiment done on Arduino based analog amplification sensor for servo control

      Fig.1. Analog circuit implementation

Since the output of the MX08 microphone is in the millivolt range, it must be amplified to the required level; here an LM324 op-amp was used as an amplifier with a gain of 100. The amplified signal is fed to the ADC of an Arduino, which is programmed to rotate a servo towards the location of the sound by comparing the analog inputs from the microphones.

TABLE I: Amplified microphone output (3 trials each)

Angle (deg) | Microphone 1            | Microphone 2
0           | 10 V, 10.28 V, 11.16 V  | 4.76 V, 4.98 V, 5.2 V
90          | 7.69 V, 8.6 V, 8.97 V   | 7.43 V, 8.52 V, 8.5 V
180         | 4.8 V, 5.65 V, 5.73 V   | 10.1 V, 10.9 V, 10.7 V

    2. Program algorithm

Analog signals from the microphones, connected as shown in Fig. 1, are fed to analog input pins A0, A1 and A2 of the Arduino, and the servo motor is rotated as the Arduino is programmed according to the flow chart shown in Fig. 3.

• Rotate servo motor to the left half plane: if there is a sound from the left side of the microphone array, the amplitude of the amplified microphone output will decrease below 10 V. When this happens, check whether the sound came from the extreme left (0 degrees) or from between 0 and 90 degrees, and rotate accordingly.

• Rotate servo motor to the center: if the sound comes from the center, the left and right microphone outputs will be high while the center microphone output will be low, so rotate the servo to 90 degrees according to the amplitudes.

• Rotate servo motor to the right half plane: if there is a sound from the right side of the microphone array, the amplitude of the amplified microphone output will decrease below 10 V. When this happens, check whether the sound came from the extreme right (180 degrees) or from between 90 and 180 degrees, and rotate accordingly.
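The steering idea above can be sketched in software as a simplified decision rule (our simplification, not the authors' Arduino source; the helper name and the three-channel readings are assumptions): steer the servo toward the loudest amplified channel.

```python
def servo_angle(v_left, v_center, v_right):
    """Return a servo angle in degrees: steer toward the loudest channel."""
    levels = {0: v_left, 90: v_center, 180: v_right}
    return max(levels, key=levels.get)

# Readings similar to the 0- and 180-degree rows of Table I
# (the center-microphone value is hypothetical):
print(servo_angle(10.28, 6.0, 4.98))   # 0: source on the left
print(servo_angle(4.8, 6.0, 10.7))     # 180: source on the right
```

A real implementation would also debounce the readings and apply the 10 V threshold from the flow chart before moving the servo.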

Fig. 3. Program flow chart

  4. DOA ESTIMATION DONE ON SIMULINK USING GCC-PHAT ALGORITHM

In this experiment, two microphones are attached to a laptop; the audio signals from these microphones are captured using the Audio Device block in Simulink, the GCC-PHAT equations are implemented on these signals, and the difference in arrival time of the sound at the two microphones is analyzed[6]. Fig. 5 shows the model created in Simulink for this test.

    A. Time delay of arrival calculation

The signals captured by the two microphones are given by

x1(t) = s(t − τ1) + n1(t) (6)

x2(t) = s(t − τ2) + n2(t) (7)

where s(t) is the source signal, τ1 and τ2 are the propagation delays to each microphone, and n1(t), n2(t) are noise terms. The GCC-PHAT estimate in the frequency domain is given by

Ĝ_PHAT(f) = X1(f) X2*(f) / (|X1(f)| |X2(f)|) (8)

The delay Δt is obtained from the peak of the inverse Fourier transform of Ĝ_PHAT(f):

Δt = argmax_t R̂_PHAT(t) (9)

where R̂_PHAT(t) = F⁻¹{Ĝ_PHAT(f)}.
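The pipeline in equations (6)-(9) can also be reproduced offline. The NumPy sketch below is illustrative only: the sample rate, noise level and microphone spacing are our assumptions, not values from the paper. It synthesizes the two microphone signals, recovers the delay, and converts it to a far-field angle of incidence:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                      # sample rate in Hz (assumed)
c = 343.0                       # speed of sound, m/s
d = 0.2                         # microphone spacing in metres (assumed)

# Eqs. (6)-(7): the same source signal with different delays plus noise
s = rng.standard_normal(fs)     # 1 s of broadband source signal
true_delay = 5                  # samples; microphone 2 hears the sound later
x1 = s + 0.05 * rng.standard_normal(fs)
x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(fs)

# Eq. (8): PHAT-weighted cross-power spectrum (zero-padded FFTs)
n = 2 * fs
X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
G = X1 * np.conj(X2)
G_phat = G / (np.abs(G) + 1e-12)

# Eq. (9): the peak of the inverse transform gives the delay
r = np.fft.irfft(G_phat, n)
k = int(np.argmax(np.abs(r)))
delay_samples = k if k < n // 2 else k - n
dt = delay_samples / fs
print(delay_samples)            # -5: x2 lags x1 by 5 samples

# Far-field angle of incidence for a two-microphone array
theta = np.degrees(np.arcsin(np.clip(c * abs(dt) / d, -1.0, 1.0)))
print(round(theta, 1))          # angle in degrees (~32 for these values)
```

The arcsin step is the standard far-field conversion from TDOA to bearing; it is our addition, since the paper only reports the delay itself.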

Fig. 7. Correlation of the two microphone outputs for sound produced at four different instances of time.

FT – Fourier Transform
FFT – Fast Fourier Transform

  5. IMPLEMENTATION RESULTS

Based on the above equations, the Simulink model was implemented and the delay in TOA (Time Of Arrival) between the two microphones was estimated.

Fig. 5. Simulink model for TOA estimation using GCC-PHAT algorithm

Fig. 6. Correlation of the two microphone outputs for sound produced at two different instances of time


Fig. 8. TOA output obtained from two different microphones.

6. CONCLUSION

Based on the analog circuit implementation of the sound sensor using amplitude comparison, the servo motor was rotated to angles between -90 and +90 degrees with respect to its default position. More accurate results were obtained in less noisy environments. Analog circuit based implementation is cost efficient and can be used in security systems such as surveillance cameras, but precise positioning is not possible with it.

TOA (Time Of Arrival) estimation was therefore also checked in Matlab Simulink, where the difference in arrival time of the sound at the two microphones could be determined as a function of source location. This value can be used to determine the exact angle of incidence of the sound. The GCC-PHAT calculation is also fast enough for real-time applications, so TOA based estimation is more efficient than the analog circuit based implementation of the sound sensor.

REFERENCES

  [1] C. N. Sakamoto, W. Kobayashi, T. Onoye, and I. Shirakawa, "DSP Implementation of Low Computational 3D Sound Localization Algorithm," IEEE Workshop on Signal Processing Systems, pp. 109-116, September 2001.

  [2] MATLAB – The Language of Technical Computing, MathWorks India, www.mathworks.in/products/matlab/

  [3] Hoang Do and Harvey F. Silverman, "Robust cross-correlation-based techniques for detecting and locating simultaneous, multiple sound sources," TokBox Inc., San Francisco, CA, IEEE, 2012.

  [4] H. Kim, J. Choi, M. Kim, and C. Lee, "Sounds Direction Detection and Speech Recognition System for Humanoid Active Audition," Proceedings of the International Conference on Control, Automation, and Systems, Gyeongju, Korea, Oct. 2003.

  [5] H. Do and H. F. Silverman, "SRP-PHAT methods of locating simultaneous multiple sources using a frame of microphone array data," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Dallas, TX, March 2010, pp. 125-128.

  [6] J. H. DiBiase, "A High-Accuracy, Low-Latency Technique for Talker Localization in Reverberant Environments Using Microphone Arrays," Ph.D. thesis, Brown University, Providence, RI, May 2000.

  [7] H. Do and H. F. Silverman, "A robust sound-source separation algorithm for an adverse environment that combines MVDR-PHAT with the CASA framework," in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA-11), New Paltz, NY, Oct. 2011.

  [8] J. Capon, "High resolution frequency-wavenumber spectrum analysis," Proc. IEEE, vol. 57, no. 8, pp. 1408-1418, Aug. 1969.

  [9] J. Valin, F. Michaud, and J. Rouat, "Robust 3D localization and tracking of sound sources using beamforming and particle filtering," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Toulouse, France, May 2006, vol. 4, pp. 841-844.

  [10] B. Mungamuru and P. Aarabi, "Sound Localization," IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, vol. 34, no. 3, June 2004.
