Frequency Hopped Signal Interception using Compressive Sensing

DOI : 10.17577/IJERTCONV3IS05006


Ajan Manohar

Department of Electronics and Communication, College of Engineering Cherthala

Cherthala, India

Abstract—In the present scenario there is a huge amount of data transfer around us in every field. It is a primary requirement that the data being transmitted is secure and does not fall into the hands of unauthorized parties. Frequency Hopped Spread Spectrum (FHSS) is a reliable communication technique by which data security during transmission can be guaranteed. FHSS is a technique in which the carrier frequency is hopped, or changed, continuously; it is therefore more immune to external attack and cannot easily be accessed by unauthorized parties. It is also desirable to minimize the resources needed to sense or intercept the transmitted data at the receiver. Traditional methods sense the data as a whole and then sample and discard values that are redundant, which is a waste of valuable sensing resources. Instead of sensing the data as a whole, we try to sense and capture only those samples which are not redundant. This approach is known as compressive sensing, or compressive sampling. This paper proposes a method for interception of frequency hopped signals using a recent algorithm in compressive sensing.

Keywords—Compressive sensing, frequency hopped spread spectrum, sparsity, sensing matrix.

  1. INTRODUCTION

    Frequency hopping spread spectrum (FHSS) is a transmission technology used in wireless networks, in which spread spectrum is generated by hopping the carrier frequency. The data signal is modulated onto a narrow-band carrier that hops from frequency to frequency in a pseudo-random but predictable sequence at regular intervals, synchronized at both the transmitter and the receiver. In other words, frequency hopping is the periodic change of the transmission frequency, and hopping takes place over a frequency bandwidth that consists of a number of channels. The channel in use at any instant occupies the instantaneous bandwidth, while the full hopping spectrum is called the total hopping bandwidth. The instants at which the carrier frequency changes are called hop epochs; the interval between consecutive hop epochs is called a hop interval, and the data transmission interval is called the dwell interval.

    FHSS can be broadly classified into slow FHSS and fast FHSS. When the hopping rate is greater than the data rate we have fast FHSS: each data symbol requires at least two dwell intervals, i.e., the carrier frequency changes in the middle of a bit period. When the hopping rate is less than or equal to the data rate we have slow FHSS, in which one or more data symbols are transmitted per dwell. The feature of FHSS that compressive sensing exploits is the sparsity of the FHSS signal in the frequency domain, as illustrated in the sketch below.
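
    To make this sparsity concrete, the following minimal Python/NumPy sketch generates a toy slow-FHSS waveform and shows that each dwell interval occupies essentially one FFT bin. The sample rate, hop set, and hop sequence are illustrative assumptions, not parameters from this paper.

      import numpy as np

      fs = 8192              # sample rate (Hz), assumed for illustration
      hop_len = 1024         # samples per dwell interval
      channels = np.array([500, 900, 1300, 1700, 2100, 2500, 2900, 3300])  # hop set (Hz)

      rng = np.random.default_rng(0)
      hops = rng.integers(0, len(channels), size=8)   # pseudo-random hop sequence
      t = np.arange(hop_len) / fs
      x = np.concatenate([np.cos(2 * np.pi * channels[h] * t) for h in hops])

      # Per-dwell FFT: the energy concentrates in one bin per hop, i.e. the
      # signal is (approximately) 1-sparse in frequency during each dwell.
      for k, h in enumerate(hops):
          X = np.fft.rfft(x[k * hop_len:(k + 1) * hop_len])
          peak_hz = np.argmax(np.abs(X)) * fs / hop_len
          print(f"dwell {k}: hopped to ~{peak_hz:.0f} Hz (true {channels[h]} Hz)")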

  2. BASICS OF COMPRESSIVE SENSING

    The traditional approach of reconstructing signals or images from measured data follows the well-known Shannon sampling theorem[1], which states that the sampling rate must be twice the highest frequency. Similarly, the fundamental theorem of linear algebra suggests that the number of collected samples (measurements) of a discrete finite-dimensional signal should be at least as large as its length (its dimension) in order to ensure reconstruction. This principle underlies most devices of current technology, such as analog-to-digital converters, medical imaging, and audio and video electronics. Compressive sensing is a new approach to data acquisition based on sparse signal representation. According to the theory, as long as the signal is sparse or compressible in some basis, a much lower sampling rate than the Nyquist rate suffices to capture the structural information of the signal and to reconstruct it exactly through a reconstruction algorithm.

    Suppose that x is a signal of length N. It is said to be a K-sparse signal if it can be well approximated by K << N coefficients under some linear transformation,

    x = Ψθ (1)

    where Ψ is the sparsifying basis and θ is the transform coefficient vector that has K nonzero entries.

    According to the CS theory, such a signal can be acquired through the following random linear projection

    y = Φx (2)

    where y is the sampled vector with M << N coefficients and Φ represents an M×N random matrix known as the measurement matrix or sensing matrix. The CS framework is attractive as it implies that x can be faithfully recovered from only M = O(K log(N/K)) measurements, suggesting the potential of significant cost reduction in digital data acquisition. The sketch below walks through equations (1) and (2) on a toy signal.
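
    In the following minimal sketch the sizes N, K, M and the choice of an orthonormal DCT as a stand-in sparsifying basis Ψ are illustrative assumptions:

      import numpy as np

      N, K, M = 256, 5, 64
      rng = np.random.default_rng(1)

      # K-sparse coefficient vector theta.
      theta = np.zeros(N)
      theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

      # Orthonormal DCT basis as a stand-in frequency-like basis Psi.
      n = np.arange(N)
      Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
      Psi[:, 0] = 1.0 / np.sqrt(N)

      x = Psi @ theta                                  # x = Psi theta   ... (1)

      Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # M x N Gaussian sensing matrix
      y = Phi @ x                                      # y = Phi x       ... (2)
      print(x.shape, y.shape)                          # (256,) -> (64,) measurements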

    There is a considerable amount of literature on reconstruction algorithms for recovering the original signal x from the compressed coefficients. Reconstruction algorithms can be broadly classified into six types, namely convex relaxation, greedy iterative algorithms, iterative thresholding algorithms, combinatorial algorithms, nonconvex minimization algorithms, and Bregman iterative algorithms.

  3. DESIGN OF MEASUREMENT MATRIX

    Choosing or constructing the sensing matrix, denoted by Φ, is a major step in compressive sensing. The matrix represents a dimensionality reduction: it maps R^N, where N is large, to R^M. In the standard CS framework we assume that the measurements are non-adaptive, meaning that the rows of Φ are fixed in advance and do not depend on previously acquired measurements. However, in certain settings adaptive measurement schemes can lead to significant performance gains. The recovery of the original signal is guaranteed only if the measurement matrix satisfies certain properties. A property that guarantees recovery of the signal is the incoherence[2] of the measurement matrix with the sparsifying basis. The coherence of a matrix Φ, denoted µ(Φ), is the largest absolute inner product between any two distinct normalized columns φi, φj of Φ, i.e., µ(Φ) = max over i ≠ j of |<φi, φj>| / (||φi|| ||φj||).

    It is possible to show that the coherence of a matrix always lies in the range

    µ(Φ) ∈ [ √((N − M)/(M(N − 1))), 1 ] (3)

    Note that when N >> M, the lower bound, known as the Welch bound[3], is approximately µ(Φ) ≥ 1/√M. A short sketch computing the coherence of a random matrix and comparing it with this bound is given below.
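
    A minimal sketch, with illustrative sizes M and N not taken from the paper:

      import numpy as np

      def coherence(Phi):
          # Normalize columns, then take the largest off-diagonal |inner product|.
          cols = Phi / np.linalg.norm(Phi, axis=0)
          G = np.abs(cols.T @ cols)
          np.fill_diagonal(G, 0.0)
          return G.max()

      M, N = 64, 256
      rng = np.random.default_rng(2)
      mu = coherence(rng.standard_normal((M, N)))
      welch = np.sqrt((N - M) / (M * (N - 1)))   # lower bound of equation (3)
      print(f"mu = {mu:.3f}, Welch bound = {welch:.3f}, 1/sqrt(M) = {1/np.sqrt(M):.3f}")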

    Apart from the coherence property, it is desirable for the sensing matrix to have additional properties such as universality and low complexity. By universality we mean that the sensing performance is equally good with almost all sparsifying bases. Low complexity and fast computation are desired for large-scale, real-time applications. It is also desirable that the entries of the sensing matrix be such that they can be implemented easily in hardware.

    Random matrices have the property that they are incoherent with almost all sparsifying bases with overwhelming probability, and hence they are used as sensing matrices in compressive sensing. The two main classes of random matrices used are Gaussian matrices and Bernoulli matrices[4].

    Gaussian matrices have been used for non-uniform sparse signal recovery. We say that an M×N random matrix is Gaussian if its entries are independent standard normal random variables, that is, having mean zero and variance one. Bernoulli matrices are another class of random matrices, in which the entries are Bernoulli random variables. However, it is quite costly to realize random matrices in practical sensing applications, as they require very high computational complexity and huge memory buffering due to their completely unstructured nature. It is also possible to deterministically construct sensing matrices with the required incoherence, but such constructions suffer from the disadvantage that the required value of M becomes large. A sketch constructing both random families follows.
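
    A minimal sketch; the 1/√M scaling is a common convention (assumed here) so that columns have unit expected norm:

      import numpy as np

      M, N = 64, 256
      rng = np.random.default_rng(3)

      # Gaussian sensing matrix: i.i.d. N(0, 1/M) entries.
      Phi_gauss = rng.standard_normal((M, N)) / np.sqrt(M)

      # Bernoulli sensing matrix: i.i.d. +/-1 entries, same scaling.
      Phi_bern = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

      print(np.linalg.norm(Phi_bern[:, 0]))   # exactly 1.0 for the Bernoulli case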

    A special type of matrix called a structurally random matrix (SRM) can be used, which incorporates the advantages of both the random matrix and the deterministic matrix. A structurally random matrix[5] is defined as a product of three matrices

    Φ = √(N/M) DFR (4)

    The matrix R ∈ R^(N×N) is either a uniform random permutation matrix or a diagonal random matrix whose diagonal entries are identically distributed Bernoulli random variables. A uniformly random permutation matrix scrambles the signal's sample locations globally, while a diagonal matrix of Bernoulli random variables flips the signal's sample signs locally. Hence we often refer to the former as the global randomizer and the latter as the local randomizer.

    The matrix F is an N×N orthonormal matrix that in practice is selected to be fast computable, such as one of the popular fast transforms. In the case of frequency hopped signals the FFT, or its block transformed version, is preferred over other popular transforms. The purpose of F is to spread the information of the signal's samples over all measurements. The matrix D is simply a random subset of M rows of the N×N identity matrix, so the number of rows selected is M on average. The scale coefficient √(N/M) normalizes the transform so that the energy of the measurement vector is almost the same as that of the input signal vector.

    Equivalently, the SRM sensing algorithm randomizes a target signal by either flipping its sample signs or uniformly permuting its sample locations (multiplication by R), spreads the energy of the signal's samples over all measurements (multiplication by F), and finally picks M measurements out of the N transform coefficients at random. A short sketch of this pipeline follows.
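
    A minimal sketch of SRM sensing under stated assumptions: R is taken as the local randomizer (random sign flips), F as the unitary FFT, and the sizes are illustrative.

      import numpy as np

      def srm_measure(x, M, rng):
          # R: local randomizer -- flip sample signs at random.
          N = len(x)
          randomized = x * rng.choice([-1.0, 1.0], size=N)
          # F: unitary FFT spreads each sample's energy over all coefficients.
          spread = np.fft.fft(randomized) / np.sqrt(N)
          # D: keep a random subset of M coefficients, scaled per equation (4).
          keep = rng.choice(N, M, replace=False)
          return np.sqrt(N / M) * spread[keep]

      rng = np.random.default_rng(4)
      x = rng.standard_normal(256)
      y = srm_measure(x, 64, rng)                        # complex-valued measurements
      print(np.linalg.norm(x)**2, np.linalg.norm(y)**2)  # energies nearly match

    Because F is unitary and the M coefficients are drawn uniformly, the expected energy of y equals that of x, which is exactly what the √(N/M) factor is for.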

  4. THE RECONSTRUCTION ALGORITHM

    Many efficient algorithms exist in the literature which, instead of finding the M largest coefficients all at once, attempt to find these coefficients iteratively. These algorithms may be divided into six classes.

    The convex relaxation algorithms solve a convex optimization problem through linear programming[6] to obtain the reconstruction. The number of measurements required for exact reconstruction is small, but the methods are computationally complex.

    The greedy iterative algorithms[7] solve the reconstruction problem by finding the answer step by step, in an iterative fashion. The idea is to select columns of Φ in a greedy fashion: at each iteration, the column of Φ that correlates most with the residual is selected, and the least squares error is minimized in every iteration. That column's contribution is subtracted from y, and iterations continue on the residual until the correct set of columns is identified, which is usually achieved in M iterations. However, when the signal is not very sparse, recovery becomes costly. A sketch of one representative, orthogonal matching pursuit, follows.
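
    A minimal sketch of orthogonal matching pursuit; the function name and interface are ours, for illustration only:

      import numpy as np

      def omp(Phi, y, K):
          residual, support, coef = y.copy(), [], None
          for _ in range(K):
              # Pick the column most correlated with the current residual.
              j = int(np.argmax(np.abs(Phi.T @ residual)))
              support.append(j)
              # Re-fit all selected columns by least squares ...
              coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
              # ... and subtract their contribution from y.
              residual = y - Phi[:, support] @ coef
          x_hat = np.zeros(Phi.shape[1])
          x_hat[support] = coef
          return x_hat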

    Iterative thresholding algorithms[8] approach the CS recovery problem faster than the convex optimization methods. For this class of algorithms, the correct values are recovered by soft or hard thresholding, starting from the noisy measurements, given that the signal is sparse. The thresholding function depends on the number of iterations and the problem setup at hand. A sketch of iterative soft thresholding is given below.
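
    A minimal sketch of iterative soft thresholding (ISTA); the step size and the threshold lam are illustrative assumptions and would normally be tuned:

      import numpy as np

      def ista(Phi, y, lam=0.05, iters=200):
          step = 1.0 / np.linalg.norm(Phi, 2) ** 2       # 1 / Lipschitz constant
          x = np.zeros(Phi.shape[1])
          for _ in range(iters):
              z = x + step * Phi.T @ (y - Phi @ x)       # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
          return x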

    The combinatorial algorithms[9] recover the sparse signal through group testing. They are extremely fast and efficient compared to convex relaxation or greedy algorithms, but they require a specific pattern in the measurements: the measurement matrix needs to be sparse.

    The Bregman iterative algorithms[10] provide a simple and efficient way of solving the relaxation problem. They give exact solutions of constrained problems by iteratively solving a sequence of unconstrained sub-problems generated by a Bregman iterative regularization scheme. Non-convex optimization[11] is mostly utilized in medical imaging tomography, network state inference, and streaming data reduction.

    In this paper the frequency hopped signal is recovered using a recent approach known as multipath matching pursuit (MMP)[12], which grafts together the advantages offered by the combinatorial approach and the traditional greedy approach.

    Fig. 1. Comparison between a greedy algorithm and MMP.

    Since all combinations of K sparse indices can be interpreted as candidates in a tree, and each layer of the tree can be sorted by the magnitude of the correlation between the columns of the sensing matrix and the residual, the problem of finding the candidate that minimizes the residual is readily modelled as a combinatoric tree search problem. MMP performs this tree search with the help of the greedy strategy. Although each candidate brings forth multiple children, and hence the number of candidates increases as the iterations go on, the increase is actually moderate, since many candidates overlap in the middle of the search. Therefore, while imposing reasonable computational overhead, the method achieves considerable performance gain over existing greedy algorithms. MMP searches multiple promising candidates and then chooses the one minimizing the residual at the final moment; in this sense, one can think of MMP as an approximate algorithm for finding the candidate that minimizes the cost function. In contrast to greedy algorithms, where only one candidate path is maintained, in MMP each path generates L child paths: in the kth iteration, the L indices of the columns that are maximally correlated with the residual become new elements of the child candidates. A simplified sketch of this search is given below.
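
    The sketch below is a simplified rendering of the MMP idea, not the authors' implementation: each candidate support spawns L children from the columns most correlated with its residual, duplicate supports are merged, and the candidate with the smallest final residual wins.

      import numpy as np

      def mmp(Phi, y, K, L=2):
          candidates = {(): y}                 # candidate support -> its residual
          for _ in range(K):
              nxt = {}
              for support, r in candidates.items():
                  top = np.argsort(-np.abs(Phi.T @ r))[:L]   # L best-matching columns
                  for j in top:
                      s = tuple(sorted(set(support) | {int(j)}))
                      if s in nxt:
                          continue             # overlapping paths merge here
                      cols = list(s)
                      coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
                      nxt[s] = y - Phi[:, cols] @ coef
              candidates = nxt
          # Choose the candidate minimizing the residual in the final moment.
          best = min(candidates, key=lambda s: np.linalg.norm(candidates[s]))
          coef, *_ = np.linalg.lstsq(Phi[:, list(best)], y, rcond=None)
          x_hat = np.zeros(Phi.shape[1])
          x_hat[list(best)] = coef
          return x_hat

    With L = 1 the search keeps a single path and reduces to the greedy strategy sketched earlier.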

  5. CONCLUSION

In this paper we propose to employ the theory of compressive sampling for interception of frequency hopped signals, which are sparse in the frequency domain. For the sensing matrix we graft together the advantages of random matrices and deterministic matrices in the structurally random matrix. In the reconstruction we explore multiple candidates when recovering the sparse signal: the MMP algorithm examines several promising candidates instead of a single partial one, and thus avoids the risk of choosing the wrong candidate and producing an inaccurate signal at the output. In the empirical results we observed that MMP is more effective than existing algorithms in both noisy and noiseless situations.

REFERENCES

  1. M. Unser, "Sampling - 50 years after Shannon," Proceedings of the IEEE, vol. 88, no. 4, pp. 569-587, 2000.

  2. F. Krahmer and R. Ward, "New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property," preprint, 2010.

  3. L. Welch, "Lower bounds on the maximum cross correlation of signals," IEEE Trans. Inform. Theory, vol. 20, no. 3, pp. 397-399, 1974.

  4. G. Zhang, S. Jiao, and X. Xu, "Compressed sensing and reconstruction with Bernoulli matrices," in Proceedings of the 2010 IEEE International Conference on Information and Automation, Harbin, China, June 20-23, 2010.

  5. T. T. Do, L. Gan, N. H. Nguyen, and T. D. Tran, "Fast and efficient compressive sensing using structurally random matrices," IEEE Trans. on Signal Processing, vol. 60, no. 1, pp. 139-154, 2012.

  6. E. Candes and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717-772, 2009.

  7. S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3397-3415, 1993.

  8. D. Donoho, "De-noising by soft-thresholding," IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613-627, 1995.

  9. A. Gilbert, S. Muthukrishnan, and M. Strauss, "Improved time bounds for near-optimal sparse Fourier representations," in Proceedings of SPIE, vol. 5914, p. 59141A, 2005.

  10. W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for l1-minimization with applications to compressed sensing," SIAM J. Imaging Sci., vol. 1, no. 1, pp. 143-168, 2008.

  11. J. Murray and K. Kreutz-Delgado, "An improved FOCUSS-based learning algorithm for solving sparse linear inverse problems," in Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 347-54, 2001.

  12. S. Kwon, J. Wang, and B. Shim, "Multipath matching pursuit," IEEE Transactions on Information Theory, vol. 60, no. 5, May 2014.
