Compressive Sensing Ability of Video Streaming For WI-FI Multimedia Sensor Networks

DOI: 10.17577/IJERTV2IS100184




P. Salomon Raj, K. Kishore

P. Salomon Raj (II M.Tech, CSE) and K. Kishore (Asst. Professor, Dept. of CSE), Dr. KVSR Institute of Technology, Kurnool

Abstract

Video streaming in large-scale multi-hop wireless networks of embedded devices is still open and largely unexplored. In fact, traditional video streaming systems based on transmitting predictively-encoded video through a layered communication protocol stack suffer from high complexity at the encoder and low resiliency to channel errors. This paper presents a cross-layer system that jointly controls the video encoding rate, the transmission rate, and the channel coding rate to maximize the received video quality. First, compressed sensing based video encoding for transmission over Wi-Fi multimedia sensor networks (WMSNs) is studied. It is shown that compressed sensing can overcome many of the current problems of video over WMSNs, primarily encoder complexity and low resiliency to channel errors. It is shown that the rate of compressed sensed video can be predictably controlled by varying only the compressive sensing sampling rate. It is then shown that the developed rate controller can be interpreted as the iterative solution to a convex optimization problem representing the optimization of the rate allocation across the network. The rate controller is shown to outperform existing TCP-friendly rate control schemes in terms of both fairness and received video quality.

Index Terms: Compressed Sensing, Network Optimization, Multimedia Streaming, Congestion Control, Sensor Networks

  1. INTRODUCTION

    WMSNs will enable new applications including multimedia surveillance, storage and subsequent retrieval of potentially relevant activities, and person locator services. In recent years, there has been intense research and considerable progress in solving numerous wireless sensor networking challenges. However, the key problem of enabling real-time, quality-aware video streaming in large-scale multi-hop wireless networks of embedded devices is still open and largely unexplored. In fact, traditional video streaming systems based on transmitting predictively-encoded video through a layered communication protocol stack suffer from high complexity at the encoder and low resiliency to channel errors. In this paper, we show that a new cross-layer optimized wireless system based on the recently proposed compressed sensing (CS) paradigm [9] can offer a promising solution to the aforementioned problems. Compressed sensing (also known as compressive sampling) is a new paradigm that allows the faithful recovery of signals from M << N measurements, where N is the number of samples required for Nyquist sampling.

    Fig. 1. Architecture of the C-DMRC system.

    Hence, CS can offer an alternative to traditional video encoders by enabling imaging systems that sense and compress data simultaneously at very low computational complexity for the encoder. Image coding and decoding based on CS has recently been explored [10], [11]. So-called single-pixel cameras that can operate efficiently across a much broader spectral range (including infrared) than conventional silicon-based cameras have also been proposed [12]. For this reason, we introduce the Compressive Distortion Minimizing Rate Control (C-DMRC), a new distributed cross-layer control algorithm that jointly regulates the CS sampling rate, the data rate injected into the network, and the rate of a simple parity-based channel encoder to maximize the received video quality over a multi-hop wireless network with lossy links. By jointly controlling the compressive video coding at the application layer, the rate at the transport layer, and the adaptive parity at the physical layer, we leverage information at all three layers to develop an integrated congestion-avoiding and distortion-minimizing system.
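    To make the M << N idea concrete, the sketch below (a hedged illustration, not the system described in this paper) takes M random Gaussian measurements of a synthetic K-sparse signal and recovers it with a small orthogonal matching pursuit loop; the sizes, the Gaussian sensing matrix, and the function names are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N Nyquist-rate samples, M << N compressive measurements.
N, M, K = 256, 64, 8                      # signal length, measurements, sparsity

# A K-sparse signal stands in for an image that is sparse in some basis.
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Random Gaussian sensing matrix Phi and compressive measurements y = Phi x.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, k):
    """Greedy orthogonal matching pursuit: repeatedly pick the column of Phi
    most correlated with the residual, then re-fit on the chosen columns."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```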

  2. IMPLEMENTATION

    In existing layered protocol stacks based on the IEEE 802.11 and 802.15.4 standards, frames are split into multiple packets. If even a single bit is flipped due to channel errors, the cyclic redundancy check fails and the entire packet is dropped at the final or an intermediate receiver. This can cause the video decoder to be unable to decode an independently coded (I) frame, thus leading to loss of the entire sequence of video frames. Instead, ideally, when one bit is in error, the effect on the reconstructed video should be imperceptible, with minimal overhead. In addition, the perceived video quality should gracefully and proportionally degrade with decreasing channel quality. With the proposed controller, nodes adapt the rate of change of their transmitted video quality based on an estimate of the impact that a change in the transmission rate will have on the received video quality. While the proposed method is general, it works particularly well for security videos. In addition, traditional encoding techniques require that the encoder has access to the entire video frame (or even multiple frames) before encoding the video.

  3. SYSTEM ARCHITECTURE

    Fig. 2. Architecture of the C-DMRC video rate control system.

    As illustrated in Fig. 2, there are four main components to the system, described in the following subsections.

    1. CS CAMERA

      This is the subsystem where the compressed sensing image capture takes place. The camera can be either a traditional CCD or CMOS imaging system, or it can be a single-pixel camera as proposed in [21]. In the latter case, the samples of the image are directly obtained by taking a linear combination of a random set of the pixels and summing the intensity through the use of a photodiode. The samples generated are then passed to the video encoder.
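      As a rough, hedged sketch of this acquisition step (not the single-pixel hardware of [21]), the snippet below simulates each sample as the summed intensity of a pseudo-randomly selected set of pixels; the function name, 0/1 mask density, and seed are illustrative assumptions.

```python
import numpy as np

def cs_camera_samples(frame, num_samples, seed=42):
    """Illustrative single-pixel-style acquisition: each sample is the summed
    intensity of a random subset of pixels (a 0/1 measurement pattern).
    The seed stands in for the pattern being shared with the receiver."""
    pixels = frame.reshape(-1).astype(float)
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(num_samples, pixels.size))  # random pixel subsets
    return masks @ pixels                                        # one photodiode reading per mask

frame = np.random.default_rng(0).integers(0, 256, size=(32, 32))  # stand-in for a captured frame
samples = cs_camera_samples(frame, num_samples=200)
print(samples.shape)                                              # (200,)
```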

    2. CSV VIDEO ENCODER

      The CSV video encoder receives the raw samples from the camera and generates compressed video frames. The compression is obtained through properties of CS and by leveraging the temporal correlation between consecutive video frames. The number of samples, along with the sampling matrix, is determined in this block. The number of samples, or sampling rate, is based on input from the rate controller, while the sampling matrix is pre-selected and shared between sender and receiver.
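      The section does not spell out the encoder's exact rule, but one way to picture how temporal correlation can be exploited in the compressed domain is sketched below: sample a reference frame, then send only the difference between the CS samples of consecutive frames. The class name, the +/-1 sampling matrix, and the shared seed are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class CSVEncoderSketch:
    """Illustrative CS video encoder: because CS sampling is linear, the
    difference of two frames' samples equals the samples of the frame
    difference, so temporally correlated frames yield small residuals."""

    def __init__(self, frame_pixels, sampling_rate, seed=42):
        m = int(sampling_rate * frame_pixels)          # sample count set by the rate controller
        rng = np.random.default_rng(seed)              # matrix shared with the receiver via the seed
        self.phi = rng.choice([-1.0, 1.0], size=(m, frame_pixels))
        self.prev_samples = None

    def encode(self, frame):
        samples = self.phi @ frame.reshape(-1).astype(float)
        if self.prev_samples is None:                  # first frame acts as the reference
            kind, payload = "reference", samples
        else:                                          # later frames send only the sample difference
            kind, payload = "difference", samples - self.prev_samples
        self.prev_samples = samples
        return kind, payload

rng = np.random.default_rng(3)
f0 = rng.integers(0, 256, size=(32, 32))
f1 = f0.copy()
f1[10:12, 10:12] += 5                                  # small change between consecutive frames
enc = CSVEncoderSketch(frame_pixels=f0.size, sampling_rate=0.2)
print(enc.encode(f0)[0], enc.encode(f1)[0])            # reference difference
```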

    3. RATE CONTROLLER

      The rate control block takes as input the end-to-end RTT of previous packets and the estimated sample loss rate to determine the optimal sampling rate for the video encoder. This sampling rate is then fed back to the video encoder. The rate control law, which is designed to maximize the received video quality while preserving fairness among competing videos, is described in detail in Section V. The CS sampling rate determined by the C-DMRC block is chosen to provide the optimal received video quality across the entire network, which is done by using the RTT to estimate the congestion in the network along with the input from the adaptive parity block to compensate for lossy channels.

    4. ADAPTIVE PARITY

      The Adaptive Parity block uses the measured or estimated sample error rate of the channel to determine a parity scheme for encoding the samples, which are input directly from the video encoder.
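      The parity scheme itself is not detailed in this section; as a hedged illustration only, the sketch below appends one even-parity bit per group of sample bits and shrinks the group size as the estimated sample error rate grows. The adaptation rule, constants, and function names are assumptions made for illustration.

```python
def choose_parity_group(sample_error_rate, max_group=16):
    """Illustrative adaptation rule (not from the paper): protect samples in
    smaller groups as the estimated sample error rate grows."""
    if sample_error_rate <= 0:
        return max_group
    return max(1, min(max_group, int(0.1 / sample_error_rate)))

def add_even_parity(bits, group_size):
    """Append one even-parity bit after every group_size payload bits."""
    out = []
    for i in range(0, len(bits), group_size):
        chunk = bits[i:i + group_size]
        out.extend(chunk)
        out.append(sum(chunk) % 2)                     # parity bit for this group
    return out

payload_bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # quantized CS samples, as bits
encoded = add_even_parity(payload_bits, choose_parity_group(0.02))
print(encoded)
```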

  4. DESIGN GOALS

    CS VIDEO ENCODER (CSV)

    The CSV video encoder uses compressed sensing to encode video by exploiting the spatial and temporal redundancy within the individual frames and between adjacent frames, respectively.

    Sensing the channel

    Congestion-detection methods that rely on sensing the channel incur higher energy consumption, and so they are not suitable for WMSNs.

    Using extra packets

    Congestion detection based on retransmitting dropped packets involves not only the retransmission request but also the retransmission of the dropped packet itself. These methods waste a great amount of energy in the sensor nodes.

    Low cost

    Some methods require no extra cost for congestion detection. These methods are the most suitable for congestion detection in WMSNs.

    VIDEO TRANSMISSION USING COMPRESSED SENSING

    We develop a video encoder based on compressed sensing. We show that, by using the difference between the CS samples of two frames, we can capture and compress the frames based on their temporal correlation at low complexity, without using motion vectors.

    RATE CHANGE AGGRESSIVENESS BASED ON VIDEO QUALITY

    With the proposed controller, nodes adapt the rate of change of their transmitted video quality based on an estimate of the impact that a change in the transmission rate will have on the received video quality. The rate controller uses the information about the estimated received video quality directly in the rate control decision. If the sending node estimates that the received video quality is high, and round-trip time measurements indicate that the current network congestion condition would allow a rate increase, the node will increase the rate less aggressively than a node estimating lower video quality with the same round-trip time. Conversely, if a node is sending low-quality video, it will gracefully decrease its data rate, even if the RTT indicates a congested network. This is obtained by basing the rate control decision on the marginal distortion factor, i.e., a measure of the effect of a rate change on video distortion.
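    The exact control law is given in the paper's Section V and is not reproduced here; the update below is only a hedged sketch of the behavior described in this paragraph, with made-up parameter names: the RTT supplies the congestion signal, and the marginal distortion factor scales how aggressively the CS sampling rate moves up or down.

```python
def update_sampling_rate(rate, rtt, rtt_target, marginal_distortion,
                         step=0.05, rate_min=0.05, rate_max=0.5):
    """Hedged sketch of the described behavior (not the published control law):
    flows with a higher marginal distortion factor (lower current quality)
    ramp their sampling rate up more aggressively and back off more gently."""
    congestion = (rtt_target - rtt) / rtt_target       # > 0: room to grow, < 0: congested
    if congestion >= 0:
        delta = step * congestion * marginal_distortion
    else:
        delta = step * congestion / max(marginal_distortion, 1e-6)
    return min(rate_max, max(rate_min, rate + delta))

# Same RTT, different estimated marginal distortion: the second flow grows less.
print(update_sampling_rate(0.20, rtt=0.08, rtt_target=0.10, marginal_distortion=2.0))
print(update_sampling_rate(0.20, rtt=0.08, rtt_target=0.10, marginal_distortion=0.5))
```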

    ADAPTIVE PARITY-BASED TRANSMISSION

    For a fixed number of bits per frame, the perceptual quality of video streams can be further improved by dropping erroneous samples that would otherwise contribute incorrect information to the image reconstruction. Comparing the reconstructed image quality with and without the samples containing errors, and assuming that the receiver knows which samples are erroneous, shows that there is a very large potential gain in received image quality if those samples can be removed.

    We studied adaptive parity with compressed sensing for image transmission, where we showed that, since the transmitted samples constitute an unstructured, random, incoherent combination of the original image pixels, in CS, unlike traditional wireless imaging systems, no individual sample is more important for image reconstruction than any other. Instead, the number of correctly received samples is the main factor in determining the quality of the received image.
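    Continuing the hedged parity sketch from Section 3, the receiver-side counterpart below checks each parity group and discards the groups that fail, so reconstruction uses only samples believed to be correct; the grouping and names are illustrative, not the paper's exact scheme.

```python
def drop_corrupted_samples(encoded_bits, group_size):
    """Receiver side of the even-parity sketch: keep only the sample groups
    whose parity check passes. Because no CS sample is individually critical,
    quality depends mainly on how many clean samples survive, not which ones."""
    clean = []
    stride = group_size + 1                            # payload bits plus one parity bit
    for i in range(0, len(encoded_bits), stride):
        block = encoded_bits[i:i + stride]
        payload, parity = block[:-1], block[-1]
        if sum(payload) % 2 == parity:                 # parity holds: keep these samples
            clean.extend(payload)
        # otherwise the whole group is dropped rather than fed to the decoder
    return clean

# Two groups of 5 payload bits, each followed by its even-parity bit;
# a flipped bit corrupts only the first group, which is discarded.
received = [1, 0, 0, 1, 0, 1,   0, 1, 0, 0, 1, 0]
print(drop_corrupted_samples(received, group_size=5))  # [0, 1, 0, 0, 1]
```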

  5. CONCLUSION

    This paper introduced a new wireless video transmission system based on compressed sensing. The system consists of a video encoder, a distributed rate controller, and an adaptive parity channel encoding scheme that take advantage of the properties of compressed sensed video to provide high-quality video to the receiver using a low-complexity video sensor node. The rate controller was then shown to be an implementation of an iterative gradient descent solution to the optimal rate allocation problem. Simulation results show that the C-DMRC system achieves 5%-10% higher received video quality under both high and low network loads. Simulation results also show that fairness is not sacrificed, and is in fact increased, with the proposed system. Finally, the video encoder, adaptive parity, and rate controller were implemented on a USRP2 software-defined radio. It was shown that the rate controller correctly reacts to congestion in the network based on measured round-trip times, and that the system works over real channels. We intend to implement the remaining portions of the C-DMRC system on the USRP2 radios, including image capture and video decoding.

  6. REFERENCES

    1. I. F. Akyildiz, T. Melodia, and K. R. Chowdhury, "A Survey on Wireless Multimedia Sensor Networks," Computer Networks (Elsevier), vol. 51, no. 4, pp. 921-960, Mar. 2007.

    2. B. Girod, A. Aaron, S. Rane, and D. Rebollo-Monedero, "Distributed Video Coding," Proceedings of the IEEE, vol. 93, no. 1, pp. 71-83, January 2005.

    3. A. Aaron, S. Rane, R. Zhang, and B. Girod, "Wyner-Ziv Coding for Video: Applications to Compression and Error Resilience," in Proc. of IEEE Data Compression Conf. (DCC), Snowbird, UT, March 2003, pp. 93-102.

    5. D. Donoho, "Compressed Sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.

    6. E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.

    7. J. Romberg, "Imaging via Compressive Sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 14-20, 2008.

    8. M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, "Compressive imaging for video representation and coding," in Proc. of Picture Coding Symposium (PCS), Beijing, China, April 2006.

    9. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, "Single-Pixel Imaging via Compressive Sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.

    10. D. Donoho, "Compressed Sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.

    11. M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, "Compressive imaging for video representation and coding," in Proc. of Picture Coding Symposium (PCS), Beijing, China, April 2006.

    12. J. Romberg, "Imaging via Compressive Sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 14-20, 2008.

    13. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, "Single-Pixel Imaging via Compressive Sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83-91, 2008.

    14. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.

    16. L. Gan, T. Do, and T. D. Tran, "Fast Compressive Imaging Using Scrambled Block Hadamard Ensemble," Preprint, 2008.

About The Authors

P. Salomon Raj received his B.Tech degree in Computer Science and Engineering from St. Johns College of Engineering and Technology, Yemmiganur, affiliated to Jawaharlal Nehru Technological University, Anantapur, Andhra Pradesh, India, in 2011. He is currently pursuing his M.Tech in Computer Science and Engineering at Dr. KVSR Institute of Technology, Kurnool, affiliated to Jawaharlal Nehru Technological University, Anantapur, Andhra Pradesh, India.

K. Kishore received his MCA from Jawaharlal Nehru Technological University, Hyderabad, India, in 2006, and his M.Tech in Computer Science from Jawaharlal Nehru Technological University, Anantapur, India, in 2012. He is an Assistant Professor at Dr. KVSR Institute of Technology, Kurnool, Andhra Pradesh, India.
