The Rotation of Camera System using Motor based on Motion Sensors

DOI : 10.17577/IJERTCONV3IS27009


Ajay M C (Author)

Post Graduate, Department of Computer Science and Engineering, P.E.S College of Engineering, Mandya, Karnataka, India

M C Girish Babu (Author)

Assistant Professor, Department of Computer Science and Engineering, P.E.S College of Engineering, Mandya, Karnataka, India

Abstract Motion sensors and cameras can be used together to build a video monitoring system in which the rotating camera is activated only when the motion sensors detect motion in the area surrounding the camera. This saves both bandwidth (reduced video storage and/or transmission) and energy (if the cameras are battery-operated).

The camera rotates in all directions with the help of a rotational motor, captures video, and sends the recorded video to the server system. With this application we aim to achieve better performance with minimum energy consumption and the longest possible network lifetime, using the efficient protocols available.

Keywords Motion sensors, rotating camera, wireless transmission, server system.

1. INTRODUCTION

In the present scenario, security concerns have grown tremendously. The number of ceasefire violations runs into the hundreds and is increasing every day, and time and again these violations have been used to infiltrate militants across the border. Monitoring such areas currently relies on technology and manpower; however, automatic monitoring has been advancing in order to avoid the human errors that can arise for a variety of reasons.

A normal CCTV installation is not always a perfect security system: it does not keep an eye on every corner of an office, house or mall.

The word surveillance means "watching over". Surveillance is the monitoring of the behavior, activities, or other changing information, usually of people. It is sometimes done in a stealthy manner, and it usually refers to the observation of individuals or groups by an organization. Surveillance may be applied to observation from a distance by means of electronic equipment (such as CCTV cameras), or to the interception of electronically transmitted information (such as Internet traffic or phone calls). It may also refer to simple, relatively low-technology methods such as human intelligence agents and postal interception.

Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity.


With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and supporting legislation, governments now possess an unprecedented ability to monitor the activities of their subjects.

Some of the different types of surveillance systems are:

  • Computer surveillance.

  • Surveillance cameras.

  • Social network analysis.

  • Biometric surveillance.

  • Corporate surveillance.

  • Telephones.

Nowadays, there is an increasing need for video monitoring systems, for applications ranging from surveillance and intruder detection to elderly and infant monitoring. For motion detection systems that utilize motion sensors, wireless systems are the preferred choice due to their easy deployment, less cluttered appearance and the flexibility they provide, such as being mountable at any location. However, for video monitoring systems, the high energy consumption of cameras makes it difficult to achieve the long lifetimes required when the cameras are wireless and battery-operated.

Motion can be detected by:

  • Infrared (Passive and active sensors)

  • Optics (video and camera systems)

  • Radio Frequency Energy (radar, microwave and tomographic motion detection)

  • Sound (microphones and acoustic sensors)

  • Vibration (triboelectric, seismic, and inertia-switch sensors)

  • Magnetism (magnetic sensors and magnetometers)

Sensors and sensor networks have an important impact in meeting environmental challenges. Sensor applications are useful in multiple fields such as smart power grids, smart buildings and smart industrial process control, contributing significantly to more efficient use of available resources and thus to a reduction in the risks of human interaction.

Using a number of video cameras, a large amount of visual data is captured that must be monitored and screened for intrusion detection. At present, the surveillance systems in use require constant human vigilance. However, humans have a limited ability to perform such monitoring in real time, which reduces the actual usability of these surveillance systems, and they are not reliable for real-time threat detection. From the perspective of forensic investigation, the large amount of video data obtained from surveillance tapes needs to be analyzed, a task that is tedious and error prone for a human investigator. To overcome these drawbacks, an automatic video analysis system is developed that continuously monitors a given situation and reacts in real time. The proposed system has the ability to sense intrusion and respond to it in real time.

    Principles of Motion Sensing

Various sensors capable of detecting motion in free space have been commercially available for several decades and have been used in automobiles, aircraft and ships. Their initial size, power consumption and price, however, prevented their mass adoption in consumer electronics until the past few years.

    While there are other motion sensor technologies available, the following four fundamental motion sensors are the most relevant for tracking motion for consumer electronics.

    1. Accelerometer (G-sensors)

Acceleration is defined as the rate of change of velocity, so accelerometers measure how quickly the speed of the device is changing in a given direction. Using an accelerometer, you can detect movement and, more usefully, the rate of change of the speed of that movement. Accelerometers are unable to differentiate between acceleration due to movement and acceleration due to gravity; as a result, an accelerometer detecting acceleration on the Z-axis (up/down) will read 9.8 m/s² even when the device is at rest.

      Accelerometers measure linear acceleration and tilt angle. Single and multi-axis accelerometers detect the combined magnitude and direction of linear and gravitational acceleration. They can be used to provide limited motion sensing functionality. For example, a device with an accelerometer can detect movement from a vertical to horizontal state in a fixed location. As a result, accelerometers are primarily used for sensing device orientation with respect to gravity, and delivering simple functions, such as changing the screen on a mobile device from portrait to landscape mode.
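As a rough illustration of how accelerometer output can drive the portrait/landscape decision described above, the following sketch classifies a device pose from raw axis readings. The gravity constant, the movement threshold and the axis convention (X lateral, Y longitudinal, Z vertical) are assumptions for this example, not part of the paper.

```python
import math

GRAVITY = 9.81  # m/s^2, assumed magnitude of gravity

def classify_orientation(ax, ay, az):
    """Classify a static device pose from accelerometer readings (m/s^2).

    Assumes X = lateral, Y = longitudinal, Z = vertical (screen-normal) axes.
    """
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    # If the total acceleration is far from 1 g, the device is being moved,
    # so a gravity-based orientation estimate is unreliable.
    if abs(magnitude - GRAVITY) > 2.0:
        return "moving"
    if abs(az) > abs(ax) and abs(az) > abs(ay):
        return "flat (face up)" if az > 0 else "flat (face down)"
    if abs(ay) > abs(ax):
        return "portrait"
    return "landscape"

print(classify_orientation(0.1, 0.2, 9.8))   # flat (face up)
print(classify_orientation(9.7, 0.3, 0.5))   # landscape
```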

    2. Gyroscopes (Gyros)

Gyroscopes measure the angular rate of rotation about one or more axes. They can measure complex motions accurately in free space, making them a required motion sensor for tracking the position and rotation of a moving object. Unlike accelerometers and compasses, gyroscopes do not depend on any external forces such as gravity or magnetic fields, and can therefore function fairly autonomously.
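To make the "angular rate" idea concrete, here is a minimal sketch (my own illustration, not from the paper) of integrating gyroscope rate samples into a rotation angle around a single axis; the sample period and the lack of drift correction are simplifying assumptions.

```python
def integrate_yaw(rate_samples_dps, dt=0.01):
    """Integrate angular-rate samples (degrees/second) into a yaw angle.

    dt is the assumed sample period in seconds. Real systems also have to
    correct the slow drift that accumulates from sensor bias.
    """
    yaw = 0.0
    for rate in rate_samples_dps:
        yaw += rate * dt
    return yaw % 360.0

# 100 samples at 90 deg/s over 1 s gives roughly a quarter turn
print(integrate_yaw([90.0] * 100))  # ~90.0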

    3. Magnetic Sensors (E-Compasses)

Compasses are used to detect heading based on the Earth's magnetic field. Consumer electronics applications for e-compasses include correctly orienting a downloaded map on a mobile screen or providing basic heading information for navigation applications. As the Earth's magnetic field is relatively weak compared to the magnetic interference from electronic equipment and building materials, the compass sensor output can easily be affected by varying environmental conditions, particularly indoors. As such, e-compasses require frequent calibration in order to maintain their heading accuracy.
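A toy illustration of deriving a heading from the horizontal magnetometer components, with a simple hard-iron offset as the stand-in for the calibration mentioned above; the offsets and axis convention are assumptions, and tilt compensation is ignored.

```python
import math

def heading_degrees(mx, my, offset_x=0.0, offset_y=0.0):
    """Compute a compass heading (0-360 deg) from horizontal field components.

    offset_x/offset_y model a hard-iron calibration; real e-compasses must
    re-estimate these offsets frequently, as noted above.
    """
    x = mx - offset_x
    y = my - offset_y
    return math.degrees(math.atan2(y, x)) % 360.0

print(heading_degrees(20.0, 20.0))  # 45.0 under the assumed axes
```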

    4. Pressure Sensors (Barometers)

      Pressure sensors measure relative and absolute altitude through the analysis of changes in the atmospheric pressure. Pressure sensors are being used in consumer devices for sports and fitness, and for location-based applications where map information can be adjusted as a consumer moves to different floors in a building.
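As an illustration of how a change of floor might be inferred from pressure, a short sketch using the standard international barometric formula; the sea-level reference pressure and the assumed floor height would be calibrated in practice.

```python
def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """Convert atmospheric pressure (hPa) to altitude in metres
    using the international barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

def floor_change(p_before_hpa, p_after_hpa, floor_height_m=3.0):
    """Estimate how many floors were climbed (positive) or descended (negative)."""
    delta = pressure_to_altitude(p_after_hpa) - pressure_to_altitude(p_before_hpa)
    return round(delta / floor_height_m)

# A drop of ~0.36 hPa corresponds to roughly 3 m, i.e. about one floor up.
print(floor_change(1013.25, 1012.89))  # 1
```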

2. SYSTEM OVERVIEW

In this single-point operating device for a surveillance system, we overcome the drawbacks of previous systems by using only one camera and a single view point. Remote control is provided from the centralized server, and snapshots can be stored for future reference.

The application is designed to sense any motion occurring near the rotating camera. On detecting motion, the sensor module sends an alert to the system and initiates the video capturing process in the direction of the sensor that was triggered; concerned users can then view the recorded video, freeing them from continuous interaction with the system. The camera rotates in all directions with the help of a stepper motor, toward the sensors placed around it, records video, and sends the recorded video to the server system using wireless technology. A system assembled in this way removes the drawbacks observed in current setups, such as the unnecessary memory used by storing a continuous camera feed when there is no motion, and the need to control the system over a wired connection.
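A minimal sketch of the trigger-rotate-record-upload loop described above. The sensor-ID-to-angle mapping, the helper callbacks (rotate_stepper_to, record_clip, send_to_server) and the timing values are hypothetical placeholders, not the paper's implementation.

```python
import time

# Hypothetical mapping from motion-sensor ID to the direction it covers (degrees).
SENSOR_ANGLES = {"S1": 0, "S2": 90, "S3": 180, "S4": 270}

def handle_detection(sensor_id, rotate_stepper_to, record_clip, send_to_server,
                     clip_seconds=10):
    """Rotate the camera toward the triggered sensor, record, and upload."""
    angle = SENSOR_ANGLES.get(sensor_id)
    if angle is None:
        return  # unknown sensor, ignore
    rotate_stepper_to(angle)            # point the camera at the detected motion
    clip = record_clip(clip_seconds)    # capture a short video clip
    send_to_server(sensor_id, clip)     # forward it to the display & storage unit

def monitor(poll_sensors, rotate_stepper_to, record_clip, send_to_server):
    """Idle until a sensor reports motion, so the camera is otherwise inactive."""
    while True:
        for sensor_id in poll_sensors():
            handle_detection(sensor_id, rotate_stepper_to,
                             record_clip, send_to_server)
        time.sleep(0.1)
```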

3. LITERATURE REVIEW

The fast development of technology has increased the risk of intrusion. Using security cameras allows a person to monitor his or her property, and the majority of organizations and administrations make use of such cameras to protect their business and property from terrorists and illegal entry. Nowadays, security cameras have become much more advanced, affordable, smaller and more straightforward to use. A number of video surveillance systems have been proposed for different purposes.

Drew Ostheimer proposed an automated and distributed real-time video surveillance system which can be used for the detection of objects and events in a wide range of applications. The system captures video from multiple sources, which is then processed and streamed over the Internet for viewing and analysis. The proposed system is flexible, as its components can be interconnected in several ways.

        The experimental results of the system show that it can handle multiple video data running on standard computers and yielding fluid video. A number of interconnected clients can view the multiple video feeds simultaneously.

1. Motion Sensor and Camera Placement Design for Video Monitoring Systems [1]

Motion sensors and cameras can be used together to build in-home video monitoring systems, where the cameras are only activated when the motion sensors detect motion. This saves both bandwidth (reduced video storage and/or transmission) and energy (if the cameras are battery-operated). However, motion sensors and cameras have different fields of view, and thus it is not clear that attaching motion sensors to cameras provides the most efficient system.

The monitored target area is assumed to be a rectangle with length L and width W. For the sake of simplicity, only the placement problem at the edges of a 1.5 m high flat plane is considered, and the cameras are assumed to look into this plane, as shown in Fig. 1. On this plane, the coverage of the motion sensors and the cameras is modeled as sectors, with angles of field of view (FoV) θm and θc, and ranges Rm and Rc, respectively. Three placement options are considered: i) motion sensors are attached to cameras; ii) motion sensors are placed separately;

and iii) a hybrid approach, where any motion sensor can either be attached to a camera or detached from the cameras. If a motion sensor is attached to a camera, it triggers the camera immediately upon detecting an event; if they are mounted separately, the motion sensor uses a transceiver to transmit a wake-up signal, which is received by the cameras' transceivers and wakes up any camera whose FoV overlaps with the sensor's. It is therefore interesting to investigate whether deploying motion sensors and cameras together or apart provides better overall performance.
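To illustrate the sector coverage model and the detached wake-up rule described above, here is a small sketch (my own illustration, not the cited paper's code) that tests whether a point lies inside a sensor's or camera's sector and wakes every camera whose sector contains the detected event; the data layout is assumed.

```python
import math

def in_sector(x, y, cx, cy, facing_deg, fov_deg, rng):
    """Return True if point (x, y) lies inside a sector centred at (cx, cy),
    facing `facing_deg`, with field of view `fov_deg` and range `rng`."""
    dx, dy = x - cx, y - cy
    if math.hypot(dx, dy) > rng:
        return False
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    diff = (bearing - facing_deg + 180.0) % 360.0 - 180.0  # signed angle difference
    return abs(diff) <= fov_deg / 2.0

def cameras_to_wake(event_xy, cameras):
    """Detached scenario: wake every camera whose FoV sector covers the event.
    `cameras` is an assumed list of dicts with x, y, facing, fov, range keys."""
    ex, ey = event_xy
    return [c["id"] for c in cameras
            if in_sector(ex, ey, c["x"], c["y"], c["facing"], c["fov"], c["range"])]

cams = [{"id": "C1", "x": 0, "y": 0, "facing": 45, "fov": 120, "range": 10},
        {"id": "C2", "x": 20, "y": 0, "facing": 135, "fov": 120, "range": 10}]
print(cameras_to_wake((3, 3), cams))  # ['C1']
```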

A motion sensor and camera deployment optimization is presented in this paper, and issues of coverage, energy consumption, network lifetime, and cost of devices are investigated. Simulation results show that, to meet the same coverage constraint, the attached scenario consumes less energy than the detached scenario at low activity levels, but more energy at high activity levels. A hybrid approach is also proposed, which can use both detached and attached cameras. Due to its advantage in flexibility, the hybrid approach always achieves better or the same performance.

            For a certain coverage and cost constraint, an optimal placement strategy is given to achieve the maximum network lifetime. We also examine the impact of cost, battery and activity level on placement strategies.

The authors plan to study how multiple overlapping cameras can collaborate to localize an object, as well as the coverage problem in the presence of obstacles.

2. Smart Video Surveillance [15]

            This paper explores the concepts of multiscale spatiotemporal tracking through the use of real-time video analysis, active cameras, multiple object models and long- term pattern analysis to provide comprehensive situation awareness. Smart video surveillance systems are capable of enhancing situational awareness across multiple scales of space and time. However, at the present time, the component technologies are evolving in isolation.

            Video analysis and video surveillance are active areas of research. The key areas are video-based detection and tracking, video-based person identification, and large-scale surveillance systems. A significant percentage of basic technologies for video-based detection and tracking were developed under a U.S. government-funded program called Video Surveillance and Monitoring (VSAM).

            This program looked at several fundamental issues in detection, tracking, auto-calibration and multi camera systems. There has also been research on real-world surveillance systems in several leading universities and research labs. The next generation of research in surveillance is addressing not only issues in detection and tracking but also issues of event detection and automatic system calibration.

            Surveillance cameras are installed in many places such as airports, parking lots, train stations and banks. In the past, the video imagery has been mainly used as a forensic tool after an event. To take advantage of the video in real-time, a human must monitor the system continuously in order to alert security officers if there is an emergency. Moreover, one person can only observe approximately four cameras at a time with good accuracy of event detection. Therefore, this requires expensive human resources for real-time video surveillance using current technology.

Object tracking aims to obtain a record of the trajectory of one or more targets over time and space. By tracking various objects, the burden of detection placed on human sentinels is greatly alleviated. While this application scenario is important, we want to examine how tracking can also be used in real-time situations and applications where it can be integrated with other sensors. What is needed is a smart surveillance system that can analyze the video data automatically. By locating and tracking moving objects in a video sequence in real time, we can develop a real-time alert system to enhance current surveillance techniques. Once the object tracking step is completed successfully, it can serve as a means for higher-level interpretation, such as semantic description of objects' motions and their interactions. Our object tracking tools have been tested on video sequences with moving objects including humans and vehicles. These tools can be deployed in many applications such as visual surveillance, road traffic control, human-computer interfaces, and video compression.

3. Security for Video Surveillance with Privacy [14]

In this paper, a video surveillance system is designed to provide means for ensuring the security of private information and to offer the capability of proving authenticity. First, a real-time scrambling approach to conceal video information is presented: the signs of the transform coefficients of intra macroblocks are pseudo-randomly flipped, so that only authorized persons can correctly decode the code-stream. At the same time, a method for embedding a digital watermark into videos is proposed.

Video surveillance systems are usually installed to increase the safety and security of people or property in the monitored areas. Typical threat scenarios are robbery, vandalism, shoplifting or terrorism. Other application scenarios are more intimate and private, such as home monitoring or assisted living. For a long time it was accepted that the potential benefits of video surveillance go hand in hand with a loss of personal privacy. However, with the on-board processing capabilities of modern embedded systems it becomes possible to compensate for this privacy loss by making security and privacy protection inherent features of video surveillance cameras.

            The relationship among the DC components in several successive frames is used for hiding data. Simulation results based on MPEG-4 show that a good level of security is provided by the end-to-end security scheme. Furthermore, this is achieved with a small impact on coding performance and computation complexity.

4. Real-Time Video Surveillance over IEEE 802.11 Mesh Networks [4]

In recent years, there has been an increase in video surveillance systems in public and private environments due to a heightened sense of security. The next generation of surveillance systems will be able to annotate video and locally coordinate the tracking of objects while multiplexing hundreds of video streams in real time. In this paper, we present OmniEye, a wireless distributed real-time surveillance system composed of wireless smart cameras. OmniEye comprises custom-designed smart camera nodes called DSPcams that communicate using an IEEE 802.11 mesh network. These cameras provide wide-area coverage and local processing, with the ability to direct a sparse number of high-resolution pan, tilt and zoom (PTZ) cameras that can home in on targets of interest. Each DSPcam performs local processing to help classify events and proactively draw an operator's attention when necessary.

In video-streaming applications, maintaining high network utilization is required in order to maximize image quality as well as the number of cameras. Our experiments show that, using the standard 802.11 DCF MAC protocol for communication, the system does not scale beyond 5-6 cameras when each camera streams at 1 Mbps, and we also see high levels of jitter in video transmissions. This performance degrades further in multi-hop scenarios due to the presence of hidden nodes. In order to improve the system's scalability and reliability, we propose a Time-Synchronized Application-level MAC protocol (TSAM) capable of operating on top of existing 802.11 protocols using commodity off-the-shelf hardware. Through analysis and experimental validation, we show how TSAM is able to improve throughput and provide bounded delay. Unlike traditional CSMA-based systems, TSAM gracefully degrades in a fair manner so that existing streams can still deliver data.
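As a rough illustration of the time-synchronized, application-level slotting idea behind TSAM (a sketch of the general concept under assumed parameters, not the authors' protocol), each camera could restrict its transmissions to a periodic slot derived from a shared clock:

```python
import time

def my_slot_is_open(camera_index, num_cameras, slot_ms=20, now=None):
    """Return True when the shared, synchronized clock falls inside this
    camera's slot. Slot length and round-robin assignment are assumptions."""
    now = time.time() if now is None else now
    frame_ms = slot_ms * num_cameras
    position = (now * 1000.0) % frame_ms
    return camera_index * slot_ms <= position < (camera_index + 1) * slot_ms

def send_when_allowed(camera_index, num_cameras, packet_queue, send):
    """Application-level pacing on top of 802.11: hand packets to the radio
    only during this camera's slot, so the streams share the channel fairly."""
    while packet_queue:
        if my_slot_is_open(camera_index, num_cameras):
            send(packet_queue.pop(0))
        else:
            time.sleep(0.001)  # wait for our slot instead of contending
```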

4. SYSTEM ARCHITECTURE

Today, multiple cameras are used in surveillance systems because an average-cost camera can capture at most 120 to 160 degrees. Opponents of video surveillance systems emphasize several major drawbacks that need to be considered when studying the implementation of this kind of system.

Figure 3.1 below shows the system architecture of the proposed system. The motion sensors are deployed around the rotational camera at the angles where monitoring is required; all the sensors are connected to the signal and data transfer unit using wireless technology, and the rotational camera is connected to the same unit to monitor and record video of that location.

Cameras allow the field of view of observers to be extended into areas where they are not physically present. This virtual presence of an observer is not necessarily noticed by the monitored persons. In the resulting, but misleading, feeling of privacy, people might act differently than they would in the obvious presence of other people.

The display and storage unit at the other end is used by the user to maintain the complete monitoring operation and to control the hardware associated with the sensors and the rotational camera. The display & storage unit and the signal & data transfer unit are connected using wireless technology to transfer the recorded videos in real time once motion is detected by the sensors.

As the wireless link quality varies, the video transmission rate needs to be adapted accordingly. Once the display & storage unit receives the video from the signal & data transfer unit, it uploads the video to cloud storage in real time with minimum delay, so that all concerned users can access it at the same time from various parts of the world and use it for further processing.
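A simple sketch of the kind of rate adaptation hinted at above, where the encoder bitrate follows the measured throughput of the wireless link; the thresholds, step sizes and the simulated measurements are illustrative assumptions, not values from the paper.

```python
def adapt_bitrate(current_kbps, measured_throughput_kbps,
                  min_kbps=200, max_kbps=4000, headroom=0.8):
    """Pick the next encoding bitrate from the measured link throughput.

    `headroom` keeps the stream below the link capacity so queues do not build
    up; all numeric values here are assumptions for the example.
    """
    target = measured_throughput_kbps * headroom
    if target < current_kbps:
        new_rate = target                      # back off quickly on congestion
    else:
        new_rate = current_kbps * 1.1          # probe upward slowly
    return max(min_kbps, min(max_kbps, new_rate))

rate = 1500
for throughput in (2000, 1200, 600, 900):      # simulated link measurements
    rate = adapt_bitrate(rate, throughput)
    print(round(rate))                          # 1650, 960, 480, 528
```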

        Fig 3.1 System Architecture

5. SYSTEM MODULE DESIGN

Video cameras are now being installed at an unprecedented pace in applications that require the coverage of large areas. In order for these systems to be effective, the cost and difficulty of deployment must be reduced. Though frequently discussed, there has been little success in terms of adding advanced machine interpretation of video. Continuous watching of multiple video streams by human operators and manual browsing of thousands of video frames for crime scene and forensic analysis are neither reliable nor scalable. This has brought about the need for a collaborative effort from the systems and vision research communities to develop a surveillance system that is low-cost, reliable, easy to manage, easy to deploy, and able to process video data for automated real-time alerts and effective retrieval of archived footage.

The architectural diagram in Figure 3.1 shows the various hardware units associated with the monitoring system and the data and control flow of the actual system, whereas the system module design shows the various modules of the system, such as the hardware unit, which covers all the hardware used in the system, including the motion sensors and the rotational camera.

Once a motion sensor detects motion, it sends a detection signal with its unique sensor ID to the signal and data transfer unit, and the camera rotation instruction is then issued according to that sensor ID. The server in this system is used to control the camera functionality and to collect the captured video remotely from the signal and data transfer unit; it therefore acts as the display and storage unit, backing up all the data it receives. The client module consists of all the user-related functionality, allowing users to access the video from the cloud server remotely from various regions.

        SENSORS (Accelerometer) Detecting Acceleration Changes

Acceleration can be measured along three directional axes: forward/backward (longitudinal), left/right (lateral), and up/down (vertical). The Sensor Manager reports sensor changes along all three axes:

  • Vertical: Upward or downward acceleration, where positive values represent upward movement, such as the device being lifted up.

  • Longitudinal: Forward or backward acceleration, where forward acceleration is positive. For a device lying flat on its back, facing up and in portrait orientation, this corresponds to moving it along the desk in the direction of the top of the device.

  • Lateral: Sideways (left or right) acceleration, where positive values represent movement toward the right of the device and negative values movement toward the left. In the same configuration as described for longitudinal movement, positive lateral movement would be created by moving the device along the surface to your right.

The Sensor Manager considers the device at rest when it is sitting face up on a flat surface in portrait orientation.
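The sign conventions above can be summarised in a small helper that labels the dominant direction of a reported acceleration change (gravity assumed already removed); the threshold value is an assumption for the example.

```python
def dominant_motion(lateral, longitudinal, vertical, threshold=0.5):
    """Label the dominant motion direction from linear-acceleration values
    (m/s^2, gravity already subtracted), following the sign conventions above."""
    axes = {
        "right" if lateral >= 0 else "left": abs(lateral),
        "forward" if longitudinal >= 0 else "backward": abs(longitudinal),
        "up" if vertical >= 0 else "down": abs(vertical),
    }
    direction, magnitude = max(axes.items(), key=lambda kv: kv[1])
    return direction if magnitude >= threshold else "at rest"

print(dominant_motion(0.1, 0.0, 2.3))    # up (device being lifted)
print(dominant_motion(-1.2, 0.2, 0.1))   # left
print(dominant_motion(0.1, 0.1, 0.05))   # at rest
```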

Fig 3.2 System Module Design


CLIENT MODULE

  • The client starts by initializing the components and selecting the default video device.

  • The client then invokes the camera functionality.

  • Using this, the client sends the continuous streaming video to the admin view point.
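A minimal sketch of the client-side flow listed above, assuming a plain TCP connection to the server and a hypothetical capture_frame() callback supplying encoded frames; the host, port and framing scheme are illustrative, not the paper's implementation.

```python
import socket
import struct

def stream_to_server(capture_frame, host="192.168.1.10", port=5000):
    """Connect to the admin/server view point and push length-prefixed frames."""
    with socket.create_connection((host, port)) as sock:
        while True:
            frame = capture_frame()          # bytes of one encoded video frame
            if frame is None:                # capture stopped
                break
            sock.sendall(struct.pack("!I", len(frame)) + frame)

# Example (needs a listening server at the assumed address):
# frames = [b"frame-1", b"frame-2", b"frame-3"]
# stream_to_server(lambda: frames.pop(0) if frames else None)
```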

Fig: Client Module Flow Diagram

SERVER MODULE

  • The server accepts the client connection on a TCP port. This is the control connection, through which the server obtains the remote client's IP address.

  • The server processes the client information; it then needs to know the TCP/IP port on which the client is listening.

  • Once the Send command is issued, the server begins streaming the video as requested by the client.

  • The server stops sending data to the client when it receives a Stop command from the client or when the video preview at the server is closed.

Fig: Server Module Flow Diagram

6. CONCLUSION

From the analysis of the results, it is clear that a proper balance needs to be achieved between the video resolution and the distance between the client and server systems in the network. For lower resolutions and closer distances the system gives good performance, showing a real-time picture of what is happening in the field, but increasing the video resolution decreases the speed of video transmission; the system's performance remains consistent over certain distances.

In the attached scenario, cameras are fully controlled by the connected motion sensors and can only be triggered by events within the motion sensors' detection range, even though the camera may have a longer capture range. The system thus allows monitoring the area in a panoramic view and provides security in hidden areas that are physically difficult to monitor.

This reduces human interaction with the system and also reduces the number of cameras needed for monitoring. Smart surveillance systems significantly contribute to situation awareness; such systems transform video surveillance from a data acquisition tool into an information and intelligence acquisition system. Real-time video analysis gives smart surveillance systems the ability to react in real time. Our system senses intrusion and sends notifications to authorized persons so that action can be taken in response to the intrusion.

7. FUTURE WORK

Future work on this paper would involve gaining further insight into the issue of dropped packets in our video surveillance application. Including ARQ (automatic repeat request) would therefore be of great benefit in the analysis of any loss of signal.

The evolution of the Internet has spawned some new uses for all those cables running around the planet, and one of them is the IP camera. These are normal video surveillance cameras with a network interface and, most of the time, an embedded server that allows the signal to be broadcast over the Internet. With a computer and Internet access, you can keep a close eye on things from anywhere in the world.

REFERENCES

  1. Na Yan, Ilker Demirkol, and Wendi Heinzelman, "Motion Sensor and Camera Placement Design for In-home Wireless Video Monitoring Systems."

  2. J.-J. Gonzalez-Barbosa, T. Garcia-Ramirez, J. Salas, J.-B. Hurtado-Ramos and J.-d.-J. Rico-Jimenez, "Optimal camera placement for total coverage," in Robotics and Automation, 2009 (ICRA '09), IEEE International Conference on, 2009.

  3. E. Horster and R. Lienhart, "On the optimal placement of multiple visual sensors," in Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks.

  4. J. Loyall, R. Schantz, D. Corman, J. Paunicka and S. Fernandez, "A distributed real-time embedded application for surveillance, detection and tracking of time critical targets," RTAS, 2005.

  5. Tian He, Pascal A. Vicaire, Ting Yan, Liqian Luo, Lin Gu, Gang Zhou, Radu Stoleru, Qing Cao, John A. Stankovic, and Tarek Abdelzaher, "Achieving Real-Time Target Tracking Using Wireless Sensor Networks," 12th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), 2006.

  6. C. Yu and G. Sharma, "Camera scheduling and energy allocation for lifetime maximization in user-centric visual sensor networks," IEEE Transactions on Image Processing, vol. 19, pp. 2042-2055, August 2010.

  7. P. van Beek and M. U. Demircin, "Delay-constrained rate adaptation for robust video transmission over home networks," IEEE International Conference on Image Processing (ICIP '05), Genova, Italy, vol. 2, pp. 173-176, Sept. 2005.

  8. L. Haratcherev, J. Taal, K. Langendoen, R. Lagendijk and H. Sips, "Optimized video streaming over 802.11 by cross-layer signaling," IEEE Communications Magazine, vol. 44, no. 1, pp. 115-121, Jan. 2006.

  9. S. Kompella, S. Mao, Y. T. Hou and H. D. Sherali, "On path selection and rate allocation for video in wireless mesh networks," IEEE/ACM Transactions on Networking, vol. 17, no. 1, pp. 212-224, 2009.

  10. Hampapur, L. Brown, J. Connell, M. Lu, H. Merkl, S. Pankanti, A. Senior, Shu, and Y. Tian, "The IBM smart surveillance system," demonstration, Proc. IEEE CVPR 2004.

  11. Hae-Min Moon and Sung Bum Pan, "A New Human Identification Method for Intelligent Video Surveillance System," © 2010 IEEE.

  12. Wayne Wolf, Burak Ozer, and Tiehan Lv, "Smart cameras for embedded systems," IEEE Computer, 2005.

  13. M. Litzenberger, A. N. Belbachir, P. Schon and C. Posch, "Embedded Smart Cameras for High Speed Vision," ICDSC, 2007.

  14. Thomas Winkler and Bernhard Rinner, "Privacy and Security in Video Surveillance."

  15. Tasleem Mandrupkar, Manisha Kumari and Rupali Mane, "Smart Video Security Surveillance with Mobile Remote Control," IJARCSSE, March 2013.

  16. Simon Moncrieff, Svetha Venkatesh, and Geoff West, "Dynamic Privacy in Public Surveillance," IEEE Computer, 42(9):22-28, September 2009.

  17. Drew Ostheimer, Sebastien Lemay, Mohammed Ghazal, Dennis Mayisela, Aishy Amer and Pierre F. Dagba, "A Modular Distributed Video Surveillance System Over IP," IEEE CCECE/CCGEI, Ottawa, May 2006.

  18. Sven Fleck and Wolfgang Straßer, "Smart Camera Based Monitoring System and its Application to Assisted Living," Proceedings of the IEEE, 96(10):1698-1714, 2008.
