Study of Various Measures to Monitor Social Distancing using Computer Vision: A Review

DOI : 10.17577/IJERTV10IS050194


1Saharsh Arya

Department of Computer Science and Engineering Priyadarshini Institute of Engineering and Technology Nagpur, India

2Leena Patil

Department of Computer Science and Engineering Priyadarshini Institute of Engineering and Technology Nagpur, India

3Ayushi Wadegaonkar

Department of Computer Science and Engineering Priyadarshini Institute of Engineering and Technology Nagpur, India

4Nishad Shinde

Department of Computer Science and Engineering Priyadarshini Institute of Engineering and Technology Nagpur, India

5Palash Gorsia

Department of Computer Science and Engineering Priyadarshini Institute of Engineering and Technology Nagpur, India

Abstract Millions of people worldwide have been impacted by the latest COVID-19 epidemic, and the number of people getting sick is still on the rise. The virus is one of the few pandemics the world has not yet been able to resist; numerous countries have responded by taking extraordinary steps to prevent its spread. We must strive to avoid the spread of infection, which can only be done by socially distancing ourselves from those who might be infected with the virus. Interactions between people that spread the virus, such as shaking hands, helping each other, and passing personal items to one another, are avoided. A number of applications of AI and deep learning to everyday problems have been developed in the past. This review paper details how public locations could be monitored using Python, Deep Learning, and Computer Vision. The social distancing detection tool monitors whether people are maintaining a safe distance from each other by analyzing real-time video streams from a camera, to enforce social distancing protocol in public places and the workplace. The tool can be incorporated into existing security camera systems to track whether people are maintaining a safe distance from each other at work, in warehouses, and in stores. In short, various machine learning techniques and their application to social distancing protocols are studied in detail in our paper.

Keywords Machine Learning, Computer Vision, Social Distancing, Deep Learning.

  1. INTRODUCTION

    In late December 2019, Wuhan, China, identified a new coronavirus disease (COVID-19). The virus spread globally within just a few months of 2020, and the condition was declared a pandemic by the World Health Organization (WHO) in March 2020 [1, 2]. According to WHO figures released on April 18, 2021, 141 million people had been infected across 200 nations, with 3,000,000 fatalities. Despite the increasing number of patients, there was still no proven vaccine or therapy for the infection. Although doctors, healthcare organizations, and

    physicians are constantly trying to develop effective drugs or vaccinations for the deadly virus, no definitive progress had been identified at the time of this study, and no specific therapy or recommendation to avoid or cure this emerging disease had been produced. As a result, everybody in the world is taking steps to prevent the illness from spreading. Due to these adversities, global governments have been compelled to seek alternate methods of limiting the virus's dissemination. As a result, in addition to wearing face masks, social distancing is now considered far more necessary than previously believed, one of the most effective strategies for preventing transmission of the disease, and a mandatory norm in almost all nations. According to WHO guidelines, persons must be separated by at least 6 feet (1.8 m) in order to maintain sufficient social distance [3]. People who have moderate or no symptoms can be carriers of the novel coronavirus infection, according to recent reports [4]. As a result, it is important that everybody maintains self-control and social distance. Many studies, such as [5-7], have shown that social distancing is an efficient nonpharmacological strategy and a key inhibitor in minimizing the spread of infectious diseases like H1N1, SARS, and COVID-19.

    Figure 2 shows how adhering to proper social distancing guidelines minimizes the rate of infection transmission among individuals [8, 9]. A flatter curve with a lower peak that stays within the health system's service capacity makes it simpler for people to combat the virus, since health care organizations can provide them with consistent and timely assistance. Any unanticipated sharp increase in infection incidence (as seen in Figure 2) would overwhelm services and, as a result, cause an exponential rise in the number of fatalities.

    Governments attempted to introduce a series of social distancing policies during the COVID-19 pandemic, such as banning transport, policing boundaries, shutting pubs and bars, and alerting society to keep a gap of 1.6 to 2 meters between them [10]. Monitoring the propagation of contamination and the effectiveness of the restraints, on the other hand, is a difficult challenge. People must go out for basic necessities such as food,

    medical treatment, and other activities and occupations. As a result, several technology-based solutions [11, 12] and AI-related studies [13-15] have attempted to assist the health and medical communities in dealing with COVID-19 issues and effective social distancing activities. These projects range from patient identification and location via GPS to crowd control and segmentation.

    Artificial Intelligence (AI) can help with social distancing tracking in these circumstances. Computer Vision, as a subfield of Artificial Intelligence, has proven its promise in chest CT scan and X-ray based COVID-19 detection [16, 17] and may also help with social distancing tracking. In addition, deep neural networks allow us to extract complex features from data, which can then be analyzed to provide a more detailed interpretation of the videos. Applications span COVID-19 prevention and surveillance as well as evaluation, health management, and recovery.

  2. SURVEY OF LITERATURE

    Dr. Syed Ameer Abbas and his co-authors suggested utilizing Raspberry Pi and OpenCV to create a framework for human monitoring and crowd control in 2017. Via OpenCV, a cascade classifier was trained for head detection from the scene using Haar features. The aim was to use a projector and a Raspberry Pi 3 with a quad-core ARMv8 central processing unit to film the crowded scene and process the footage frame by frame. The crowd count is calculated and compared against a threshold, and if it exceeds the threshold, appropriate prevention measures are triggered [18].

    Joel Joseph Joy and co-authors suggested a traffic density recognition method based on image analysis in 2018. The length of the queue and the density of traffic are measured using the camera's photographs. To handle the principle of partial truth, fuzzy logic was applied to the video input; under this concept, a result may lie anywhere between fully accurate and completely false [19].

    Adrian Rosebrock wrote an article in 2020 about a social distancing detector that uses OpenCV, Computer Vision, and Deep Learning. The article reflects on social distance control via CCTV cameras mounted along streets during the pandemic. As a social distancing tracker, the camera records the distance between individuals in pixels and relates it to a standard measurement. The file.py script, which is responsible for looping across frames of a video stream and ensuring that individuals keep a safe distance from one another, contains the social distance detector logic. It works for video files as well as camera streams [20].

    In 2019, Neel Bhave and his co-authors suggested a fully functioning model that combines a reinforcement learning model with object tracking algorithms. They used YOLO (You Only Look Once) real-time object detection in this experiment, which has fewer flaws, is much simpler, produces reliable outcomes, and can be trained for over 200 classes. Reinforcement learning is a form of machine learning that calculates the green-light timing based on the current traffic situation and learns from the action taken [21].

    1. Expanding upon the attributes of COVID-19

      The COVID-19 literature has only recently started to proliferate; one strand of it attempts to determine the safest interpersonal distance at which buildings and industries can reopen. One study instituted a modified Wells-Riley model in order to take into account the potential for airborne virus transmission, adding variables that are not part of the original Wells-Riley model and running preliminary infection simulations [22]. The authors stated that 1.6 to 3 meters is an adequate social distance for normal exhalation, but that it may be insufficient for speech, coughing, sneezing, and so on; if low air circulation is to be regarded, a distance of at least 8.2 m is required. When the model was run to completion, infections among hosts dropped by roughly 20% to 40% once distancing was taken into account, which gave the authors more confidence in their hypothesis.

      Another piece of research looked into whether the virus could be transmitted through the air, since this would determine which prevention measures best slow the dissemination of the disease [23]. Ferrets used in the research confirmed that SARS-CoV-2 is highly transmissible via all sorts of close contact and that it can also spread through the air: infection occurred within one to three days of direct exposure, and within three to seven days of indirect exposure. This research presented an experimental demonstration that the virus can be transmitted by air, supporting the distancing tactics already used in many nations. A further study examined the effect of social distancing in ten different countries, each contributing 10,000 to 50,000 individuals as participants [24]. The authors based their work on confirmed cases and took a deeper look at the deaths associated with clinical COVID-19 in countries such as Spain, Italy, and Iran, to show how significant its effects are; these countries' reactions to the outbreak between January 11 and May 2, 2020 were examined. Studies of social distancing and its influence on COVID-19 are used to ascertain how successfully such techniques contain the disease. In line with studies that indicated a decrease in the number of cases after the introduction of social interventions, a timeframe of one to four weeks was needed before the effect became demonstrable; since the interventions were quite different, the findings differed across the various countries [25]. The authors also used Google community mobility data covering 58 nations to research the subject, and their analysis concluded that measures which identified and sought to mitigate uncertainty were the most successful steps to control the virus.

      On the other side, another scientific study pursued a particular method to determine the importance of social distancing: the authors created a method to estimate the number of persons believed to be asymptomatic carriers [26]. They took into account both self-imposed isolation and government-imposed social distancing to see how each could predict and control an epidemic, and they discovered that both those who isolated themselves before the illness manifested and those who reduced contact helped slow the progression of the situation. On the basis of this finding, they concluded that social distancing is critical to preventing the spread of the disease and argued for enforcing a measure of strict social distancing.

      Another study asked whether distancing could actually be efficient, and whether any aspects of the restrictions could be relaxed without provoking a second wave; more than half of the participants complied with the distancing measures [27]. The authors were able to demonstrate that social distancing strategies reduced the virus and infection rate. To control the spread of the virus, they showed that the identification (testing) rate needs to increase, complemented by protective measures such as keeping a sanitary distance from infected persons.

    2. Pedestrian detection

      Modern technological advances have made possible an increased use of target recognition in today's computer vision. Pedestrian detection has improved to the point that we encounter it in our daily routines: thanks to pedestrian aids and the efficient use of intelligent driving systems, improved driver control, intelligent traffic systems, and automated driving systems have become standard in cars. Although there are several algorithmic strategies to work with, applying deep learning remains a major problem: the erratic effect of pose changes on identification, together with occlusion induced by the movement of people in the scene, complicates detection. A Mask R-CNN based study postulated that internal appearance attributes were less essential for distinguishing patterns than eliminating sources of contamination in the input [28]. According to tests carried out on a pedestrian dataset, the proposed algorithm was stronger at detection than other approaches that had previously been used. Another work attempted to build a novel classification scheme for pedestrians from scratch, without deriving any type-specific pre-mined data [29]; this gave recognition rates between 96.73% and 100%. A complementary part of their real-time pedestrian detection scheme was to aid autonomous vehicles, in the form of extra guidance to non-driver operators. The problem with automated vehicles and pedestrian identification systems can only be addressed by including an extra layer of protection for pedestrians: in certain cases, such as hazy weather or scenes with many vehicles or people, it is harder to distinguish pedestrians from the background.
      Three different methods are presented in another paper to improve computational efficiency using deep learning [30]. The authors built on standard convolutions with various separable convolution techniques to reduce the number of parameters and the computational expense. Images of pedestrians were augmented to make up a larger database for analysis in poor lighting conditions. Their test results found that the pedestrian detections could become less clear when subjects were obscured by cloudiness or haze.

      Another work introduced a new paradigm focused on how camera angle affects the appearance of objects [31]. The proposed approach combined a fast detection network with an ensemble of cascaded single-shot detectors (Hierarchical SSD) and an innovative training methodology that locates regions of interest (ROI) by capturing each item from many different camera angles. The authors conducted their own study and concluded that the procedure was superior to other orthodox approaches.

      Pedestrian identification is a very difficult challenge for autonomous driving systems, since it must both locate a person in the environment and track potential changes in that location; different strategies split these two functions into two groups: techniques that test for a pedestrian's existence, and techniques that characterize its state. A novel two-stream spatio-temporal network concept was introduced in [32]. The authors mapped the pedestrians in three dimensions and were able to incorporate temporal details into the model for each of them, giving a more precise, lifelike representation than had previously been possible. As well, the detected objects were assigned to an interaction graph, which assisted in gathering data about the objects located nearby.

    3. Face Recognition

    Due to the coronavirus, the WHO has recommended certain steps that have had to be put into practice, including the use of face masks in daily public life. A recent study proposed a method combining deep learning and classical machine learning for face mask detection [33]. It was based on a ResNet50-derived module, used in the first stage to extract features, while the second stage used ensemble machine learning approaches, SVM and decision trees, to classify the face masks. The three classifiers distinguished the images with 99.64%, 99.49%, and 100% precision, respectively.

    Factors such as facial expression, movement, and changing lighting conditions have a distinct impact on the performance of face recognition systems. Researchers have developed a new face recognition system that links the local binary pattern (LBP) with deep convolutional neural networks [34]. This process used facial texture features together with CNNs for feature extraction, refined by five different network architectures in order to train and obtain classification results. With this tool, the results were an excellent 100% and 97.51% on their respective databases. Further tests assisted with the precision while, at the same time, deepening the understanding of the learned descriptions. Since this research showed that both face recognition and face detection are important in the present day, a proposal was also made for a new face recognition implementation [35]. The suggested approach applied spatial and channel attention to a hierarchy of features, with varying degrees of supervision. The level of accuracy was 99.5%, which enabled the authors both to broaden the use of the algorithms and to introduce them to the general public for a new social use.

    Applying a bounding box across a masked face causes most face recognition algorithms to fail because of their inefficiency in distinguishing the face from the surroundings. Thus, Lin et al. suggested a model developed from a more advanced version of Mask R-CNN that sought to enhance the face segmentation and detection processes together; the name they gave to this model was G-Mask. Their methodology used ResNet-101 for feature extraction, then applied the resulting regions of interest (ROIs) to an FCN branch for segmentation. The detection framework used the Generalized Intersection over Union (GIoU) as the bounding box loss function to improve detection accuracy. They found that this approach was more efficient than traditional approaches (like Faster R-CNN) and, more importantly, that their model was more capable of producing results than the majority of competing models.

  3. TECHNICAL CAMERA SURVEILLANCE METHODS

    The smartphone camera captures video frames to track social distancing. Frames are rendered one by one and passed in a loop to the image detection algorithm. Image processing is done for noise removal, and each frame is then passed to the trained pedestrian model. The output image gets a rectangular bounding box drawn around any person identified in it. The classified image with persons is then fed to a distance calculation using the Euclidean algorithm.
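    The distance-check step at the end of this pipeline can be sketched in plain Python; the bounding-box format and the pixel threshold below are illustrative assumptions, not values from the system itself.

```python
import math
from itertools import combinations

MIN_DISTANCE = 180  # assumed safe distance in pixels (calibration-dependent)

def centroid(box):
    """Centre point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def find_violations(detections, min_distance=MIN_DISTANCE):
    """Return index pairs of detected persons closer than min_distance."""
    centres = [centroid(b) for b in detections]
    violations = []
    for (i, a), (j, b) in combinations(enumerate(centres), 2):
        if math.dist(a, b) < min_distance:  # Euclidean distance
            violations.append((i, j))
    return violations

# Example frame: three detected persons
boxes = [(10, 10, 60, 160), (80, 12, 130, 158), (400, 10, 450, 160)]
print(find_violations(boxes))  # the first two boxes are close together
```

In a full system this check would run once per frame, and the returned pairs would be drawn in red on the output image.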

    1. Predicting Pedestrian

      The pedestrian prediction model is trained in the TensorFlow environment. The following steps are required in order to train the model.

      • Gathering Images and labeling data

      • TF Records Creation

      • Training and Exporting model.

        For gathering the data set, the images are downloaded from the INRIA Person data set and the Penn-Fudan database, and the different image formats are converted to JPEG. 80% of the images are kept in the training folder and 20% in the testing folder. TensorFlow needs a large number of images with various backgrounds to train the model with good precision and accuracy. After downloading the data set, all the images are labelled by drawing a rectangular box on the target object using the LabelImg tool. The whole working process of the system is explained in Fig 1.

        Fig 1. System Flow.
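        The 80/20 train/test split described above can be sketched as follows; the file-naming pattern is hypothetical.

```python
import random

def split_dataset(filenames, train_frac=0.8, seed=42):
    """Shuffle and split image filenames into train/test lists (80/20)."""
    files = sorted(filenames)
    rng = random.Random(seed)  # fixed seed gives a reproducible split
    rng.shuffle(files)
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]

# Hypothetical filenames standing in for the downloaded JPEG images
images = [f"person_{i:04d}.jpg" for i in range(100)]
train, test = split_dataset(images)
print(len(train), len(test))  # 80 20
```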

    2. Medical Investigations

      Many medical and pharmaceutical experts are working on a vaccine for the COVID-19 infectious disease, but no definitive cure has yet been discovered. Controlling the dissemination of the virus in public spaces, on the other hand, is a problem that AI, computer vision, and robotics can assist with. According to a number of studies with diverse implementation methods [36], social distancing is an important way to limit the dissemination of the virus and deter its spread in the population. The Susceptible, Infectious, or Recovered (SIR) model has been used by many scholars, including [37]. SIR is an epidemiological modelling system that calculates the theoretical number of individuals contaminated with an infectious disease in a specific population over time. The Kermack and McKendrick model, introduced in 1927 [38], is one of the earliest and most often used SIR models. Eksin et al. [37] recently published a revised SIR model that includes a social distancing parameter, which can be used to estimate the number of contaminated and healed persons.
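      A minimal sketch of SIR dynamics with a distancing term illustrates the idea; the parameter values are illustrative, and scaling the contact rate by a distancing factor is a simplification, not the exact formulation of Eksin et al. [37].

```python
def simulate_sir(beta=0.3, gamma=0.1, distancing=0.0, days=160, dt=1.0):
    """Euler integration of the SIR model; returns the peak infected fraction.

    distancing in [0, 1] scales down the contact rate beta, a simplified
    stand-in for a social-distancing parameter.
    """
    s, i, r = 0.99, 0.01, 0.0            # fractions of the population
    eff_beta = beta * (1.0 - distancing)
    peak = i
    for _ in range(int(days / dt)):
        new_inf = eff_beta * s * i * dt  # S -> I transitions
        new_rec = gamma * i * dt         # I -> R transitions
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print(round(simulate_sir(distancing=0.0), 3))  # higher infection peak
print(round(simulate_sir(distancing=0.5), 3))  # flatter curve with distancing
```

The lower peak under distancing is the "flattening the curve" effect discussed earlier in connection with Figure 2.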

      The aim of object recognition is to find and identify objects in images [39]. Face mask detection and physical distance detection are both object identification problems. Object recognition algorithms have progressed over the last two decades. Since 2014, deep learning has been used in target detection, resulting in significant improvements in accuracy and detection speed [40].

      Detection of objects is classified into two types: two-stage detection, such as Region-Based Convolutional Neural Networks (R-CNN), and one-stage detection, such as You Only Look Once (YOLO). Two-stage detectors are more accurate at localization and target detection, whereas one-stage detectors are faster at inference [41]. Girshick et al. [42] implemented R-CNN, or regions with CNN features, which introduces four steps: first, it selects multiple regions from an image as

      object candidate boxes, then rescales them to a fixed-size image. Second, each region's features are extracted using a CNN. Finally, an SVM classifier is used to predict the category of the bounding boxes based on the characteristics of each region [43]. However, since candidate boxes overlap, feature extraction for each area is resource-intensive and time-consuming, requiring the model to conduct repetitive computation. Fast R-CNN solves the problem by feeding the entire picture to the CNN once for feature extraction [44].

      Faster R-CNN reduces the number of candidate boxes by replacing selective search with a Region Proposal Network (RPN) to speed up the R-CNN pipeline [45]-[47]. The Faster R-CNN detector [48] is a near-real-time detector.

      Face mask identification determines whether or not a human in a photograph is wearing a mask [49], [50]. Physical distance recognition detects the actual distance between people in a photo after first recognizing them in the picture [51]. Since the start of the COVID-19 pandemic, researchers have been looking into detecting face masks and the physical distancing of crowds. Jiang et al. [49] used RetinaFaceMask, a one-stage detector that used a feature pyramid network to fuse high-level semantic content. They added an attention layer to make it easier to detect the face mask in a picture. Compared to previously built versions, they had better detection accuracy. Militante and Dionisio [52] used the VGG-16 CNN model to detect whether or not people were wearing a face mask and got 96 percent precision. Rezaei and Azarmi [53] created a model based on YOLOv4 to detect the use of a face mask and physical distancing. They trained their model on several publicly available large datasets and were able to obtain a precision of 99.8% for real-time detection. Ahmed et al. [51] used YOLOv3 to detect individuals in a crowd, and used the Euclidean distance to determine how far apart two people were physically. They used a monitoring algorithm to detect individuals in videos who breached the physical separation. They started with a 92 percent accuracy rate and then improved that to 98 percent by using transfer learning.

      Drones are used in building projects to take real-time video, providing aerial views that aid in the early detection of issues. Large-scale image data from drones was obtained in research experiments, and deep learning algorithms were used to locate objects. Asadi and Han [54] created a method to increase data collection from building drones. To monitor moving events, Li et al. [55] used deep learning object recognition networks on video taken by drones. Drone data may be used to identify staff, determine if they're wearing face masks, and check whether they practice physical distancing.

    3. Detection of physical distance

      The physical distancing detector model created by Roth [57] was used in the study. People identification, image transformation, and distance estimation are the stages in the model's detection of physical distance. Roth [58] trained models on the COCO data collection, which contains about 120,000 images, using the TensorFlow object detection model zoo.
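      The image-transformation stage can be illustrated with the perspective mapping it relies on; the 3x3 homography below is a hypothetical example of the kind of matrix cv2.getPerspectiveTransform would produce for a bird's-eye view.

```python
def apply_homography(h, point):
    """Map an image point to ground-plane coordinates with a 3x3 homography.

    h is a 3x3 matrix given as nested lists; here a hypothetical example
    matrix is used in place of one computed from a real camera view.
    """
    x, y = point
    u = h[0][0] * x + h[0][1] * y + h[0][2]
    v = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return (u / w, v / w)  # perspective divide

# Hypothetical homography: a pure scaling of pixels to ground units
H = [[0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, (200, 100)))  # (100.0, 50.0)
```

Distances measured after this mapping correspond to ground-plane distances, which makes the subsequent distance-estimation stage perspective-independent.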

    4. The Faster R-CNN model

      The Faster R-CNN model's schematic design is shown in Figure 3. The Region Proposal Network (RPN) is used in Faster R-CNN, with Fast R-CNN as the detector network. To extract the features, the input picture is passed through the Convolutional Neural Network (CNN) backbone. The RPN then recommends bounding boxes for the Region of Interest (ROI) pooling layer, which is used to pool the image's features.

      Fig 3. A Schematic Architecture of the Faster RCNN
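      The RPN's starting point can be sketched by enumerating anchor boxes over the feature-map grid; the stride, scales, and ratios below are illustrative values, not the configuration of any particular implementation.

```python
def generate_anchors(feat_w, feat_h, stride=16,
                     scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Enumerate anchor boxes (cx, cy, w, h) over a feature-map grid.

    In Faster R-CNN, the RPN scores and refines anchors like these to
    produce the region proposals fed to the ROI pooling layer.
    """
    anchors = []
    for gy in range(feat_h):
        for gx in range(feat_w):
            # Centre of this grid cell in input-image pixel coordinates
            cx, cy = (gx + 0.5) * stride, (gy + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * (r ** 0.5)   # aspect ratio r = w / h
                    h = s / (r ** 0.5)
                    anchors.append((cx, cy, w, h))
    return anchors

anchors = generate_anchors(4, 4)
print(len(anchors))  # 4 * 4 * 3 * 3 = 144 anchors
```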

    5. Artificial Intelligence

      Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines. Leading AI textbooks define the field as the study of intelligent agents: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term artificial intelligence is often used to describe machines (or computers) that mimic cognitive functions that humans associate with the human mind, such as learning and problem solving. As machines become increasingly capable, tasks considered to require intelligence are often removed from the definition of AI, a phenomenon known as the AI effect: AI is whatever hasn't been done yet. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Modern machine capabilities generally classified as AI include understanding human speech, competing at the highest level in strategic game systems, autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

    6. Python

      Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage-collected. Python is often described as a "batteries included" language due to its comprehensive standard library. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It also features dynamic name resolution, which binds method and variable names during program execution.
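      The dynamic typing, reference counting, and dynamic name resolution mentioned above can be demonstrated in a few lines:

```python
import sys

# Dynamic typing: the same name can be rebound to objects of different types.
value = 42
assert isinstance(value, int)
value = "forty-two"
assert isinstance(value, str)

# Reference counting: CPython frees an object when its count drops to zero.
data = [1, 2, 3]
alias = data                       # second reference to the same list
print(sys.getrefcount(data) > 2)   # data, alias, and the call argument

# Dynamic name resolution: method names are looked up at run time.
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
Greeter.greet = lambda self: "hi"  # rebinding changes later look-ups
print(g.greet())  # hi
```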

    7. OpenCv

      OpenCV Python is a library of Python bindings designed to solve computer vision problems. Python is a general-purpose programming language started by Guido van Rossum that became very popular very quickly, mainly because of its simplicity and code readability. It enables the programmer to express ideas in fewer lines of code without reducing readability.

      Compared to languages like C/C++, Python is slower. That said, Python can be easily extended with C/C++, which allows us to write computationally intensive code in C/C++ and create Python wrappers that can be used as Python modules. This gives us two advantages: first, the code is as fast as the original C/C++ code (since it is the actual C++ code working in the background) and second, it is easier to code in Python than C/C++. OpenCV-Python makes use of NumPy, which is a highly optimized library for numerical operations with a MATLAB-style syntax. All the OpenCV array structures are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries that use NumPy, such as SciPy and Matplotlib. OpenCV-Python is the Python API for OpenCV, combining the best qualities of the OpenCV C++ API and the Python language.
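      Because OpenCV images are plain NumPy arrays, whole-image operations reduce to array expressions; the tiny synthetic image below stands in for one loaded with cv2.imread.

```python
import numpy as np

# An OpenCV image is a NumPy ndarray of shape (height, width, channels)
# with dtype uint8; here a synthetic 2x2 BGR image replaces a real photo.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

inverted = 255 - img        # invert the whole image in one expression
gray = img.mean(axis=2)     # naive per-pixel average as a grayscale stand-in

print(img.shape)       # (2, 2, 3)
print(inverted[1, 1])  # [0 0 0]
```

The same arrays can be passed directly to SciPy or plotted with Matplotlib, which is the interoperability advantage described above.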

    8. Detection of a mask on the face

      Five separate object detection models from the TensorFlow object detection model zoo [56] were trained and evaluated on the face mask dataset to compare their accuracy and determine the best model for face mask detection.

    9. TensorFlow

      TensorFlow is an open-source library for fast numerical computing. It was created and is maintained by Google and released under the Apache 2.0 open-source license. The API is nominally for the Python programming language, although there is access to the underlying C++ API. Unlike other numerical libraries intended for use in deep learning, such as Theano, TensorFlow was designed for use both in research and development and in production systems; it can run on single-CPU systems, GPUs, mobile devices, and large-scale distributed systems of hundreds of machines. Computation is described in terms of data flow and operations in the structure of a directed graph. Nodes: nodes perform computation and have zero or more inputs and outputs. Data that moves between nodes is known as a tensor, a multi-dimensional array of real values. Edges: the graph defines the flow of data, branching, looping, and updates to state. Special edges can be used to synchronize behavior within the graph, for example waiting for computation on a number of inputs to complete.
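The node-and-edge structure described above can be sketched as a toy dataflow graph in plain Python; this is a conceptual illustration of the idea, not TensorFlow's actual API:

```python
# A toy dataflow graph mirroring the structure described in the text:
# nodes perform computation, edges carry values (tensors) between them.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # callable performing this node's computation
        self.inputs = inputs  # upstream nodes whose outputs feed this one

    def run(self):
        # Evaluate upstream nodes first, then apply this node's operation.
        return self.op(*(n.run() for n in self.inputs))

# Constants are zero-input nodes; add/mul nodes consume their outputs.
a = Node(lambda: 2.0)
b = Node(lambda: 3.0)
s = Node(lambda x, y: x + y, a, b)   # s = a + b
m = Node(lambda x, y: x * y, s, b)   # m = s * b

print(m.run())  # 15.0
```

Describing computation as a graph rather than as imperative statements is what lets TensorFlow schedule nodes across CPUs, GPUs, and distributed machines.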

    10. YOLOv3

    YOLOv3 is the latest variant of the popular object detection algorithm YOLO (You Only Look Once). The published model recognizes 80 different objects in images and videos; most importantly, it is very fast and nearly as accurate as the Single Shot MultiBox Detector (SSD). First, it divides the image into a 13×13 grid of cells. The size of these 169 cells varies depending on the size of the input; for the 416×416 input size used in our experiments, the cell size was 32×32. Each cell is then responsible for predicting a number of boxes in the image. For each bounding box, the network also predicts the confidence that the bounding box actually encloses an object, and the probability of the enclosed object being a particular class. Most of these bounding boxes are eliminated because their confidence is low or because they enclose the same object as another bounding box with a very high confidence score; this technique is called non-maximum suppression. OpenCV offers easy integration: if your application already uses OpenCV and you simply want to use YOLOv3, you do not have to worry about compiling and building the extra Darknet code. OpenCV's CPU implementation of the DNN module is also remarkably fast, roughly 9x faster than Darknet: Darknet used with OpenMP takes about 2 seconds on a CPU for inference on a single image, whereas OpenCV's implementation runs in a mere 0.22 seconds. Finally, Darknet is written in C and does not officially support Python, whereas OpenCV does.
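The non-maximum suppression step described above can be sketched in plain Python; the boxes and scores below are made-up values for illustration:

```python
def iou(box1, box2):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two boxes over the same object plus one distinct box elsewhere.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]
```

The second box is suppressed because its overlap with the top-scoring box exceeds the IoU threshold, while the distant third box survives.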

  4. FUTURE WORK AND LIMITATIONS

    Many of the suggested algorithms attempt to reduce the frequency of social distance breaches rather than eliminate them. When people are seen in public spaces, businesses are still operating, which raises many red flags in such a system. However, an uptick in distance breaches in particular areas, such as cashier lines, could prompt enforcement. We propose combining the method with a facial recognition algorithm, which would make contact tracing even easier.
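A minimal sketch of how distance breaches might be counted per frame, assuming person centroids (in pixels) have already been obtained from a detector; the coordinates and threshold below are illustrative, and a real system would calibrate pixel distances to metres:

```python
import math

# Hypothetical person centroids (in pixels) from one video frame.
centroids = [(100, 200), (130, 210), (400, 400)]
MIN_DISTANCE = 50  # illustrative pixel threshold for "too close"

# Count pairs of people standing closer than the threshold.
violations = 0
for i in range(len(centroids)):
    for j in range(i + 1, len(centroids)):
        if math.dist(centroids[i], centroids[j]) < MIN_DISTANCE:
            violations += 1

print(violations)  # 1
```

Aggregating such counts per region over time is what would let the system flag persistent hotspots, like cashier lines, rather than one-off breaches.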

    When an infection occurs, the details of every person encountered in public spaces, and of the individuals with whom they have come into contact, are collected in a database and retrieved. This helps to speed up the contact tracing process and gives greater control over the disease's dissemination.
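One minimal way to sketch such a contact database is an in-memory contact graph; the identifiers and data layout below are illustrative, not the actual design of any cited system:

```python
from collections import defaultdict

# For each person we record who they were seen near, so that contacts can
# be retrieved when an infection is reported.
contacts = defaultdict(set)

def record_encounter(a, b):
    """Log a mutual encounter between two tracked individuals."""
    contacts[a].add(b)
    contacts[b].add(a)

record_encounter("person_1", "person_2")
record_encounter("person_2", "person_3")

# On a reported infection, retrieve everyone the case came into contact with.
infected = "person_2"
print(sorted(contacts[infected]))  # ['person_1', 'person_3']
```

A production system would replace the in-memory dictionary with a persistent store and attach timestamps and locations to each encounter.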

  5. CONCLUSION

We have discussed the importance of social distancing. We have described various research papers linked to image processing and computer vision that identify social distancing using images or video feeds. We have also described various case studies carried out by different research groups on identifying social distancing and performing face recognition, and discussed surveillance methods and their integration with AI and deep learning models. We have also thrown some light on the medical applications of the models generated by AI techniques. Finally, we have discussed coding environments such as Python, OpenCV, TensorFlow, and YOLO, used to develop applications for social distance monitoring, pedestrian surveying, and masked-face identification.

REFERENCES

  1. World Health Organisation. WHO Coronavirus Disease Dashboard (August 2020). Available at https://covid19.who.int/table.

  2. WHO Director, Generals. Opening remarks at the media briefing on COVID-19 (2020). WHO generals and directors speeches.

  3. Olsen, S. J. et al. Transmission of the severe acute respiratory syndrome on aircraft, New Engl. J. Medicine 349, 2416-2422, DOI: 10.1056/NEJMoa031349 (2003).

  4. Adlhoch, C. et al.Considerations relating to social distancing measures in response to the COVID-19 epidemic, European Centre for Disease Prevention and Control, Technical report, 2020.

  5. Ferguson, N. M. et al. Strategies for mitigating an influenza pandemic, Nature 442, 448-452, 2006, DOI: https://doi.org/10.1038/nature04795.

  6. Thu, T. P. B., Ngoc, P. N. H., Hai, N. M. et al. Effect of the social distancing measures on the spread of COVID-19 in 10 highly infected countries, Science of the Total Environment, 140430, DOI: https://doi.org/10.1016/j.scitotenv.2020.140430 (2020).

  7. Morato, M. M., Bastos, S. B., Cajueiro, D. O. NormeyRico, J. E, An optimal predictive control strategy for COVID-19 (SARS-CoV-2) social distancing policies in Brazi, Elsevier Annu. Rev. Control. DOI: https://doi.org/ 10.1016/j.arcontrol.2020.07.001(2020).

  8. Fong, M. W. et al, Nonpharmaceutical measures for pandemic influenza in nonhealthcare settingssocial distancing measures, Emerging. infectious diseases 26, 976. DOI: 10.3201/eid2605.190995(2020).

  9. Ahmed, F., Zviedrite, N., Uzicanin, A. Effectiveness of workplace social distancing measures in reducing influenza transmission: a systematic review, BMC Public Health, 1-13, DOI: 10.1186/s12889-018-5446-1 (2018).

  10. Australian Government Department of Health. Deputy chief medical officer report on COVID-19. Dep. Heal. Soc. distancing for coronavirus DOI: https://doi.org/10.1136/bmj.m1845 (2020).

  11. Rezaei, M. Shahidi, M. Zero-shot learning and its applications from autonomous vehicles to COVID-19 diagnosis: A review, SSRN Mach. Learn. J. 3, 127, DOI:10.2139/ssrn.3624379 (2020).

  12. Togac¸ar, M., Ergen, B. Comert, Z. COVID-19 detection using ¨ deep learning models to exploit social mimic optimization and structured chest X-ray images using fuzzy color and stacking approaches, Comput. Biol. Medicine 103805, DOI: https://doi.org/10.1016/j.compbiomed.2020.103805 (2020).

  13. Ulhaq, A., Khan, A., Gomes, D. Paul, M. Computer vision for COVID19 control: A survey, Image Video Process, DOI:10.31224/osf.io/yt9sx (2020).

  14. Nguyen, T. T. Artificial intelligence in the battle against coronavirus (COVID-19): a survey and future research directions, arXiv Prepr. 10, DOI: 10.13140/RG.2.2.36491. 23846/1 (2020).

  15. Choi, W. Shim, E. Optimal strategies for vaccination and social distancing in a game-theoretic epidemiological model, J.Theor.Biol. 110422, DOI: https://doi.org/10. 1016/j.jtbi.2020.110422 (2020).

  16. Eksin, C., Paarporn, K., Weitz, J. S. Systematic biases in disease forecasting: the role of behavior change, J. Epidemics, 96-105, DOI: 10.1016/j.epidem.2019.02.004 (2019).

  17. Kermack, W. O., McKendrick, A. G. A contribution to the mathematical theory of epidemics-I, The Royal Society Publishing, DOI: https://doi.org/10.1098/rspa.1927.0118 (1991).

  18. Dr. S. Syed Ameer Abbas, M. Anitha, X. Vinitha Jain. Realization of Multiple Human Head Detection and Direction Movement Using Raspberry Pi ,Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi, This full-text paper was peerreviewed and accepted to be presented at the IEEE WiSPNET 2017 conference.

  19. Joel Joseph Joy Manali Bhat, Namrata Verma, Milind Jani. Traffic Management Through Image Processing and Fuzzy Logic, D.J.Sanghvi College of Engineering, Mumbai, India, Proceedings of the Second International Conference on Intelligent Computing and Control Systems(ICICCS 2018), IEEE Xplore Compliant Part Number: CFP18K74- ART; ISBN: 978-1-5386-2842-3.

  20. Article on OpenCV social distancing detector by Adrian Rosebrock on June 1, 2020, on pyimagesearch.

  21. Neel Bhave, Aniket Dhagavkar, Kalpesh Dhande, Monis Bana, Jyoti Joshi. Smart Signal-Adaptive Traffic Signal Control using Reinforcement Learning and Object Detection, Department of IT, RAIT, Nerul, Maharashtra, India, Proceedings of the Proceedings of the Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC 2019), IEEE Xplore Part Number: CFP19OSV- ART; ISBN:978-1-7281-4365-1.

  22. Sun, C. and Zhai, Z. The efficacy of social distance and ventilation effectiveness in preventing COVID-19 transmission, Sustainable Cities and Society 62,p.102390(2020).

  23. Richard, M., Kok, A., de Meulder, D., Bestebroer, T., Lamers, M., Okba, N., Fentener van Vlissingen, M., Rockx, B., Haagmans, B., Koopmans, M., Fouchier, R. and Herfst, S. SARS-CoV-2 is transmitted via contact and via the air between ferrets, Nature Communications, 11(1) (2020).

  24. Thu, T., Ngoc, P., Hai, N. and Tuan, L. Effect of the social distancing measures on the spread of COVID-19 in 10 highly infected countries, Science of The Total Environment 742, p.140430(2020).

  25. Huynh, T. Does culture matter social distancing under the COVID-19 pandemic?, Safety Science 130, p.104872(2020).

  26. Aldila, D., Khoshnaw, S., Safitri, E., Anwar, Y., Bakry, A., Samiadji, B., Anugerah, D., GH, M., Ayulani, I. and Salim, S. A mathematical study on the spread of COVID-19 considering social distancing and rapid assessment: The case of Jakarta, Indonesia, Chaos, Solitons Fractals 139, p.110042(2020).

  27. Wu, J., Tang, B., Bragazzi, N., Nah, K. and McCarthy, Z. Quantifying the role of social distancing, personal protection and case detection in mitigating COVID-19 outbreak in Ontario, Canada, Journal of Mathematics in Industry 10(1)(2020).

  28. Yu, W., Kim, S., Chen, F. and Choi, J. pedestrian Detection Based on Improved Mask R-CNN Algorithm, Advances in Intelligent Systems and Computing, 2020, pp 1515-1522.

  29. Pranav, K. and Manikandan, J. Design and Evaluation of a Realtime Pedestrian Detection System for Autonomous Vehicles, Zooming Innovation in Consumer Technologies Conference (ZINC),2020.

  30. Li, G., Yang, Y. and Qu, X. Deep Learning Approaches on Pedestrian Detection in Hazy Weather, IEEE Transactions on Industrial Electronics,67(10), 2020, pp.8889-8899.

  31. Saeidi, M. and Ahmadi. A novel approach for deep pedestrian detection based on changes in camera viewing angle, Signal, Image and Video Processing, 14(6), 2020, pp.1273-1281.

  32. Zhang, Z., Gao, J., Mao, J., Liu, Y., Anguelov, D. and Li, C. STINet: Spatio- Temporal-Interactive Network for Pedestrian Detection and Trajectory Prediction.2020, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

  33. Loey, M., Manogaran, G., Taha, M. and Khalifa, N. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the COVID-19 pandemic, Measurements167, p.108288(2020).

  34. Tang, J., Su, Q., Su, B., Fong, S., Cao, W. and Gong, X. Parallel ensemble learning of convolutional neural networks and local binary patterns for face recognition, Computer Methods and Programs in Biomedicine197, p.105622(2020).

  35. WANG, Y., ZHANG, X., YE, J., SHEN, H., LIN, Z. and TIAN, W. Mask-wearing recognition in the wild, SCIENTIA SINICA Informationis, 50(7), 2020, pp.1110-1120.

  36. Choi, W., Shim, E. Optimal Strategies for Vaccination and Social Distancing in a Game-theoretic Epidemiological Model, J. Theor. Biol, 2020, 110422.

  37. Eksin, C.; Paarporn, K.; Weitz, J.S. Systematic biases in disease forecastingThe role of behavior change, J. Epid, Feb 2019, pp 96105. [CrossRef].

  38. Kermack, W.O.; McKendrick, A.G. A Contribution to the Mathematical Theory of Epidemics-I, The Royal Society Publishing: London, UK, 1991.

  39. L. Liu et al. Deep Learning for Generic Object Detection: A Survey, Int. J. Comput. Vis., vol. 128, no. 2, Feb 2020 , pp. 261318, doi: 10.1007/s11263-019-01247-4.

  40. Z. Q. Zhao, P. Zheng, S. T. Xu, and X. Wu. Object Detection with Deep Learning: A Review, IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, Institute of Electrical and Electronics Engineers Inc.,Nov. 01, 2019, pp. 32123232, doi: 10.1109/TNNLS.2018.2876865.

  41. L. Jiao et al. A survey of deep learning-based object detection, IEEE Access, vol. 7, 2019, pp. 128837 128868, doi: 10.1109/ACCESS.2019.2939201.

  42. R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Sep. 2014, pp. 580587, doi: 10.1109/CVPR.2014.81.

  43. Z. Zou, Z. Shi, Y. Guo, and J. Ye. Object Detection in 20 Years: A Survey, 2019, Accessed: Aug. 02, 2020. [Online]. Available: http://arxiv.org/abs/1905.05055.

  44. Girshick, Fast R-CNN, 2015. Accessed: Aug. 03, 2020. [Online]. Available: https://github.com/rbgirshick/.

  45. S. Ren, K. He, R. Girshick, and J. Sun. Faster R- CNN: Towards Real- Time Object Detection with Region Proposal Networks, Advances in Neural Information Processing Systems 28 (NIPS 2015), 2015, pp. 91 99.

  46. V. Carbune et al. Fast multi-language LSTMbased online handwriting recognition, Int. J. Doc. Anal. Recognit., vol. 23, no. 2, 2020, pp. 89 102, doi: 10.1007/s10032-020-00350-4.

  47. H. Jiang and E. Learned-Miller. Face detection with the faster RCNN, 12th IEEE International Conference on Automatic Face Gesture Recognition (FG 2017) , 2017, pp. 650 657.

  48. M. Jiang, X. Fan, and H. Yan. RetinaMask: A Face Mask detector, 2020, [Online]. Available: http://arxiv.org/abs/2005.03950.

  49. Z. Wang et al., Masked Face Recognition Dataset and Application, Mar. 2020, Accessed: Aug. 02, 2020. [Online]. Available: http://arxiv.org/abs/2003.09093.

  50. S. Meivel , K. Indira Devi, S. Uma Maheswari, J. Vijaya Menaka . Real time data analysis of face mask detection and social distance measurement using Matlab, Materials Today: Proceedings , 2021, https://doi.org/10.1016/j.matpr.2020.12.1042.

  51. I. Ahmed, M. Ahmad, J. J. P. C. Rodrigues, G. Jeon, and S. Din. A deep learning-based social distance monitoring framework for COVID-19, Sustain. Cities Soc), p. 102571, Nov. 2020, doi: 10.1016/j.scs.2020.102571.

  52. S. V. Militante and N. V. Dionisio. Real-Time Facemask Recognition with Alarm System using Deep Learning, in 2020 11th IEEE Control and System Graduate Research Colloquium (ICSGRC 2020) Proceedings, pp. 650-657, doi: 10.1109/ICSGRC49013.2020.9232610.

  53. M. Rezaei and M. Azarmi. DeepSOCIAL: Social Distancing Monitoring and Infection Risk Assessment in COVID-19 Pandemic, Appl. Sci., vol. 10, no. 21, Oct 2020, p.7514, doi: 10.3390/app10217514.

  54. K. Asadi and K. Han. An Integrated Aerial and Ground Vehicle (UAVUGV) System for Automated Data Collection for Indoor Construction Sites, in Construction Research Congress 2020: Computer Applications -Selected Papers from the Construction Research Congress 2020, Nov. 2020, pp. 846855, doi: 10.1061/9780784482865.090.

  55. C. Li, X. Sun, and J. Cai. Intelligent Mobile Drone System Based on Real-Time Object Detection, JAI, vol.1, no.1 , 2019, pp. 18, doi: 10.32604/jai.2019.06064.

  56. V Rathod, A-gogler, S. Joglekar, Pkulzc, and Khanh, TensorFlow 2 Detection Model Zoo, 2020.

  57. B. Roth. A social distancing detector using a TensorFlow object detection model, Python and OpenCV, Towards Data Science, 2020. https://towardsdatascience.com/a-social-distancing-detector-using-a-tensorflow-object-detection-model-python-and-opencv-4450a431238 (accessed Dec. 22, 2020).

  58. B. Roth. GitHub - basileroth75/covid-social-distancing-detection: Personal social distancing detector using Python, a TensorFlow model and OpenCV, 2020. https://github.com/basileroth75/covid-social-distancing-detection (accessed Dec. 22, 2020).
