A Robust Lane Detection using Edge Detection with Symmetric Molecules in Visual Perception for Self-Driving Cars

DOI : 10.17577/IJERTV10IS070097


Junbao Zheng

School of Information Science and Technology Zhejiang Sci-Tech University

Hangzhou, China

Amma Hazel Kargbo

Department of Information Science and Technology Zhejiang Sci-Tech University

Hangzhou, China

Abstract: Road lane detection is an important process within a vision-based self-driving vehicle system. It is the building block for subsequent path planning and control actions such as braking and steering. The lane edge detection results play an important role in feature-based lane detection. The edges left by intricate road conditions (illumination and shadows) in the form of noise interfere with the detection of lane lines, posing a great challenge to their accurate detection. Thus, this paper proposes an improved algorithm that takes advantage of edge orientations for robust lane detection, based on edge detection with symmetric molecules, which exploits the symmetry properties of directionally sensitive analyzing functions in multi-scale systems, an adaptive HSI color model, perspective transformation, and histogram analysis. With this technique, both straight and curved lane lines can be detected. Simulation results from different environments are presented to demonstrate the superiority of the proposed lane detection approach over traditional approaches.

Keywords: Lane detection, edge detection with symmetric molecules, perception algorithm, perspective transformation, self-driving cars, robotic vision, adaptive HSI color model

  1. INTRODUCTION

It is widely known that road lane detection is an important component of any successful autonomous driving system. A self-driving car consists of a large system of varied sensors and control modules. The primary step toward robust autonomous driving is for the vehicle to recognize and understand its environment. However, simple detection of obstacles and understanding of the geometry around a vehicle is insufficient. Camera-based lane detection is a vital step towards such environmental perception because it allows the car to properly position itself within the road lanes, which is crucial for any subsequent lane departure and trajectory planning decisions. As such, performing accurate camera-based lane detection utilizing edge detection in real time is paramount to autonomous driving and preventing traffic accidents [1], [2].

With the continuous research from the Defense Advanced Research Projects Agency (DARPA) in recent years, research in autonomous vehicles has gained interest from researchers worldwide. Vision-based lane detection has been an active research topic for many systems such as lane departure warning [3]–[6], adaptive cruise control [7], [8], lane change assistance [9], time-to-lane-change prediction [10], [11], and fully autonomous driving systems.

Edge detection is a fundamental operation in computer vision and image processing. It is concerned with detecting significant variations in the gray levels of an image. The outputs of edge detectors, namely edge maps, are the foundation of high-level image processing such as object tracking [12], [10], image segmentation [13], and corner detection [14], and they play an important role in road lane detection.

Currently, lane detection is heavily based on visual sensors, which have essentially become the eyes of smart cars, capturing road scenes in front of vehicles through cameras. However, under natural daily conditions, vehicle occlusion, varied road twists and turns, insufficient light, and sophisticated backgrounds on each side cause many difficulties for accurate lane recognition [15], [16]. This has prompted various research efforts to tackle road lane detection, focusing on how accurately edge points can be detected in the presence of various intricate conditions [17]–[19]. Prominent among them is the Canny edge detection algorithm coupled with the Hough transform [20], [21]. The Canny edge detection algorithm has the merit of fast calculation speed [22]–[24]. However, it may miss some obvious crossing edge details because it employs isotropic Gaussian kernels, and such errors would be reflected in real-life systems where accuracy matters. Thus, the objective of this study is to propose a more accurate edge detection algorithm that holds up in the presence of extreme noise.

    This paper has the following structure: Section 2 describes the details of the proposed lane detection algorithm; Section 3 presents the experimental results and analysis of our proposed method; Section 4 presents the final conclusions and future work.

  2. DETAILS OF PROPOSED LANE DETECTION ALGORITHM

From the failure points identified in previous approaches in the literature, which depended heavily on Canny edge detection for feature extraction and the Hough transform for lane detection, we develop an improved framework for a more robust lane detection algorithm. The algorithm is built upon the following phases: an input video or image frame is passed to the pre-processing phase, where it is refined by correcting any distortions caused by the camera; it is then passed to edge detection, where edges are detected using edge detection with symmetric molecules combined with the adaptive HSI color model to produce a noise-free lane line edge map. The region of interest, perspective transform, histogram analysis, and finally projection remapping follow, as illustrated in the diagram below and sketched in the code that follows:
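Below is a minimal sketch of these phases, assuming OpenCV; the stage internals are simplified stand-ins (Canny substitutes here for the symmetric-molecule detector described in Section 2.3), and each phase is developed in the subsections that follow.

```python
# Minimal pipeline sketch (assumed inputs: calibration matrices K/dist and
# warp matrix M from the steps described in Sections 2.1 and 2.5).
import cv2
import numpy as np

def detect_lane_frame(frame, K, dist, M):
    # Phase 1: correct camera distortion (Section 2.1)
    frame = cv2.undistort(frame, K, dist)
    # Phase 2: edge map; Canny is only a stand-in for the symmetric-molecule
    # detector fused with the adaptive HSI mask (Sections 2.2-2.4)
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
    # Phase 3: warp the region of interest to a bird's eye view (Section 2.5)
    warped = cv2.warpPerspective(edges, M, (frame.shape[1], frame.shape[0]))
    # Phase 4: histogram analysis locates the lane-line bases (Section 2.6)
    hist = np.sum(warped[warped.shape[0] // 2:, :], axis=0)
    # Phase 5: polynomial fitting and remapping onto the frame follow
    return warped, hist
```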

Fig 1. Framework of the proposed road lane detection algorithm

    1. Image refinement (Camera distortion correction)

Cameras and other optical instruments are commonly plagued by lens errors that distort the image through a variety of mechanisms associated with defects resulting from the spherical geometry of lens surfaces. There are three primary sources of non-ideal lens action (errors) observed in cameras. Of these three major classes of lens errors, two are associated with the orientation of wavefronts and focal planes with respect to the camera's optical axis: on-axis errors such as chromatic and spherical aberration, and the major off-axis errors manifested as coma, astigmatism, and field curvature. The third class of aberrations, commonly seen in cameras that contain zoom lens systems, is geometrical distortion [25]–[27], which is manifested by changes in the shape of an image rather than its sharpness or color spectrum. The two most prevalent types of geometrical distortion, positive and negative (pincushion and barrel, respectively), can often be present in very sharp images that are otherwise well corrected for spherical and chromatic aberrations as well as coma and astigmatism. When images suffer from distortion, the true geometry of the scene is no longer maintained in the image. Geometric distortion can be difficult to detect, especially when the aberration is relatively slight and the scene lacks periodic structures. This type of artifact is most severe in scenes that contain straight lines such as periodic grids, squares, rectangles, or other regular polygonal features, which readily reveal the curvature introduced by distortion.

In general, the ultimate effect of optical aberrations is to induce faults in the fine features and detail of an image being observed or digitally recorded. For extracting lane lines that bend, it is crucial to work with distortion-corrected images: since one of the goals of this pipeline is to determine the curvature radius, it is necessary to correct for any distortion from the start. Non-image-degrading aberrations such as pincushion or barrel distortion can easily be corrected using test targets [28], [29].
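A hedged sketch of this correction step, using OpenCV's standard chessboard-target calibration, is shown below; the board size and image paths are illustrative assumptions.

```python
# One-time calibration from chessboard test-target images, then per-frame
# undistortion. Pattern size and paths are assumed values.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the assumed chessboard target
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):  # hypothetical folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the camera matrix; dist holds the radial/tangential coefficients
ret, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

undistorted = cv2.undistort(cv2.imread("frame.jpg"), K, dist)
```

Once K and dist are estimated, they are reused unchanged for every frame from the same camera.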

      Fig 2. Distorted image correction

    2. Edge detection

Previous approaches commonly employed edge detection methods that include global extraction methods based on energy minimization criteria (fuzzy theory and neural networks) and edge derivative methods that use differential operators (the Prewitt, Canny, LoG, and Roberts operators). However, a traditional single edge detection algorithm has many problems, including a too-wide detection range and poor performance in the presence of extreme noise [30]–[34]. This poses a big challenge in meeting the target requirements of lane detection. To overcome these challenges, this study adopts a modified edge detection algorithm that superimposes the binary edge map of edge detection with symmetric molecules, which accurately exploits the edge orientations, on an HSI color space mask. For scenarios in which the lane markings are faded, simple color segmentation is not sufficient for determining the lanes; combining edge detection with the HSI color space model makes it possible to highlight this feature.

Fig 3. Workflow of the edge detection with symmetric molecules algorithm

    3. Edge detection with symmetric molecules

Edge detection with symmetric molecules is a new image processing method inspired by the notion of phase congruency [35]. It has superior performance in feature extraction, especially edge extraction in the presence of extreme noise, and is by definition invariant to changes in contrast. It is calculated by utilizing an edge measure $E(x, f)$ and a tangent orientation measure $\theta(x, f)$, which are based on shapes defined by closed smooth spline curves and polygons; in both cases, except for corner points in a polygon, the tangent can easily be computed for every point on the respective curve. The processing can be divided into several steps. First, estimate the likelihood of a certain feature being centered at a point $x \in \mathbb{R}^2$ in the image domain for a given image $f \in L^2(\mathbb{R}^2)$ by considering the transform of $f$ with respect to systems of even- and odd-symmetric molecules. Second, extract additional information about the detected features, namely the local tangent orientations and heights (contrast). Last, with a soft-thresholding parameter $t > 0$ and a possible non-integer scaling offset parameter $o > 0$, a two-dimensional edge measure is defined; its full expression is given in [35].

Fig 4. Comparison of edge detection with symmetric molecules and the Canny operator on a noisy image (Gaussian noise, mean 0.01, variance 0.06) under severe noise

In this measure, $\epsilon > 0$ prevents division by zero, and a normalizing constant corresponds to the odd-symmetric coefficient at the location of an ideal edge with jump size one.

Finally, the local height measure of the edge is obtained from these normalized coefficients; the corresponding expression is likewise given in [35].
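As an illustration only (this is not the SymFD implementation of [35]), a phase-congruency-style measure can be sketched with even (real) and odd (imaginary) Gabor responses standing in for the even- and odd-symmetric molecules; all parameter values are assumptions.

```python
# Contrast-invariant, edge-orientation-aware measure: an ideal step edge
# excites the odd-symmetric filter and nulls the even-symmetric one.
import numpy as np
from skimage.filters import gabor

def edge_measure(image, frequencies=(0.1, 0.2), n_orient=8, eps=1e-6):
    measure = np.zeros(image.shape, dtype=float)
    for f in frequencies:                  # two assumed scales
        for k in range(n_orient):          # sampled tangent orientations
            even, odd = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            # Normalizing by |odd| + eps makes the response roughly
            # contrast-invariant; eps prevents division by zero.
            resp = np.clip(np.abs(odd) - np.abs(even), 0, None) / (np.abs(odd) + eps)
            measure = np.maximum(measure, resp)
    return measure  # values in [0, 1]; threshold to obtain a binary edge map
```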

4. Adaptive Hue-Saturation-Intensity (HSI) Color Model

However, if only edge detection with symmetric molecules is used, problems such as low detection accuracy and rough edges easily occur in the presence of shadows and extremely varying light conditions, as seen in Fig. 1. To overcome these issues, we combine the symmetric-molecule edge map with the adaptive HSI color model of [36] to accurately detect the white and yellow lane markings. The RGB color images are converted to the HSI representation, which better reflects the perception characteristics of the naked eye, and each channel of the HSI color space can be processed independently. The conversion follows the standard equations:

$$I = \frac{R + G + B}{3}, \qquad S = 1 - \frac{3\,\min(R, G, B)}{R + G + B}, \qquad H = \begin{cases} \theta, & B \le G \\ 360^{\circ} - \theta, & B > G \end{cases}$$

where

$$\theta = \cos^{-1}\!\left( \frac{\tfrac{1}{2}\left[(R - G) + (R - B)\right]}{\sqrt{(R - G)^2 + (R - B)(G - B)}} \right).$$

The hue (H) signifies the visual sensation based on the mixing of color types and is calculated as the number of degrees around the axis. The saturation (S) represents the degree to which the color expresses its hue and is calculated as the radial distance from the diagonal axis. The intensity (I) signifies the visual brightness sensation.

Fig 5. Output edge map of the combined HSI and symmetric-molecule binary edge maps; this method removes the shadow and illumination interference on the road surface
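A hedged sketch of this color step is given below: the RGB image is converted to HSI with the standard equations above, and pixels that look white (high intensity, low saturation) or yellow (hue near 60°) are kept. The thresholds are illustrative assumptions, not the adaptive values of [36].

```python
import numpy as np

def rgb_to_hsi(rgb):
    # Standard RGB -> HSI conversion; H is returned in degrees
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-10
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-10)
    return h, s, i

def lane_color_mask(rgb):
    h, s, i = rgb_to_hsi(rgb)
    white = (i > 0.8) & (s < 0.15)                       # assumed thresholds
    yellow = (h > 40) & (h < 70) & (s > 0.3) & (i > 0.3)
    return ((white | yellow).astype(np.uint8)) * 255     # binary color mask
```

This mask is then fused (for example, with a bitwise OR) with the symmetric-molecule edge map to produce the output in Fig. 5.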

5. Perspective Transform from Camera Angle to Bird's Eye View

Previous algorithms that employed the Hough transform failed to detect curved lanes, a significant flaw in their performance. For our lane detection method, we employ a perspective transformation to improve detection performance. Images normally contain perspective, which causes lane lines in an image to appear to converge in the distance even though they are parallel to each other. Eliminating this perspective simplifies the detection of lane line curvature. This is achieved by transforming the image captured by a center-mounted camera in 3-dimensional space to a 2-dimensional bird's eye view in which the lane lines are always parallel to each other. Since we are only interested in the lane lines, we select four points on the original undistorted image and transform the perspective to the bird's eye view, as shown in Fig. 6 below.

Source Image          Destination Image
[ 531.2  475.2]       [ 256.    0.]
[ 748.8  475.2]       [1024.    0.]
[1126.4  673.2]       [1024.  720.]
[ 153.6  673.2]       [ 256.  720.]

Table 1. Mapping coordinates for the perspective transformation

Fig 6. Aerial (bird's eye) view of the warped lane image on the right

This simplifies fitting polynomials to the lane lines and measuring the lane curvature and the vehicle's position relative to the lane center. Table 1 lists the mapping coordinates for the perspective transformation, and Fig. 6 shows the bird's eye view of the original image after the transformation [28], [37]. We calculate the perspective transformation matrix $M$ such that

$$t_i \begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} = M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \qquad i = 0, 1, 2, 3, \tag{9}$$

where $(x'_i, y'_i)$, $i = 0, 1, 2, 3$ are the coordinates of the quadrangle vertices in the destination image and $(x_i, y_i)$, $i = 0, 1, 2, 3$ are the coordinates of the quadrangle vertices in the source image. We then use the calculated $3 \times 3$ matrix $M$ to transform the image as shown in equation (10):

$$(x', y') = \left( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}},\; \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right), \tag{10}$$

where $(x', y')$ are the coordinates of pixels in the transformed image and $(x, y)$ are the coordinates of pixels in the source image.
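A sketch of this warp with OpenCV, using the Table 1 coordinates, could look as follows; the 1280×720 output size matches the video frames used in Section 3.

```python
import cv2
import numpy as np

src = np.float32([[531.2, 475.2], [748.8, 475.2],
                  [1126.4, 673.2], [153.6, 673.2]])
dst = np.float32([[256.0, 0.0], [1024.0, 0.0],
                  [1024.0, 720.0], [256.0, 720.0]])

M = cv2.getPerspectiveTransform(src, dst)     # the 3x3 matrix of eq. (9)
Minv = cv2.getPerspectiveTransform(dst, src)  # used later to remap results

binary_edge_map = np.zeros((720, 1280), np.uint8)  # stand-in for the fused edge map
birds_eye = cv2.warpPerspective(binary_edge_map, M, (1280, 720))
```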

    6. Histogram Analysis

For lane detection and polynomial fitting, we first determine the maximum-probability region of the lane markings, identified from the histogram of the bird's eye view, which has two distinct peaks corresponding to the left and right lanes respectively. By sliding a window template across the image from left to right, the overlapping values are summed together and a convolved signal is created. The peak of the convolved signal is where the overlap of pixels is highest and the lane marker most likely resides, as shown in Fig. 7. Once the regions are identified, a third-order spline is fitted over these points of maxima; the spline marks the detected lane. For repeated computation, using the previous fitting alleviates much of the difficulty of the search process by leveraging a previous fit and only searching for lane pixels within a certain range of that fit, as sketched below.
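A hedged sketch of the histogram and sliding-window search follows; the window count, margin, and the second-order polynomial fit (a simplification of the third-order spline used in the paper) are assumptions.

```python
import numpy as np

def find_lane_fits(birds_eye, n_windows=9, margin=100, min_pix=50):
    # Histogram of the lower half: its two peaks mark the lane bases
    hist = np.sum(birds_eye[birds_eye.shape[0] // 2:, :], axis=0)
    mid = hist.shape[0] // 2
    left_x = int(np.argmax(hist[:mid]))
    right_x = int(mid + np.argmax(hist[mid:]))

    nz_y, nz_x = birds_eye.nonzero()
    win_h = birds_eye.shape[0] // n_windows
    left_ids, right_ids = [], []
    for w in range(n_windows):               # walk windows from bottom to top
        y_hi = birds_eye.shape[0] - w * win_h
        y_lo = y_hi - win_h
        for ids, cx in ((left_ids, left_x), (right_ids, right_x)):
            ids.append(np.where((nz_y >= y_lo) & (nz_y < y_hi) &
                                (nz_x >= cx - margin) & (nz_x < cx + margin))[0])
        # Re-center each column on the mean x of the pixels just captured
        if len(left_ids[-1]) > min_pix:
            left_x = int(nz_x[left_ids[-1]].mean())
        if len(right_ids[-1]) > min_pix:
            right_x = int(nz_x[right_ids[-1]].mean())

    left_ids, right_ids = np.concatenate(left_ids), np.concatenate(right_ids)
    # Fit x as a polynomial in y for each lane line
    left_fit = np.polyfit(nz_y[left_ids], nz_x[left_ids], 2)
    right_fit = np.polyfit(nz_y[right_ids], nz_x[right_ids], 2)
    return left_fit, right_fit
```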

    Fig 7. Histogram results for lane detection

Sanity checks are performed, and successful detections are pushed into a FIFO queue of maximum length n. All metrics are updated every time a new line is detected; if no line is detected, the oldest result is dropped, until the queue is empty and peaks must be searched for from scratch, as sketched below.
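A minimal sketch of this smoothing, assuming Python's collections.deque for the FIFO queue and a plausible-lane-width sanity check (an assumed example of such a check), is given below.

```python
from collections import deque
import numpy as np

N = 10                          # assumed maximum queue length n
recent_fits = deque(maxlen=N)

def update(left_fit, right_fit, y_eval=719):
    lane_w = np.polyval(right_fit, y_eval) - np.polyval(left_fit, y_eval)
    if 300 < lane_w < 1000:     # sanity check: plausible lane width in pixels
        recent_fits.append((left_fit, right_fit))
    elif recent_fits:
        recent_fits.popleft()   # failed detection: drop the oldest result
    if not recent_fits:
        return None             # queue empty: search peaks from scratch
    # Report the averaged fits over the queue for a stable estimate
    return (np.mean([l for l, _ in recent_fits], axis=0),
            np.mean([r for _, r in recent_fits], axis=0))
```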

Once lanes are detected and pass the sanity checks, the radius of curvature and the position of the vehicle are calculated as in equation (11); for a lane-line fit $x = f(y)$, the radius of curvature takes the standard form

$$R = \frac{\left(1 + f'(y)^2\right)^{3/2}}{\left| f''(y) \right|}. \tag{11}$$

The detected lane boundaries are then remapped onto the original image and the visual display of the lane boundaries is output. This concludes the numerical estimation of lane curvature and vehicle position. The final result of the proposed method is shown in Fig. 8.
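For a second-order fit x = Ay² + By + C, equation (11) reduces to R = (1 + (2Ay + B)²)^(3/2) / |2A|; a sketch of the curvature and vehicle-offset computation follows, with the meters-per-pixel scale an assumed value.

```python
import numpy as np

X_M_PER_PX = 3.7 / 840  # assumed lane width (3.7 m) over its pixel span

def curvature_and_offset(left_fit, right_fit, img_w=1280, y_eval=719):
    A, B, _ = left_fit
    # R = (1 + (2Ay + B)^2)^(3/2) / |2A|, evaluated at the image bottom;
    # refit in world units to obtain the radius in meters rather than pixels
    radius = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
    lane_center = (np.polyval(left_fit, y_eval) +
                   np.polyval(right_fit, y_eval)) / 2
    offset = (img_w / 2 - lane_center) * X_M_PER_PX  # vehicle offset (m)
    return radius, offset
```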

Fig 8. The final result of the proposed method using edge detection with symmetric molecules

3. EXPERIMENTAL RESULTS AND ANALYSIS

    Lane position detection has been studied for a few decades and the results are quite extensive. However, there is a big gap in this research field: the lack of an effective way to evaluate and compare different studies, owing to the variation in the datasets used [32]. Unlike other research topics in computer vision such as object recognition, which use almost identical datasets for training and testing, each lane position detection algorithm mostly comes with its own dataset. Thus, comparison among studies becomes much more difficult, although some studies have noted this problem and tried to propose solutions [38]. In this paper, we employ special-case criteria to help compare our implemented algorithm with other available algorithms.

    1. Experimental setup

The algorithm was implemented using the Python programming language on an Intel® Core(TM) i5-6500 CPU @ 3.20 GHz with 8 GB RAM, running Windows 7 Ultimate. We use data from various sources, one being the Udacity Self-Driving Car Challenge dataset [39], which consists of 8 images of size 960×540 and 3 videos of dimensions 1280×720 captured by a stereo camera. For faster calculation of the Fourier transforms within our algorithm, we recommend a more powerful setup.

    2. Algorithm performance comparison

      To verify the performance of the proposed lane detection algorithm, it was compared with other algorithms used in the literature.

Fig 9. Our proposed algorithm (left) versus a Canny and Hough transform based lane detection algorithm

    3. Lane Detection on Road Driving Video

Road driving videos from the Udacity Self-Driving Car Challenge dataset [39] were utilized in a simulation test experiment to verify the recognition performance of the lane detection algorithm under complex working conditions and in dynamic environments. The videos were captured by a real-time vehicle-mounted camera, and lane lines were detected as shown below.

Fig 10. The test results of different detected frames under different intricate road conditions

Additionally, the sliding window search method accurately tested the lane detection algorithm dynamically in the simulation test experiments. The images in Fig. 10 above intuitively demonstrate that the proposed algorithm could still accurately detect the left and right lane lines despite intricate conditions in the form of shadows and illumination on the road. Compared to the other road conditions, the curvature of the lane lines here was the smallest, the curvature radius was the largest, and the corresponding sliding window offset was the smallest.

  4. CONCLUSION

This paper addressed the issue of lane detection by developing a novel approach that employs edge detection with symmetric molecules combined with an adaptive HSI color model for feature extraction, together with a perspective transform and histogram analysis, for robust lane detection in visual perception for self-driving cars. This overcomes the shortcomings of previous methods in the literature by accurately detecting generic curved and steep lane markings under intricate conditions. Additionally, the proposed algorithm does not require a separate noise-reduction image filtering module.

In future work, the edge detection with symmetric molecules algorithm could be modeled with artificial neural networks for lane feature extraction, which may yield more robust results.

REFERENCES

  1. R. Zhang, K. Li, Z. He, H. Wang, and F. You, "Advanced Emergency Braking Control Based on a Nonlinear Model Predictive Algorithm for Intelligent Vehicles," Appl. Sci., vol. 7, no. 5, 2017, doi: 10.3390/app7050504.

  2. L. Hu, J. Ou, J. Huang, Y. Chen, and D. Cao, "A Review of Research on Traffic Conflicts Based on Intelligent Vehicles," IEEE Access, vol. 8, pp. 24471–24483, 2020, doi: 10.1109/ACCESS.2020.2970164.

  3. C. M. Kang, S.-H. Lee, S.-C. Kee, and C. C. Chung, "Kinematics-based Fault-tolerant Techniques: Lane Prediction for an Autonomous Lane Keeping System," Int. J. Control. Autom. Syst., vol. 16, no. 3, pp. 1293–1302, 2018, doi: 10.1007/s12555-017-0449-8.

  4. D. Vajak, M. Vranješ, R. Grbić, and D. Vranješ, "Recent Advances in Vision-Based Lane Detection Solutions for Automotive Applications," in 2019 International Symposium ELMAR, 2019, pp. 45–50, doi: 10.1109/ELMAR.2019.8918679.

  5. H. Zhou and H. Wang, "Vision-based lane detection and tracking for driver assistance systems: A survey," in 2017 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Nov. 2017, pp. 660–665, doi: 10.1109/ICCIS.2017.8274856.

  6. J. C. McCall and M. M. Trivedi, "Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 1, pp. 20–37, Mar. 2006, doi: 10.1109/TITS.2006.869595.

  7. S. Magdici and M. Althoff, "Adaptive Cruise Control with Safety Guarantees for Autonomous Vehicles," IFAC-PapersOnLine, vol. 50, no. 1, pp. 5774–5781, 2017, doi: 10.1016/j.ifacol.2017.08.418.

  8. A. Vahidi and A. Eskandarian, "Research advances in intelligent collision avoidance and adaptive cruise control," IEEE Trans. Intell. Transp. Syst., vol. 4, no. 3, pp. 143–153, 2003, doi: 10.1109/TITS.2003.821292.

  9. C. Y. Low, H. Zamzuri, and S. A. Mazlan, "Simple robust road lane detection algorithm," in 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), Jun. 2014, pp. 1–4, doi: 10.1109/ICIAS.2014.6869550.

  10. A. Doshi, B. Morris, and M. Trivedi, "On-road prediction of driver's intent with multimodal sensory cues," IEEE Pervasive Comput., vol. 10, no. 3, pp. 22–34, Jul. 2011, doi: 10.1109/MPRV.2011.38.

  11. S. Martin, S. Vora, K. Yuen, and M. M. Trivedi, "Dynamics of Driver's Gaze: Explorations in Behavior Modeling and Maneuver Prediction," IEEE Trans. Intell. Veh., vol. 3, no. 2, pp. 141–150, Jun. 2018, doi: 10.1109/TIV.2018.2804160.

  12. W. He et al., "A Low-cost High-speed Object Tracking VLSI System Based on Unified Textural and Dynamic Compressive Features," IEEE Trans. Circuits Syst. II Express Briefs, p. 1, 2020, doi: 10.1109/TCSII.2020.3020883.

  13. M. M. Adão, S. J. F. Guimarães, and Z. K. G. Patrocínio Jr, "Learning to realign hierarchy for image segmentation," Pattern Recognit. Lett., vol. 133, pp. 287–294, 2020, doi: 10.1016/j.patrec.2020.03.010.

  14. W. Zhang and C. Sun, "Corner detection using second-order generalized Gaussian directional derivative representations," IEEE Trans. Pattern Anal. Mach. Intell., p. 1, 2019, doi: 10.1109/TPAMI.2019.2949302.

  15. J. Wang, H. Ma, X. Zhang, and X. Liu, "Detection of Lane Lines on Both Sides of Road Based on Monocular Camera," in 2018 IEEE International Conference on Mechatronics and Automation (ICMA), Aug. 2018, pp. 1134–1139, doi: 10.1109/ICMA.2018.8484630.

  16. Y. Li, W. Zhang, X. Ji, C. Ren, and J. Wu, "Research on Lane a Compensation Method Based on Multi-Sensor Fusion," Sensors, vol. 19, no. 7, 2019, doi: 10.3390/s19071584.

  17. C. Hasabnis, S. Dhaygude, and S. Ruikar, "Real-Time Lane Detection for Autonomous Vehicle Using Video Processing," in ICT Analysis and Applications, 2020, pp. 217–225.

  18. J. Son, H. Yoo, S. Kim, and K. Sohn, "Real-time illumination invariant lane detection for lane departure warning system," Expert Syst. Appl., vol. 42, no. 4, pp. 1816–1824, 2015, doi: 10.1016/j.eswa.2014.10.024.

  19. H. Amini and B. Karasfi, "New approach to road detection in challenging outdoor environment for autonomous vehicle," in 2016 Artificial Intelligence and Robotics (IRANOPEN), Apr. 2016, pp. 7–11, doi: 10.1109/RIOS.2016.7529511.

  20. X. Yan and Y. Li, "A method of lane edge detection based on Canny algorithm," in 2017 Chinese Automation Congress (CAC), Oct. 2017, pp. 2120–2124, doi: 10.1109/CAC.2017.8243122.

  21. Y. Li et al., "Nighttime lane markings recognition based on Canny detection and Hough transform," in 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), Jun. 2016, pp. 411–415, doi: 10.1109/RCAR.2016.7784064.

  22. B. Barua, S. Biswas, and K. Deb, "An Efficient Method of Lane Detection and Tracking for Highway Safety," in 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), May 2019, pp. 1–6, doi: 10.1109/ICASERT.2019.8934664.

  23. R. Y. Dhawale and N. L. Gavankar, "Lane Detection and Lane Departure Warning System using Color Detection Sensor," in 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), Jul. 2019, vol. 1, pp. 830–834, doi: 10.1109/ICICICT46008.2019.8993254.

  24. G. Deng and Y. Wu, "Double Lane Line Edge Detection Method Based on Constraint Conditions Hough Transform," in 2018 17th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES), Oct. 2018, pp. 107–110, doi: 10.1109/DCABES.2018.00037.

  25. P. Yang, A. Antonacopoulos, C. Clausner, S. Pletschacher, and J. Qi, "Effective geometric restoration of distorted historical document for large-scale digitisation," IET Image Process., vol. 11, no. 10, pp. 841–853, 2017, doi: 10.1049/iet-ipr.2016.0973.

  26. X. Xiao, L. Zhang, X. Lin, J. Zang, and X. Tan, "Division model-based distortion correction method for head-mounted displays," J. Soc. Inf. Disp., vol. 27, no. 3, pp. 172–180, 2019, doi: 10.1002/jsid.754.

  27. T. Yamamoto, M. Fukunaga, S. K. Sugawara, Y. H. Hamano, and N. Sadato, "Quantitative Evaluations of Geometrical Distortion Corrections in Cortical Surface-Based Analysis of High-Resolution Functional MRI Data at 7T," J. Magn. Reson. Imaging, vol. n/a, no. n/a, doi: 10.1002/jmri.27420.

  28. R. Muthalagu, A. Bolimera, and V. Kalaichelvi, "Lane detection technique based on perspective transformation and histogram analysis for self-driving cars," Comput. Electr. Eng., vol. 85, p. 106653, 2020, doi: 10.1016/j.compeleceng.2020.106653.

  29. J. Cao, C. Song, S. Song, F. Xiao, and S. Peng, "Lane Detection Algorithm for Intelligent Vehicles in Complex Road Conditions and Dynamic Environments," Sensors, vol. 19, no. 14, 2019, doi: 10.3390/s19143166.

  30. W. Zhang, Y. Zhao, T. P. Breckon, and L. Chen, "Noise robust image edge detection based upon the automatic anisotropic Gaussian kernels," Pattern Recognit., vol. 63, pp. 193–205, 2017, doi: 10.1016/j.patcog.2016.10.008.

  31. K. Mostafa, J. Y. Chiang, and I. Her, "Edge-detection method using binary morphology on hexagonal images," Imaging Sci. J., vol. 63, no. 3, pp. 168–173, 2015, doi: 10.1179/1743131X14Y.0000000098.

  32. Y. Xing et al., "Advances in Vision-Based Lane Detection: Algorithms, Integration, Assessment, and Perspectives on ACP-Based Parallel Vision," IEEE/CAA J. Autom. Sin., vol. 5, no. 3, pp. 645–661, May 2018, doi: 10.1109/JAS.2018.7511063.

  33. R. M. Yousaf, H. A. Habib, H. Dawood, and S. Shafiq, "A Comparative Study of Various Edge Detection Methods," in 2018 14th International Conference on Computational Intelligence and Security (CIS), Nov. 2018, pp. 96–99, doi: 10.1109/CIS2018.2018.00029.

  34. S. Singh and R. Singh, "Comparison of various edge detection techniques," in 2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom), Mar. 2015, pp. 393–396.

  35. R. Reisenhofer and E. J. King, "Edge, Ridge, and Blob Detection with Symmetric Molecules," SIAM J. Imaging Sci., vol. 12, no. 4, pp. 1585–1626, 2019, doi: 10.1137/19M1240861.

  36. T.-T. Tran, C.-S. Bae, Y.-N. Kim, H.-M. Cho, and S.-B. Cho, "An Adaptive Method for Lane Marking Detection Based on HSI Color Model," in Advanced Intelligent Computing Theories and Applications, 2010, pp. 304–311, doi: 10.1007/978-3-642-14831-6_41.

  37. T. N. Tan, G. D. Sullivan, and K. D. Baker, "On Computing The Perspective Transformation Matrix and Camera Parameters," in BMVC, 1993, pp. 1–10.

  38. A. V. Vinuchandran and R. Shanmughasundaram, "A real-time lane departure warning and vehicle detection system using monoscopic camera," in 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), Jul. 2017, pp. 1565–1569, doi: 10.1109/ICICICT1.2017.8342803.

  39. UDACITY, "Self-driving Car Engineer Nanodegree," UDACITY. https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013.
