Research on Contour Detection for Natural Images Based on Retina Inspired Differential Signal Memory (RIDSM) Algorithms and Adaptive Thresholding

DOI: 10.17577/IJERTV13IS050209


Yussuf Abdulla Hamad, Junbao Zheng, Zou Zhanggang

School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Sci-Tech University.

840 Xuelin Street, Jianggan District, Hangzhou, 310018, Zhejiang, P.R. China

Abstract: In computer vision, contour detection is essential for recovering object boundaries from natural images, and existing approaches can be divided into traditional and deep learning methods. Despite its advancements, deep learning often faces various challenges. Therefore, this study designs a new and straightforward algorithm for contour detection. Inspired by the human retinal mechanism, and aiming to overcome the drawbacks of traditional methods, we propose detecting the contours of natural images using a combination of the Retina-Inspired Differential Signal Memory (RIDSM) algorithm and an adaptive thresholding method. The proposed method consists of three steps: preprocessing, RIDSM, and adaptive thresholding. Preprocessing involves bilateral filtering and sharpening of the original image to enhance the visibility of edges and contours. RIDSM is inspired by human retinal mechanisms, adjusting memorizing and learning variables to handle noise and enhance the ability to detect contours effectively. The adaptive thresholding method improves detection accuracy by fine-tuning threshold values based on image characteristics to discern contour strength. The experimental results quantitatively demonstrate the superiority of the proposed algorithm over gradient-based and fuzzy-logic-based methods, showcasing higher precision, F1 measure, and robustness in contour extraction even under low-contrast conditions and in blurry environments.

Keywords: Contour Detection, Edge Detection, Adaptive Threshold, Retina Structure, Memorizing Factor, Learning Factor.

  1. INTRODUCTION

Contour detection refers to the process of identifying connected sequences of pixels that form the boundaries of objects within an image. Edge detection cannot be excluded from the contour detection process because, before contour extraction, the initial step involves detecting edges within the image [1][2], focusing on identifying intensity transitions to pinpoint closed curves or outlines that delineate objects or regions.

Through comprehensive reviews of the existing literature, researchers have categorized contour detection into two main classes, traditional and deep learning methods, as shown in Fig. 1. These two classes are further subdivided: deep learning comprises special clustering-based methods, cross-layer multiscale fusion-based methods, and encoding-and-decoding-based methods, while traditional methods are categorized into pixel-level methods and sub-pixel-level methods.

Traditional methods are straightforward and rely on manually designed features, while deep-learning methods harness artificial intelligence to enhance detection precision. Despite its advancement, deep learning may still suffer from drawbacks such as the need for extensive training data and computational resources, lack of interpretability, and overfitting leading to poor localization. Due to these challenges, this paper focuses on improving traditional methods to achieve accurate contour detection and to enhance the detector's capability of distinguishing relevant object contours from noisy backgrounds [3].

Within the traditional class, Canny, Sobel, and Prewitt are gradient-based contour detection techniques that operate at the pixel level [4][5][6] and face challenges in both local and global settings. Local methods analyze pixel neighborhoods but struggle with larger patterns [7][8], while global methods consider entire images but face challenges with computational efficiency and intricate details [9]. Combining local and global approaches was considered a potential solution, but challenges persisted [10][11], particularly in images with cluttered backgrounds. Nevertheless, several reasons lead authors to prioritize the pixel level, particularly gradient methods such as Canny and Sobel, which are deeply ingrained in image processing and rest on extensive edge detection principles; the remaining obstacles are the accuracy and efficiency of detecting image contours, estimating edge width accurately, and fine-tuning parameters for diverse images.

To address these drawbacks, it is necessary to improve the traditional methods so as to enhance contour detection in natural images by extracting individual objects, suppressing textures, highlighting salient edges, and shaping features accurately. The focus on contour detection aims to mitigate the false detections that often occur in the edge detection process, particularly in scenarios where traditional operators struggle to suppress irrelevant information effectively. The proposed solution also demonstrates stability and adaptability in addressing challenges such as poor lighting conditions and noise interference.

    Figure 1: Classification of contour detection methods

In recent years, contour detection technology has marked significant advancements in algorithms and methodologies that mimic human visual perception for quick and accurate feature extraction and object detection [12]. Researchers have explored how the human retina perceives and interprets contours in visual stimuli [13][14]. The human visual system adeptly extracts object contours from complex scenes with speed and accuracy by integrating local elements into coherent shape contours; this proficiency is crucial for effectively detecting and identifying targets in cluttered environments [15]. Thus, developing a biologically inspired vision model that captures this aspect of visual processing is logical. The center-surround mechanism, found extensively in the initial stages of the human visual system such as the retina, the lateral geniculate nucleus (LGN), and the primary visual cortex (V1), plays a significant role in contour detection [15] by analyzing disparities between the center and surrounding information to extract salient contours while suppressing textures. Moreover, contours are closely related to two additional concepts, edges and boundaries [16], with an edge defined as a significant variation in intensity or color among neighboring pixels in an image [17]. Edge detection is fundamental for various image-processing tasks, including object recognition, image segmentation, and pattern recognition [18], playing a pivotal role in identifying essential features and structures within an image.

Given the challenges described above in detecting contours with traditional methods, the RIDSM algorithm combined with the adaptive thresholding technique is proposed, and the primary contributions to detecting contours in natural images are as follows:

• Develop an enhanced contour detection framework by integrating RIDSM-based biological feature extraction with adaptive thresholding mechanisms.

• Investigate the performance of the proposed method on the BIPED dataset, encompassing different levels of complexity and variability.

    • Compare the efficacy of the proposed approach with the Sobel and the Canny edge detectors.

The rest of this paper is organized as follows: Section 2 explores the principle of the retina, Section 3 explains the methodology leveraged to provide accurate contours, Section 4 discusses the performance evaluation of the experimental results, and Section 5 concludes the paper.

  2. PRINCIPLE OF RETINA

    1. The retina structure.

The retina, a complex neural structure located at the back of the eye, consists of three primary layers (photoreceptor cells, bipolar cells, and ganglion cells) that are crucial in capturing and processing visual data from the surroundings. When light enters the eye, the photoreceptor cells (rods and cones) initiate the visual signal process and transmit the signals to the bipolar cells; the ganglion cells receive the signals from the bipolar cells and transmit them to the brain for further processing. With its intricate layers of specialized cells, the retina acts as the primary sensory organ, converting light inputs into electrical signals sent to the brain for processing [19]. An essential role of the retina is identifying contours and edges in natural images, which is fundamental for perceiving shapes, objects, and spatial arrangements in our environment.

      The retina's principle is based on its complex structure and advanced neural connections, finely tuned to extract visual elements like contours, edges, and textures [20]. Central to this process are the photoreceptor cells, such as rods and cones, which detect light and initiate signal transmission within the retina. Subsequently, the processed information flows through layers of bipolar and ganglion cells, aiding in refining and transmitting visual signals along the visual pathway.

Within the retina, memory and forgetfulness mechanisms are crucial for enhancing visual processing efficiency by managing the retention and discarding of information over time. These mechanisms are essential for balancing short-term fluctuations and long-term trends in visual stimuli [21], thereby improving the accuracy and reliability of contour and edge detection algorithms.

    2. The retina structure in computer vision.

    The human eye retina provides a valuable model for developing efficient visual processing systems in computer vision. Computer vision systems, like the retina, aim to extract important features from images for various applications. A key concept drawn from the retina is the need to balance memorizing and learning coefficients when dealing with past and present information. This concept is implemented through algorithms that use memory factors to retain significant features from previous data and learning factors to eliminate irrelevant or outdated information. This equilibrium allows for adaptation to changing visual inputs and enhances the accuracy of tasks such as contour and edge detection.

Moreover, the organization of the human eye retina significantly enhances computer vision systems by enabling efficient preprocessing: inspiring wide-angle imaging capabilities, adapting to varying lighting conditions, reducing data redundancy, prioritizing high-resolution image processing for detail perception, and guiding dynamic parameter adjustments [22]. By incorporating these principles, computer vision systems can achieve improved performance, resilience, and efficiency in analyzing and interpreting visual information across diverse applications. Consequently, the principle of memory and learning inspired by the human retina, which determines the current retention of information based on past data and the level of newly learned data, is expressed in equation (1).

$$S_t = M \cdot S_{t-1} + F \cdot X_t \qquad (1)$$

where $S_t$ represents the information that has been memorized after time step $t$, $S_{t-1}$ represents the information that had been memorized before time step $t$, $X_t$ represents the new information, and $M$ and $F$ are the memorizing and learning coefficients, respectively.
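As a concrete illustration, the short Python sketch below applies equation (1) as an exponential memory update; the function name, coefficient value, and sample sequence are illustrative assumptions rather than details taken from the paper.

```python
def memory_update(prev_state: float, new_info: float, m: float) -> float:
    """One step of equation (1): S_t = M * S_{t-1} + F * X_t,
    with the complementary constraint F + M = 1."""
    f = 1.0 - m                        # learning coefficient
    return m * prev_state + f * new_info

# A neuron with a large memorizing factor adapts slowly, damping a
# one-step spike while still tracking a sustained change.
state = 0.0
for sample in [1.0, 1.0, 5.0, 1.0]:    # illustrative input sequence
    state = memory_update(state, sample, m=0.9)
    print(f"{state:.3f}")
```

With a large memorizing factor the state changes slowly, so a transient spike in the input is damped; this is the balancing of short-term fluctuations against long-term trends described above.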

  3. METHODOLOGY

To overcome the challenges of traditional contour detection methods, we integrate the RIDSM algorithm with an adaptive threshold to suppress noise, boost contour detection precision, and amplify the salient features of natural images. The workflow depicted in Fig. 2 shows how features extracted during preprocessing are used to identify and enhance specific edges and contours in natural images.

Figure 2: An overview of the methodology flowchart for contour detection.

Preprocessing: Feature extraction from the input image is performed in the preprocessing and edge sharpening stage through bilateral filtering and a sharpening effect, which together support contour detection, noise reduction, edge preservation, and improvement of the visual quality of the image. The sharpening is typically achieved with the Laplacian kernel shown in the following formula:

$$L = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 9 & -1 \\ -1 & -1 & -1 \end{bmatrix}$$

where $L$ represents the Laplacian kernel.
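A minimal sketch of this preprocessing stage, assuming OpenCV and illustrative filter parameters (the paper does not specify them), could look as follows; the sign pattern of the kernel follows the standard sharpening mask shown above.

```python
import cv2
import numpy as np

# Laplacian-style sharpening kernel; the -1/+9 sign pattern is an
# assumption based on the standard sharpening mask.
SHARPEN_KERNEL = np.array([[-1, -1, -1],
                           [-1,  9, -1],
                           [-1, -1, -1]], dtype=np.float32)

def preprocess(gray: np.ndarray) -> np.ndarray:
    """Bilateral filtering (edge-preserving smoothing) followed by
    kernel sharpening, as in the preprocessing stage."""
    # Filter parameters are illustrative choices, not from the paper.
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    return cv2.filter2D(smoothed, ddepth=-1, kernel=SHARPEN_KERNEL)
```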

Horizontal and vertical sequences: The horizontal and vertical sequences are obtained by transforming the original image, represented by $M$, into two directional representations that correspond to pixel position. These are labeled as matrices $M^h$ and $M^v$, each of size $m \times n$, where $m$ and $n$ are the dimensions of the original image. Each element is represented as a list of $l$ adjacent gray values:

$$M^h_{i,j} = \mathrm{list}\left[M_{i-l/2,\,j},\ M_{i-l/2+1,\,j},\ \ldots,\ M_{i+l/2-2,\,j},\ M_{i+l/2-1,\,j}\right] \qquad (2)$$

$$M^v_{i,j} = \mathrm{list}\left[M_{i,\,j-l/2},\ M_{i,\,j-l/2+1},\ \ldots,\ M_{i,\,j+l/2-2},\ M_{i,\,j+l/2-1}\right] \qquad (3)$$

Retina-Inspired Differential Signal Memory Algorithm (RIDSM): This algorithm is designed to mimic the biological mechanisms of the human visual system, particularly the retinal neuron group, to enhance edge and contour detection in images. The human eye's ability to perceive object edges and boundaries is attributed to the retina's photoreceptor cells (rods and cones), which are highly sensitive to light changes. This sensitivity, coupled with an optical memory effect whereby photoreceptor cells retain past impressions, enhances visual continuity. The algorithm incorporates memorizing and learning factors to smooth data and learn temporal dynamics from extracted image features. By balancing long-term trends and short-term fluctuations, based on the retina's three-layer structure, RIDSM enhances contour detection accuracy by mitigating noise and clutter in natural images. The method's efficiency is achieved through a three-layer process, emphasizing the importance of optical memory and differential signals in improving algorithm performance.

Figure 3: Retina-Inspired Differential Signal Memory Algorithm flowchart.

There are three neurons in the first layer of the retinal structure: $L_1$, $L_2$, and $L_3$. Each neuron has two parameters representing photoreceptive characteristics: a learning factor ($F$) and a memorizing factor ($M$). These parameters could be related to how each neuron processes light information over time. Due to the limited total resources for memory and learning, the learning and memorizing factors are complementary; therefore, they must satisfy $F + M = 1$. Consequently, neuron $L_1$ contains $F_1$ and $M_1$, $L_2$ contains $F_2$ and $M_2$, and $L_3$ contains $F_3$ and $M_3$; hence $F_n$ and $M_n$ represent the learning and memorizing factors of the $n$th neuron, respectively.

The learning factor of neuron $L_1$ is supposed to be $f_1$, and the learning difference between neurons $L_1$ and $L_2$ is $f_2$, which is the same as the difference between $L_2$ and $L_3$. This means that the neurons are arranged so that each subsequent neuron learns more than the previous one by a constant factor; the learning factors therefore satisfy $F_1 < F_2 < F_3$, indicating that neuron $L_3$ learns the most, followed by $L_2$ and then $L_1$. The current retention of information, based on past data and the level of the learning factors, follows formula (1).

The second and third layers of neurons are differential, differentiating the information processed by the first layer. The final result is generated by the difference operation in formula (4), whose output represents the neuron difference obtained from the final difference results.
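The sketch below illustrates one possible reading of this three-layer design on a single gray-value sequence: layer 1 holds three memory neurons obeying equation (1) with increasing learning factors, and layers 2 and 3 are modelled as successive differences of the neuron outputs. Since formula (4) is not reproduced here, the second-difference combination and the factor values are assumptions.

```python
import numpy as np

def ridsm_sequence_response(seq, learn_factors=(0.2, 0.5, 0.8)):
    """Sketch of RIDSM on one gray-value sequence.

    Layer 1: three memory neurons following equation (1), with
    learning factors ordered F1 < F2 < F3. Layers 2 and 3 are
    modelled as successive differences (an assumption)."""
    states = np.zeros(3)
    response = np.zeros(len(seq))
    for t, x in enumerate(seq):
        for k, f in enumerate(learn_factors):
            states[k] = (1.0 - f) * states[k] + f * x   # memorize/learn step
        first_diff = np.diff(states)          # layer 2: adjacent differences
        response[t] = np.diff(first_diff)[0]  # layer 3: difference of differences
    return response

# A step edge in a gray-value sequence produces a transient response.
print(ridsm_sequence_response(np.array([10, 10, 10, 200, 200, 200], float)))
```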

Inspired by the retina's layered structure, the algorithm aims to extract crucial features from visual stimuli to improve the accuracy and efficiency of contour detection. When applied to a grayscale image, RIDSM processes sequences of adjacent gray values along the horizontal and vertical directions, generating gradient sequences. The algorithm identifies the main gradient in each sequence, enhancing the detection of edges and contours. By following principles similar to the retina's function, RIDSM optimizes the extraction of important image features for precise contour detection.

Maximum absolute value: The maximum gradient value is retrieved by selecting the maximum absolute value within each list as the gradient value for the respective direction at that position. The matrix formulas are

$$H_{i,j} = \mathrm{Max}(\mathrm{abs}(M^h_{i,j})) \qquad (5)$$

$$V_{i,j} = \mathrm{Max}(\mathrm{abs}(M^v_{i,j})) \qquad (6)$$

where $H$ and $V$ represent the horizontal and vertical gradients.

Adaptive gradient threshold algorithm: After obtaining the gradient matrix through the above procedure, it is necessary to convert it into a binary image using an adaptive gradient threshold technique. During this conversion, the horizontal and vertical gradients of the gray image change as the edges of the image change. If the brightness of the original grayscale image is low, the overall image may become dark, which can reduce the overall gradient of the image. Therefore, it is necessary to adjust the gradient threshold based on the characteristics of the image. The mathematical description is given below.

Matrix flattening and sorting: The process is initiated by flattening the two-dimensional gradient matrix $G$ into a one-dimensional array $A$, which is then sorted in ascending order to generate the sorted array $S$:

$$A = \mathrm{flattening}(G)$$

$$S = \mathrm{sort}(A)$$

High and low thresholds:

$$T = |S[p_T \cdot N]|$$

$$L = |S[p_L \cdot N]|$$

    Figure 4: Row (a) represents original images from the BIPED dataset, (b) corresponding edge detection obtained by the Sobel operator, (c) corresponding edge detection obtained by the Canny operator, (d) ground truth image, and (e) Contour detection obtained by our proposed approach

In the sorted array $S$, $T$ and $L$ represent the high and low thresholds, $p_T$ is the percentage for the high threshold (0.05), $p_L$ is the percentage for the low threshold (0.45), and $N$ is the length of $S$; the thresholds therefore correspond to specific percentile values in $S$ and are used for the subsequent edge and texture classification.

    Hence, " di " represents an element of the gradient matrix obtained from the "S" array, ensuring the presence of adequate preceding data points crucial for contour detection in natural images, as further explained below.

    If di T , then di is a contour.

    If di L , then di is not a contour

    If L di T , then di represents there is some texture or

    twice the maximum value of the five most recent gradient measurements. If it is a texture, it will not be satisfied.
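A compact sketch of the thresholding and classification rules is given below, under the assumption that $S[p_T \cdot N]$ is meant to select from the top fraction of the ascending sort (the indexing convention is our interpretation).

```python
import numpy as np

def adaptive_thresholds(G, p_t=0.05, p_l=0.45):
    """Derive the high/low thresholds T and L from the flattened,
    sorted gradient matrix; reading T from the top p_t fraction of
    the ascending sort is an assumption."""
    S = np.sort(np.abs(G).ravel())       # A = flattening(G); S = sort(A)
    N = S.size
    T = S[int((1.0 - p_t) * N) - 1]      # high threshold (top 5%)
    L = S[int((1.0 - p_l) * N) - 1]      # low threshold (top 45%)
    return T, L

def classify(d_i, T, L, recent_gradients):
    """Apply the three rules, including the weak-edge test against
    twice the peak of the five most recent gradient measurements."""
    if d_i >= T:
        return "contour"
    if d_i <= L:
        return "not contour"
    recent = recent_gradients[-5:]
    if recent and d_i > 2.0 * max(recent):
        return "weak edge (kept)"
    return "texture/noise (suppressed)"
```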

  4. PERFORMANCE EVALUATION AND EXPERIMENTAL RESULTS

Performance is evaluated using both subjective (qualitative) and objective (quantitative) measures. Subjective evaluation involves human visual inspection, as humans serve as the final evaluators, although this is not the most reliable way to assess edge detection because the evaluation relies on selecting pictures to demonstrate effectiveness. Objective evaluation, on the other hand, is conducted through various metrics. The BIPED dataset used for evaluation comprises 250 images with corresponding ground truth; each image has dimensions of 1280 × 720 pixels.


    1. Quantitative evaluations

The following metrics are used in the experiments to evaluate the performance of our proposed method against classical edge detection methods, as reported in Table 1 (data), Fig. 5 (graphical representation), and Fig. 6 (box-and-whisker plots). These performance indicators measure the effectiveness of the contour detection algorithm: Accuracy ($A$) for overall correctness, Precision ($P$) to measure the accuracy of identified contour pixels, Recall ($R$) to assess the algorithm's ability to capture all actual contour pixels, F1-score ($F1$) for a balanced measure of precision and recall, and Specificity ($S$) to evaluate the algorithm's capability to identify non-edge areas correctly [23]. They are defined as follows:

$$A = \frac{TP + TN}{TP + TN + FP + FN} \qquad (7)$$

$$P = \frac{TP}{TP + FP} \qquad (8)$$

$$R = \frac{TP}{TP + FN} \qquad (9)$$

$$F1 = \frac{2 \cdot P \cdot R}{P + R} \qquad (10)$$

$$S = \frac{TN}{TN + FP} \qquad (11)$$

Figure 5: The graphs present comparisons of the methods across 250 tests or data points.

Here $TP$ is the number of positive samples correctly predicted by the model, $FP$ the number of negative samples incorrectly predicted as positive, $TN$ the number of negative samples correctly predicted, and $FN$ the number of positive samples incorrectly predicted as negative. Edge detection algorithms are usually evaluated based on accuracy, but the complexity of natural image backgrounds and textures makes strict pixel-level matching challenging. To address this, our algorithm uses a tolerance window for evaluating contour detection effectively: a predicted edge pixel counts toward $TP$ if it matches a 3×3 pixel area around a point in the real edge map, enhancing the evaluation process. Higher values of $S$, $A$, $P$, $R$, and $F1$ indicate better edge detection performance; as the comparison shows, our results outperform the Canny and Sobel detectors in detecting contours in natural images.
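The following sketch shows one plausible implementation of these counts and metrics, using morphological dilation to realize the 3×3 tolerance window; the exact matching protocol of the paper may differ.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def tolerant_metrics(pred, gt):
    """Confusion counts with a 3x3 tolerance window: a predicted edge
    pixel is a TP if a ground-truth edge lies within its 3x3
    neighbourhood (one plausible reading of the protocol)."""
    gt_bool = gt.astype(bool)
    gt_tol = binary_dilation(gt_bool, structure=np.ones((3, 3), bool))
    pred = pred.astype(bool)
    tp = np.sum(pred & gt_tol)
    fp = np.sum(pred & ~gt_tol)
    fn = np.sum(~pred & gt_bool)
    tn = np.sum(~pred & ~gt_bool)
    P = tp / (tp + fp)
    R = tp / (tp + fn)
    return {
        "A": (tp + tn) / (tp + tn + fp + fn),
        "P": P,
        "R": R,
        "F1": 2 * P * R / (P + R),
        "S": tn / (tn + fp),
    }
```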

    2. Experimental results

Comparing against the Canny and Sobel detectors on detecting accurate contours, suppressing texture and noise, and robust efficiency, the results show that the overall performance of our experiment is close to the ground truth image. Fig. 4 shows a comparison of the original image, the results obtained from the two classical detectors, the result produced by our algorithm, and the ground truth image. We evaluated various benchmark natural images from the BIPED dataset, and the results show that our algorithm outperforms the methods compared against it. While presenting individual results for each image is impractical, Table 1 provides a representative indication. The graphs of the evaluation indicators showcase the stability, robustness, and generally competitive Specificity and Accuracy of our method. Its performance fluctuates in terms of Recall, occasionally falling short of and in other instances surpassing both the Sobel and Canny methods; in the Recall metric, our method performs slightly better than Sobel and Canny at certain indices, though not consistently across the entire range. Finally, as mentioned earlier, F1 represents the harmonic mean of precision and recall, with a higher F1 score indicating a balanced performance between the two, which is typically preferred; our method exhibits performance comparable to the other two methods in this regard.


Figure 6: Box-and-whisker plots present the comparisons of the results of the methods across multiple data points. (a) Specificity, (b) Accuracy, (c) Precision, (d) Recall, and (e) F1.

In the box-and-whisker plots, the yellow boxes represent the performance of our proposed method, the first blue box (left side) represents Sobel, and the second blue box (right side) represents Canny. The horizontal lines at the upper and lower ends of each box-and-whisker diagram represent the minimum and maximum values of the dataset, the middle red line represents the median, and any points falling beyond the whiskers are considered outliers.
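For illustration, a grouped box plot with this color convention could be produced as follows; the score lists are placeholders, not the paper's per-image data over the 250 BIPED test images.

```python
import matplotlib.pyplot as plt

# Placeholder F1 scores; the real plots use per-image metric values.
sobel = [0.3651, 0.4217, 0.3618]
ours = [0.4343, 0.4377, 0.4116]
canny = [0.2408, 0.3219, 0.2443]

fig, ax = plt.subplots()
boxes = ax.boxplot([sobel, ours, canny], patch_artist=True,
                   medianprops={"color": "red"})
# Color convention from the text: blue (Sobel), yellow (ours), blue (Canny).
for patch, color in zip(boxes["boxes"], ["tab:blue", "yellow", "tab:blue"]):
    patch.set_facecolor(color)
ax.set_xticklabels(["Sobel", "Our", "Canny"])
ax.set_ylabel("F1")
plt.show()
```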

Table 1: Performance comparison between the Sobel, the Canny, and the proposed approach

| Image   | Indicator | Sobel  | Canny  | Our    |
|---------|-----------|--------|--------|--------|
| Image 1 | S         | 0.9116 | 0.9149 | 0.9241 |
| Image 1 | A         | 0.9047 | 0.9020 | 0.9163 |
| Image 1 | P         | 0.2436 | 0.1585 | 0.3068 |
| Image 1 | R         | 0.7290 | 0.4999 | 0.7430 |
| Image 1 | F1        | 0.3651 | 0.2408 | 0.4343 |
| Image 2 | S         | 0.9085 | 0.9017 | 0.9120 |
| Image 2 | A         | 0.8913 | 0.8790 | 0.8923 |
| Image 2 | P         | 0.3161 | 0.2365 | 0.3379 |
| Image 2 | R         | 0.6334 | 0.5035 | 0.6210 |
| Image 2 | F1        | 0.4217 | 0.3219 | 0.4377 |
| Image 3 | S         | 0.8765 | 0.8635 | 0.9185 |
| Image 3 | A         | 0.8668 | 0.8478 | 0.9029 |
| Image 3 | P         | 0.2443 | 0.1591 | 0.3057 |
| Image 3 | R         | 0.6974 | 0.5262 | 0.6296 |
| Image 3 | F1        | 0.3618 | 0.2443 | 0.4116 |

The mathematical formulation utilized for the evaluation is defined as follows:

$$MSE = \frac{1}{m \cdot n} \sum_{i=1}^{m} \sum_{j=1}^{n} \left[I(i,j) - K(i,j)\right]^2 \qquad (12)$$

$$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_I^2}{MSE}\right) \qquad (13)$$

$$SSIM(x,y) = [l(x,y)]^{\alpha} \cdot [c(x,y)]^{\beta} \cdot [s(x,y)]^{\gamma} \qquad (14)$$

where $m$ and $n$ indicate the size, i.e., the number of rows and columns in the input images $I$ and $K$, and $MAX_I$ represents the maximum possible pixel intensity value of the image.
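Equations (12) and (13) translate directly into code; the short sketch below assumes 8-bit images, so that $MAX_I = 255$.

```python
import numpy as np

def mse(I: np.ndarray, K: np.ndarray) -> float:
    """Equation (12): mean squared error between images I and K."""
    diff = I.astype(np.float64) - K.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(I: np.ndarray, K: np.ndarray, max_i: float = 255.0) -> float:
    """Equation (13); max_i = 255 assumes 8-bit images."""
    e = mse(I, K)
    return float("inf") if e == 0 else 10.0 * np.log10(max_i ** 2 / e)
```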

In addition to the quantitative experimental results presented in Table 1, we conduct a comparative analysis of our algorithm's performance against the algorithms from [24], as shown in Table 2.

Table 2: Comparison results between our proposed approach and the algorithms from [24]

| Methods        | PSNR   | SSIM   | MSE    |
|----------------|--------|--------|--------|
| Sobel          | 12.876 | 0.6649 | 3740.9 |
| Prewitt        | 13.230 | 0.6499 | 3171.7 |
| Roberts        | 13.487 | 0.6621 | 2997.6 |
| Canny          | 9.1863 | 0.3731 | 7898.9 |
| Type-1 Fuzzy   | 10.442 | 0.5267 | 5947.7 |
| Type-2 Fuzzy   | 12.601 | 0.5899 | 3620.1 |
| Hybrid-1 Fuzzy | 13.154 | 0.6310 | 3222.7 |
| Hybrid-2 Fuzzy | 9.2007 | 0.4870 | 7897.6 |
| Our            | 28.135 | 0.6768 | 100.05 |

This study presents a comprehensive analysis of both classical methodologies (namely Sobel, Prewitt, Roberts, and Canny) and fuzzy-logic-based systems (including type-1, type-2, hybrid-1, and hybrid-2), utilizing metrics such as Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM) to evaluate the simulation outcomes. MSE quantifies the total squared error between the original and processed images, PSNR offers insight into the peak error, and SSIM addresses limitations inherent in MSE by providing a more nuanced assessment of image similarity. Our experimentation employs the BIPED dataset, a choice made to facilitate direct comparison and substantiate the superior performance of our algorithm in contour detection within natural images. It is worth noting that higher PSNR values and lower MSE values signify better results; hence, Table 2 showcases the substantial margin by which our algorithm surpasses the results obtained by the comparative study, particularly evident in the metrics of PSNR and MSE.

  5. CONCLUSION

This paper presents an algorithm to enhance contour detection in natural images by integrating the retinal layer structure and the differential signal principle with adaptive thresholding approaches. Our algorithm can suppress insignificant textures and noise in the image background, which might otherwise lead to false edges in traditional methods such as the Canny and Sobel detectors. The experimental results show that the proposed method performs better and offers accurate and robust contour extraction compared not only with the Sobel and Canny detectors but also with other algorithms such as Type-1, Type-2, Hybrid-1, and Hybrid-2 Fuzzy. The experimental evaluation shows the efficacy of the method in achieving high levels of precision and F1-measure, outperforming existing techniques on various metrics, and demonstrates robust results in specificity and accuracy; this means that the primary objective of our experiment, contour extraction, is achieved. Nevertheless, further refinements are necessary to eliminate or significantly reduce unwanted textures, which would align the results more closely with the ground truth images and enhance the overall accuracy of the contour extraction process.

REFERENCES

  1. X. Y. Gong, H. Su, D. Xu, Z. T. Zhang, F. Shen, and H. Bin Yang, "An Overview of Contour Detection Approaches," Int. J. Autom. Comput., vol. 15, no. 6, pp. 656–672, 2018, doi: 10.1007/s11633-018-1117-z.

  2. G. Papari and N. Petkov, "Edge and line oriented contour detection: State of the art," Image Vis. Comput., vol. 29, no. 2–3, pp. 79–103, 2011, doi: 10.1016/j.imavis.2010.08.009.

  3. A. P. Kelm, V. S. Rao, and U. Zölzer, "Object Contour and Edge Detection with RefineContourNet," Lect. Notes Comput. Sci., vol. 11678 LNCS, pp. 246–258, 2019, doi: 10.1007/978-3-030-29888-3_20.

  4. S. Krishnan Nair et al., "RETRACTED ARTICLE: Prewitt Logistic Deep Recurrent Neural Learning for Face Log Detection by Extracting Features from Images," Arab. J. Sci. Eng., vol. 48, no. 2, p. 2589, 2023, doi: 10.1007/s13369-021-05609-4.

  5. C. Lyu, Y. Chen, A. Alimasi, Y. Liu, X. Wang, and J. Jin, "Seeing the Vibration: Visual-Based Detection of Low Frequency Vibration Environment Pollution," IEEE Sens. J., vol. 21, no. 8, pp. 10073–10081, 2021, doi: 10.1109/JSEN.2021.3059110.

  6. R. V. Patil and Y. P. Reddy, "An Autonomous Technique for Multi Class Weld Imperfections Detection and Classification by Support Vector Machine," vol. 40, no. 3, 2021, doi: 10.1007/s10921-021-00801-w.

  7. K. Muntarina, S. B. Shorif, and M. S. Uddin, "Notes on edge detection approaches," Evol. Syst., vol. 13, no. 1, pp. 169–182, 2022, doi: 10.1007/s12530-021-09371-8.

  8. K. Rouis, P. Gomez-Kramer, and M. Coustaty, "Local Geometry Analysis for Image Tampering Detection," Proc. IEEE Int. Conf. Image Process. (ICIP), pp. 2551–2555, 2020, doi: 10.1109/ICIP40778.2020.9190762.

  9. Z. Dorrani, H. Farsi, and S. Mohamadzadeh, "Image edge detection with fuzzy ant colony optimization algorithm," Int. J. Eng. Trans. C Asp., vol. 33, no. 12, pp. 2464–2470, 2020, doi: 10.5829/ije.2020.33.12c.05.

  10. L. Fang, T. Qiu, H. Zhao, and F. Lv, "A hybrid active contour model based on global and local information for medical image segmentation," Multidimens. Syst. Signal Process., vol. 30, no. 2, pp. 689–703, 2019, doi: 10.1007/s11045-018-0578-0.

  11. B. Wu and Y. Yang, "Local- and global-statistics-based active contour model for image segmentation," Math. Probl. Eng., vol. 2012, 2012, doi: 10.1155/2012/791958.

  12. A. Barnawi, P. Chhikara, R. Tekchandani, N. Kumar, and B. Alzahrani, "Artificial intelligence-enabled Internet of Things-based system for COVID-19 screening using aerial thermal imaging," Futur. Gener. Comput. Syst., vol. 124, pp. 119–132, 2021, doi: 10.1016/j.future.2021.05.019.

  13. X. Liu, J. Zheng, and L. Wang, "Research on contour detection model based on primary visual path response mechanism," ACM Int. Conf. Proceeding Ser., pp. 1–6, 2021, doi: 10.1145/3508546.3508563.

  14. J. Xu and S. Yue, "by Using the Improved Short Path Finding," pp. 145–151, 2012.

  15. Y. Ding, H. Shi, S. Song, Y. Wang, and Y. Li, "Perceptual learning in contour detection transfer across changes in contour path and orientation," Mar. 2024, doi: 10.48550/arXiv.2403.11516.

  16. D. R. Martin, C. C. Fowlkes, and J. Malik, "Learning to detect natural image boundaries using local brightness, color, and texture cues," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 5, pp. 530–549, 2004, doi: 10.1109/TPAMI.2004.1273918.

  17. M. Hagara and P. Kubinec, "About edge detection in digital images," Radioengineering, vol. 27, no. 4, pp. 919–929, 2018, doi: 10.13164/re.2018.0919.

  18. A. Azeroual and K. Afdel, "Fast Image Edge Detection based on Faber Schauder Wavelet and Otsu Threshold," Heliyon, vol. 3, no. 12, pp. 1–19, 2017, doi: 10.1016/j.heliyon.2017.e00485.

  19. D. W. Pfaff and N. D. Volkow, Eds., Neuroscience in the 21st Century: From Basic to Clinical, 2nd ed., 2016, doi: 10.1007/978-1-4939-3474-4.

  20. Q. Hao et al., "Retina-like imaging and its applications: A brief review," Appl. Sci., vol. 11, no. 15, pp. 1–20, 2021, doi: 10.3390/app11157058.

  21. J. H. Medina, "Neural, cellular and molecular mechanisms of active forgetting," Front. Syst. Neurosci., vol. 12, pp. 1–10, 2018, doi: 10.3389/fnsys.2018.00003.

  22. V. P. Boyun, L. O. Voznenko, and I. F. Malkush, "Principles of Organization of the Human Eye Retina and Their Use in Computer Vision Systems," Cybern. Syst. Anal., vol. 55, no. 5, pp. 701–713, 2019, doi: 10.1007/s10559-019-00181-0.

  23. D. M. W. Powers, "Evaluation: from precision, recall and F-measure to ROC, informedness, markedness, and correlation," pp. 37–63, 2020. [Online]. Available: http://arxiv.org/abs/2010.16061

  24. G. Özdemir, "Comparison of Classical and Fuzzy Edge Detection Methods," Konya J. Eng. Sci., pp. 177–191, 2024, doi: 10.36306/konjes.1116833.