Adaptive PM for Earlier Detection of Errors on OTN and ASON Connections using Data Mining

DOI : 10.17577/IJERTV10IS040004


Sandeep Dabhade, Sumit Kumar, Anil Lancy, Shishir Kumar, Bhanuprakash S,

Rajitha V, Santosh Narayankhedkar

Abstract — Large optical networks contain a very large number of circuits, connections, and rings, provisioned to meet the growing demand for high-speed connectivity with service assurance. Service assurance requires that we measure and prove the quality of service of these networks, which reflects the reliability of the service the operator provides to its users. With the rapid growth in consumers, the optical network, which acts as the backbone of most modern high-speed technologies, is in great demand. Performance Monitoring (PM) provides a measure of the quality of service of such large optical networks, and because of its wide utility it must be done extensively. This paper describes how early detection of errors on OTN and ASON connections can be used to predict connections at high risk, giving the user early warning about those connections and ample time for correction. Performance Monitoring testing of large optical networks is done in both control-plane and managed-plane scenarios. Our test bench has more than five thousand network elements and more than three lakh (300,000) paths. Nodes are added to the Network Management System, on which network connections are provisioned through automation. We have successfully implemented and tested adaptive PM for earlier detection of errors on OTN and ASON connections using data mining in our lab.

Keywords: PM, Bin, Counters, Granularity, Termination Point

  1. INTRODUCTION

    Performance Monitoring is an important aspect of optical networking: it is the measure of service assurance. Signal quality is as important as the communication itself, and customers need service assurance, which is effectively a guarantee of service. So what is PM?

    Performance Monitoring (PM) is the ability to perform low-level quality monitoring in the network by counting certain parameters (e.g. the number of errors).

    Why PM?

    1. Service Level Agreement (SLA) monitoring

      To verify that the service level provided by the network is consistent with the agreement with the customer.

    2. Fault localization

      To find low-level faults in the network.

  2. PM TERMINOLOGY

    Generally we measure certain events, called counters, for a fixed period of time, called the granularity, and store the results internally for some time. Traditionally, two granularity periods are used in transport networks:

    15 min — used for fault localization
    24 hour — used for SLAs

    Other granularities: 1 hour, immediate.

    The management system can collect this data across bins, called historical PM, and store it internally for a certain period.

    The management system can also request the current value of a counter from a network element; this is called current PM. The end user can generate reports to visualize and access this data.
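    As a rough illustration of this binning, the following sketch (hypothetical class names, not a real NMS API) keeps one open "current PM" bin per counter and granularity, and moves closed bins into historical PM:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch: one PM bin per counter per granularity.
@dataclass
class PMBin:
    counter: str            # e.g. "ES", "BBE"
    granularity: timedelta  # e.g. 15 minutes or 24 hours
    start: datetime
    value: int = 0

class PMStore:
    """Keeps historical bins; 'current PM' is the still-open bin."""
    def __init__(self):
        self.history: list[PMBin] = []
        self.current: dict[str, PMBin] = {}

    def record(self, counter: str, ts: datetime, increment: int,
               granularity: timedelta = timedelta(minutes=15)):
        # Align the timestamp to the start of its granularity bin.
        epoch = datetime(1970, 1, 1)
        start = ts - ((ts - epoch) % granularity)
        key = f"{counter}/{granularity}"
        cur = self.current.get(key)
        if cur is None or cur.start != start:
            if cur is not None:
                self.history.append(cur)  # close old bin -> historical PM
            cur = PMBin(counter, granularity, start)
            self.current[key] = cur
        cur.value += increment

store = PMStore()
store.record("ES", datetime(2021, 4, 1, 10, 7), 2)
store.record("ES", datetime(2021, 4, 1, 10, 20), 1)  # crosses a 15-min boundary
print(len(store.history), store.current["ES/0:15:00"].value)  # -> 1 1
```

    The second sample falls in the next 15-minute bin, so the first bin is closed into history while the new bin becomes the current PM value.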

    Monitoring can be divided into a number of different types:

    Digital PM

    Traditional SDH/SONET PM, which counts the number of errors in a received signal.

    • Based on the ITU-T standard G.826.

    • Can be performed at many layers (RS, MS, VC4, VC12, etc.).

    • Counters include Background Block Errors (BBE), Errored Seconds (ES), etc.

    • Generally, 0 means the signal is working OK, and a non-zero value indicates errors.

    (G.826 – End-to-end error performance parameters and objectives for international, constant bit-rate digital paths and connections)

    Figure 1: Digital PM Counter on 15min interval
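    A minimal sketch of this style of G.826 counting from per-second errored-block counts (the block rate and the 30% SES threshold here are illustrative defaults, not taken from a specific signal type):

```python
# Sketch of G.826-style digital PM counting. `blocks_per_second` is an
# assumed fixed block rate for the monitored layer.
def g826_counters(errored_blocks_per_sec, blocks_per_second=8000):
    es = ses = bbe = 0
    for eb in errored_blocks_per_sec:
        if eb > 0:
            es += 1                          # Errored Second: >= 1 errored block
        if eb >= 0.30 * blocks_per_second:
            ses += 1                         # Severely Errored Second: >= 30% errored
        else:
            bbe += eb                        # BBE excludes blocks inside an SES
    return {"ES": es, "SES": ses, "BBE": bbe}

# Three clean seconds, one lightly errored second, one severe second.
print(g826_counters([0, 0, 0, 5, 2500], blocks_per_second=8000))
# -> {'ES': 2, 'SES': 1, 'BBE': 5}
```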

    Analogue PM

    • Used for monitoring changes in analogue data e.g. laser power level, laser temperature

    • The values aren't counted across the interval; instead, the value is polled at certain points in time.

    • Analogue PM is usually used for:

    • Physical layer monitoring in SDH/SONET NEs

    • Optical channel level monitoring in WDM equipment (pre-OTN)

    • The data is usually non-zero, and the customer looks for changes in the values to identify upcoming failures.

      Figure 2: Analog PM Counter on 15min interval

      Ethernet PM

    • Mechanisms for monitoring the performance of Ethernet networks

    • Like digital PM, it is counted, e.g. bytes transmitted, packets dropped.

    • Can be performed both at a port level and at a flow level.

    • There are no general rules for good or bad values; it depends both on the counter type and on the customer.

    For instance, a high transmitted-byte count for a certain customer may mean they have sent too much data that day.

    Figure 3: Ethernet PM Counter on 15min interval

  3. CONNECTION AND PM COUNTERS

    Consider Figure 4 below:

    Figure 4: Enable PM for End-to-End Connection

    In Figure 4 above, we can consider the following four PM TPs (Performance Monitoring Termination Points):

    NODE A : NEND : TRANSMIT : 15-MIN
    NODE A : NEND : RECEIVE : 15-MIN
    NODE B : FEND : RECEIVE : 15-MIN
    NODE B : FEND : TRANSMIT : 15-MIN

    Another four PM TPs exist for a different granularity, e.g. 1-Day:

    NODE A : NEND : TRANSMIT : 1-DAY
    NODE A : NEND : RECEIVE : 1-DAY
    NODE B : FEND : RECEIVE : 1-DAY
    NODE B : FEND : TRANSMIT : 1-DAY
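    These TP combinations can be enumerated programmatically; a minimal sketch (the function name and label format are illustrative, not the NMS API):

```python
from itertools import product

# Hypothetical helper that enumerates the PM termination points to enable
# for one end-to-end connection: node/end x direction x granularity.
def pm_termination_points():
    ends = [("NODE A", "NEND"), ("NODE B", "FEND")]
    directions = ["TRANSMIT", "RECEIVE"]
    granularities = ["15-MIN", "1-DAY"]
    return [f"{node} : {end} : {d} : {g}"
            for (node, end), d, g in product(ends, directions, granularities)]

tps = pm_termination_points()
print(len(tps))  # 2 ends x 2 directions x 2 granularities = 8 PM TPs
```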

    In case of a multi-hop connection, PM can be enabled at all the points: (Figure 5)

    Figure 5: Enable PM for All-Points on the Connection

    Figure 6: OTS with 15min and 24h PM enabled

    Figure 7: Routing Display shows 15min and 24h PM enabled

  4. FUNCTIONAL OVERVIEW

    A typical Performance Monitoring system performs the following functions:

    - Manages requests toward the network to activate/deactivate PM

    - Manages storage of historical PM data collected via the adapters

    - Manages historical PM data visualization and reports

    - Manages historical data archiving

    - Periodically generates PM reports to be exported to external OSS

    - Manages Threshold Crossing Alerts (TCA)

  5. TEST PROCEDURE

    Capture PM files from a real setup and use them to mimic PM files.

    Write scripts to generate PM files, taking inputs such as the details of the NE for which PM files need to be generated.

    Different types of PM data have different file structures, so separate scripts are needed to generate the PM files for Analog/ETH/OCS/SDH, etc.

    The 15-min and 24-hour PM granularities have different values in the PM files.

    Hence, care must be taken about which script to use for which data type and which granularity.

    For 15-min granularity, PM files should be generated every 15 minutes; for 1-day granularity, PM files should be generated only once every 24 hours.
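    This generation schedule can be sketched as follows (the file names and the last-run map are illustrative, not the actual test scripts):

```python
from datetime import datetime, timedelta

# Sketch of the schedule described above: 15-min PM files every 15 minutes,
# 1-day PM files once per 24 hours.
def due_pm_files(last_run: dict, now: datetime):
    periods = {"15MIN": timedelta(minutes=15), "1DAY": timedelta(hours=24)}
    due = []
    for gran, period in periods.items():
        if now - last_run.get(gran, datetime.min) >= period:
            due.append(f"PM_CSV_Report_{gran}_{now:%Y%m%d%H%M}.csv")
            last_run[gran] = now
    return due

last = {}
t0 = datetime(2021, 4, 1, 0, 0)
print(due_pm_files(last, t0))                          # both files due initially
print(due_pm_files(last, t0 + timedelta(minutes=15)))  # only the 15-min file due
```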

    Measure the performance monitoring of the large optical network by simulating large PM data sets.

    We have tested with large PM counter sets in our lab. The test bench has approximately 415K CSV records, and the average CSV report generation time is 50-60 seconds.

    Avg load    Avg CPU utilized    Avg memory used
    ~20-25      ~50-60%             ~80 GB

    Table 1: Average load with large PM processing

    S.N.  File name                           No. of records  CSV generation time
    1     PM_CSV_Report_15MIN_0_N00000..csv   413862          37599 ms
    2     PM_CSV_Report_5MIN_1_N00000.csv     414610          42327 ms
    3     PM_CSV_Report_15MIN_2_N00000.csv    414746          36993 ms
    4     PM_CSV_Report_15MIN_3_N00000.csv    414929          37355 ms
    5     PM_CSV_Report_15MIN_4_N00000.csv    414677          44098 ms
    6     PM_CSV_Report_15MIN_5_N00000.csv    415164          56475 ms
    7     PM_CSV_Report_15MIN_6_N00000.csv    414325          35316 ms
    8     PM_CSV_Report_15MIN_7_N00000.csv    414569          40214 ms
    9     PM_CSV_Report_15MIN_8_N00000.csv    415197          47461 ms
    10    PM_CSV_Report_15MIN_9_N00000.csv    415407          45178 ms

    Table 2: PM CSV report generation time (sample)

    According to the ITU-T standard G.826, whenever consecutive Severely Errored Second (SES) counts are detected on a particular connection, the connection may suffer a service impact.

    For example:

    If 30% or more of the blocks in a given second are errored, the SES count increases.

    If SES persists for T consecutive seconds, where 2 <= T < 10, it severely impacts the service, which may cause signal degradation and lead to disconnection of services.
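    This SES-persistence rule can be sketched as follows (the function name and risk labels are illustrative):

```python
# Sketch of the risk rule above: find the longest run of consecutive SES
# seconds; a run of T seconds with 2 <= T < 10 marks the connection as
# high risk, and 10 or more consecutive SES indicates unavailability.
def ses_risk(ses_per_second):
    longest = run = 0
    for s in ses_per_second:
        run = run + 1 if s else 0
        longest = max(longest, run)
    if longest >= 10:
        return "unavailable"
    if longest >= 2:
        return "high-risk"
    return "ok"

print(ses_risk([0, 1, 1, 1, 0, 0]))  # 3 consecutive SES -> high-risk
```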

    Hence, in our approach we monitor the SES counter values of OTN and ASON connections to predict the connections at high risk.

    With the help of data mining techniques, we correlated different PM databases, sorted the connections with high SES values, and presented the data in tabular form as below:

    Table 3: Displays connections with high SES counters

    Due to their high SES counter values, these connections are the most vulnerable and need immediate attention.
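    The correlation and sorting step can be illustrated with toy data; the table layouts, names, and the SES threshold of 2 below are assumptions for illustration, not the actual database schema:

```python
# Illustrative correlation of a PM counter table with a connection table
# by termination point, then sorting by SES to surface high-risk
# connections, mirroring the data-mining step described above.
pm_counters = [
    {"tp": "NODE-A/OTU2-1", "SES": 14},
    {"tp": "NODE-B/OTU2-7", "SES": 2},
    {"tp": "NODE-C/OTU2-3", "SES": 9},
]
connections = {
    "NODE-A/OTU2-1": "CONN-ALPHA",
    "NODE-B/OTU2-7": "CONN-BETA",
    "NODE-C/OTU2-3": "CONN-GAMMA",
}

high_risk = sorted(
    ({"connection": connections[row["tp"]], "SES": row["SES"]}
     for row in pm_counters if row["SES"] >= 2),   # assumed risk threshold
    key=lambda r: r["SES"], reverse=True,
)
for row in high_risk:
    print(row["connection"], row["SES"])
```

    The output lists the connections in descending order of SES, which is the form of the risk table presented to the user.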

  6. CONCLUSIONS

    Performance Monitoring testing of a large optical network is achieved with a like-real transport system by mimicking real data, which is captured and emulated on thousands of network elements to test large data sets.

    It is valuable to test performance monitoring on a large optical network involving thousands of nodes and lakhs (hundreds of thousands) of connections, since service assurance is essential at that scale.

    This performance monitoring test on a large optical network produced satisfactory results.

    We have detected high-risk connections by fetching connections based on SES counter values and timestamps. This is achieved through data mining of the different DB tables and correlating them by connection name.

  7. REFERENCES

  1. ITU-T Recommendation G.826: End-to-end error performance parameters and objectives for international, constant bit-rate digital paths and connections. https://www.itu.int/rec/T-REC-G.826/en

  2. S. Chen, T. Anderson, D. Hewitt, A. V. Tran, C. Zhu, L. B. Du, A. J. Lowery, and E. Skafidas, "Optical performance monitoring for OFDM using low bandwidth coherent receivers," Optics Express, vol. 20, p. 28724, 2012.

  3. M. Islam and S. Majumder, "Optical and higher layer performance monitoring in photonic networks: Progress and challenges," International Conference on Advanced Communication Technology (ICACT), vol. 3, pp. 1591-1596, 2009.

  4. C. Lu et al., "Optical performance monitoring techniques for high capacity optical networks," 2010 7th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP 2010), Newcastle upon Tyne, 2010, pp. 678-681.

  5. C. Lu and A. P. T. Lau, "Optical performance monitoring in fiber-optic networks enabled by machine learning techniques," OFC 2018, paper M2F.3. doi: 10.1364/OFC.2018.M2F.3

  6. F. N. Khan et al., "Machine learning-assisted optical performance monitoring in fiber-optic networks," 2018 IEEE Photonics Society Summer Topical Meeting Series (SUM), 2018, pp. 53-54.

  7. A. Napoli et al., "Next generation elastic optical networks: The vision of the European research project IDEALIST," IEEE Communications Magazine, vol. 53, no. 2, pp. 152-162, Feb. 2015. doi: 10.1109/MCOM.2015.7045404

  8. G. Wu, F. Su, X. Li, W. Zou, and J. Chen, "Real-time Ethernet based on passive optical networks," Optical Engineering, vol. 52, no. 2, 025007, Feb. 2013.

  9. Z. Wang, M. Zhang, D. Wang, C. Song, M. Liu, J. Li, L. Lou, and Z. Liu, "Failure prediction using machine learning and time series in optical network," Optics Express, vol. 25, pp. 18553-18565, 2017.
