Prediction of Telemetry Data using Machine Learning Techniques

DOI : 10.17577/IJERTV11IS090048


Rinkal Jain a, Minal Rohit a, Anand Kumar a, Ayush Bakliwal a, Dr. Ashwinkumar Makwana b, Mrugendra Rahevar b

a Sensors Development Area, Space Applications Centre (ISRO), Ahmedabad, India

b U & P.U. Patel Department of Computer Engineering, CSPIT, CHARUSAT

Abstract:- Satellites are categorized as highly complex remotely operated systems because of the many connected pieces of equipment aboard. Satellites produce a large volume of telemetry data, which operators and designers must manage to fully control their mode of operation. A high amount of telemetry data must be analysed to monitor and regulate the health of subsystems and to make decisions quickly. Telemetry parameters are also evaluated to ensure the spacecraft's performance. As several researchers have shown, predictions of telemetry parameters can be made using Telemetry Mining (TM) and Machine Learning (ML) approaches. Telemetry processing makes data visualization easier, allowing operators to better understand the satellite's behaviour and reduce the chance of failure. In this research, we compare different machine learning techniques used to mine satellite telemetry data. The prediction accuracy is calculated using mean error and correlation coefficient, and the techniques are compared. In addition, we used satellite telemetry data that revealed a parameter anomaly. This study discusses telemetry data processing methods such as XGBoost (Extreme Gradient Boosting), Recurrent Neural Network (RNN), Auto-Regressive Integrated Moving Average (ARIMA), Support Vector Regression (SVR), Multi-Layer Perceptron (MLP), and Long Short-Term Memory Recurrent Neural Network (LSTM RNN).

Keywords: Machine Learning, Telemetry Data Mining, Neural Networks, Satellite.


    Space structures are vehicles and infrastructure operating collectively to carry out a task in the space environment. We rely upon space structures each day for communication, navigation, and climate prediction services. Space structures enhance our knowledge of the physical universe via celestial observation and planetary exploration. Space structures additionally offer intelligence and surveillance and are essential for national defense [1]. Space structures are expensive because of the cost of manufacture and launch into space. Such structures are expected to serve long-term, and their degradation is typically caused by the gradual breakdown of device components and equipment. It is preferred for a spacecraft's onboard fault analysis system to detect, isolate, diagnose, or classify defects inside the equipment that arise due to unavoidable conditions [2].

    Safety and reliability are the most significant characteristics of any space mission. The advancement of new anomaly detection and fault analysis techniques, updated with current statistical technology including artificial intelligence, is essential for the safe functioning of large-scale complex space structures. Therefore, anomaly identification and analysis strategies based on a-priori expert knowledge and deductive reasoning systems, including expert systems and model-based reasoning, were investigated for artificial intelligence. While knowledge-intensive strategies have been shown to outperform the traditional limit-checking approach, putting together the knowledge base or model they require is sometimes costly and time-consuming [3]. The authors analyzed the potential utility of telemetry data kept in ground stations and investigated primary Data Mining (DM) and Machine Learning (ML) approaches for spacecraft anomaly detection and fault analysis. These approaches use a vast quantity of historical telemetry kept in the ground control station as training data to properly update parameter values included in diagnosis models provided by experts, or to automatically learn the rules, patterns, and models related to spacecraft systems. Then, by reviewing anomalies using real online telemetry, they utilize the updated or learned patterns or policies to discover and identify irregularities.

    As a result, these techniques assist the earth station operator in detecting abnormal behaviour of the satellite at an early stage. The standard procedure for dealing with irregularities is to look into past telemetry records, which will reveal the cause of the problem. The issue of defect identification, detection, and isolation has been investigated by several researchers [4] [5] [6]. Neural networks [7], Principal Component Analysis (PCA) [8], Support Vector Machine (SVM) [9], Kalman filters (KF) [10], Parameter Estimation [11], and Expert Systems [12] are among the methods reviewed by Marzat et al. [4] for aerospace systems.

    This research investigates Machine Learning (ML) and Data Mining (DM) methodologies that may be used to investigate the spacecraft's overall performance. To know the health of a particular onboard unit, DM is used to assess the telemetry's overall performance level. It enables satellite operators to examine the entire health of their satellites and avoid the risk of undetected failure. Using the satellite's telemetry data to analyze its state is a feasible approach to monitoring its health. With the prediction of telemetry characteristics, the operator may more quickly identify the capacity or operating mode of the satellite, which can improve decision-making in emergencies. This is a critical issue, since an emergency might result in the satellite's total loss. Owing to the characteristics of satellite operation, numerical satellite telemetry parameters are typically formatted as a time series. Time series regression of satellite telemetry parameters can reflect the telemetry parameter's value change trend, which may indicate an impending satellite subsystem failure. Such trends will trigger an alarm in the case of a severe failure. Another straightforward method is to forecast the future values of one parameter and perform a limit check to detect likely failure. When the expected value exceeds the error margin specified by satellite operators/designers, the relevant subsystem may immediately go into a defective mode, causing the satellite system to fail; the operator must take precautions to avoid this occurrence.

    The paper's outline is as follows: Section 1 introduces the concept of analyzing the overall quality of satellites. The satellite subsystems are then described in Section 2. Section 3 reviews the literature on machine learning approaches for various applications, followed by Section 4, which assesses individual algorithms. The layout of the telemetry data acquired from the dataset and the relevant correlation concept is explained in Section 5. The assessment process is then explained, illustrating the impact of applying the selected algorithms to telemetry data and outlining future work.


    1. Spacecraft

      A vehicle meant to fly in a structured flight pattern above earth's lower atmosphere, with or without crew, is known as a spacecraft. Even though early depictions of spaceflight often included streamlined vehicles, streamlining has no benefit in space. Depending on their use, actual vehicles are made from various designs. India launched its first unmanned spacecraft, Aryabhata, on April 19, 1975, and had intended to launch a manned spacecraft, Gaganyaan 1, with 2-3 astronauts in June 2022, after a long period. Several developments to enhance unmanned vehicles have been introduced to improve scientific knowledge, boost national defense, and provide essential services in industries such as telecommunications and weather forecasting [13]. Almost all spacecraft depend on a launch vehicle to generate initial velocity; the launch vehicle then separates from the spacecraft when its mission is completed. If enough velocity is given to a spacecraft, it will escape Earth's gravity and is often placed into an orbit around Earth or will move towards another location in space. Examples of such spacecraft are Chandrayaan-1, Chandrayaan-2, and the Mars Orbiter Mission. Spacecraft usually carry small rocket engines to move and adjust themselves in space. NASA's Apollo program's manned spacecraft landed on the Moon using rocket engines and then returned their crew to lunar orbit. The equipment carried by spacecraft needs an onboard electrical power supply to operate. Solar panels and batteries are integrated into spacecraft designed to stay in Earth's orbit for a longer duration. The shuttle orbiter's hydrogen-oxygen fuel cells would last one to two weeks in space.

    2. Telemetry

      Telemetry refers to the automated communication mechanisms that collect data from many sources. Customer experiences improve through the usage of telemetry data, which is also used to monitor security, application health, quality, and performance [14]. Telemetry is the process of monitoring data located at remote or unreachable origins and delivering it to a monitoring and analysis IT system at a given place [15]. Telemetry allows for dependable data collection and transmission to centralized systems, where the data may be used effectively. The emergence of big data technologies and techniques, which take massive volumes of relatively unstructured data and aggregate it in centralized systems, is part of the evolution of telemetry [16]. Any enterprise or government organization can benefit from the sophisticated and extensive reports and intelligence provided by these data-rich transit networks. Depending on the requirements, infrared, radio, GSM, ultrasonic, satellite, or cable can all be utilised to deliver telemetry data used in software development, medicine, intelligence, meteorology, and other sectors. In the software development environment, telemetry may provide insights into which features end-users utilise the most, uncover defects and problems, and increase visibility into performance without requiring direct input from users.

    3. Machine Learning

    Machine Learning is an artificial intelligence technique that allows software products to improve their predictive performance without being explicitly programmed. Machine learning algorithms utilize previous data as input to predict new output values [17]. Machine learning is commonly applied in recommender systems and is also frequently used in spam filtering, business process automation (BPA), malware threat detection, and predictive maintenance.

    Machine learning is essential because it helps companies discover patterns in consumer preferences and business operations while designing new solutions. Machine learning is used by some of the world's most well-known companies, like Facebook, Google, and Uber, and it has therefore become a significant differentiator for many businesses.


    Advanced strategies for forecasting satellite system performance and planning for quick decisions by monitoring and assessing satellite subsystem performance were recently presented by researchers. Many algorithms have been developed in this field to predict failure before it occurs using telemetry data collected from satellites. For example, for artificial satellites' housekeeping data, Yairi et al. [18] developed a health monitoring approach based on probabilistic clustering and dimensionality reduction. Furthermore, a supervised learning method for managing expected or unexpected behaviour of the spacecraft was introduced by Nassar and Hussein [19] to overcome defective states in space mission operations. One of the essential aspects of spacecraft health monitoring is fault detection; hence Yang et al. [20] presented DM approaches for in-orbit satellites.

    Different telemetry data mining methods are compared in this research. These algorithms include the Multilayer Perceptron (MLP), Long Short-Term Memory Recurrent Neural Network (LSTM RNN), Extreme Gradient Boosting (XGBoost), Auto-Regressive Integrated Moving Average (ARIMA) [21] [22], Support Vector Regression (SVR), and Recurrent Neural Network (RNN) [23] [24]. We chose these strategies after reviewing several previous studies [25] [26]. Many academics have based their findings on satellite telemetry accessible over the internet, which limits the confidence in their results [27], [28]. Our research used telemetry data with high confidence due to the availability of both design documents for each spacecraft module and the data format corresponding to the coverage ranges of each sensor.

    A. Limit Checking

    The most fundamental approach is limit checking, which works by selecting an appropriate range for the applied parameter, and it has been widely used in the past. If the value of any metric exceeds its expected range, we may immediately flag it. The main benefit of this method is its simplicity, as it allows limits to be set and modified to monitor spacecraft operation. In addition, limit checking can only be applied to one sensor value at a time, whereas evaluating the functioning of a spacecraft requires monitoring specific sensors simultaneously. This technique is therefore ineffective for analyzing telemetry data in depth [29] [30].
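As a minimal sketch of the limit-checking idea described above (the readings and thresholds below are invented for illustration, not taken from the paper's dataset):

```python
import numpy as np

# Hypothetical telemetry readings and operator-defined limits (illustrative values).
readings = np.array([21.3, 21.7, 22.1, 35.9, 21.9])  # e.g. a temperature channel
low, high = 15.0, 30.0

# Limit checking: flag every sample that falls outside the allowed range.
out_of_range = (readings < low) | (readings > high)
print(np.flatnonzero(out_of_range))  # indices of suspect samples
```

Note that the check is applied per channel: each sensor needs its own (low, high) pair, which is exactly the scaling limitation the text points out.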

    B. Expert System

    Because of its applications, artificial intelligence has expanded in popularity, and the Expert System (ES) is one of its most often used techniques. The system is built from a knowledge repository and a reasoning engine based on experience and understanding, which allows the ES to forecast issues from telemetry data. As a result, predefined knowledge rules must first be developed, which necessitates a full comprehension of the system's probable circumstances. This system's drawback is that it does not make use of the self-learning concept; therefore, an ES is unable to generate new knowledge [29] [30].


  1. Auto-Regressive Integrated Moving Average

    ARIMA is a statistical method used to analyse time-series data and forecast outcomes [31]. A statistical model that forecasts possible values based on earlier values is referred to as auto-regressive. For example, an ARIMA model may estimate a company's valuation based on recent periods or forecast a stock's future pricing based on prior performance. This model is a regression analysis that indicates the strength of one dependent variable in relation to other changing variables. The purpose of the model is to estimate future stock or financial market movements by looking at the variations between values in a sequence rather than the actual values. The ARIMA model components break down as follows: in an auto-regression (AR) model, values of a variable regress on their own historical values; differencing the observations to stabilize the time series is referred to as integrated (I); and the moving average (MA) model accounts for the correlation between an observation and a lagged error. In ARIMA, each component is represented by a parameter: ARIMA(p, d, q) is the typical model format, in which integer values replace the components to identify the kind of ARIMA model applied.
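The difference-then-regress idea can be sketched in plain NumPy for the simplest case, ARIMA(1, 1, 0): difference once (d=1), fit the single AR coefficient by least squares, and integrate forecasts back to the original scale. The synthetic series, step count, and absence of an intercept are assumptions for demonstration; this is not the paper's implementation.

```python
import numpy as np

# Minimal ARIMA(p=1, d=1, q=0) sketch on a synthetic trending series.
rng = np.random.default_rng(0)
y = np.cumsum(0.5 + 0.1 * rng.standard_normal(200))  # telemetry-like drifting signal

d = np.diff(y)  # I: first difference stabilizes the trend
# AR(1): regress each differenced value on its predecessor (least squares, no intercept).
phi = np.linalg.lstsq(d[:-1, None], d[1:], rcond=None)[0][0]

steps, last, level = 5, d[-1], y[-1]
forecast = []
for _ in range(steps):      # iterate the AR recursion, re-integrating each step
    last = phi * last       # next predicted difference
    level += last           # undo the differencing (cumulative sum)
    forecast.append(float(level))
print(forecast)
```

A production model would also estimate the MA(q) term and an intercept; libraries such as statsmodels handle the full ARIMA(p, d, q) fit.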

  2. Multilayer Perceptron

A multilayer perceptron is a neural network that links many layers in a directed graph, meaning that data travels in one direction between nodes. Every node applies a non-linear function, with the exception of the input nodes. An MLP uses backpropagation as a supervised learning approach. Since an MLP utilizes many layers of neurons, it is considered a deep learning technique. MLP is frequently used in research on parallel distributed processing, supervised learning problems, and computational neuroscience. Image recognition, speech recognition, and machine translation are examples of applications. There are three main levels: the input layer, the hidden layers, and the output layer. The input layer receives the signals to be processed. The output layer is in charge of activities such as prediction and categorization. The real processing power of the MLP is derived from the hidden layers that lie between the input and output layers [32]. Data is transferred from the input to the output layer of an MLP, just as in a feed-forward network. The MLP's neurons are trained using the backpropagation learning technique. MLPs are capable of estimating any continuous function, even for problems that are not linearly separable. Its principal applications are pattern approximation, classification, prediction, and recognition.
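To apply an MLP to a telemetry series as the paper does, the series must first be framed as supervised learning: a sliding window of past samples predicts the next sample. A minimal sketch, assuming scikit-learn is available (the sine stand-in series, window length, and layer size are illustrative choices, not the paper's settings):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for a periodic telemetry channel.
t = np.arange(300)
series = np.sin(0.1 * t)

# Sliding window: 4 past samples -> next sample.
w = 4
X = np.array([series[i:i + w] for i in range(len(series) - w)])
y = series[w:]

# One hidden layer of 16 neurons, trained by backpropagation.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:250], y[:250])   # train on the earlier windows
pred = model.predict(X[250:])  # predict the held-out tail
```

The chronological (not shuffled) split matters here: shuffling would leak future values into training windows.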

  1. Recurrent Neural Network

    RNNs are neural networks in which the output of one phase is used as the input for the following phase. In a traditional neural network, the inputs and outputs are independent of one another, but when predicting the next term of a phrase, the preceding words must be remembered [33]. The RNN was created to tackle this problem with the help of a hidden layer. The hidden layer, which is the most important element of an RNN, remembers specific details about a sequence. RNNs are a sort of neural network that is both strong and dependable, and they are now among the most popular algorithms because they are the only ones with an internal memory [34]. RNNs utilize their internal memory to recollect important facts about the input received, contributing to the prediction of future values and achieving high accuracy. As a result, the RNN is the algorithm of choice for time-series data, text, audio, weather, video, financial data, and other sorts of sequential data. Compared to other algorithms, RNNs have a superior grasp of sequence and context.
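The "internal memory" described above is just a hidden state that is fed back at every timestep. A single recurrent layer unrolled by hand makes this concrete (weights are random placeholders; the toy input sequence is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 1, 8
Wx = rng.standard_normal((n_hid, n_in)) * 0.3   # input-to-hidden weights
Wh = rng.standard_normal((n_hid, n_hid)) * 0.3  # hidden-to-hidden (the recurrence)
Wo = rng.standard_normal((1, n_hid)) * 0.3      # hidden-to-output weights

h = np.zeros(n_hid)                              # hidden state = internal memory
outputs = []
for x in [0.1, 0.5, 0.9, 0.5, 0.1]:              # toy telemetry sequence
    # Each update mixes the new input with the memory of all earlier inputs.
    h = np.tanh(Wx @ np.array([x]) + Wh @ h)
    outputs.append((Wo @ h).item())
print(outputs)
```

Because `h` is reused across steps, the output at each timestep depends on the whole prefix of the sequence, not only the current input, which is precisely what the limit-checking and feed-forward approaches lack.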

  2. Long Short-Term Memory Recurrent Neural Network

    LSTM is a more advanced form of RNN, developed to describe historical sequences and their long-range connections more precisely than basic RNNs [35]. The interior design of a basic LSTM cell, the variations offered in the LSTM architecture, and a few popular LSTM applications are in great demand currently. Hidden-layer neurons of an RNN are replaced with blocks of LSTM cells to create an LSTM. Each block features a memory unit that helps to solve the problem of vanishing gradients. The LSTM block comprises three multiplicative units called gates, each of which operates as a switch with values of 0 (off) and 1 (on) [36], [37] and is activated using the sigmoid activation function. According to Srivastava and Lessmann [38], a properly designed LSTM model beats competing global horizontal irradiance forecasting techniques utilizing satellite data.
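One LSTM cell step, written out in NumPy to show the three sigmoid gates and the memory cell the text describes. Weights are random placeholders and the dimensions are illustrative; a real model would learn them by backpropagation through time.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n_in, n_hid = 1, 4
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.3  # stacked weights for all gates
b = np.zeros(4 * n_hid)

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    f = sigmoid(z[0*n_hid:1*n_hid])   # forget gate: how much old memory to keep
    i = sigmoid(z[1*n_hid:2*n_hid])   # input gate: how much new content to store
    o = sigmoid(z[2*n_hid:3*n_hid])   # output gate: how much memory to expose
    g = np.tanh(z[3*n_hid:4*n_hid])   # candidate memory content
    c = f * c + i * g                 # memory cell update (additive -> gradients survive)
    h = o * np.tanh(c)                # new hidden state
    return h, c

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in [0.2, 0.8, 0.4]:             # toy input sequence
    h, c = lstm_step(np.array([x]), h, c)
```

The additive update of `c` is what mitigates vanishing gradients: error signals can flow through `c = f*c + ...` across many timesteps without repeated squashing.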

  3. Extreme Gradient Boosting

    XGBoost (Extreme Gradient Boosting) is an open-source library that implements the gradient boosting approach efficiently and effectively [39]. It was proposed as part of a research initiative at the University of Washington. Carlos Guestrin and Tianqi Chen's presentation at the SIGKDD Conference in 2016 inspired the machine learning sector. Boosting is an ensemble-based sequential approach [40]. To improve prediction accuracy, it brings together a group of weak learners. At any particular time t, the model results are evaluated based on the previous instant t-1. Correctly predicted outcomes are given less weight, whereas incorrectly categorised outcomes are given higher weight. A weak learner is only modestly better than a random guess; consider a decision tree with a prediction rate of little more than 50%.
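The sequential boosting idea can be sketched without the xgboost library itself: scikit-learn's GradientBoostingRegressor (used here as a stand-in, since the paper's actual tool is XGBoost) fits each new shallow tree to the residual errors of the ensemble built so far. The series, window, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative telemetry-like signal: slow trend plus oscillation.
t = np.arange(200, dtype=float)
series = 0.02 * t + np.sin(0.2 * t)

# Same windowing trick as for the neural models: 3 lags -> next value.
w = 3
X = np.array([series[i:i + w] for i in range(len(series) - w)])
y = series[w:]

# 100 depth-2 trees, each one a "weak learner" correcting its predecessors' residuals.
model = GradientBoostingRegressor(n_estimators=100, max_depth=2, random_state=0)
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])
```

A caveat that matters for the trend in this signal: tree ensembles cannot extrapolate beyond the target range seen in training, which is one reason the text later warns against using XGBoost blindly on time series.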

  4. Support Vector Regression

    Vapnik [41] invented the SVM (Support Vector Machine). SVM is a statistical learning-based artificial intelligence approach that aims to eliminate the over-fitting problem by reducing the learning machine's expected error. To overcome classification and regression challenges [23], the SRM (structural risk minimization) approach is used. Support Vector Regression (SVR) and Support Vector Classification (SVC) are the two forms of SVM. SVC divides data into groups depending on the introduced characteristics, optimising the margins between them, as in text classification [42]. SVR is a method for forecasting future values in time-series analysis by minimising the sum of the distances between the data and the hyperplane, as in stock price forecasting. By finding the hyperplane and minimizing the range between the predicted and observed values, SVR tries to reduce error.
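A short SVR sketch, assuming scikit-learn is available: lagged windows of a toy series are regressed with an epsilon-insensitive RBF model, so only samples outside the epsilon tube around the hyperplane become support vectors. The kernel and hyperparameters below are illustrative defaults, not values tuned for real telemetry.

```python
import numpy as np
from sklearn.svm import SVR

t = np.arange(200)
series = np.sin(0.15 * t)   # stand-in telemetry channel

w = 5                       # 5 lagged samples predict the next one
X = np.array([series[i:i + w] for i in range(len(series) - w)])
y = series[w:]

# Errors smaller than epsilon are ignored; C trades margin flatness vs. fit.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])
```

The epsilon tube is what gives SVR its robustness: small prediction errors cost nothing, so the model does not chase every fluctuation in the training data.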


      The results of a comparison of six algorithms (ARIMA, MLP, RNN, LSTM, XGBoost, and SVR) for two consecutive telemetry parameters are presented in this section. The ability of this strategy to detect possible faults has been demonstrated using time series regression of telemetry parameters. The dataset of two telemetries of one of the Indian Remote Sensing (IRS) payloads is shown in Figure 1.

      Figure 1: Plot for Input of IRS data

      A time series is created for each parameter and used as an input to the method under evaluation. For each algorithm, around 1800 values/readings were utilized for training, followed by approximately 780 readings for testing (i.e., 70 percent of the data is used for training and 30 percent for testing). To evaluate prediction accuracy, the predicted values are then compared to the actual values. High prediction accuracy refers to the technique's ability to predict future values (in either an abnormal or normal state). The output of the LSTM is represented in Figures 2 and 3.
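The evaluation protocol above (chronological 70/30 split, then mean error and correlation between predicted and actual values) can be sketched as follows. The dummy series and placeholder predictions are assumptions standing in for real telemetry and real model output.

```python
import numpy as np

# ~2580 readings, matching the rough dataset size described above.
values = np.sin(0.05 * np.arange(2580))

# Chronological 70/30 split: ~1800 training readings, ~780 test readings.
split = int(0.7 * len(values))
train, test = values[:split], values[split:]

# Placeholder predictions: the true tail plus a small perturbation.
predicted = test + 0.01 * np.cos(np.arange(len(test)))

# The paper's two accuracy measures: mean error and correlation coefficient.
mean_error = np.mean(np.abs(predicted - test))
corr = np.corrcoef(predicted, test)[0, 1]
print(mean_error, corr)
```

Keeping the split chronological (rather than random) is essential for telemetry: the test set must be strictly later in time than the training set, or the evaluation overstates accuracy.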

      Figure 2: Output of Telemetry-1 using LSTM

      The blue lines, which are always behind the orange lines, represent the original data values. The predicted values obtained during training are represented by the orange lines. The values of the training data are also forecasted in order to acquire a more reliable view of the algorithm's overall predictive accuracy, as the prediction accuracy on training data is not always 100 percent. In the case of LSTM and ARIMA, the predicted values are nearly identical to the original values. When all algorithms are evaluated for predicting telemetry data behavior, LSTM comes out on top, followed by XGBoost and RNN. RNN has a memory block, but it also suffers from a vanishing gradient problem, remembering only some critical data while predicting [43]. The advanced form of RNN, i.e., LSTM, predicts values faster than any other algorithm, with the best results.

      Figure 3: Output of Telemetry-2 using LSTM

      Although ARIMA takes longer to predict values than the neural network algorithms, ARIMA is suitable for time series data because it uses a statistical approach. As machine learning advances toward neural networks, neural networks also provide the best results in terms of execution time compared to statistical approaches. While MLP is a basic neural network, the other, more advanced neural networks perform well. Support Vector Regression (SVR) is a popular technique for solving regression problems and analyzing time-series data. It performs well in terms of accuracy metrics and execution time; however, LSTM performs better for forecasting telemetry behavior when dealing with dynamic, large, real-time datasets.

      Figure 4: Model comparison for telemetry-1

      XGBoost is designed for classification and regression on tabular data; however, it can also be used for time series forecasting. When using a predictive model like XGBoost to evaluate a time series, reasoning about the series itself seems to vanish [44]: rather, we enter the data into the model in a black-box manner and expect it to produce correct results independently. A little-known aspect of time series analysis is that not every time series can be predicted, no matter how complex the model is. Attempting to do so frequently results in inaccurate or misleading predictions. The dataset used to generate this prediction is quite clean, with no missing, garbage, or null values, and it is organized sequentially. All of the models have performed admirably thus far on the data provided, as seen in Table 1.

      Figure 5: Model comparison for telemetry-2


This study compares machine learning methods for predicting spacecraft telemetry data (ARIMA, MLP, RNN, LSTM, SVR, and XGBoost). Actual telemetry data is used to forecast the values of spacecraft parameters. According to the findings, XGBoost has a high prediction accuracy (as measured by correlation accuracy), while LSTM has the most remarkable prediction accuracy (from the mean error accuracy point of view). We discovered that the RNN and SVR models run the fastest when these algorithms are applied to the supplied parameters. For low earth orbit satellite telemetry data mining, we propose more straightforward regression approaches such as SVR. Because there are fewer gates in the LSTM, it performs better. Although XGBoost outperforms LSTM in correlation accuracy, it is unsuitable for time series data or massive datasets. As a result, the ideal choice for this purpose is LSTM, which may be employed for prediction, fault diagnosis, and classification. After LSTM is implemented, future work will involve using the K-means classification approach and the dimensionality-reduction t-SNE function to categorize data (non-failure and failure) according to distinct modes of operation. K-means will be used to classify data, which will then be fed into the LAD (Logical Analysis of Data) approach for training, resulting in specific patterns that imply conditional parameter values.


The authors would like to express their gratitude to the team members of the Space Applications Centre (SAC, ISRO), Ahmedabad, for delivering the necessary data and information to manage the satellite's telemetry data collection. They would also









Telemetry-1: 209 sec | 49 sec | 309 sec | 12680 sec | 269 sec | 34 sec
Telemetry-2: 214 sec | 96 sec | 352 sec | 12675 sec | 265 sec | 41 sec

Table 1: Comparison of models (execution times)

want to express their gratitude to the managing editor and reviewers for their helpful remarks and recommendations, which improved the paper's quality.


[1] I. University, Space systems, research-areas/space-systems, 2022.

[2] S. K. Ibrahim, A. Ahmed, M. A. E. Zeidan, and I. E. Ziedan, Machine learning techniques for satellite fault diagnosis, Ain Shams Engineering Journal, vol. 11, no. 1, pp. 45-56, 2020.

[3] S. K. Ibrahim, A. Ahmed, M. A. E. Zeidan, and I. E. Ziedan, Machine learning methods for spacecraft telemetry mining, IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 4, pp. 1816-1827, 2018.

[4] J. Marzat, H. Piet-Lahanier, F. Damongeot, and E. Walter, Model-based fault diagnosis for aerospace systems: a survey, Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, vol. 226, no. 10, pp. 1329-1360, 2012.

[5] Z. Gao, C. Cecati, and S. X. Ding, A survey of fault diagnosis and fault-tolerant techniques - part I: Fault diagnosis with model-based and signal-based approaches, IEEE Transactions on Industrial Electronics, vol. 62, no. 6, pp. 3757-3767, 2015.

[6] Z. Gao, C. Cecati, and S. X. Ding, A survey of fault diagnosis and fault-tolerant techniques - part II: Fault diagnosis with knowledge-based and hybrid/active approaches, IEEE Transactions on Industrial Electronics, vol. 62, no. 6, pp. 3768-3774, 2015.

[7] Z. Liu and H. He, Model-based sensor fault diagnosis of a lithium-ion battery in electric vehicles, Energies, vol. 8, no. 7, pp. 6509-6527, 2015.

[8] I. Trendafilova, M. P. Cartmell, and W. Ostachowicz, Vibration-based damage detection in an aircraft wing scaled model using principal component analysis and pattern recognition, Journal of Sound and Vibration, vol. 313, no. 3-5, pp. 560-566, 2008.

[9] L. Ren, W. Lv, S. Jiang, and Y. Xiao, Fault diagnosis using a joint model based on sparse representation and SVM, IEEE Transactions on Instrumentation and Measurement, vol. 65, no. 10, pp. 2313-2320, 2016.

[10] A. Rahimi, K. D. Kumar, and H. Alighanbari, Fault estimation of satellite reaction wheels using covariance based adaptive unscented kalman filter, Acta Astronautica, vol. 134, pp. 159-169, 2017.

[11] T. Jiang, K. Khorasani, and S. Tafazoli, Parameter estimation-based fault detection, isolation and recovery for nonlinear satellite models, IEEE Transactions on Control Systems Technology, vol. 16, no. 4, pp. 799-808, 2008.

[12] P. Manikandan and M. Geetha, Takagi Sugeno fuzzy expert model based soft fault diagnosis for two tank interacting system, Archives of Control Sciences, vol. 24, no. 3, pp. 271-287, 2014.

[13] T. E. o. E. Britannica, Spacecraft, technology/spacecraft, 2021.

[14] S. Logic, Telemetry, telemetry/, 2019.

[15] Technopedia, Telemetry def, 14853/telemetry, 2022.

[16] A. Altvater, Telemetry data, tutorial/, 2017.

[17] E. Burns, Machine learning, searchenterpriseai/definition/machine-learning-ML, 2021.

[18] T. Yairi, N. Takeishi, T. Oda, Y. Nakajima, N. Nishimura, and N. Takata, A data-driven health monitoring method for satellite housekeeping data based on probabilistic clustering and dimensionality reduction, IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 3, pp. 1384-1401, 2017.

[19] B. Nassar and W. Hussein, State-of-health analysis applied to spacecraft telemetry based on a new projection to latent structure discriminant analysis algorithm, in 2015 IEEE Aerospace Conference. IEEE, 2015, pp. 1-11.

[20] T. Yang, B. Chen, Y. Gao, J. Feng, H. Zhang, and X. Wang, Data mining-based fault detection and prediction methods for in-orbit satellite, in Proceedings of 2013 2nd International Conference on Measurement, Information and Control, vol. 2. IEEE, 2013, pp. 805-808.

[21] G. P. Zhang, Time series forecasting using a hybrid ARIMA and neural network model, Neurocomputing, vol. 50, pp. 159-175, 2003.

[22] P.-F. Pai and C.-S. Lin, A hybrid ARIMA and support vector machines model in stock price forecasting, Omega, vol. 33, no. 6, pp. 497-505, 2005.

[23] P.-S. Yu, S.-T. Chen, and I.-F. Chang, Support vector regression for real-time flood stage forecasting, Journal of Hydrology, vol. 328, no. 3-4, pp. 704-716, 2006.

[24] G. Zhang, B. E. Patuwo, and M. Y. Hu, Forecasting with artificial neural networks: The state of the art, International Journal of Forecasting, vol. 14, no. 1, pp. 35-62, 1998.


[25] Y. Gao and D. Glowacka, Deep gate recurrent neural network, in Asian Conference on Machine Learning. PMLR, 2016, pp. 350-365.

[26] A. Graves, M. Liwicki, H. Bunke, J. Schmidhuber, and S. Fernández, Unconstrained on-line handwriting recognition with recurrent neural networks, Advances in Neural Information Processing Systems, vol. 20, 2007.

[27] P. Malhotra, L. Vig, G. Shroff, P. Agarwal et al., Long short term memory networks for anomaly detection in time series, in Proceedings, vol. 89, 2015, pp. 89-94.

[28] Y. Gao, T. Yang, N. Xing, and M. Xu, Fault detection and diagnosis for spacecraft using principal component analysis and support vector machines, in 2012 7th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2012, pp. 1984-1988.

[29] Q. Li, X. Zhou, P. Lin, and S. Li, Anomaly detection and fault diagnosis technology of spacecraft based on telemetry-mining, in 2010 3rd International Symposium on Systems and Control in Aeronautics and Astronautics. IEEE, 2010, pp. 233-236.

[30] T. Yairi, Y. Kawahara, R. Fujimaki, Y. Sato, and K. Machida, Telemetry-mining: a machine learning approach to anomaly detection and fault diagnosis for space systems, in 2nd IEEE International Conference on Space Mission Challenges for Information Technology (SMC-IT'06). IEEE, 2006, 8 pp.

[31] A. Hayes, Arima, autoregressive-integrated-moving-average-arima.asp/, 2021.

[32] P. C. S. Abirami, Mlps, computer- science/multilayer-perceptron, 2020.

[33] Aishwarya, Rnn, to- recurrent-neural-network/, 2022.

[34] N. Donges, Rnns, neural- networks-and-lstm, 2021.

[35] G. Editors, Lstm, lstm- networks/, 2021.

[36] A. Graves, Supervised sequence labelling, in Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012, pp. 5-13.

[37] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, LSTM: A search space odyssey, IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222-2232, 2016.

[38] S. Srivastava and S. Lessmann, A comparative study of LSTM neural networks in forecasting day-ahead global horizontal irradiance with satellite data, Solar Energy, vol. 162, pp. 232-247, 2018.

[39] Vishal Morde, Xgboost, https-medium-com-vishalmorde-xgboost-algorithm-long-she-may-rein-edd9f99be63d, 2019.

[40] M. Pathak, Xgboost, xgboost-in- python, 2019.

[41] V. Vapnik, The nature of statistical learning theory. Springer science & business media, 1999.

[42] T. Joachims, Text categorization with support vector machines: Learning with many relevant features, in European Conference on Machine Learning. Springer, 1998, pp. 137-142.

[43] M. Bukhsh, M. S. Ali, M. U. Ashraf, K. Alsubhi, and W. Chen, An interpretation of long short-term memory recurrent neural network for approximating roots of polynomials, IEEE Access, vol. 10, pp. 28194-28205, 2022.

[44] M. Grogan, Result of xgboost, xgboost-for-time-series-forecasting-dont-use-it-blindly- 9ac24dc5dfa9, 2021.