A Survey on Predicting Solar Radiation Based on Neural Networks

DOI : 10.17577/IJERTV3IS091073


Ms. T. Sandhya,
Second Year M.E,
Computer Science and Engineering,
Prathyusha Institute of Technology and Management,
Chennai, India.

Ms. V. R. Kavitha,
Associate Professor,
Computer Science and Engineering,
Prathyusha Institute of Technology and Management,
Chennai, India.

Abstract: This paper presents a survey on predicting global solar radiation based on neural networks. Solar radiation is considered the primary factor in many applications that make use of solar energy, and solar radiation data provide information about the use of solar energy at various locations. Hence the need for predicting solar radiation is increasing day by day. Solar radiation can be predicted from climatic parameters such as air temperature, air pressure, humidity, wind speed, wind direction and so on. There are various algorithms for training neural networks. The ultimate goal of this survey is to provide an overview of predicting solar radiation based on neural networks and of training the network using the back-propagation algorithm.

Keywords: Neural Network, Back-propagation, Global Solar Radiation, Root Mean Square Error, Mean Bias Error


INTRODUCTION

Solar energy is one of the basic elements for all renewable and fossil fuels. Solar radiation data are always a necessary basis for the design of any solar energy conversion device and for a feasibility study of the possible use of solar energy [5]. In recent years the need for forecasting and predicting solar radiation has increased, in order to optimize energy distribution between buildings.

This paper presents a survey on designing, implementing and validating a new approach to predicting monthly average daily global solar radiation by combining various climatological factors with neural networks. The neural network is an interesting tool for modeling linear and non-linear systems. A neural network is adjusted, or trained, so that a particular input leads to a specific target. The neural networks are trained with the back-propagation algorithm, as it is capable of training the network at a faster rate while minimizing errors.

1. GLOBAL SOLAR RADIATION PREDICTION

Solar radiation is one of the most difficult meteorological parameters to estimate, as it depends on several climatic, geographical and astronomical parameters [9]. Temperature, pressure, humidity, wind direction, wind speed and sunshine duration are some common parameters used for predicting solar radiation.

Solar radiation has been predicted at different locations such as Abu Dhabi, Nigeria, Egypt, China, India and so on. The multi-layer feed-forward technique in neural networks has been used most frequently to predict global solar radiation. Multi-layer feed-forward networks are trained by the back-propagation algorithm, as it trains the neurons at a faster rate.

The solar radiation data recorded at meteorological centers over a few years are collected. These data are divided into two sets for the neural networks [2]:

1. The first few years of data are used as the training data set, which is given as input to train the neural network.

2. The next few years of data are used as the test data set, which is used to test the built network.
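The chronological split described above can be sketched in a few lines of Python; the data and the `split_by_years` helper are hypothetical, for illustration only:

```python
# Hypothetical helper: split several years of daily radiation records
# chronologically -- early years for training, later years for testing.
def split_by_years(records, train_years, days_per_year=365):
    cut = train_years * days_per_year
    return records[:cut], records[cut:]

# Five years of placeholder daily values (real data would come from a
# meteorological center).
data = list(range(5 * 365))
train, test = split_by_years(data, train_years=3)  # first 3 years train
```

Splitting by time rather than at random preserves the situation described in the survey: the network is tested on years it has never seen.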

In the literature, neural networks have found great success in modeling and predicting solar radiation on an hourly basis [11], [12], and on a daily and monthly basis for several locations [2]-[10].



2. NEURAL NETWORKS

Neural networks are generally composed of a large number of interconnected neurons, which are typically organized in layers. These layers are made up of interconnected nodes that contain an activation function. Patterns are presented to the network through the input layer, which communicates with one or more hidden layers, where the actual processing is done via weighted connections. The general outline of a neural network is represented below.



Fig1: General Outline of Neural Networks (inputs pass through a network of neurons with connections, called weights, to produce outputs; the weights are then adjusted)

Fig1 above represents the general outline of a neural network, where the input is passed through various layers of neurons and the connections between them are defined by weights. The output obtained from the network is compared with the target data, and the weights between the layers can then be adjusted to get better results by minimizing errors. Neural networks analyze data in three major steps [4]:

    1. Training

The data is presented to the network during training, and the weights are adjusted to minimize errors.

    2. Validation

Data is used to measure network generalization and to halt training when generalization stops improving. A set of target data is given to validate the network.

    3. Testing

This data has no effect on training and provides a measure of network performance during and after training. The network can be tested against estimated values.
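The validation step above, which halts training when generalization stops improving, is commonly implemented as early stopping. A minimal sketch, where `train_step` and `val_error` are hypothetical callbacks standing in for one epoch of training and a validation-set error measurement:

```python
# Illustrative early-stopping loop (train_step and val_error are
# hypothetical callbacks): training halts once the validation error
# has not improved for `patience` consecutive epochs.
def train_with_validation(train_step, val_error, max_epochs=100, patience=5):
    best, stale = float("inf"), 0
    for _ in range(max_epochs):
        train_step()                # one epoch of weight updates
        err = val_error()           # error on the validation set
        if err < best:
            best, stale = err, 0    # generalization still improving
        else:
            stale += 1
            if stale >= patience:   # generalization stopped improving
                break
    return best

# Toy run: the validation error improves, then plateaus.
errors = iter([0.9, 0.5, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36])
best = train_with_validation(lambda: None, lambda: next(errors))
```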

Neural network toolboxes typically support supervised learning with feed-forward, radial basis function and dynamic networks.


3. FEED FORWARD NEURAL NETWORKS

Feed-forward neural networks are artificial neural networks in which the connections between neurons do not form a directed cycle. There are two types of feed-forward neural networks: the single-layer perceptron and the multi-layer perceptron.


The single-layer feed-forward network consists of a single layer of output nodes. The inputs are fed directly to the outputs via a series of weights.

Fig2: Structure of Single Layer Perceptron (input layer connected directly to the output layer)

The single-layer perceptron can model only linearly separable classes. Hence, to overcome this limitation of the single-layer feed-forward network, the multi-layer feed-forward technique was introduced.
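The linear-separability point can be illustrated with a small sketch (not from the surveyed papers): a single unit with a step activation, trained by the classic perceptron rule, learns the linearly separable AND function, whereas no setting of its weights can represent XOR.

```python
# Illustrative single-layer perceptron: one unit with a step activation,
# trained by the perceptron learning rule.
def train_perceptron(samples, epochs=50, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out          # 0 when the sample is classified
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# AND is linearly separable, so the perceptron converges on it.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
predict = train_perceptron(AND)
```

Running the same loop on the XOR targets never converges, which is exactly the limitation that motivates the multi-layer perceptron.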


This class of networks consists of multiple layers of computational units interconnected in a feed-forward way. The multi-layer feed-forward network consists of three layers: an input layer, one or more hidden layers and an output layer [3]. The input signal propagates through the network in a forward direction, on a layer-by-layer basis [10].

Fig3: Structure of Multi-Layer Perceptron (input layer, hidden layers and output layer)

Multi-layer networks use a variety of learning algorithms; the most commonly used is the back-propagation algorithm.
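The layer-by-layer forward propagation described above can be sketched as follows; the weights are arbitrary placeholders, and the sigmoid activation is one common choice:

```python
import math

# Sketch of one forward pass through a multi-layer feed-forward network
# with a single hidden layer; all weights are arbitrary placeholders.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Input layer -> hidden layer (one sigmoid unit per weight row).
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    # Hidden layer -> output layer.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

y = forward([0.5, 0.2],                  # input signal
            [[0.1, -0.3], [0.4, 0.2]],   # input-to-hidden weights
            [0.0, 0.0],                  # hidden biases
            [0.7, -0.5],                 # hidden-to-output weights
            0.1)                         # output bias
```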


4. RECURRENT NEURAL NETWORKS

A recurrent neural network is a class of artificial neural network in which the connections between units form a directed cycle. Recurrent neural networks use their internal memory to process sequences of inputs. The Jordan recurrent network is a primary type of recurrent neural network. The structure of the Jordan recurrent neural network is given below [6].

Fig4: Structure of Jordan Recurrent Neural Network (input layer, hidden layer, output layer and context units)

As Fig4 above shows, the network has extra nodes, called context units, beside the input layer. These context units are connected to the hidden layer: they hold the output of the neural network and feed it back to the hidden layer [6].

In general, recurrent neural networks show good performance in prediction when there is temporal structure in the data [6].
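One step of a Jordan-style network can be sketched as follows: the context unit holds the previous output and feeds it back into the hidden layer alongside the new input. All weights here are arbitrary placeholders, for illustration only:

```python
import math

# Illustrative Jordan-style recurrence (weights are placeholders): the
# context unit stores the previous output and is fed into the hidden
# layer together with the new input.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run_sequence(xs, w_in=0.8, w_ctx=0.5, w_out=1.2):
    outputs, context = [], 0.0
    for x in xs:
        hidden = sigmoid(w_in * x + w_ctx * context)
        y = w_out * hidden
        outputs.append(y)
        context = y  # the context unit holds the last output
    return outputs

outs = run_sequence([1.0, 0.0, 0.0])
```

Note that the second and third outputs differ even though both inputs are zero: the context unit carries a memory of the earlier input, which is what gives recurrent networks their advantage on temporally structured data.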


5. BACK-PROPAGATION ALGORITHM

The back-propagation algorithm is a common method for training neural networks. It requires a known, desired output for each input value. It applies to multi-layer feed-forward networks whose neurons use a differentiable activation function such as the sigmoid, and it generalizes the delta rule. The back-propagation learning algorithm can be divided into two phases: propagation and weight update.

1. Phase 1: Propagation

Each propagation involves the following steps:

1. Forward propagation of the training input data through the neural network in order to generate the output.

2. Backward propagation of the output error through the neural network, using the training pattern target, in order to generate the deltas of all output and hidden neurons.

2. Phase 2: Weight update

For each weight, follow these steps:

1. Multiply the weight's output delta by its input activation to get the gradient of the weight.

2. Subtract a ratio (percentage) of the gradient from the weight.

This ratio influences the speed and quality of learning; it is called the learning rate. The greater the ratio, the faster the neuron trains; the lower the ratio, the more accurate the training is. The gradient of a weight indicates the direction in which the error is increasing; therefore the weight must be updated in the opposite direction to minimize the error.

Fourteen back-propagation training algorithms are available: Levenberg-Marquardt, Bayesian regularization, BFGS quasi-Newton, Powell-Beale conjugate gradient, gradient descent with adaptive learning rate, gradient descent with momentum and adaptive learning rate, gradient descent, gradient descent with momentum, one-step secant, Fletcher-Powell conjugate gradient, random-order incremental training with learning functions, resilient back-propagation, Polak-Ribiere conjugate gradient, and batch training with weight and bias learning rules [2]. These algorithms are very useful in training the neural network.

These back-propagation algorithms are mainly used to compare the estimated values with the measured values through correlation and error analysis [5]. The error analysis is performed by computing the mean bias error (MBE) and the root mean square error (RMSE), which are calculated using the following equations:

MBE = (1/N) Σ (yi − xi)

RMSE = sqrt[ (1/N) Σ (yi − xi)² ]

where yi is an estimated value, xi is a measured value, and N is the number of observations [5]. The MBE measures the systematic error, i.e. the average difference between the estimator and what is being estimated, while the RMSE measures the overall deviation.

6. RADIAL BASIS FUNCTION NETWORKS

A radial basis function (RBF) is a real-valued function whose value depends only on the distance from the origin. An RBF network has three distinct layers [13]: the input layer, the hidden layer and the output layer. The input layer is a set of sensory units. The network has only a single hidden layer, and this layer is non-linear; the output layer is a linear layer. The output of the radial basis function network is given as follows [13]:

y = Σ (i = 1 to n) wi φ(||x − ci||) + w0

where n is the number of neurons in the hidden layer, w0 is the bias term, wi is the weight between the i-th hidden neuron and the output layer, and φ(||x − ci||) is the radial basis function evaluated at the distance between the input x and the center ci.
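The MBE and RMSE error measures can be computed directly from their definitions; the estimated and measured values below are placeholders:

```python
# Mean bias error: the average signed difference between estimated (yi)
# and measured (xi) values.
def mbe(estimated, measured):
    return sum(y - x for y, x in zip(estimated, measured)) / len(measured)

# Root mean square error: the square root of the average squared difference.
def rmse(estimated, measured):
    n = len(measured)
    return (sum((y - x) ** 2 for y, x in zip(estimated, measured)) / n) ** 0.5

est = [5.1, 4.8, 6.2]   # placeholder estimated radiation values
meas = [5.0, 5.0, 6.0]  # placeholder measured radiation values
```

Because the MBE is a signed average, positive and negative errors cancel, which is why it is read as a bias measure while the RMSE captures the overall deviation.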


7. CONCLUSION

The prediction of solar radiation is necessary for many scientific and environmental applications. This survey presents the ability to predict monthly average daily global solar radiation from different meteorological parameters based on neural networks. Neural networks are suitable for applications that involve prediction. The neural network is trained by the back-propagation algorithm, as this training algorithm helps minimize global error measures such as the root mean square error (RMSE). The neural network compares the measured value with the predicted value and is capable of providing better statistical results with minimized errors.


REFERENCES

1. A. Assi and M. Jama, "Estimating Global Solar Radiation on Horizontal from Sunshine Hours in Abu Dhabi, UAE," Proceedings of the 4th International Conference on Renewable Energy Sources (RES10), Sousse, Tunisia, pp. 101-108, May 2010.

2. A. Assi and M. Al-Shamisi, "Prediction of Monthly Average Daily Global Solar Radiation in Al Ain City, UAE, Using Artificial Neural Networks," Proceedings of the 25th European Photovoltaic Solar Energy Conference, Valencia, Spain, pp. 508-512, September 2010.

3. R. H. AL-Naimi, A. M. AL-Salihi and D. I. Bakr, "Neural network based global solar radiation estimation using limited meteorological data for Baghdad, Iraq," International Journal of Energy and Environment, vol. 5, no. 1, pp. 79-84, 2014.

4. R. Kumar, R. K. Aggarwal and J. D. Sharma, "New Regression Model to Estimate Global Solar Radiation Using Artificial Neural Network," Advances in Energy Engineering, vol. 1, no. 3, July 2013.

5. E. A. Ahmed and M. El-Nouby Adam, "Estimate of Global Solar Radiation by Using Artificial Neural Network in Qena, Upper Egypt," Journal of Clean Energy Technologies, vol. 1, no. 2, April 2013.

6. R. El-Hajj Mohamad, M. Skafi and A. M. Haidar, "Predicting Global Solar Radiation Using Recurrent Neural Networks and Climatological Parameters," World Academy of Science, Engineering and Technology, International Journal of Mathematical, Computational, Physical and Quantum Engineering, vol. 8, no. 2, 2014.

7. M. A. AbdulAzeez, "Artificial Neural Network Estimation of Global Solar Radiation Using Meteorological Parameters in Gusau, Nigeria," Archives of Applied Science Research, vol. 3, no. 2, pp. 586-595, 2011.

8. S. Mohanty, "ANFIS based Prediction of Monthly Average Global Solar Radiation over Bhubaneswar (State of Odisha)," International Journal of Ethics in Engineering & Management Education, vol. 1, no. 5, May 2014.

9. H. El Badaoui, A. Abdallaoui and S. Chabaa, "Using MLP neural networks for predicting global solar radiation," The International Journal of Engineering and Science (IJES), vol. 2, no. 12, 2013.

10. N. Premalatha and A. Valan Arasu, "Estimation of Global Solar Radiation in India Using Artificial Neural Network," International Journal of Engineering Science & Advanced Technology, vol. 2, no. 6, pp. 1715-1721.

11. H.-T. Yang, C.-M. Huang, Y.-C. Huang and Y.-S. Pai, "A Weather-Based Hybrid Method for 1-Day Ahead Hourly Forecasting of PV Power Output," IEEE Transactions on Sustainable Energy, vol. 5, no. 3, July 2014.

12. C. Voyant, M. Muselli, C. Paoli and M.-L. Nivet, "Hybrid methodology for hourly global radiation forecasting in Mediterranean area," Renewable Energy, vol. 53, 2013.

13. J. Zeng et al., "Short-Term Solar Power Prediction Using an RBF Neural Network," IEEE Power and Energy Society General Meeting, 2011.
