Artificial Neural Network based Runoff Prediction Model for a Reservoir

DOI : 10.17577/IJERTV1IS3232




Yogesh Shirke1, Prof.Dr.Rameshwar Kawitkar2,

1,2Department of Electronics and Telecommunication,

Sinhgad College of Engg., UOP, Pune, India.

Mr. Selva Balan3

3Central Water and Power Research Station, Khadakwasla, Pune, India.


In recent years, research on Artificial Neural Networks (ANN) has shown them to be a convenient and easy tool for the modeling and analysis of non-linear events. This ability of ANN to model non-linear events is important in hydrology, where most hydrological events are dominantly non-linear in nature. ANN can also model the non-linear relationship between rainfall and runoff better than other mathematical modeling techniques. In this paper, ANN modeling is done with the help of MATLAB for prediction of the water arriving at a reservoir. The Back Propagation (BP) algorithm is used to evaluate the error and back-propagate it for more accurate training of the ANN.

  1. Introduction

    Rainfall-Runoff (RR) prediction is one of the most complicated processes in environmental modeling. This is due to the spatial and temporal variability of topographical characteristics, rainfall patterns, and the number of parameters to be derived during calibration. Accurate RR predictions largely depend on long-term observation and recording of precipitation and runoff. The hydrological cycle is a highly nonlinear system, which makes hydrological modeling very complicated. Rainfall-runoff modeling plays an important role in flood control, water resources, and water environment management, so modeling the non-linearity and uncertainty associated with the rainfall-runoff process is of great importance. An ANN (Artificial Neural Network) may be treated as a universal approximator


    since it is capable of learning and generalizing knowledge from sufficient data pairs. This makes ANN a powerful tool for solving large-scale complex problems such as pattern recognition, nonlinear modeling, classification, association, control, hydrology, and many others. ANN models are well suited for hydrological time-series modeling since they can approximate virtually any measurable function up to an arbitrary degree of accuracy. They are therefore increasingly being applied in daily runoff forecasting. ANN excels at mapping non-linear relationships between inputs and outputs, so daily runoff forecasting based on Artificial Neural Network (ANN) models has become quite important for effective planning and management of water resources. ANN models perform better than process-based models. Several studies indicate that ANNs have proven to be potentially useful tools in hydrological modeling, such as the modeling of rainfall-runoff processes, flow prediction, water quality prediction, operation of reservoir systems, groundwater reclamation problems, etc. The objective of the present study is to develop rainfall-runoff models using ANN methods.

  2. Literature Review

    Artificial Intelligence (AI) has been popular since the 1990s and has been widely used in many areas. In 1890, William James published the first work about brain activity patterns. In 1943, McCulloch and Pitts produced a model of the neuron that is still used today in artificial neural networking. In 1949, Donald Hebb published The Organization of Behavior, which outlined a law for synaptic neuron learning. This law, later known as Hebbian Learning in


    honor of Donald Hebb, is one of the simplest and most straightforward learning rules for artificial neural networks. In 1951, Marvin Minsky created the first ANN while working at Princeton. In 1958, The Computer and the Brain was published, a year after John von Neumann's death. In that book, von Neumann proposed many radical changes to the way in which researchers had been modeling the brain.

    The back-propagation neural network (BPNN) is the most popular neural network, and it can be applied successfully in rainfall-runoff modeling. The BPNN technique has the capability to model various characteristics of a hydrologic resources system, including randomness, fuzziness, non-linearity, etc. A BPNN is usually used for function approximation by training a network on input vectors and corresponding output vectors. A BPNN consists of an input layer, hidden layer, and output layer, and it propagates the error at the output layer backward to the input layer through the hidden layer to decrease the global error.

  3. Proposed System Architecture

    This paper presents an Artificial Neural Network (ANN) with back propagation for the modeling of a hydrological event. An ANN is a parallel distributed processing system made up of highly interconnected neural computing elements. Fig.1 shows the system architecture.

    Fig.1. ANN with Back-Propagation

    As shown above, the system mainly consists of a 5:5:1 ANN. Here the ANN consists of three layers: the input layer, where the data are introduced to the network; one or more hidden layers, where the data are processed; and the output layer, where the results for given inputs are produced. An ANN consists of many nodes, which are processing units called neurons. These neurons process the information.


    The signals are transmitted by means of connecting links. The links possess associated weights, which are multiplied with the incoming signal (net input). The output signal is obtained by applying an activation function to the net input. Various learning mechanisms exist to enable the ANN to acquire knowledge.

    Each layer is made up of several nodes, and layers are interconnected by sets of correlation weights. Each input node (i = 1, …, m) in the input layer broadcasts the input signal to the hidden layer. Each hidden node (j = 1, …, n) sums its weighted input signals according to

    Z_inj = bj + Σi xi · wij

    applies its activation function to compute its output signal from the input data as

    Zj = f (Z_inj)

    and sends this signal to all units in the output layer, where wij is the weight between the input layer and the hidden layer, bj is the weight for the bias, and xi is the input signal.
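The hidden-layer computation above can be sketched in Python; this is a minimal illustration, assuming the 5:5:1 dimensions described earlier, with random (hypothetical) inputs and weights rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5:5:1 dimensions, as in the architecture above.
n_in, n_hid = 5, 5

x = rng.random(n_in)                    # input signals x_i, normalized to [0, 1]
W = rng.standard_normal((n_in, n_hid))  # weights w_ij between input and hidden layer
b = rng.standard_normal(n_hid)          # bias weights b_j

# Z_in_j = b_j + sum_i x_i * w_ij  (net input of each hidden node)
Z_in = b + x @ W

# Z_j = f(Z_in_j), taking the logistic sigmoid as the activation f
Z = 1.0 / (1.0 + np.exp(-Z_in))
```

Each of the five hidden outputs Z_j lies in (0, 1) and is then passed on to the output layer.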

    The net input of a neuron is passed through an activation or transfer function to produce its result; therefore, continuous transfer functions are desirable. The transfer function, denoted by f(k), defines the output of a neuron in terms of the activity level at its input. Several commonly used activation functions are:

    • The Identity function

    • The Binary Step function

    • The Binary Sigmoid (Logistic) function

    • The Bipolar Sigmoid (Hyperbolic Tangent) function

The transfer function used in the present report is the sigmoid, which is a continuous, differentiable, monotonically increasing function and the one most commonly used in back-propagation networks. Its output is always bounded between 0 and 1, and the input data have been normalized to the range 0 to 1. The slope a is assumed to be 1. The sigmoid activation function processes the signal that passes from each node by

f(Z_inj) = 1 / (1 + e^(−Z_inj))
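As a small sketch, the sigmoid and its derivative can be written as plain Python functions (the derivative f′(z) = f(z)·(1 − f(z)) is the factor used later when back-propagating the error):

```python
import numpy as np

def sigmoid(z):
    # f(z) = 1 / (1 + e^(-z)); output bounded between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(z):
    # f'(z) = f(z) * (1 - f(z)); used when back-propagating the error
    s = sigmoid(z)
    return s * (1.0 - s)
```

At z = 0 the sigmoid outputs 0.5 and its derivative reaches its maximum of 0.25, which is why mid-range activations learn fastest.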

Then from the second layer the signal is transmitted to the third layer. The error information is transferred from the output layer back to the earlier layers. This is known as back propagation of the output error to the input nodes to correct the weights.


    1. Learning Processes

      By learning rule we mean a procedure for modifying the weights and biases of a network. The purpose of a learning rule is to train the network to perform some task. Learning rules fall into three broad categories:

      • Supervised learning

      • Reinforcement learning

      • Unsupervised learning

        In the proposed system, the supervised learning process is used. This learning rule is provided with a set of training data of proper network behavior. As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets.

    2. Training of an ANN

A neural network has to be configured such that the application of a set of inputs produces the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.

    1. Back Propagation

      A multi-layer feed-forward network can overcome many of the restrictions of a single-layer network, but there remains the problem of how to adjust the weights from the input to the hidden units. The answer is that the errors for the units of the hidden layer are determined by back-propagating the errors of the units of the output layer; hence this method is often called the back-propagation learning rule. Back-propagation can also be considered a generalization of the delta rule for non-linear activation functions and multilayer networks. Back-propagation is the most commonly used supervised training algorithm for multilayer feed-forward networks. The network weights are modified by minimizing the error between the target and computed outputs, and the weights are updated continuously until the minimum error is achieved. A training pair is selected from the training set and applied to the network, which calculates the output based on the inputs provided in this training pair. The resultant outputs from the network are then compared with the expected outputs identified by the training pair. The weights and biases of each neuron are then adjusted by a factor based on the derivative of the sigmoid function, the differences between the expected


      network outputs and the actual outputs (the error), and the actual neuron outputs. Through these adjustments it is possible to improve the results that the network generates, and thus the network is seen to learn. How much each neuron's weights and bias are adjusted in the back-propagation algorithm also depends on a learning parameter, the learning rate (η), a single factor by which all adjustments are multiplied. A large learning rate can cause training to oscillate from one poor extreme result to another, whereas a small learning rate can leave the network learning almost nothing, caught in a local minimum and unable to reach a more accurate set of weights. Proper selection of the learning rate is therefore most important before training the ANN.
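The effect of the learning rate can be illustrated with a toy experiment: a single sigmoid neuron trained on one (input, target) pair. This is a simplification, not the paper's model; the training pair, initial weights, and η values are hypothetical:

```python
import numpy as np

def train_neuron(eta, epochs=200):
    """Train one sigmoid neuron on a single (input, target) pair
    and return the final absolute error; eta is the learning rate."""
    x, t = 0.8, 0.6          # hypothetical training pair
    w, b = 0.1, 0.0          # initial weight and bias
    y = 0.0
    for _ in range(epochs):
        y = 1.0 / (1.0 + np.exp(-(w * x + b)))  # forward pass
        e = (t - y) * y * (1.0 - y)             # error term (t - y) * f'(y_in)
        w += eta * e * x                        # weight adjustment scaled by eta
        b += eta * e                            # bias adjustment scaled by eta
    return abs(t - y)

# A moderate learning rate reduces the error far more than a tiny one
# over the same number of epochs.
err_tiny = train_neuron(eta=0.001)
err_moderate = train_neuron(eta=0.5)
```

With η = 0.001 the weights barely move in 200 epochs, while η = 0.5 brings the output close to the target; a much larger η on a harder problem would instead risk the oscillation described above.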

      1. Procedure for Back-Propagation of Error

        The back-propagation error can be calculated as

        Error: ek = (tk − yk) · f′(y_ink)

        where ek is the error information, tk is the target for output unit k, and yk is the output of unit k.

      2. Weight and Bias Updates

Each output unit (yk) updates its bias and weights to minimize the error between the output and the target.

  1. The Weight Correction is given by

     ΔWjk = η · ek · Zj

     where η is the learning rate.

     Thus Wjk(new) = Wjk(old) + ΔWjk

  2. The Bias Correction is given by

     Δbok = η · ek

     Thus bok(new) = bok(old) + Δbok
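Putting the error term and the two update rules together, one update step for a single output unit might look like the following sketch (the hidden-layer outputs Z_j, η, and target value are illustrative, not taken from the paper):

```python
import numpy as np

eta = 0.5                                 # learning rate (eta)
Z = np.array([0.2, 0.7, 0.5, 0.9, 0.1])  # hidden-layer outputs Z_j (hypothetical)
W = np.zeros(5)                           # hidden-to-output weights W_jk
b = 0.0                                   # output-unit bias b_ok
t = 1.0                                   # target output t_k

y_in = b + Z @ W                          # net input of the output unit
y = 1.0 / (1.0 + np.exp(-y_in))           # output y_k (sigmoid activation)

# e_k = (t_k - y_k) * f'(y_in_k), with f'(z) = f(z) * (1 - f(z))
e = (t - y) * y * (1.0 - y)

W = W + eta * e * Z   # Delta W_jk = eta * e_k * Z_j
b = b + eta * e       # Delta b_ok = eta * e_k
```

Hidden-layer weights are updated in the same pattern, using the hidden-unit error terms obtained by propagating e back through W.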

  4. Conclusion and Future Work

    In this paper, research on an ANN-based hydrological model to predict the water level at a dam is carried out. The Back Propagation learning rule is used to optimize the error by evaluating and back-propagating it for more accurate training. After successful training of this ANN model, the results can be used to control reservoir operations such as flood control, water usage management, etc.

  5. Acknowledgement


The authors are grateful to Mr. J. A. Shimpi (Research Officer, Bombay Model) for his kind encouragement and help during the progress of this work.


  6. References

  1. R. Remesan, M. A. Shamim, D. Han, J. Mathew, "ANFIS and NNARX based Rainfall-Runoff Modeling," IEEE International Conference on Systems, Man and Cybernetics (SMC 2008), 2008.

  2. Jingwen Xu, Junjang Zha, Wanchang Zhang, Zhongda Hu, Ziyan Zheng, "Mid-short-term daily runoff forecasting by ANNs and multiple process-based hydrological models," IEEE, 2009.

  3. Xian Luo, You-peng Xu, Jin-tao Xu, "Regularized Back-Propagation Neural Network for Rainfall-Runoff Modeling," International Conference on Network Computing and Information Security, IEEE, 2011.

  4. Luciano Carli Moreira de Andrade, Ivan Nunes da Silva, "Very Short-Term Load Forecasting Using a Hybrid Neuro-Fuzzy Approach," Eleventh Brazilian Symposium on Neural Networks, IEEE, 2010.

  5. Gokmen Tayfur and Vijay P. Singh, "ANN and Fuzzy Logic Models for Simulating Event-Based Rainfall-Runoff," Journal of Hydraulic Engineering, ASCE, December 2006.

  6. S. N. Sivanandam, S. Sumathi and S. N. Deepa, Introduction to Neural Networks Using MATLAB 6.0, Tata McGraw-Hill.

