American Journal of Innovative Research and Applied Sciences. ISSN 2429-5396 | www.american-jiras.com
ORIGINAL ARTICLE

| Ghatfan, Abdalkareem Ammar 1 | Amer, Qousai Al Darwish 2 | and | Alaa, Ali Slieman 3 |
Tishreen University | Department of Water Engineering and Irrigation | Lattakia | Syria |
AL Baath University | Department of Water Resources Engineering and Management | Homs | Syria |
AL Baath University | Department of Water Resources Engineering and Management | Homs | Syria |
| Received | 12 August 2018 | | Accepted 27 August 2018 | | Published 28 August 2018 | | ID Article | Ghatfan-ManuscriptRef.2-ajira180818 |
ABSTRACT
Background: Water resource planning and management requires long time series of hydrological data (e.g. precipitation, river flow). However, hydrological time series are sometimes incomplete or contain missing values. Objectives: This paper examines the use of an artificial neural network (ANN) to treat missing precipitation data at AL Rasafeh station (Hama, Syria), the target station, using the available data of the neighboring stations (Ein Hlaqeem, Wade Alaewn, Messiaf) as reference stations during the period from 1/1/1994 to 31/12/2001. Methods: This paper presents a technique for replacing missing spatial data using a feedforward neural network trained with the Levenberg-Marquardt algorithm, applied to concurrent data from the gauges of the three reference stations according to their correlation coefficients with the target station. Results: On evaluating the performance of the FFNN-LM models, it turned out that these models can estimate the missing amounts at the target station accurately and reliably. This allows for a flexible, easy-to-use system that can later be extended by increasing the number of neighboring stations and selecting, for any given period, the most accurate model by its Root Mean Square Error (RMSE), even when values are missing at one or more neighboring stations for the same period. Conclusions: Access to accurate and reliable precipitation data is essential to water resources assessment, management and planning activities. The results show the good ability of the proposed models to replace missing rain gauge data at the target station using a feedforward neural network that bases its estimates on precipitation amounts from nearby gauges, depending on the available input patterns.
Keywords: Precipitation, Prediction, Artificial neural network, data estimation, missing data.
INTRODUCTION
Missing precipitation data from a time series of observations can present a serious obstacle to data analysis, modelling studies and forecasting in hydrology. Although remotely sensed data from satellites and radar are becoming increasingly available, point measurements obtained from precipitation gauges remain the most reliable source of quantitative precipitation data at relatively short time scales (hourly and daily). However, such data are often lost due to instrument failure, contaminated by measurement errors, or rendered useless in myriad other ways [1].

Bustami et al. (2007) enhanced the forecasting of the water level of Bedup River in the province of Sarawak, Malaysia, with and without using artificial neural networks (ANN) to estimate the missing precipitation amounts used in the models. The results showed the ability of neural networks to predict the missing data from the records of Bedup River with 96.4% accuracy after forecasting the missing precipitation amounts with 85.3% accuracy [2]. Ilunga (2010) built a feedforward back-propagation model to predict the annual precipitation at the Luckhoff-pol precipitation station in South Africa from the data of the neighboring station of Bleskop. The results demonstrated the neural network's ability to estimate the missing values and showed that the longer the data gaps are, the larger the error becomes; similar results were reached with other neighboring stations [3]. Nkuna and Odiyo (2011) conducted a study over the basin of Luvuvhu River, South Africa, using ANNs to infill missing daily precipitation data of different periods from the data of five neighboring stations [4]. Roman et al. (2012) predicted the missing precipitation data at two precipitation stations, Malkapur and Balapur, over the basin of Purna River in India. They used the precipitation data of nine neighboring stations for the same period of time (1994-2003); the results showed good performance of the ANN models at all stations, as well as an increase in model accuracy when predicting data as a monthly value, since this assimilates the data differences (whether increases or decreases) between one station and another [5].
MATERIALS and METHODS

2.1 Artificial Neural Network

A neural network is a set of interconnected neural processing units that imitate the activity of the brain. These elementary processing units are called neurons. Figure 2.1 illustrates a single neuron in a neural network [6].

Figure 2.1: The figure presents the artificial neuron model [1].

In the figure, each of the inputs (x_i) has a weight (w_i) that represents the strength of that particular connection. The sum of the weighted inputs and the bias (b) is passed to the transfer function (f) to generate the output (y). This process can be summarized in the following formula, Eq. (2.1):

y = f( ∑_{i=1}^{n} x_i · w_i + b )    (2.1)
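Eq. (2.1) can be sketched in a few lines of code. This is an illustrative Python fragment (the study itself used MATLAB); the function name and example values are hypothetical:

```python
import math

def neuron(inputs, weights, bias, transfer=math.tanh):
    """Single artificial neuron: y = f(sum_i x_i * w_i + b), Eq. (2.1)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return transfer(s)

# Two inputs with a linear transfer function (MATLAB's 'purelin'):
y = neuron([1.0, 2.0], [0.5, -0.25], bias=0.1, transfer=lambda s: s)
# y = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
```

Swapping the `transfer` argument reproduces the different activation functions discussed below.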
MATLAB provides built-in activation functions which are used in this study: linear (purelin), Logistic Sigmoid (logsig) and Hyperbolic Tangent Sigmoid (tansig) [7].

Multilayer feedforward neural networks contain one or more hidden layers of neurons between the input and output layers. Each neuron in a layer is connected to every neuron in the next layer, so each neuron receives its inputs directly from the previous layer (except for the input nodes) and sends its output directly to the next layer (except for the output nodes). Traditionally, there is no connection between the neurons of the same layer. Figure 2.2 shows an example of a multilayer neural network with one hidden layer.

Figure 2.2: The figure presents the multilayer feedforward neural network.

Designing ANN models follows a number of systematic procedures. In general, there are four basic steps: (1) collecting and preprocessing the data, (2) building the network, (3) training it, and (4) testing the performance of the model.

Data collection and pre-processing: After collecting the data, ANNs are prepared to be trained more efficiently by examining the homogeneity of the stations and checking for any missing data. Four precipitation stations were chosen for this study, namely AL Rasafeh, Ein Hlaqeem, Wade Alaewn and Messiaf in Hama, Syria, for an 8-year observation period (1994-2001; 2922 daily values). A relatively low homogeneity was observed among the data of the target station itself, on one hand, and with the data of the neighbouring stations, on the other; so, to realize the objective of this study and to verify the validity of the neural models, a 5% random loss in the four stations was assumed (equal to 146 values), and AL Rasafeh station was then taken as the target station whose missing data are to be infilled.
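The assumed 5% random loss can be reproduced with a simple masking routine. The following is a hypothetical Python sketch (the exact withholding procedure is not described in the source; the function name and synthetic series are invented for illustration):

```python
import random

def mask_missing(series, fraction=0.05, seed=42):
    """Randomly flag a fraction of a daily series as missing (None)."""
    rng = random.Random(seed)                 # fixed seed for repeatability
    k = round(len(series) * fraction)         # number of values to withhold
    missing_idx = set(rng.sample(range(len(series)), k))
    masked = [None if i in missing_idx else v for i, v in enumerate(series)]
    return masked, sorted(missing_idx)

daily = [float(i % 7) for i in range(2922)]   # stand-in for 8 years of daily totals
masked, idx = mask_missing(daily)
print(len(idx))  # 146, i.e. 5% of the 2922 daily values
```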
Table 2.1 shows descriptive statistics of the precipitation data at the four stations used in the study, in addition to their longitude and latitude.

Table 2.1: The table presents the descriptive statistics of precipitation data at the rain stations used in the study.

| Station | Latitude (De Mi Se) | Longitude (De Mi Se) | Annual (mm) | N | N* | Mean | SE.Mean | St.Dev | Min | Max | Skewness | Kurtosis |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AL Rasafeh | 35 02 10 | 36 17 35 | 1461.8 | 2776 | 146 | 4.23 | 0.25 | 12.99 | 0 | 143 | 4.68 | 28.73 |
| Ein Hlaqeem | 34 55 25 | 36 18 20 | 1581.7 | 2776 | 146 | 3.74 | 0.22 | 11.83 | 0 | 150 | 4.90 | 31.12 |
| Wade Alaewn | 35 00 07 | 36 11 24 | 1441.8 | 2776 | 146 | 3.41 | 0.20 | 10.64 | 0 | 130 | 4.77 | 29.60 |
| Messiaf | 33 04 07 | 36 20 12 | 1217.2 | 2776 | 146 | 2.92 | 0.18 | 9.63 | 0 | 115 | 5.19 | 34.10 |

De Mi Se: Degree Minute Second; N: number of data; N*: number of missing data; SE.Mean: standard error of the mean; St.Dev: standard deviation; Min: minimum; Max: maximum.

There is general agreement on how to select the neighbouring stations used to create a reference series or to link the target station to the neighbouring stations, a choice that depends on two main elements. Table 2.2 shows the values of the Pearson correlation coefficients between pairs of the stations proposed for the study; correlation is significant at the 0.01 level, which is attributed to the locations of the stations in a climate region where similar atmospheric conditions prevail (the first precipitation human-settlement region in Hama Governorate).

Table 2.2: Pearson correlation coefficients between stations during the period (1994-2001).

| Precipitation Station | AL Rasafeh | Ein Hlaqeem | Wade Alaewn |
|---|---|---|---|
| Ein Hlaqeem | 0.869 (0.000) | | |
| Wade Alaewn | 0.896 (0.000) | 0.848 (0.000) | |
| Messiaf | 0.901 (0.000) | 0.839 (0.000) | 0.831 (0.000) |

Cell contents: Pearson correlation (P-value).
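The coefficients in Table 2.2 follow the standard Pearson formula; a minimal illustrative Python implementation (not from the source):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation sum
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # spread of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))          # spread of y
    return cov / (sx * sy)

# Perfectly linearly related series give r close to 1:
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```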

Building the network: At this stage, the designer specifies the number of hidden layers, the number of neurons in each layer, the activation function of each layer, the training function, the weight/bias learning function and the performance function. Research on neural networks points to the nonexistence of a standard method for determining the number of hidden layers or the number of neurons; instead, this number is determined according to the judgment of the designer of the model [8]. In this work, only one hidden layer is adopted. Using feedforward networks with the Levenberg-Marquardt algorithm, different models were designed for different input patterns (three groups of data for the three neighboring stations), taking into consideration different divisions of the training, validation and testing sets [9]. The number of neurons ranges between 20 and 50 with a step of 2 neurons. By comparing the results (R, RMSE, NSE), the number of neurons, the activation functions used in both the hidden and the output layers, and the iterations of each epoch were identified.

Training the network: Once a network has been structured for a particular application, it is ready to be trained. To start this process, the initial weights are chosen randomly; then training, or learning, begins. There are two approaches to training: supervised and unsupervised [10]. Supervised training involves providing the network with the desired output, either by manually "grading" the network's performance or by providing the desired outputs together with the inputs; this is the type of training used in this study. Unsupervised training, on the other hand, is one where the network has to make sense of the inputs without outside help. The vast bulk of networks utilize supervised training; unsupervised training is used to perform some initial characterization of inputs, but in the full sense of being truly self-learning it is still just a shining promise that is not fully understood, does not completely work, and is thus relegated to the lab.

Back-propagation [11] is the most commonly used supervised learning algorithm for multilayer feedforward networks. It works as follows: for each example in the training set, the algorithm calculates the difference between the actual and the desired outputs, i.e. the error, using a predefined error function; the error is then back-propagated through the hidden nodes to modify the weights of the inputs. This process is repeated for a number of iterations until the neural network converges to a minimum-error solution. Although back-propagation is a suitable method for neural network training, it has some shortcomings, such as slow convergence and being easily trapped in local minima. For this reason, several improved learning algorithms have been proposed, including the Levenberg-Marquardt (LM) algorithm [12], which is faster and more reliable than other back-propagation techniques [13]. The next step is to test the performance of the developed model; at this stage, the model is exposed to unseen data. For the case study of AL Rasafeh station, data at a rate of 10% of the total have been used for testing the ANN models.

Testing the network: In order to evaluate the performance of the developed ANN models quantitatively and verify whether there is any underlying trend in their performance, a statistical analysis including the correlation coefficient (R), the root mean square error (RMSE) and the Nash-Sutcliffe efficiency (NSE) was carried out. RMSE provides information on short-term performance and is a measure of the variation of predicted values around the measured data; the lower the RMSE, the more accurate the estimation [14]. NSE is a normalized statistic that determines the relative magnitude of the residual variance ("noise") compared to the measured data variance ("information").
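At each step, the LM algorithm solves the damped system (JᵀJ + μI)·δ = Jᵀe, where J is the Jacobian of the model outputs with respect to the parameters, e the error vector and μ the damping factor. A minimal sketch on a hypothetical two-parameter model (not the network of this study; μ is held fixed here, whereas real LM adapts it per step) illustrates the update:

```python
def lm_fit_line(xs, ys, iters=20, mu=1e-3):
    """Damped Gauss-Newton (Levenberg-Marquardt) steps for y = a*x + b.
    Each iteration solves (J^T J + mu*I) delta = J^T e for delta."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        e = [y - (a * x + b) for x, y in zip(xs, ys)]    # residuals
        # J has rows (x, 1), so J^T J and J^T e are:
        jtj = [[sum(x * x for x in xs), sum(xs)],
               [sum(xs), float(len(xs))]]
        jte = [sum(x * r for x, r in zip(xs, e)), sum(e)]
        jtj[0][0] += mu                                  # damping on the diagonal
        jtj[1][1] += mu
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        da = (jte[0] * jtj[1][1] - jte[1] * jtj[0][1]) / det   # Cramer's rule
        db = (jte[1] * jtj[0][0] - jte[0] * jtj[1][0]) / det
        a, b = a + da, b + db
    return a, b

a, b = lm_fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
# converges toward a = 2, b = 1
```

With a small μ the step approaches Gauss-Newton; with a large μ it approaches gradient descent, which is what makes LM both fast and robust.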
NSE also indicates how well the plot of observed versus simulated data fits the 1:1 line [15]. These statistical parameters are expressed by Eqs. (2.2), (2.3) and (2.4):
R = ∑_{i=1}^{n} (P_obs − P̄_obs)(P_pre − P̄_pre) / √[ ∑_{i=1}^{n} (P_obs − P̄_obs)² · ∑_{i=1}^{n} (P_pre − P̄_pre)² ]    (2.2)
RMSE = √[ ∑_{i=1}^{n} (P_obs − P_pre)² / n ]    (2.3)
NSE = 1 − [ ∑_{i=1}^{n} (P_obs − P_pre)² / ∑_{i=1}^{n} (P_obs − P̄_obs)² ]    (2.4)
where n is the number of training or testing samples, P_obs is the observed precipitation, P_pre is the simulated value of precipitation, and P̄_obs and P̄_pre are the average values of the observed and the simulated precipitation, respectively.
RESULTS

Table 3.1 shows the structure of the best proposed models for different input patterns and different divisions, using the "dividerand" function, of the training, validation and testing sets. A single-hidden-layer feedforward neural network was used (in 85% of applications, one layer is enough to obtain good results). MATLAB code was then written to build the neural models, increase the number of neurons by 2 within the range [20-50], and change the activation functions in both the hidden and the output layers (order matters); the number of possible structures is thus 9 × 16 = 144 for each model, with a fixed learning rate (lr = 0.08) and a fixed number of epochs (1000). The structure of each model was selected according to MSE, together with trial-and-error adjustment of other parameters such as the dividing ratio and the number of runs.

Table 3.1: Structure of the best neural network for different input models.

| Model | TrainRatio | ValRatio | TestRatio |
|---|---|---|---|
| Model 1 | 0.65 | 0.25 | 0.10 |
| Model 2 | 0.65 | 0.25 | 0.10 |
| Model 3 | 0.65 | 0.25 | 0.10 |
| Model 4 | 0.65 | 0.25 | 0.10 |
| Model 5 | 0.75 | 0.15 | 0.10 |
| Model 6 | 0.80 | 0.10 | 0.10 |
| Model 7 | 0.80 | 0.10 | 0.10 |
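The structure search described above can be enumerated directly. A sketch in Python (the study used MATLAB, and the training call itself is omitted here):

```python
import itertools

# Candidate structures: neurons 20..50 in steps of 2 (16 options) and
# 3 x 3 ordered pairs of activation functions (9 options) -> 144 candidates.
neuron_counts = list(range(20, 51, 2))
activations = ["purelin", "logsig", "tansig"]   # MATLAB activation names

candidates = [(n, hidden_act, output_act)
              for n in neuron_counts
              for hidden_act, output_act in itertools.product(activations, repeat=2)]
print(len(candidates))  # 144 candidate structures per input pattern

# In the study, each candidate would be trained (lr = 0.08, 1000 epochs)
# and the structure with the lowest MSE retained.
```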

Table 3.2 shows the results of the best proposed networks for the training, validation and testing sets. The values of the statistical measures (RMSE, R, NSE) exhibit the models' sensitivity to the different input sets. The models' ability to estimate the missing values increases with the availability of data from all three neighbouring stations and decreases gradually according to the correlation between the stations; the models thus allow for the loss of values at the neighboring stations during different periods.

Table 3.2: The table presents the statistics of the neural network models developed.

| Model (input stations) | Data | N | RMSE (mm/day) | R | NSE |
|---|---|---|---|---|---|
| Model 1 (Ein Hlaqeem) | Training set | 1712 | 5.07 | 0.915 | 0.837 |
| | Validation set | 658 | 5.33 | 0.915 | 0.837 |
| | Testing set | 263 | 5.44 | 0.844 | 0.712 |
| Model 2 (Wadi AL Eyoun) | Training set | 1713 | 5.16 | 0.912 | 0.832 |
| | Validation set | 659 | 6.42 | 0.898 | 0.806 |
| | Testing set | 264 | 5.23 | 0.918 | 0.843 |
| Model 3 (Messiaf) | Training set | 1714 | 5.26 | 0.927 | 0.859 |
| | Validation set | 660 | 4.46 | 0.924 | 0.854 |
| | Testing set | 264 | 4.25 | 0.917 | 0.841 |
| Model 4 (Ein Hlaqeem; Wadi AL Eyoun) | Training set | 1625 | 3.76 | 0.958 | 0.918 |
| | Validation set | 625 | 4.23 | 0.934 | 0.872 |
| | Testing set | 250 | 3.83 | 0.922 | 0.850 |
| Model 5 (Ein Hlaqeem; Messiaf) | Training set | 1879 | 4.19 | 0.948 | 0.899 |
| | Validation set | 376 | 3.74 | 0.900 | 0.810 |
| | Testing set | 251 | 2.88 | 0.977 | 0.955 |
| Model 6 (Wadi AL Eyoun; Messiaf) | Training set | 1628 | 3.81 | 0.964 | 0.929 |
| | Validation set | 627 | 3.79 | 0.940 | 0.884 |
| | Testing set | 251 | 3.73 | 0.949 | 0.901 |
| Model 7 (Ein Hlaqeem; Wadi AL Eyoun; Messiaf) | Training set | 1905 | 3.50 | 0.964 | 0.929 |
| | Validation set | 238 | 2.79 | 0.957 | 0.916 |
| | Testing set | 238 | 2.54 | 0.967 | 0.935 |

R: correlation coefficient; RMSE: root mean square error; NSE: Nash-Sutcliffe efficiency. Rain stations: AL Rasafeh, Ein Hlaqeem, Wade Alaewn, Messiaf.

Starting with the most accurate, the neural models are ranked in descending order as follows: Model 7 >> Model 5 >> Model 6 >> Model 4 >> Model 3 >> Model 1 >> Model 2.

Because data are also missing at the neighboring stations during random periods of time, code was written in MATLAB to identify the models available in each such case; the available models were then sorted in ascending order by RMSE, so that the model with the minimum error is adopted to infill the missing data whenever a precipitation value is lost at one of the neighboring stations. Two matrices are obtained: the first contains the values estimated by the minimum-error models, and the second contains the names of the neural networks used in estimating the missing values of the target station.

Figure 3.1 shows a graph of the actual precipitation values and the values produced by the best models in infilling the data of AL Rasafeh station for a random test set of 5% of all the data (146 values), with RMSE = 3.39 mm and NSE = 0.867, which are considered good values given the low internal and relative homogeneity between the stations, as well as the presence of some extreme and unusual values with respect to the nature of precipitation.
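The fallback scheme described above can be expressed as a lookup keyed by which neighbouring stations have data on a given day. This is a hypothetical Python sketch (the paper's MATLAB code is not reproduced in the source) using the testing-set RMSE values from Table 3.2:

```python
# Each model is keyed by the set of stations it needs, with its testing RMSE
# (mm/day) from Table 3.2; for every missing day we pick the lowest-RMSE
# model whose inputs are all available.
MODELS = {
    frozenset({"Ein Hlaqeem", "Wadi AL Eyoun", "Messiaf"}): ("Model 7", 2.54),
    frozenset({"Ein Hlaqeem", "Messiaf"}): ("Model 5", 2.88),
    frozenset({"Wadi AL Eyoun", "Messiaf"}): ("Model 6", 3.73),
    frozenset({"Ein Hlaqeem", "Wadi AL Eyoun"}): ("Model 4", 3.83),
    frozenset({"Messiaf"}): ("Model 3", 4.25),
    frozenset({"Ein Hlaqeem"}): ("Model 1", 5.44),
    frozenset({"Wadi AL Eyoun"}): ("Model 2", 5.23),
}

def best_model(available):
    """Lowest-RMSE model whose required stations are all available."""
    usable = [(r, name) for req, (name, r) in MODELS.items()
              if req <= set(available)]
    return min(usable)[1] if usable else None

print(best_model(["Ein Hlaqeem", "Messiaf"]))  # Model 5
print(best_model(["Wadi AL Eyoun"]))           # Model 2
```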
Figure 3.1: Comparison between forecasted and actual precipitation using different ANN models.

Figure 3.2 shows how well the actual values conform to the values resulting from the neural models used to infill the missing values of the target station: the Pearson correlation coefficient (R = 0.93) indicates a very good correlation. Consequently, these neural models can be used reliably for accurate forecasting even though the spatial conditions overlap and diverge, which corresponds to the relatively low homogeneity between the neighboring reference stations, on one hand, and the target station, on the other.

Figure 3.2: The figure presents the regression of observed and forecasted precipitation (R² = 0.873) using different ANN models.
DISCUSSION

The results of this study agree with the results of other studies and surpass the majority of those which use the aforementioned neural networks to estimate missing precipitation data from one or more neighboring stations [3]. For instance, in a study conducted over the basin of Luvuvhu River in South Africa using the daily precipitation data of five stations, the homogeneity test did not show a good relation with the target station; despite that, the data designated for training and testing gave good results, with a Nash-Sutcliffe efficiency (NSE) between 0.55 and 0.95 and a root mean square error (RMSE) between 0.91 and 7.50 mm [4]. In another study that utilized the precipitation data of 9 stations during the period from 1994 to 2003, the best performance of the neural models, according to NSE, for forecasting periods of 1 day, 10 days and 1 month reached 0.8995, 0.908 and 0.685, respectively, at the first station, and 0.955, 0.931 and 0.911, respectively, at the second station [5].
CONCLUSION

It is true, complete, well-organized information that constitutes the 'motor nerve' of scientific research in any developed society, and its nature and accuracy determine the importance of that research. In this study, a feedforward neural network (FFNN) trained with a back-propagation algorithm (the Levenberg-Marquardt (LM) algorithm) was used to investigate the possibility of estimating missing daily precipitation values at AL Rasafeh station, used as the target station, after optimizing the model structure with programming routines designed especially for this purpose, which aim at saving the time and effort needed to obtain an optimum neural network with respect to its structure, number of neurons and activation functions. The findings indicate that the stations neighboring the target station can be utilized with high accuracy according to the most accurate of the available models, which provides a flexible, easy-to-use system for building up a long-lasting database for the target station.

Precipitation estimation is affected by geographical and regional variations and by the characteristics of each region. The patterns included in such studies conform to the characteristics of a specific region and thus cannot be applied directly to another one. Given this, the study recommends that hybrid systems be used to enhance the efficiency of forecasting models in different regions of Syria and for different time series, such as monthly series, annual high-value series and so on.
REFERENCES

[1]. Peck E.L. Quality of hydrometeorological data in cold regions. JAWRA Journal of the American Water Resources Association. 1997; 33(1): 125-134. Available on: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1752-1688.1997.tb04089.x
[2]. Bustami R., Bessaih N., Bong C., Suhaili S. Artificial neural network for precipitation and water level predictions of Bedup River. International Journal of Computer Science. 2007. Available on: http://www.iaeng.org/IJCS/issues_v34/issue_2/IJCS_34_2_10.pdf
[3]. Ilunga M. Infilling annual precipitation data using feedforward back-propagation artificial neural networks (ANN): application of the standard and generalised back-propagation techniques. Journal of the South African Institution of Civil Engineering. 2010; 52(1): 2-10. Available on: www.scielo.org.za/pdf/jsaice/v52n1/v52n1a01.pdf
[4]. Nkuna T.R., Odiyo J.O. Filling of missing precipitation data in Luvuvhu River Catchment using artificial neural networks. Physics and Chemistry of the Earth. 2011; 36: 830-835. Available on: https://www.sciencedirect.com/science/article/pii/S1474706511001574
[5]. Roman U.C., Patel P.L., Porey P.D. Prediction of missing precipitation data using conventional and artificial neural network techniques. ISH Journal of Hydraulic Engineering. 2012; 18(3): 224-231. Available on: https://www.tandfonline.com/doi/abs/10.1080/09715010.2012.721660
[6]. Tan C., Pedersen C.N.S. Financial time series forecasting using improved wavelet neural network. Master of Computer Science thesis, Faculty of Science, University of Copenhagen. 2009. Available on: http://www.trade2win.com/boards/attachments/data-feeds-software/94906d1288613395-build-neural-network-indicator-mt4-using-neuroshell-chong_jul2009.pdf
[7]. Demuth H., Beale M. Neural Network Toolbox for Use with MATLAB: User's Guide, Version 3.0. 1993. Available on: ftp://ftp.unicauca.edu.co/Facultades/FIET/DEIC/Materias/Control%20Inteligente/clases_2006a/Parte%20III/clase%2027%20int/docs/nnet.pdf
[8]. Lopez G., Rubio M.A., Martinez M., Batlles F.J. Estimation of hourly global photosynthetically active radiation using artificial neural network models. Agricultural and Forest Meteorology. 2001; 107: 279-291. Available on: https://www.researchgate.net/profile/GabrielLopez12/publication/223673797_Estimation_of_hourly_global_photosynthetically_active_radiation_using_artificial_neural_network_models/links/00b7d53bd781b665a7000000.pdf
[9]. Shahin M.A., Maier H.R., Jaksa M.B. Data division for developing neural networks applied to geotechnical engineering. Journal of Computing in Civil Engineering. 2004; 18(2): 105-114. Available on: https://ascelibrary.org/doi/abs/10.1061/(ASCE)0887-3801(2004)18%3A2(105)
[10]. Zurada J.M. Introduction to Artificial Neural Systems. Jaico Publishing House. 1997; 764 p. Available on: https://anuradhasrinivas.files.wordpress.com/2013/08/29721562-zurada-introduction-to-artificial-neural-systems-wpc-1992.pdf
[11]. Rumelhart D.E., Hinton G.E., Williams R.J. Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 1986; 1: 318-362. Available on: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwi6u9G73tDcAhUBTBoKHXwTBBIQFjAAegQIBRAC&url=http%3A%2F%2Fwww.dtic.mil%2Fdtic%2Ftr%2Ffulltext%2Fu2%2Fa164453.pdf&usg=AOvVaw210N6Rv02w7e-R2B6V37Ig
[12]. Nawi N.M., Khan A., Rehman M.Z. A new Levenberg Marquardt based back propagation algorithm trained with cuckoo search. Procedia Technology. 2013; 11: 18-23. Available on: https://core.ac.uk/download/pdf/17297466.pdf
[13]. Hagan M.T., Menhaj M.B. Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks. 1994; 5(6): 989-993. Available on: http://134.208.26.59/AdvancedNA/ConjugateGradientMethod/Marquardt%20algorithm%20for%20MLP.pdf
[14]. Al Shamisi M.H., Assi A.H., Hejase H.A.N. Using MATLAB to develop artificial neural network models for predicting global solar radiation in Al Ain City, UAE. In: Engineering Education and Research Using MATLAB. InTech. 2011. Available on: https://www.intechopen.com/download/pdf/21382
[15]. Moriasi D.N., Arnold J.G., Van Liew M.W., Bingner R.L., Harmel R.D., Veith T.L. Model evaluation guidelines for systematic quantification of accuracy in watershed simulations. Trans. ASABE. 2007; 50(3): 885-900. Available on: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.532.2506&rep=rep1&type=pdf