Energy demand and its associated costs in pressurized irrigation networks, together with water scarcity, are currently posing serious challenges for irrigation district (ID) managers. Additionally, most new water distribution networks in IDs have been designed to be operated on-demand, which complicates the daily decision-making process for ID managers. Knowing the water demand several days in advance would facilitate the management of the system and would help to optimize water use and energy costs. For efficient management and optimization of the water-energy nexus in IDs, longer-term forecasting models are needed. In this work, a new hybrid model (called LSTMHybrid) combining Fuzzy Logic (FL), a Genetic Algorithm (GA), an LSTM encoder-decoder and dense (fully connected) neural networks (DNN) has been developed for the one-week forecasting of irrigation water demand at ID scale. LSTMHybrid was developed in Python and applied to a real ID.
The optimal input variables for LSTMHybrid were mean temperature (°C), reference evapotranspiration (mm), solar radiation (MJ m⁻²) and irrigation water demand of the ID (m³) from 1 to 7 days prior to the first day of prediction. The optimal LSTMHybrid model consisted of 50 LSTM cells in the encoder submodel, 409 LSTM cells in the decoder submodel and three hidden layers in the DNN submodel with 31, 96 and 128 neurons, respectively. Thus, LSTMHybrid had a total of 1.5 million parameters, obtaining a representativeness higher than 94% and an accuracy of around 20%.
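The forecasting setup described above, lags of 1 to 7 days feeding a one-week-ahead prediction, is a sliding-window transformation of the daily series. A minimal sketch of that transformation (NumPy; the function name and single-variable layout are illustrative, not the paper's code):

```python
import numpy as np

def make_windows(series, n_lags=7, horizon=7):
    """Turn a daily series into (X, y) pairs: each row of X holds the
    previous n_lags days, each row of y the next horizon days."""
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])   # days t-7 ... t-1
        y.append(series[t:t + horizon])  # days t ... t+6
    return np.array(X), np.array(y)

# Toy daily demand series (m3); in the paper each window would also
# carry the lagged meteorological variables.
demand = np.arange(20.0)
X, y = make_windows(demand)
```

In an encoder-decoder model, each row of `X` would feed the encoder and each row of `y` would be the decoder's target sequence.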
Accurate estimation of reference evapotranspiration (ETo) provides useful information for water resource management and sustainable agriculture. This study estimates ETo with recurrent neural networks (RNNs), namely long short-term memory (LSTM) and bidirectional LSTM. Four representative meteorological sites (North Cape, Summerside, Harrington, and Saint Peters) were selected across Prince Edward Island (PEI), Canada, to form a PEI dataset from the mean values of the four sites' climatic variables, capturing climatic variability from all parts of the province. Based on subset regression analysis, the highest-contributing climatic variables, namely maximum air temperature and relative humidity, were selected as input variables for the RNNs' training (2011-2015) and testing (2016-2017) runs. The results suggested that LSTM and bidirectional LSTM are suitable methods to accurately (R² > 0.90) estimate ETo for all sites except Harrington. Testing period (2016-2017) root mean square errors were in the range of 0.38-0.58 mm/day for all sites. No major differences were observed in the accuracy of LSTM and bidirectional LSTM. Another objective of this study was to highlight the potential gap between ETo and rainfall for assessing agricultural sustainability in Prince Edward Island. Analyses of the data highlighted that the cumulative ETo surpassed the cumulative rainfall, potentially affecting the yield of major crops on the island. Therefore, agricultural sustainability requires viable options such as supplemental irrigation to replenish crop water requirements as and when needed.
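The R² and RMSE metrics reported throughout these abstracts follow their standard definitions; a minimal NumPy sketch (the toy ETo values are illustrative):

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error, in the units of the data (e.g. mm/day for ETo)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy example: daily ETo observations vs. model estimates (mm/day)
obs = [3.1, 3.4, 2.9, 4.0, 3.7]
pred = [3.0, 3.5, 3.0, 3.9, 3.6]
```

Here `rmse(obs, pred)` is 0.1 mm/day and `r_squared(obs, pred)` is about 0.94, i.e. the kind of accuracy the study reports.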
The perturbation in hydraulic networks for irrigation systems is often created when sudden changes in flow rates occur in the pipes. This is essentially due to the manipulation of hydrants and depends mainly on the gate closure time. Such a perturbation may lead to a significant pressure variation that may cause a pipe breakage. In a recent study, computer code simulating unsteady flow in pressurized irrigation systems—generated by the farmers’ behavior—was developed, and the obtained results led to the introduction of an indicator called the relative pressure variation (RPV) to evaluate the pressure variation occurring in the system with respect to the steady-state pressure. In the present study, two indicators have been set up: the hydrant risk indicator (HRI), defined as the ratio between the participation of the hydrant in the riskiest configurations and its total number of participations; and the relative pressure exceedance (RPE), which provides the variation of the unsteady-state pressure with respect to the nominal pressure. The two indicators could help managers better understand the network behavior with respect to the perturbation by identifying the riskiest hydrants and the potentially affected pipes. The present study was applied to an on-demand pressurized irrigation system in Southern Italy.
Predicting water table depth over the long-term in agricultural areas presents great challenges because these areas have complex and heterogeneous hydrogeological characteristics, boundary conditions, and human activities; also, nonlinear interactions occur among these factors. Therefore, a new time series model based on Long Short-Term Memory (LSTM) was developed in this study as an alternative to computationally expensive physical models. The proposed model is composed of an LSTM layer with another fully connected layer on top of it, with a dropout method applied in the first LSTM layer. In this study, the proposed model was applied and evaluated in five sub-areas of Hetao Irrigation District in arid northwestern China using 14 years of data (2000-2013). The proposed model uses monthly water diversion, evaporation, precipitation, temperature, and time as input data to predict water table depth. A simple but effective standardization method was employed to pre-process the data to ensure they were on the same scale. The 14 years of data were separated into two sets in the experiment: a training set (2000-2011) and a validation set (2012-2013). As expected, the proposed model achieves higher R² scores (0.789-0.952) in water table depth prediction when compared with the results of a traditional feed-forward neural network (FFNN), which only reaches relatively low R² scores (0.004-0.495), proving that the proposed model can preserve and learn previous information well. Furthermore, the validity of the dropout method and the proposed model’s architecture are discussed. Through experimentation, the results show that the dropout method can prevent overfitting significantly. In addition, comparisons between the R² scores of the proposed model and the Double-LSTM model (R² scores range from 0.170 to 0.864) further prove that the proposed model’s architecture is reasonable and can contribute to a strong learning ability on time series data.
Thus, one can conclude that the proposed model can serve as an alternative approach for predicting water table depth, especially in areas where hydrogeological data are difficult to obtain.
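The dropout mechanism the authors credit with preventing overfitting can be sketched in a few lines. This is the standard "inverted" dropout formulation, not the paper's code; the rate and shapes are illustrative:

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero each unit with probability `rate` during
    training and rescale the survivors by 1/(1-rate), so the expected
    activation is unchanged and no rescaling is needed at inference."""
    if not training or rate == 0.0:
        return activations
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
h = np.ones((4, 5))                    # a batch of hidden activations
out = dropout(h, rate=0.5, rng=rng)    # roughly half the units zeroed
```

Because each training step sees a different random mask, no single unit can be relied upon exclusively, which is the regularizing effect discussed in the abstract.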
Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has recently also entered the domain of agriculture. In this paper, we perform a survey of 40 research efforts that employ deep learning techniques, applied to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature and pre-processing of the data used, and the overall performance achieved according to the metrics used in each work under study. Moreover, we study comparisons of deep learning with other existing popular techniques, with respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.
Smart Farming is a development that emphasizes the use of information and communication technology in the cyber-physical farm management cycle. New technologies such as the Internet of Things and Cloud Computing are expected to leverage this development and introduce more robots and artificial intelligence in farming. This is encompassed by the phenomenon of Big Data, massive volumes of data with a wide variety that can be captured, analysed and used for decision-making. This review aims to gain insight into the state-of-the-art of Big Data applications in Smart Farming and identify the related socio-economic challenges to be addressed. Following a structured approach, a conceptual framework for analysis was developed that can also be used for future studies on this topic. The review shows that the scope of Big Data applications in Smart Farming goes beyond primary production; it is influencing the entire food supply chain. Big data are being used to provide predictive insights in farming operations, drive real-time operational decisions, and redesign business processes for game-changing business models. Several authors therefore suggest that Big Data will cause major shifts in roles and power relations among different players in current food supply chain networks. The landscape of stakeholders exhibits an interesting game between powerful tech companies, venture capitalists and often small start-ups and new entrants. At the same time there are several public institutions that publish open data, under the condition that the privacy of persons must be guaranteed. The future of Smart Farming may unravel in a continuum of two extreme scenarios: 1) closed, proprietary systems in which the farmer is part of a highly integrated food supply chain or 2) open, collaborative systems in which the farmer and every other stakeholder in the chain network is flexible in choosing business partners, both for the technology and for the food production side. The further development of data and application infrastructures (platforms and standards) and their institutional embedding will play a crucial role in the battle between these scenarios. From a socio-economic perspective, the authors propose to give research priority to organizational issues concerning governance and suitable business models for data sharing in different supply chain scenarios.
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.
In arid and semi-arid countries, the use of irrigation is essential to ensure agricultural production. Irrigation water use is expected to increase in the near future due to several factors, such as the growing demand for food and biofuel under a probable climate change scenario. For this reason, improving irrigation water use efficiency has been one of the main drivers of the upgrading of irrigation systems in countries like Spain, where irrigation accounts for around 70% of total water use. Pressurized networks have replaced obsolete open-channel distribution systems, and on-farm irrigation systems have also been upgraded by incorporating more efficient water emitters such as drippers or sprinklers. However, pressurized networks have significant energy requirements, increasing operational costs. In these circumstances, farmers may be unable to afford such expenses if their production is devoted to low-value crops. Thus, in this work, a new approach to the sustainable management of pressurized irrigation networks has been developed using multiobjective genetic algorithms. The model establishes the optimal sectoring operation during the irrigation season that maximizes farmers' profit and minimizes energy cost at the pumping station whilst satisfying the water demand of crops at hydrant level, taking into account the soil water balance at farm scale. This methodology has been applied to a real irrigation network in Southern Spain. The results show that it is possible to reduce energy cost and improve water use efficiency simultaneously through comprehensive irrigation management, leading, in the studied case, to energy cost savings close to 15% without significant reduction of crop yield.
Network sectoring is one of the most effective measures to reduce energy consumption in pressurized irrigation networks. In this work, a previous model for sectoring irrigation networks with several supply points (WEBSOM), which considered the simultaneous operation of all hydrants, has been improved by integrating an analysis of multiple random demand patterns and their effects on the variability of hydrant pressure (extended WEBSOM). The extended WEBSOM involves a multiobjective optimization followed by a Monte Carlo procedure to analyze different flow regimes using quality-of-service indicators, a novelty for multi-source pressurized irrigation networks. This innovation yielded energy savings ranging from 9 to 15% with respect to the assumption of concurrent operation of all hydrants, which rarely occurs in on-farm irrigation systems. These energy savings were associated with maximum pressure deficits of 21 and 34% in the most critical hydrant, with deficit frequencies of 27 and 36% in the peak month. However, smaller and less frequent deficits occurred in the rest of the months. Thus, substantial energy savings can be obtained in irrigation districts without significant losses in the service quality provided to farmers.
In recent years, a significant evolution of forecasting methods has been possible due to advances in artificial computational intelligence. Achieving the optimal architecture of an ANN is a complex process. Thus, in this work, an Evolutionary Robotics approach (the evolution of an ANN using a Genetic Algorithm) has been used to obtain an Artificial Neuro-Genetic Network (ANGN) for the short-term forecasting of daily irrigation water demand that maximizes the accuracy of the predictions. The methodology is applied in the Bembézar Irrigation District (Southern Spain). An optimal ANGN architecture (ANGN (7, 29, 16, 1)) achieved a Standard Error of Prediction (SEP) for daily water demand of 12.63% and explained 93% of the total variance observed during the validation process. The developed model proved to be a powerful tool that, without long dataset and time requirements, can be very useful for the development of management strategies.
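The evolutionary search over ANN architectures can be illustrated with a toy genetic algorithm acting on layer-size genes. Everything below is a hypothetical sketch: the fitness function is a stand-in (in the study it would be the validation score of a network trained with those layer sizes), and the operators are the textbook ones.

```python
import random

def fitness(genome):
    """Stand-in fitness: in practice, train the network with these layer
    sizes and return its validation score. Here we simply reward genomes
    close to a hypothetical optimum of (29, 16) hidden neurons."""
    return -sum((g - t) ** 2 for g, t in zip(genome, (29, 16)))

def evolve(pop_size=20, genes=2, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 64) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes + 1)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.3:                # mutation: nudge one gene
                i = rng.randrange(genes)
                child[i] = max(1, child[i] + rng.randint(-4, 4))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # best hidden-layer sizes found by the GA
```

Because the parents are carried over unchanged (elitism), the best fitness in the population never decreases from one generation to the next.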
Irrigated agriculture constitutes the largest consumer of freshwater in the Mediterranean region and provides a major source of income and employment for rural livelihoods. However, increasing droughts and water scarcity have highlighted concerns regarding the environmental sustainability of agriculture in the region. An integrated assessment combining a gridded water balance model with a geodatabase and GIS has been developed and used to assess the water demand and energy footprint of irrigated production in the region. Modelled outputs were linked with crop yield and water resources data to estimate water (m³ kg⁻¹) and energy (CO₂ kg⁻¹) productivity and identify vulnerable areas or ‘hotspots’. For selected key crops in the region, irrigation accounts for 61 km³ yr⁻¹ of water abstraction and 1.78 Gt CO₂ emissions yr⁻¹, with most emissions from sunflower (73 kg CO₂/t) and cotton (60 kg CO₂/t) production. Wheat is a major strategic crop in the region and was estimated to have a water productivity of 1000 t Mm⁻³ and emissions of 31 kg CO₂/t. Irrigation modernization would save around 8 km³ of water but would correspondingly increase CO₂ emissions by around +135%. Shifting from rain-fed to irrigated production would increase irrigation demand to 166 km³ yr⁻¹ (+137%) whilst CO₂ emissions would rise by +270%. The study has major policy implications for understanding the water–energy–food nexus in the region and the trade-offs between strategies to save water, reduce CO₂ emissions and/or intensify food production.
In many pressurized irrigation water distribution networks, rising energy costs are having a significant impact on system performance, environmental impact and the profitability of agribusinesses and farms dependent on water supplies for irrigated production. In this study, a new methodology is proposed for analysing the location of critical control points (hydrants) to reduce energy consumption. The methodology is developed and applied using two irrigation districts located in Southern Spain (Fuente Palmera and El Villar). The new approaches provide a framework for comparing different energy saving strategies, including improved critical point management and network sectoring. The results show that potential energy savings of around 10% and 30% are possible in each district when the theoretical irrigation requirements are modeled. However, these savings reduce to 5% and 12% when the local farmers’ practices of deficit irrigation are incorporated. These results are compared to those obtained for network sectoring in the same irrigation networks in a previous work. The study confirms that a sectoring approach works best for reducing the energy costs associated with meeting actual irrigation water demands in irrigation districts where energy consumption is a limiting factor on production.
When modeling a complex, poorly defined, nonlinear problem with hundreds of possible inputs, we must identify the significant inputs before any known nonlinear modeling techniques can be applied. In this paper the concept of fuzzy surfaces is introduced and used to automatically and quickly identify a subset of independent significant inputs for use in nonlinear system models.
In recent years, many modernization processes have been undertaken in irrigation districts with the main objective of improving water use efficiency. In southern Spain, many irrigation districts have either been modernized or are currently being improved. However, as part of the modernization process some unexpected side effects have been observed. This paper analyzes the relative advantages and limitations of modernization based on field data collected in a typical Andalusian irrigation district. Although the amount of water diverted for irrigation to farms has been considerably reduced, consumptive use has increased, mainly due to a change in crop rotations. The costs for operation and system maintenance have dramatically risen (400%), as the energy for pumping in pressurized systems is much higher than in the gravity-fed systems used previously. A regional analysis of the relationship between energy requirements and irrigation water applied was then carried out in ten Southern Spain irrigation districts. Results show that to apply an average depth of 2590 m³ ha⁻¹, the energy required was estimated to be 1000 kWh ha⁻¹. A new approach is needed that involves efficient management of both water and energy resources in these modernized systems. Finally, some energy saving options are identified and discussed.
Irrigated production in the Guadalquivir river basin in Spain has grown significantly over the last decade. As a consequence, water resources are under severe pressure, with an increasing deficit between available supplies and water demand. To conserve supplies, the water authority has reduced the volume of water assigned to each irrigation district. Major infrastructural investments have also been made to improve irrigation efficiency, including the adoption of high-technology micro-irrigation systems. Within a context of increasing water scarcity, climate change threatens to exacerbate the current supply-demand imbalance. In this study, the impacts of climate change on irrigation water demand have been modelled and mapped. Using a combination of crop and geographic information systems, maps showing the predicted spatial impacts of changes in agroclimate (climate variables that determine the irrigation requirements) and irrigation need have been produced. The maps highlight a significant predicted increase in aridity and irrigation need. Modelling of irrigation water requirements shows a typical increase of between 15 and 20% in seasonal irrigation need by the 2050s, depending on location and cropping pattern, coupled with changes in the seasonal timing of demand.
Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs. less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, explaining the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence.
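The initialization scheme proposed in that work is now widely known as Glorot (or Xavier) initialization; its uniform variant can be sketched as follows (NumPy; the fan sizes are illustrative):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    """Xavier/Glorot uniform initialization: W ~ U(-limit, limit) with
    limit = sqrt(6 / (fan_in + fan_out)). The resulting weight variance,
    2 / (fan_in + fan_out), keeps the variance of activations and of
    back-propagated gradients roughly constant across layers."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

rng = np.random.default_rng(42)
W = glorot_uniform(256, 128, rng)
# Here limit = sqrt(6/384) = 0.125, target variance = 2/384
```

This is the compromise the paper derives between the forward condition (variance 1/fan_in) and the backward condition (variance 1/fan_out).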
Reliable short-term forecasts of Irrigation Water Demand (IWD) can provide useful information to help water supply system operators with day-to-day operating decisions. Forecasting IWD is a complex task due to different natural (soil, water, crop, and climate interactions) and behavioral (farmers’ decision-making) components of the irrigation process. So far, various approaches have been developed to estimate IWD values in different contexts. One common approach is the application of data-driven methods to map the relationship between the main influential factors and IWD. Data-driven approaches often do not incorporate any conceptual understanding of the system when modeling IWD, even though such understanding has been found to improve predictive performance when considered. In this study, a hybrid framework has been introduced and developed by incorporating existing physical knowledge of the system into a data-driven model to predict IWD. This framework consists of two modules: in the first module, a simple conceptual approach was implemented to model the understood factors leading to crop water needs using observation data. In the second module, a data-driven model was used to capture the remaining relationships between inputs and the output in the irrigation process. The proposed hybrid framework was then applied to estimate daily IWD up to 7 days ahead for an irrigation district in Victoria, Australia. Results show that the integration of physical system understanding into data-driven models can improve the performance of IWD forecasting models, particularly during the high-demand period. In addition, the hybrid framework provides improved system understanding and thus leads to increased capacity to support operational decisions.
Groundwater resources are essential sources for irrigation and agriculture management. Forecasting groundwater levels (GWL) for current and future periods is an essential topic of watershed management. The prediction of GWL helps prevent overexploitation. The Auto-Regressive Integrated Moving Average (ARIMA) model is a widely known linear statistical model. One of the drawbacks of ARIMA models is that they may not capture all existing patterns, such as the non-linear parts of a time series. This article introduces a new hybrid model, namely the ARIMA-Long Short-Term Memory (LSTM) neural network, to capture the linear and non-linear components of a GWL time series in the Yazd-Ardekn Plain in Iran. This study applied the ARIMA-LSTM in forecasting three-, six-, and nine-month-ahead GWL. To determine the hyperparameters of the LSTM algorithm, the Salp Swarm Algorithm (SSA), sine cosine optimisation algorithm (SCOA), particle swarm optimisation algorithm (PSOA), and genetic algorithm (GA) were coupled with the LSTM model. Two different scenarios were devised to introduce new input combinations. In the first scenario, the residual values of the ARIMA model and the lagged GWL data were inserted into hybrid and standalone LSTM models for forecasting the GWL. In the second scenario, the summation of the outputs of the ARIMA and LSTM models gave the final outputs. For three-month-ahead GWL predictions in the second scenario, the ARIMA-LSTM-SSA produced better results than the ARIMA-LSTM-SCOA, ARIMA-LSTM-PSOA, ARIMA-LSTM-GA, ARIMA-LSTM, LSTM, and ARIMA algorithms, with mean absolute error (MAE) values lower by 5%, 9.4%, 15%, 38%, 42%, and 47%, respectively. However, the general results indicated that an increased forecasting horizon reduced the accuracy of the models. The new hybrid ARIMA-LSTM-SSA model was highly capable of forecasting other hydrological variables by capturing the non-linear and linear elements of the time series.
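The scenario-2 combination described above (final output = linear component plus a nonlinear model of its residuals) can be sketched with simple stand-ins: a least-squares line in place of ARIMA and a sinusoidal fit in place of the LSTM. The series and models here are illustrative, not the study's data:

```python
import numpy as np

def hybrid_fit(series, nonlinear_model):
    """Scenario-2 style hybrid: a linear model (stand-in for ARIMA)
    captures the trend, a nonlinear model (stand-in for the LSTM) is fit
    to its residuals, and the final output is the sum of the two."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)   # linear component
    linear_fit = slope * t + intercept
    residuals = series - linear_fit
    nonlinear_fit = nonlinear_model(t, residuals) # nonlinear component
    return linear_fit + nonlinear_fit

# Toy GWL series: linear decline plus a 12-month seasonal cycle
t = np.arange(48)
gwl = 30.0 - 0.1 * t + 2.0 * np.sin(2 * np.pi * t / 12)

def sin_model(t, resid):
    """Stand-in nonlinear model: least-squares sinusoid on the residuals."""
    A = np.column_stack([np.sin(2 * np.pi * t / 12),
                         np.cos(2 * np.pi * t / 12)])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return A @ coef

fit = hybrid_fit(gwl, sin_model)
```

The linear stage alone leaves the full seasonal cycle unexplained; adding the residual model recovers most of it, which is the motivation for the hybrid.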
In a world where the availability of water is decreasing, its use must be thoroughly optimized. Irrigated agricultural systems, as the main user of the planet's fresh water, must improve their management and save as much of this scarce resource as possible. However, the heterogeneity of these complex systems, which are frequently organized in water user associations, makes the daily management of this resource difficult. New information and communication technologies, as well as artificial intelligence techniques, help to understand the heterogeneity of these complex systems, making it possible to manage them better. However, the implementation of a tool with these characteristics requires a large and heterogeneous amount of data from different sources. Thus, in this work, a new tool for managers based on water demand forecasting at the field scale for the week ahead has been developed. This tool, WatergyForecaster, combines artificial intelligence techniques, satellite remote sensing (Sentinel 2) and open-source climate data to automatically build a water forecasting model at the farm scale for a week in advance. WatergyForecaster, developed in Python, was applied to a real water user association (WUA), obtaining a set of optimum models with an accuracy that ranged from 17% to 19% and representativeness higher than 80%.
Nowadays, water scarcity and the increase in energy demand and its associated costs in pressurized irrigation systems are causing serious challenges. In addition, most of these pressurized irrigation systems have been designed to be operated on-demand, where irrigation water is continuously available to farmers, complicating the daily decision-making process of water user association managers. Knowing in advance how much water will be applied by each farmer, and its distribution during the day, would facilitate the management of the system and would help to optimize water use and energy costs. In this work, a new hybrid methodology (CANGENFIS) combining a Multiple-Input Multiple-Output structure, fuzzy logic, artificial neural networks and multiobjective genetic algorithms was developed to model farmer behaviour and to forecast, in the short term, the distribution by tariff period of the irrigation depth applied at farm level. CANGENFIS, which was developed in Matlab, was applied to a real water user association located in Southwest Spain. Three optimal models for the main crops in the water user association were obtained. Averaged over all tariff periods, the representability (R²) and accuracy of the forecasts (standard error of prediction, SEP) were 0.70, 0.76 and 0.85, and 19.9%, 22.9% and 19.5%, for the rice, maize and tomato crop models, respectively.
Daily reference evapotranspiration (ETo) forecasts can help farmers in irrigation planning. Therefore, this study assesses the potential of deep learning models (long short-term memory (LSTM), one-dimensional convolutional neural network (1D CNN) and a combination of the two (CNN-LSTM)) and traditional machine learning models (artificial neural network (ANN) and random forest (RF)), in regional and local scenarios, to forecast multi-step ahead daily ETo (seven days) using iterated, direct and multiple input multiple output (MIMO) forecasting strategies. Three input data combinations were assessed: (1) only lagged ETo; (2) lagged ETo + day of the year of each step of the time lag considered; and (3) the same as input combination 2 + lagged meteorological variables. Data from 53 weather stations located in Minas Gerais, Brazil, were used. Four stations were used as test stations. Two baselines were also employed: (B1), in which the whole forecasting horizon is set equal to the mean ETo measured during the last seven days; and (B2), in which ahead ETo values are set equal to their respective historical monthly means. In general, MIMO was the best forecasting strategy, offering good performance and lower computational cost. The deep learning models performed slightly better than the machine learning models, and both were better than the best baseline (B2), mainly on the first and second forecasting days. Among the deep learning models, CNN-LSTM2 (i.e., CNN-LSTM with input combination 2) performed the best in the local scenario (mean RMSE over the prediction horizon and stations equal to 0.87), and CNN-LSTM3 performed the best in the regional scenario (mean RMSE equal to 0.88). The regional models are recommended instead of the local models since they exhibited similar performances and have higher generalization capacity.
Finally, although the models developed have not exhibited high accuracies, they can be useful tools in places where historical monthly mean ETo is used to forecast ETo.
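The MIMO strategy found to work best above maps the whole lag window to all seven future values in a single model call, avoiding the error accumulation of iterated one-step forecasts. A minimal sketch using multi-output least squares in place of the paper's CNN-LSTM (the series and window sizes are illustrative):

```python
import numpy as np

def fit_mimo(X, Y):
    """MIMO strategy: one model maps the lag window directly to all
    `horizon` future values at once (here multi-output least squares;
    a CNN-LSTM in the paper)."""
    Xb = np.column_stack([X, np.ones(len(X))])  # add a bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict_mimo(W, x):
    """Forecast the whole horizon from one lag window in a single call."""
    return np.append(x, 1.0) @ W

# Toy ETo-like seasonal series and a 7-lag / 7-step-ahead windowing
s = np.sin(np.arange(200) * 2 * np.pi / 30) + 3.0
lags, horizon = 7, 7
X = np.array([s[t - lags:t] for t in range(lags, len(s) - horizon + 1)])
Y = np.array([s[t:t + horizon] for t in range(lags, len(s) - horizon + 1)])
W = fit_mimo(X, Y)
pred = predict_mimo(W, s[-lags:])  # 7-day-ahead forecast in one shot
```

By contrast, the iterated strategy would train a one-step model and feed each prediction back as input seven times, compounding any per-step error.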
As the standard method to compute reference evapotranspiration (ET0), the Penman-Monteith (PM) method requires eight meteorological input variables, which makes it difficult to apply in data-scarce regions. To overcome this problem, a hybrid bi-directional long short-term memory (Bi-LSTM) model was developed to forecast short-term (1–7-day lead time) daily ET0. The model was trained, validated and tested using three meteorological variables for the period 2006–2018 at three selected meteorological stations located in the semi-arid region of central Ningxia, China. The performance of the hybrid Bi-LSTM model in forecasting short-term daily ET0 was evaluated against daily ET0 calculated by the Penman-Monteith method using the statistical metrics mean absolute error (MAE), root mean square error (RMSE), Pearson's correlation coefficient (R) and Nash-Sutcliffe efficiency (NSE). The results showed that the hybrid Bi-LSTM model with a combination of three meteorological inputs (maximum temperature, minimum temperature and sunshine duration) provides the best forecast performance for short-term daily ET0 at the selected meteorological stations. When averaged across stations, the statistical performance at different forecast lead times was as follows; 1-day lead time: RMSE = 0.159 mm day⁻¹, MAE = 0.039 mm day⁻¹, R = 0.992, NSE = 0.988; 4-day lead time: RMSE = 0.247 mm day⁻¹, MAE = 0.075 mm day⁻¹, R = 0.972, NSE = 0.985; and 7-day lead time: RMSE = 0.323 mm day⁻¹, MAE = 0.089 mm day⁻¹, R = 0.943, NSE = 0.982. Moreover, the hybrid Bi-LSTM model consistently improved the forecast performance of short-term daily ET0 compared to the adjusted Hargreaves-Samani (HS) method and the general Bi-LSTM model. The hybrid Bi-LSTM model developed in this study is currently integrated into the modern intelligent irrigation system of 30 ha of Lycium barbarum plantation in central Ningxia, China, a region with limited meteorological data.
It is recommended, however, that the hybrid Bi-LSTM model be evaluated across a wide range of climatic conditions in different regions of the world.
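The four skill metrics used to evaluate the daily ET0 forecasts above (MAE, RMSE, Pearson's R and Nash-Sutcliffe efficiency) can be computed with a short Python helper. This is an illustrative sketch; the function name and interface are assumptions, not taken from the cited work:

```python
import math

def forecast_metrics(obs, pred):
    """Return (MAE, RMSE, Pearson's R, NSE) for paired observed/predicted series."""
    n = len(obs)
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mo = sum(obs) / n
    mp = sum(pred) / n
    # Pearson's correlation coefficient
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    r = cov / (so * sp)
    # Nash-Sutcliffe efficiency: 1 minus error variance over observed variance
    nse = 1.0 - sum((o - p) ** 2 for o, p in zip(obs, pred)) / sum((o - mo) ** 2 for o in obs)
    return mae, rmse, r, nse
```

A perfect forecast yields MAE = RMSE = 0 and R = NSE = 1, which provides a quick sanity check of the implementation.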
Irrigation water demand is highly variable and depends on farmers' decisions about when to irrigate. Their decisions affect the performance of the irrigation networks. Accurate daily prediction of the occurrence of irrigation events at farm scale is a key factor in improving the management of irrigation districts and, consequently, the sustainability of irrigated agriculture. In this work, a hybrid heuristic methodology that combines Decision Trees and Genetic Algorithms has been developed to find the optimal decision tree to model farmers' behaviour, predicting the occurrence of irrigation events. The methodology was tested in a real irrigation district, and results showed that the optimal models developed were able to predict between 68% and 100% of the positive irrigation events and between 93% and 100% of the negative irrigation events.
Irrigation water demand is highly variable and depends on farmer behaviour, which affects the performance of irrigation networks. The irrigation depth applied to each farm also depends on farmer behaviour and is affected by precise and imprecise variables. In this work, a hybrid methodology combining artificial neural networks, fuzzy logic and genetic algorithms was developed to model farmer behaviour and forecast the daily irrigation depth used by each farmer. The models were tested in a real irrigation district located in southwest Spain. Three optimal models for the main crops in the irrigation district were obtained. The representativeness (R²) and accuracy of the predictions (standard error of prediction, SEP) were 0.72, 0.87 and 0.72; and 22.20%, 9.80% and 23.42%, for the rice, maize and tomato crop models, respectively.
Limited opportunities to further expand the volume of global freshwater allocated to irrigation mean that advanced irrigation technologies, aiming to improve the efficiency of existing systems, are urgently needed and of paramount importance. This article, 'Advanced Irrigation Technologies', describes the latest advances in irrigation application methods, irrigation management, and other novel developments. It provides a vision for the future, including emerging risks, opportunities, and technical challenges, as the world gears up to supply 50% more food to an additional 2 billion people by 2050.
In Spain, farmers and water user authorities have applied a variety of approaches in modernizing irrigation systems to address the delicate balance between water and energy use. This review presents the technical aspects of this process. This delicate balance is strongly manifested when replacing open-channel, gravity-based systems with pressurized distribution networks and switching from surface to pressurized irrigation systems, the most common modernization approach in Spain and other countries. This summary focuses on actions and technologies for improving water and energy use in irrigation and some of the main models and tools for improving irrigation infrastructure design and management. Calculations of water conservation and energy consumption as a result of improvement demonstrate the complexity of the balance between energy and water efficiency. The benefits of irrigation modernization include increased water efficiency and productivity, improved operation and management of irrigation systems and improved working conditions for farmers, but also increased energy demand and investment costs. It is necessary to analyze the economic, social, and environmental viability of the irrigation modernization process in each case. Proper design and management of irrigation systems, promotion of the application and usefulness of Irrigation Advisory Services, and web-GIS platforms to transfer and share real-time information with farmers in a feedback process are some of the best tools for improving consumption of water, energy and other production inputs.
Real-time irrigation scheduling improves irrigation water management and achieves higher irrigation system performance. This scheduling requires the prediction of daily reference evapotranspiration (ETo), which has been performed in some areas by using the Hargreaves-Samani (HS) equation or the Penman-Monteith (PM) equation with all of the required parameters obtained from forecasting services. Artificial neural networks (ANNs) and HS, which use forecasted daily maximum and minimum temperatures (TMAX and TMIN) as input data, were used to forecast ETo from 2011 to 2012 using PM as the reference methodology. A tool named FORETo (ETo forecasting) was implemented to transfer the proposed methodology to final users. This methodology and FORETo software were applied in the Hydrogeologic System 08.29 (Spain), where there is a high concentration of crops with high water consumption in semi-arid conditions. Two seasons of weekly field observations were collected to analyse the ability of FORETo to predict maize and onion crop water requirements. The results obtained from the comparison of daily ETo forecasts by FORETo and HS versus PM showed a better performance of the developed model. However, HS had good agreement, with root mean squared errors (RMSEs) lower than 0.98 mm day−1 and an index of agreement (d) higher than 0.95.
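The temperature-based Hargreaves-Samani (HS) equation referenced above can be sketched in a few lines of Python. This is a minimal illustration assuming extraterrestrial radiation is already expressed in equivalent evaporation (mm/day); the function name is hypothetical:

```python
import math

def eto_hargreaves_samani(tmax, tmin, ra):
    """Daily reference evapotranspiration (mm/day) via Hargreaves-Samani.

    tmax, tmin : daily maximum/minimum air temperature (deg C)
    ra         : extraterrestrial radiation expressed as equivalent
                 evaporation (mm/day), i.e. 0.408 * Ra[MJ m-2 day-1]
    """
    tmean = (tmax + tmin) / 2.0
    # ETo = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)
```

Only forecasted TMAX and TMIN (plus the latitude- and date-dependent Ra, computable offline) are needed, which is why HS is attractive when full PM inputs are unavailable.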
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has low memory requirements and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The method exhibits invariance to diagonal rescaling of the gradients by adapting to the geometry of the objective function. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. We demonstrate that Adam works well in practice when experimentally compared to other stochastic optimization methods.
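The Adam update described above can be written in a few lines of plain Python. This is a pedagogical sketch of a single update step (the function name and list-based interface are assumptions, not from the paper), showing the exponential moment estimates and their bias correction:

```python
def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter list theta given gradient list grad.

    m, v : running (biased) first- and second-moment estimates
    t    : 1-based step count, used for bias correction
    Returns the updated (theta, m, v).
    """
    new_theta, new_m, new_v = [], [], []
    for th, g, mi, vi in zip(theta, grad, m, v):
        mi = b1 * mi + (1 - b1) * g        # first moment (mean of gradients)
        vi = b2 * vi + (1 - b2) * g * g    # second moment (uncentred variance)
        m_hat = mi / (1 - b1 ** t)         # bias-corrected estimates
        v_hat = vi / (1 - b2 ** t)
        th = th - lr * m_hat / (v_hat ** 0.5 + eps)
        new_theta.append(th)
        new_m.append(mi)
        new_v.append(vi)
    return new_theta, new_m, new_v
```

For example, repeatedly applying `adam_step` with the gradient of f(x) = x², i.e. 2x, drives x toward the minimizer at 0.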
The suitability of artificial neural networks for estimating kinetic analytical parameters for first-order reactions by using real kinetic data acquired after a short reaction time is demonstrated. The optimal reaction-time region and its associated number of inputs are the two key parameters for obtaining as suitable a network as possible. Noise in the transient signal was found to affect the performance of the neural network, as was the size of the training set. The trained network estimated kinetic analytical parameters with a %SEP of 2.14, which is much smaller than those provided by parametric methods such as NLR and PCR.
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
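The weight-adjustment procedure described above can be illustrated with a tiny 1-2-1 network (one input, two sigmoid hidden units, one linear output) trained by backpropagation on a squared-error loss. This is an illustrative sketch only; the architecture, learning rate and function names are assumptions chosen for brevity:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(params, x, y, lr=0.1):
    """One backpropagation step for a 1-2-1 net. params = (w1, b1, w2, b2)."""
    w1, b1, w2, b2 = params
    # forward pass: hidden activations, then linear output
    h = [sigmoid(w1[j] * x + b1[j]) for j in range(2)]
    out = sum(w2[j] * h[j] for j in range(2)) + b2
    err = out - y                                  # dLoss/dout for 0.5*err^2
    # backward pass: propagate err through the output weights,
    # then through the sigmoid derivative h*(1-h)
    for j in range(2):
        dh = err * w2[j] * h[j] * (1 - h[j])
        w2[j] -= lr * err * h[j]
        w1[j] -= lr * dh * x
        b1[j] -= lr * dh
    b2 -= lr * err
    return (w1, b1, w2, b2), 0.5 * err * err
```

Repeating the step over a small dataset reduces the total squared error, with the hidden units gradually coming to encode features of the input-output mapping, which is the essence of the procedure.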
Recently, Computational Neural Networks (CNNs) and fuzzy inference systems have been successfully applied to time series forecasting. In this study, the performance of a hybrid methodology combining a feed-forward CNN, fuzzy logic and a genetic algorithm to forecast one-day-ahead daily water demands at irrigation districts was analysed, considering that only flows in previous days are available for the calibration of the models. Individual forecasting models were developed using historical time series data from the Fuente Palmera irrigation district located in Andalucía, southern Spain. These models included univariate autoregressive CNNs trained with the Levenberg–Marquardt algorithm (LM). The individual model forecasts were then corrected via a fuzzy logic approach whose parameters were adjusted using a genetic algorithm in order to improve the forecasting accuracy. For the purpose of comparison, this hybrid methodology was also applied with univariate autoregressive CNN models trained with the Extended-Delta-Bar-Delta algorithm (EDBD) and calibrated in a previous study in the same irrigation district. A multicriteria evaluation with several statistics and absolute error measures showed that the hybrid model performed significantly better than the univariate and multivariate autoregressive CNNs.
In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
Population increase and the improvement of living standards brought about by development will result in a sharp increase in food demand during the next decades. Most of this increase will be met by the products of irrigated agriculture. At the same time, the water input per unit irrigated area will have to be reduced in response to water scarcity and environmental concerns. Water productivity is projected to increase through gains in crop yield and reductions in irrigation water. In order to meet these projections, irrigation systems will have to be modernized and optimised. Water productivity can be defined in a number of ways, although it always represents the output of a given activity (in economic terms, if possible) divided by some expression of water input. Five expressions for this indicator were identified, using different approaches to water input. A hydrological analysis of water productivity poses a number of questions on the choice of the water input expression. In fact, when adopting a basin-wide perspective, irrigation return flows often cannot be considered as net water losses. A number of irrigation modernization and optimization measures are discussed in the paper. Particular attention was paid to the improvement of irrigation management, which shows a much better economic return than the improvement of irrigation structures. The hydrological effects of these improvements may be deceptive, since they will be accompanied by larger crop evapotranspiration and even increased cropping intensity. As a consequence, less water will be available for alternative uses.
Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitist approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all three of the above difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm, two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
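The fast non-dominated sorting idea can be sketched in Python: for each solution, record the set it dominates and the number of solutions dominating it, then peel off successive Pareto fronts. This is an illustrative minimization sketch (the function names are assumptions), not the full NSGA-II with crowding distance:

```python
def non_dominated_sort(points):
    """Split a population of objective vectors (minimization) into Pareto
    fronts, returned as lists of indices into `points`."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(points)
    dominated_by_me = [[] for _ in range(n)]   # solutions each i dominates
    domination_count = [0] * n                 # how many dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by_me[i].append(j)
            elif dominates(points[j], points[i]):
                domination_count[i] += 1
    fronts = []
    current = [i for i in range(n) if domination_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by_me[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:   # all its dominators peeled off
                    nxt.append(j)
        current = nxt
    return fronts
```

Because each pair is compared once and fronts are then peeled via the bookkeeping arrays, the dominance comparisons cost O(MN²), which is the complexity improvement the paper highlights.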