Article

Abstract

Probability-based design of offshore structures usually requires a joint distribution of significant wave heights and zero-up-crossing periods, or spectral peak periods, in order to describe the long-term wave scenario and thus predict long-term extreme responses. Several joint statistical models have been proposed in the literature, often comprising environmental parameters besides significant wave heights and zero-up-crossing or spectral peak periods, such as wind and current velocities, among others. However, since there is no theoretical environmental distribution function, the assessment of different models is done on an empirical basis for each available data set. Fitting a joint distribution model to a set of long-term wave data, given the extensive number of available joint statistical models and the inherent extrapolations required for extreme conditions, can be extremely cumbersome. This paper proposes the application of the Taylor diagram to the assessment of the fit of joint models. The aim is to find the models with the best performance through an easy and straightforward visual comparison. For simplicity, the joint statistical model considered comprises significant wave heights and spectral peak periods only. Results show that the Taylor diagram is a useful tool for preliminary evaluation of fitted probability models. The indications of better model performance identified in the diagram were shown to correspond to other common goodness-of-fit indicators, which demonstrates that the diagram can be a reliable indicator of good models. Case studies are presented to demonstrate the application of the Taylor diagram to the fitting of joint probability distributions of wave environmental parameters.
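As a sketch of the statistics such a diagram condenses (an illustration for this summary, not code from the paper; the names `hs_ref` and `hs_fit` and the Weibull stand-in for significant wave heights are assumptions), the following computes the three quantities a Taylor diagram plots for one candidate fit against reference data: the standard deviation ratio, the correlation coefficient, and the centered RMS difference.

```python
import numpy as np

def taylor_stats(reference, model):
    """Return (std ratio, correlation, centered RMS difference) for a
    candidate series against a reference series -- the three quantities
    a Taylor diagram condenses into a single plotted point."""
    ref = np.asarray(reference, dtype=float)
    mod = np.asarray(model, dtype=float)
    corr = np.corrcoef(ref, mod)[0, 1]
    # Centered RMS difference: both means are removed before differencing.
    crmsd = np.sqrt(np.mean(((mod - mod.mean()) - (ref - ref.mean())) ** 2))
    return mod.std() / ref.std(), corr, crmsd

# Illustrative data only: a Weibull sample standing in for observed Hs,
# and a candidate "fit" that tracks it with small noise.
rng = np.random.default_rng(0)
hs_ref = 2.0 * rng.weibull(1.5, 500)
hs_fit = hs_ref + rng.normal(0.0, 0.1, 500)
ratio, r, e = taylor_stats(hs_ref, hs_fit)
```

A well-fitting candidate yields a ratio near 1, a correlation near 1, and a small centered RMS difference, i.e. a point close to the reference point on the diagram.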




... This diagram provides a quick visual representation for readers, making it easier than other methods to assess key differences between models. It ensures that the best-performing models are identified with high confidence [103][104][105]. ...
... 2) for each previous relationship compared to the values of the reference data. Models with high correlation and low error naturally appear closer to the ideal point (the center of the circles, where R² = 1 and RMSE = 0), making their reliability easily identifiable [103][104][105]. ...
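The proximity-to-ideal-point criterion described above can be sketched numerically (an illustration under assumed data, not code from the cited works): in normalized coordinates the distance from a model's point to the reference point reduces to a function of the standard deviation ratio and the correlation.

```python
import numpy as np

def distance_to_ideal(reference, model):
    """Distance from a model's point to the reference ("ideal") point on a
    Taylor diagram drawn in normalized coordinates (sigma / sigma_ref on
    the radius).  By the diagram's geometry this equals the normalized
    centered RMS difference."""
    ref = np.asarray(reference, float)
    mod = np.asarray(model, float)
    sigma_hat = mod.std() / ref.std()
    r = np.corrcoef(ref, mod)[0, 1]
    # max(..., 0.0) guards against tiny negative values from rounding.
    return np.sqrt(max(sigma_hat**2 + 1.0 - 2.0 * sigma_hat * r, 0.0))

# Ranking illustrative candidates: smaller distance = closer to ideal.
rng = np.random.default_rng(1)
obs = rng.normal(0.0, 1.0, 300)
candidates = {
    "good": obs + rng.normal(0.0, 0.2, 300),   # tracks the observations
    "poor": rng.normal(0.0, 1.0, 300),         # independent of them
}
ranked = sorted(candidates, key=lambda k: distance_to_ideal(obs, candidates[k]))
```

Sorting by this distance reproduces the visual ranking one would read off the diagram.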
Article
Full-text available
Accurate determination of rock mechanical parameters is crucial for safe and efficient drilling, production, and reservoir development in these complex formations. While numerous empirical relationships have been proposed to estimate these properties, there is no reliable way to guide the selection of suitable models in new studies. This study develops a one-dimensional mechanical earth model (1D MEM) for a carbonate reservoir in southwestern Iran to systematically evaluate existing empirical relationships for carbonate rocks. First, laboratory tests were performed on the obtained core samples. The results were then integrated with petrophysical data to establish new empirical correlations and convert dynamic measurements into static elastic and rock strength parameters. The study involved estimating Biot's effective stress coefficient, calculating the pore pressure and in-situ stress profiles, and calibrating them using field data. Additionally, this study reviewed existing methods and empirical relationships for estimating key physico-mechanical rock parameters. The predictions of these relationships were statistically compared with results from the current research as reference data. Using the Taylor diagram as an innovative method, the relationships were ranked based on their prediction accuracy, providing a guideline for selecting optimal relationships to use in future studies on similar lithologies. The comprehensive statistical analysis of more than 200 existing empirical relationships showed that the effectiveness of predictive relationships is influenced by factors such as lithology, geological environment, equation forms, and data acquisition. Relationships developed specifically for carbonate rocks, particularly limestone, produce more reliable results than those intended for mixed or non-carbonate rocks.
... The most accurate approximation (or model) of a system, process, or phenomenon can be identified visually using Taylor diagrams in mathematics (Simão et al. 2020). Karl E. Taylor produced this diagram in 1994 to make it simpler to compare various models side by side. ...
... This may be related to the processing speed of different algorithms, and because each model has advantages and disadvantages, it is advisable to use several of them before selecting the best one for further study. Recently, hybrid algorithms have been used more and more frequently, and most literature evaluations show that they can boost solo algorithms' predictive power (Simão et al. 2020). ...
Article
Full-text available
The Advance Rate (AR) of tunnel boring machines (TBMs) plays a pivotal role in evaluating their efficiency in tunnel engineering projects. This study focuses on the development of precise prediction models for TBM performance, employing advanced algorithms including gene expression programming, time series analysis, multivariate regression, artificial neural networks, particle aggregation algorithm, genetic algorithm, adaptive neuro-fuzzy inference systems, and support vector machines. The AR, serving as a performance metric, becomes the specific target for the prediction models. A test database comprising 3597 data sets was curated from a tunneling project at the Sar Pol Zahab, Bazi Daraz Water Transfer Tunnel. Utilizing 21 parameters as input variables, intelligent AR models were formulated based on comprehensive training and testing patterns, incorporating geological features and the key machine parameters influencing AR. Quantitative evaluation of the models involved statistical indicators such as root mean square error (RMSE), coefficient of determination (R²), and variance calculation. Comparative analysis based on RMSE, MAE, MAPE, VAF and R² (1.41, 0.66, 6.33, 98.88 and 0.95, respectively) showed that the gene expression algorithm outperformed the other approaches. These results underscore the efficacy of the gene expression programming-based model, suggesting its potential to yield a novel functional equation for accurate TBM performance prediction.
... The centered RMS error value relates the standard deviation and the correlation coefficient. Figure 4 presents the geometric form of the Taylor diagram [48]. An example of a Taylor diagram is shown in Fig. 5. ...
Article
Full-text available
Solar radiation forecasting is important for energy management, energy stability and installation of solar energy systems. In this study, a hybrid LSTM-SVM was developed to estimate the daily solar radiation of the Nusaybin district of Mardin province. To compare with the hybrid LSTM-SVM, support vector machines (SVM), long short-term memory (LSTM), decision tree and K-nearest neighbor were used. The performance of the models was evaluated with the help of the coefficient of determination (R²), mean square error (MSE), root-mean-square error (RMSE), normalized root-mean-square error (NRMSE), mean absolute percentage error (MAPE) and the Taylor diagram. The regression curves of the models were drawn and their solar radiation prediction performances were compared. The correlation matrix determined the correlation between data. The R², MSE, RMSE, NRMSE and MAPE of the hybrid LSTM-SVM were 0.962, 2.0191e-04, 0.0142, 0.0384 and 42.7962, respectively. The proposed hybrid LSTM-SVM had higher performance than the other compared models. The Taylor diagram showed that the prediction accuracy of the hybrid LSTM-SVM model is good. A correlation of 0.8315 between solar radiation and sunshine duration was determined by the correlation matrix. The meteorological variable that most affected solar radiation was sunshine duration. As a result, the hybrid LSTM-SVM can be used as an alternative method for solar radiation estimation.
... This diagram presents a thorough method for assessing model performance by incorporating metrics such as Pearson's correlation coefficient, root mean square difference, and the ratio of variances. The centered RMS difference between the data sets (E′²) and the variance ratio are related by the following equation (Simão et al., 2020): ...
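The equation cited (Simão et al., 2020) is Taylor's law-of-cosines relation linking the centered RMS difference E′, the two standard deviations, and the correlation R. A minimal numerical check on synthetic data (an illustration, not code from the cited paper):

```python
import numpy as np

# Numerical check of the Taylor-diagram identity (Taylor, 2001):
#   E'^2 = sigma_f^2 + sigma_r^2 - 2 * sigma_f * sigma_r * R
# where E' is the centered RMS difference between a model series f and a
# reference series r, sigma_* their standard deviations, and R their
# Pearson correlation.  The data here are synthetic.
rng = np.random.default_rng(42)
ref = rng.normal(0.0, 1.0, 1000)
mod = 0.8 * ref + rng.normal(0.0, 0.5, 1000)

f = mod - mod.mean()          # mean-removed model series
r_ = ref - ref.mean()         # mean-removed reference series
sigma_f, sigma_r = f.std(), r_.std()
R = np.corrcoef(f, r_)[0, 1]

e_prime_sq = np.mean((f - r_) ** 2)                         # direct
identity = sigma_f**2 + sigma_r**2 - 2.0 * sigma_f * sigma_r * R
```

The two quantities agree to floating-point precision, which is exactly what lets a single point on the diagram encode all three statistics.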
Article
Full-text available
Jordan is confronted with substantial risks linked to climate change and is proactively striving to manage resources sustainably by aligning its initiatives with the Sustainable Development Goals (SDGs). This study examines the impacts of climate change on rainfed wheat productivity in Irbid Governorate during the historical period (1994-2021) and simulated for the future (2024-2100) using climate models (SDGs 1, 2, 11, 12, 13, & 15). Monthly precipitation, mean temperature, and annual wheat yield data were collected for the period (1994-2021) and analyzed using the Mann-Kendall test with Sen's slope to investigate the historical trends, and the Phi-k correlation coefficient to determine their association. Monthly precipitation and mean temperature projections were collected from the CSIRO-MK3.6.0 & GFDL-ESM2M CMIP5 ensemble models under two concentration pathways: RCP4.5 and RCP8.5. The projections were bias-corrected by the dynamic bilinear interpolation method and Delta, EQM, and QM statistical downscaling to enhance their reliability and performance in capturing the region's climate. A Taylor diagram was utilized to choose the model best representing the observed data. Cumulative Distribution Function and Probability Density Function curves were plotted to describe the changes over decades based on climate models. Future wheat yields were predicted using the Random Forest Regression model. The results revealed a non-significant increase in total annual precipitation of 1.8% and 13.8% and a rise in annual mean temperature of 5.4% and 2.7% for Irbid and Baqura stations, respectively, during the baseline timeframe, over which wheat yield increased by 21.3%. The CSIRO model outperformed the GFDL model, with greater fidelity in simulating historical monthly precipitation and mean temperatures. The CSIRO-MK3.6.0 model indicates precipitation and temperature shifts for near and far future periods.
Temperature increases are expected to have more severe impacts in the far future (2073-2100), while projected decreases in precipitation of 0.06 mm/day and 0.10 mm/day under RCP4.5 and RCP8.5 correspond to yield reductions of 11.42 kg/ha and 12.21 kg/ha. Under temperature increases of 2.7 °C and 4.1 °C under RCP4.5 & RCP8.5, the corresponding yield decreases are 24.83 kg/ha and 33.40 kg/ha, respectively. The study highlights the need for enhanced adaptation strategies: cultivating resilient wheat varieties, promoting crop rotation, improving meteorological monitoring and risk response approaches, and providing timely weather forecasts to mitigate the adverse effects of climate change on wheat productivity and ensure sustainable agriculture in Jordan.
... Taylor diagrams are used to characterize the potential of an algorithm in estimating IOPs in terms of the correlation coefficient, normalized standard deviation, and normalized unbiased root mean square deviation (uRMSD). In this diagram, a model performs better if the obtained IOPs are closer to the in situ data [104]; in the ideal case, r is equal to one, the normalized uRMSD is zero, and the normalized standard deviation is one [29,105]. The following equation was used to calculate the uRMSD: ...
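The normalized statistics described here can be sketched as follows (a hedged illustration under assumed data, not the cited paper's code), including a check of the ideal case the snippet spells out: r = 1, normalized SD = 1, normalized uRMSD = 0 when retrievals match the in situ data exactly.

```python
import numpy as np

def normalized_taylor_stats(in_situ, derived):
    """Normalized statistics for a normalized Taylor diagram: Pearson r,
    the standard deviation ratio, and the centered (unbiased) RMS
    difference divided by the in situ standard deviation."""
    x = np.asarray(in_situ, float)
    y = np.asarray(derived, float)
    r = np.corrcoef(x, y)[0, 1]
    nsd = y.std() / x.std()
    urmsd = np.sqrt(np.mean(((y - y.mean()) - (x - x.mean())) ** 2)) / x.std()
    return r, nsd, urmsd

# Ideal case described in the text: retrieved values equal the in situ
# data exactly, so r = 1, normalized SD = 1, normalized uRMSD = 0.
x = np.array([0.2, 0.5, 1.1, 0.9, 0.4])
r, nsd, urmsd = normalized_taylor_stats(x, x.copy())
```

Normalizing by the in situ standard deviation is what allows retrievals of differently scaled IOPs to share one diagram.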
Article
Full-text available
Lake Kasumigaura, one of Japan’s largest lakes, presents significant challenges for remote sensing due to its eutrophic conditions and complex optical properties. Although the Global Change Observation Mission-Climate (GCOM-C)/Second-generation Global Imager (SGLI)-derived inherent optical properties (IOPs) offer water quality monitoring potential, their performance in such turbid inland waters remains inadequately validated. This study evaluated five established IOP retrieval algorithms, including the quasi-analytical algorithm (QAA_V6), Garver–Siegel–Maritorena (GSM), generalized IOP (GIOP-DC), Plymouth Marine Laboratory (PML), and linear matrix inversion (LMI), using measured remote sensing reflectance (Rrs) and corresponding IOPs between 2017–2018. The results demonstrated that the QAA had the highest performance for retrieving absorption of particles (ap) with a Pearson correlation (r) = 0.98, phytoplankton (aph) with r = 0.97, and non-algal particles (anap) with r = 0.85. In contrast, the GSM algorithm exhibited the best accuracy for estimating absorption by colored dissolved organic matter (aCDOM), with r = 0.87, along with the lowest mean absolute percentage error (MAPE) and root mean square error (RMSE). Additionally, a strong correlation (r = 0.81) was observed between SGLI satellite-derived remote-sensing reflectance (Rrs) and in situ measurements. Notably, a high correlation was observed between the aph (443 nm) and the chlorophyll a (Chl-a) concentration (r = 0.84), as well as between the backscattering coefficient (bbp) at 443 nm and inorganic suspended solids (r = 0.64), confirming that IOPs are reliable water quality assessment indicators. Furthermore, the use of IOPs as variables for estimating water quality parameters such as Chl-a and suspended solids showed better performance compared to empirical methods.
... When these data are combined, they provide a quick summary of the degree of sequence consistency, allowing one to judge whether a model resembles the natural mechanism. The graph helps to evaluate the relative benefits of competing models and track overall performance as a model evolves [4,56]. ...
Article
Full-text available
Forecasting the performance and energy yield of photovoltaic (PV) farms is crucial for establishing the economic sustainability of a newly installed system. The present study aims to develop a prediction model to forecast an installed PV system's annual power generation yield and performance ratio (PR) using three environmental input parameters: solar irradiance, wind speed, and ambient air temperature. Three data-based artificial intelligence (AI) techniques, namely, adaptive neuro-fuzzy inference system (ANFIS), response surface methodology (RSM), and artificial neural network (ANN), were employed. The models were developed using three years of data from an operational 2 MWp solar PV project at Kuzhalmannam, Kerala state, India. Statistical indices such as Pearson's R, coefficient of determination (R²), root-mean-squared error (RMSE), Nash-Sutcliffe efficiency (NSCE), mean absolute percentage error (MAPE), Kling-Gupta efficiency (KGE), Taylor's diagram, and the correlation matrix were used to determine the most accurate prediction model. The results demonstrate that ANFIS was the most precise performance ratio prediction model, with an R² value of 0.9830 and an RMSE of 0.6. It is envisaged that the forecast model would be a valuable tool for policymakers, solar energy researchers, and solar farm developers.
... The standard deviation shows the model predictions' relative variation compared to the observed data. The RMSD quantifies the magnitude of the average model prediction error [48,49], for multiple parameters simultaneously. Using modified Taylor diagrams has led to new conclusions about the models tested in McNamara et al. (2013). ...
Article
Full-text available
Rising shipping emissions greatly affect greenhouse gas (GHG) levels, so precise fuel consumption forecasting is essential to reduce environmental effects. Precision forecasts using machine learning (ML) could offer sophisticated solutions that increase the fuel efficiency and lower emissions. Indeed, five ML techniques, linear regression (LR), decision tree (DT), random forest (RF), XGBoost, and AdaBoost, were used to develop ship fuel consumption models in this study. It was found that, with an R² of 1, zero mean squared error (MSE), and a negligible mean absolute percentage error (MAPE), the DT model suited the training set perfectly, while R² was 0.8657, the MSE was 56.80, and the MAPE was 16.37% for the DT model testing. More importantly, this study provided Taylor diagrams and violin plots that helped in the identification of the best-performing models. Generally, the employed ML approaches efficiently predicted the data; however, they are black-box methods. Hence, explainable machine learning methods like Shapley additive explanations, the DT structure, and local interpretable model-agnostic explanations (LIME) were employed to comprehend the models and perform feature analysis. LIME offered insights, demonstrating that the major variables impacting predictions were distance (≤450.88 nm) and time (40.70 < hr ≤ 58.05). By stressing the most important aspects, LIME can help one to comprehend the models with ease.
... This diagram presents a thorough method for assessing model performance by incorporating metrics such as Pearson's correlation coefficient, root mean square difference, and the ratio of variances. The centered RMS difference between the data sets (E′²) and the variance ratio are related by the following equation (Simão et al., 2020): ...
... At the reference point, one finds a perfect model. Better models are those lying closest to this point (Elvidge et al., 2014; Simão et al., 2020). This method quickly highlights differences between models. ...
Article
Full-text available
In the ongoing search for an alternative fuel for diesel engines, biogas is an attractive option. Biogas can be used in dual-fuel mode with diesel as pilot fuel. This work investigates the modeling of injection strategies for a waste-derived biogas-powered dual-fuel engine. Engine performance and emissions were projected using supervised machine learning methods including random forest, lasso regression, and support vector machines (SVM). Mean Squared Error (MSE), R-squared (R²), and Mean Absolute Percentage Error (MAPE) were among the criteria used in evaluations of the models. Random Forest showed better performance for Brake Thermal Efficiency (BTE), with a test R² of 0.9938 and a low test MAPE of 3.0741%. Random Forest again exceeded the other models, with a test R² of 0.9715 and a test MAPE of 4.2242%, in estimating Brake Specific Energy Consumption (BSEC). With a test R² of 0.9821 and a test MAPE of 2.5801%, Random Forest emerged as the most accurate model for carbon dioxide (CO₂) emission modeling. Analogously, the carbon monoxide (CO) prediction model based on Random Forest obtained a test R² of 0.8339 with a test MAPE of 3.6099%. Random Forest outperformed Linear Regression with a test R² of 0.9756 and a test MAPE of 7.2056% in the case of nitrogen oxide (NOx) emissions. Random Forest showed the most consistent performance across all criteria. This paper emphasizes how well machine learning models, especially Random Forest, can predict the performance of biogas dual-fuel engines.
... This diagram presents a thorough method for assessing model performance by incorporating metrics such as Pearson's correlation coefficient, root mean square difference, and the ratio of variances. The centered RMS difference between the data sets (E′²) and the variance ratio are related by the following equation (Simão et al., 2020): ...
... The use of Taylor's diagrams is an efficient and easy-to-use graphical tool to estimate the efficacy of ML models, specifically when several ML-based models are to be compared. The proximity of nodes to the benchmark standard, the correlation coefficient, and the RMSE ratio all provide information about the models' dependability and correctness [66,67]. Violin plots integrate the characteristics of kernel density plots as well as box plots. ...
Article
Full-text available
Biomass is an excellent source of green energy with numerous benefits such as abundant availability, net zero carbon, and renewable nature. However, conventional methods of biomass combustion are polluting and poorly efficient processes. Biomass gasification overcomes these challenges and provides a sustainable method for the supply of greener fuel in the form of producer gas. The producer gas can be employed as a gaseous fuel in compression ignition engines in dual-fuel systems. The biomass gasification process is a complex and nonlinear process that is highly dependent on the ambient environment, type of biomass, and biomass composition, as well as the gasification medium. This makes the modeling of such systems quite difficult and time-consuming. Modern machine learning (ML) techniques offer the use of experimental data as a convenient approach to modeling and forecasting such systems. In the present study, two modern and highly efficient ML techniques, random forest (RF) and AdaBoost, were employed for this purpose. The outcomes were compared with the results of a baseline method, i.e., linear regression. RF could forecast the hydrogen yield with an R² of 0.978 during model training and 0.998 during the model test phase. AdaBoost was close behind, with an R² of 0.948 during model training and 0.842 during the model test phase. The mean squared error was as low as 0.17 and 0.181 during model training and testing, respectively. In the case of the lower heating value model, the test-phase R² values were 0.971 and 0.842 for RF and AdaBoost, respectively. Both ML techniques provided excellent results compared to linear regression, but RF was the best among all three.
... Figure 5a depicts the Taylor's diagram during model training, while Figure 5b depicts the same for the model testing phase. It can be observed that the DT-based model was superior to both LR and SVR during both the training and testing phases [40], [41]. ...
Article
The study highlights the need to use suitable modeling techniques to accurately predict the efficiency of parabolic trough collectors in light of their significance in renewable energy. An examination of prediction models using the Linear Regression (LR), Support Vector Regression (SVR), and Decision Tree (DT) algorithms for the efficiency of parabolic trough collectors provides insightful information about how well they work. In terms of prediction accuracy and precision, the Decision Tree model regularly performs better than its rivals throughout the training and testing phases. What sets it apart from SVR and LR models is its ability to identify minute relationships within the data. SVR performs better than LR, although it is not as exact or accurate as the DT model. Among the three models, Linear Regression has the lowest performance, underscoring its limitations in terms of capturing non-linear relationships. Given its exceptional performance, the Decision Tree model may prove to be a crucial instrument in encouraging the design and construction of solar energy systems, hence advancing the growth of sustainable development projects and renewable energy technology.
... To determine this aspect, it is necessary to return to solo metrics. One of the most successful examples of the combined use of scalar metrics is the construction of a Taylor diagram (Taylor 2001) which has been widely used in model assessment and testing in recent years (Abbasian et al. 2019;Simão et al. 2020;Wadoux et al. 2022). Different types of accuracy (or error) metrics have the sense of distance (or inverse distance) between observed and predicted datasets. ...
Article
Full-text available
This paper proposes to add a probabilistic component to the models' performance assessment using a permutation approach. The application of the permutation approach was demonstrated on an example of a nonparametric randomization test. To test this approach, three models based on artificial neural networks were implemented: multilayer perceptron, radial basis function network, and generalized regression neural network. For modeling, spatial distribution data of copper and iron in the topsoil (depth 0.05 m) in the subarctic cities of Novy Urengoy and Noyabrsk, Yamalo-Nenets Autonomous Okrug, Russia, were used. The predicted values were compared with the observed values of the test subset. To evaluate the performance of the built models, we compared three approaches: 1) calculation of indices (mean absolute error, correlation coefficient, index of agreement, etc.), 2) the Taylor diagram, 3) randomization assessment of the probability of obtaining the divergence between the observed and predicted datasets, assuming that both of these datasets are derived from the same population. In the randomization method, two statistics were used: difference in means and correlation coefficient. The permutation approach showed its productivity, as it allowed the significance of the divergence between the observed and predicted datasets to be assessed and provided a more complete and objective performance assessment of the models. The authors believe that the permutation approach may be an attractive alternative to traditional performance assessment methods in application to forecasting purposes in various fields of science.
... It measures the similarity between the reference data and the test data, showing the root mean square (RMS) difference, correlation coefficient and standard deviation between the two [38]. Figure 16 shows the Taylor diagram for the LSTM, KNN and hybrid LSTM-KNN models. ...
Article
Full-text available
Sustainable and renewable energy sources are of great importance in today’s world. In this respect, renewable energy sources are used in many fields of technology. In order to minimize dust on PV panels and ensure their sustainability, power losses due to dust must be estimated accurately. In this way, the efficiency of a sustainable energy source will increase and serious economic savings can be achieved. In this study, a hybrid deep learning model was designed to predict losses caused by dust in PV panels installed in the Manisa Saruhanlı district. The hybrid deep learning model consists of Long Short-Term Memory (LSTM) and K-Nearest-Neighbors (KNN) algorithms. The performance of the proposed hybrid deep learning model was compared with LSTM and KNN algorithms. Sensitivity analysis was performed to statistically evaluate the prediction results. The input variables of the models were time, sunshine duration, humidity, ambient temperature and solar radiation. The output variable was the losses caused by dust in the PV panels. Hybrid LSTM-KNN, LSTM and KNN models predicted losses caused by dust in PV panels with 98.22%, 95.51% and 61.49% accuracy. The hybrid LSTM-KNN model predicted losses caused by dust in PV panels with higher accuracy than other models. Using LSTM and KNN algorithms together improved the performance of the hybrid deep learning model. With sensitivity analysis, it was found that solar radiation is the most important variable affecting the losses caused by dust in PV panels.
... Taylor's Diagrams, by visualizing the level of agreement between model predictions and actual data, provide useful insights into patterns of variance. These graphs reveal the degree to which the models reflect the variability that is seen by scrutinizing the clustering of predicted points near the reference data [73]. In addition, any systematic overestimation or underestimation can be observed as variances from the reference point. ...
... Taylor diagrams make it easier to compare models by displaying them all in one figure. In Taylor diagrams, RMSE is the distance from the reference point on the X-axis, and SD is represented by the radial distance from the origin (Simão et al., 2020). ...
... Radial distances depict the standard deviations, while polar angles illustrate the correlation between model and reference. Models whose data points overlap perform similarly, while models with separated data points perform differently (Simão et al., 2020; Taylor, 2001). Taylor's diagram for all parameters during model training and testing is shown and discussed in the following sections. ...
Article
This experimental work focuses on testing a diesel engine with partial replacement of diesel by pyrolysis oil extracted from different waste plastics. The novelty of this work is testing the engine performance, combustion, and emissions with the following proportions: D90WPO7NP3, D80WPO15NP5, D70WPO23NP7, WPO100, and D100. The pyrolysis oil is extracted from mixed waste plastics with calcium bentonite as a catalyst, and n-propyl alcohol is used as an additive for better ignition. From the results, it is found that performance parameters such as BTE increase in efficiency by 2.2% for the fuel D90WPO7NP3, with a decrease in BSFC of 3.1% for the same fuel compared to pure diesel. It is also noticed that emissions of CO2, CO, HC, and O2 content decreased by 5.49%, 6.2%, 39.2% and 12.5%, respectively, as compared to pure diesel. However, there is a slight increase in NOx emissions due to the presence of uncontrolled enriched constituents in the pyrolysis oil; the increase in NOx emissions is found to be 16.2% compared to diesel fuel at full load conditions. ML and RSM techniques using ANOVA are adopted to evaluate the accuracy of the models, and the analysis shows that the model fits the data well. The linear regression model explains 93.6% of the variation in BTE during training and has 95.9% prediction power. Linear Regression showed good predictive skill, achieving a high R² value of 0.985. It is concluded that pyrolysis oil with n-propyl alcohol as an additive gives a better substitute in the field of automobile applications.
... In the TD, which is plotted as a semicircle or quarter circle, the values of the correlation coefficient are shown along the radius, the standard deviations as concentric circles centered at the origin [0, 0], and the RMSD values as concentric circles centered at the reference point [60]. In this diagram, the standard deviation, correlation coefficient and RMSD values of the models are plotted as points [62]. The point closest to the reference point (observed data), represented by the black dot, has the more accurate prediction credibility [61]. ...
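The geometry described in this snippet can be sketched directly (an illustration with assumed values, not code from the cited work): each model point has radius equal to its standard deviation and polar angle arccos(R), and its Euclidean distance to the reference point on the x-axis equals the centered RMS difference.

```python
import numpy as np

def taylor_point(sigma, corr):
    """Cartesian coordinates of a model's point on a Taylor diagram:
    radial distance = standard deviation, polar angle = arccos(correlation)."""
    theta = np.arccos(corr)
    return sigma * np.cos(theta), sigma * np.sin(theta)

# The reference (observed) point sits on the x-axis at (sigma_ref, 0).
# By the law of cosines, the distance from any model point to it equals
# the centered RMS difference -- the quantity drawn as concentric circles
# around the reference point.  Values here are illustrative.
sigma_ref, sigma_mod, R = 1.0, 1.2, 0.9
x, y = taylor_point(sigma_mod, R)
dist = np.hypot(x - sigma_ref, y)
crmsd = np.sqrt(sigma_ref**2 + sigma_mod**2 - 2.0 * sigma_ref * sigma_mod * R)
```

This equality is why reading off "the point closest to the reference point" is equivalent to picking the model with the smallest centered RMSD.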
Article
Full-text available
Protecting water resources from pollution is one of the most important challenges facing water management researchers. The governing equation for river pollution is mostly the advection–dispersion equation, with the longitudinal dispersion coefficient as its most important effective parameter. The purpose of this paper is to develop a new framework for accurate prediction of the longitudinal dispersion coefficient of rivers based on artificial intelligence (AI) methods. To do this, we used a combination of a multilayer perceptron (MLP), one of the most robust neural networks, and a novel metaheuristic algorithm, namely Harris hawks optimization (HHO). Besides, two MLP models optimized with particle swarm optimization (PSO) and the imperialist competitive algorithm (ICA) were utilized to demonstrate the accuracy of the proposed model. To evaluate the developed models, 164 series of data collected from previous studies, including hydraulic and geometric parameters of rivers, were used. The results proved the efficiency of the HHO in improving the optimum auto-selection of the AI models. The recorded results show the very high accuracy of the newly developed model, MLP-HHO, compared to the others. Furthermore, to increase the prediction accuracy, a K-means clustering technique was coupled with the MLP-HHO model when dividing the data into training and testing categories. The proposed hybrid K-means-MLP-HHO model, with a coefficient of determination (R²) and root mean square error (RMSE) of 0.97 and 30.94 m²/s, respectively, significantly outperformed all existing and AI-based models. Furthermore, the sensitivity analysis showed that the flow width is the most influential factor in predicting the longitudinal dispersion coefficient.
... Model selection and testing. The Taylor diagram 18 is often used to evaluate the accuracy of models 19 . The scatter points in the Taylor diagram represent the models, the solid lines represent the correlation coefficient, the horizontal and vertical axes represent the standard deviation, and the dotted lines represent the root mean square error. ...
Article
Full-text available
The main targets of this study were to screen the factors that may influence the distribution of the 25-hydroxyvitamin D [25(OH)D] reference value in healthy elderly people in China, and to further explore the geographical distribution differences of the 25(OH)D reference value in China. In this study, we collected the 25(OH)D of 25,470 healthy elderly from 58 cities in China to analyze the correlation between 25(OH)D and 22 geography secondary indexes through Spearman regression analysis. Six indexes with significant correlation were extracted, a ridge regression model was built, and the 25(OH)D reference value of the country's urban healthy elderly was predicted. By using the disjunctive kriging method, we obtained the geographical distribution of 25(OH)D reference values for healthy elderly people in China. The reference value of 25(OH)D for healthy elderly in China was significantly correlated with the 6 secondary indexes, namely, latitude (°), annual temperature range (°C), annual sunshine hours (h), annual mean temperature (°C), annual mean relative humidity (%), and annual precipitation (mm). The geographical distribution of 25(OH)D values of healthy elderly in China showed a trend of being higher in South China and lower in North China, and higher in coastal areas and lower in inland areas. This study lays a foundation for further research on the mechanism of different influencing factors on the reference value of the 25(OH)D index. A ridge regression model composed of significant influencing factors has been established to provide the basis for formulating reference criteria for treatment factors of vitamin D deficiency and prognostic factors of COVID-19 using the 25(OH)D reference value in different regions.
... Taylor Diagram (TD) indicated the degree of efficiency between the real and modeled data via criterion values of the R, RMSE, and the standard deviation (SD) (Simão et al., 2020). Figure 10 demonstrates the comparison of the WGEP model's efficiency for estimating the AWW variable including various factors in Golpayegan and Kashan regions. ...
Article
Full-text available
Due to climate change and the recent decrease of surface water resources, groundwater resources, especially aqueducts, have special importance for meeting various human requirements in arid and semi-arid regions. With the aim of estimating aqueduct water withdrawal (AWW) for agricultural uses, the present research was implemented in the Golpayegan and Kashan regions of Iran, classified as non-uniform and uniform climate zones facing water scarcity. The AWW variables were estimated based on four scenarios: (1) aqueduct local features, (2) hydrological, (3) land-use, and (4) combined scenarios. The [(Mother-well Depth (MWD), Aqueduct Channel Length (ACL)), (minimum flow rate (QMin), maximum flow rate (QMax)), and (Cultivated Area (CA), Orchard Area (OA))] variables represent the first to third scenarios, respectively. Estimation of AWW was carried out via single and Wavelet-hybrid (W-hybrid with de-noising) soft-computing (SC) approaches, including artificial neural networks (ANNs), Wavelet-ANN (WANNs), the adaptive neuro-fuzzy inference system (ANFIS), Wavelet-ANFIS (WANFIS), gene expression programming (GEP), and Wavelet-GEP (WGEP). The WGEP model combining the MWD, ACL, QMin, QMax, CA, and OA variables was recommended as the best model to estimate the AWW variables independently of climate conditions. With increasing levels of decomposition in the wavelet approach and noise reduction, the performance of the models for estimating AWW increased. The findings also revealed that the proposed method can perform better in uniform climates than in non-uniform ones. The RMSE values achieved by the combined-scenario WGEP models were 23.249 and 17.227 (×10³ m³) for estimating AWW in Golpayegan and Kashan, respectively. The performance of WGEP was excellent (R > 0.920) in the estimation of AWW in both climate types for maximum extreme amounts.
The abstracted mathematical formulations of the GEP and WGEP models are among the research findings, with implications for implementing policies related to Integrated Water Resources Management to protect aqueducts from destruction by excessive consumption.
Article
Full-text available
A stable nanofluid dispersion with SiO2 particles of 15, 50, and 100 nm is generated in a base liquid composed of water and glycerol in a 7:3 ratio and tested for physical characteristics in the temperature range of 20–100 °C. The nanofluid showed excellent stability for over a month. Experiments are undertaken for the flow of nanofluid in a copper pipe, measuring its heat transfer coefficient and flow behavior. The convection heat transfer coefficient increases with the flow Reynolds number in the transition-turbulent flow regime. The experimental results further reveal that the friction factor with 0.5% concentration increased by 6% compared to the base liquid. The complex and nonlinear data acquired during the experiments were then employed for prognostic model development using XGBoost and multi-gene genetic programming (MGGP). Both techniques provided robust predictions, as witnessed by the statistical evaluation. The R² statistic of the XGBoost-based model was 0.9899 throughout the model test, while it was lower, at 0.9455, for the MGGP-based model. However, the change was insignificant. The mean squared error was 8.37 for XGBoost, while it increased to 45.12 for the MGGP model. Similarly, the mean absolute error (MAE) value was higher (6.623) in the case of MGGP than in XGBoost at 2.733. The statistical evaluation, Taylor diagrams, and violin plots helped determine that XGBoost was superior to MGGP in the present work.
Article
Full-text available
Prediction of crop yield is essential for decision-makers to ensure food security and provides valuable information to farmers about factors affecting high yields. This research aimed to predict sunflower grain yield under normal and salinity stress conditions using three modeling techniques: artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS), and gene expression programming (GEP). A pot experiment was conducted with 96 inbred sunflower lines (generation six) derived from crossing two parent lines, over a single growing season. Ten morphological traits—including hundred-seed weight (HSW), number of leaves, leaf length (LL) and width, petiole length, stem diameter, plant height, head dry weight (HDW), days to flowering, and head diameter—were measured as input variables to predict grain yield. Salinity stress was induced by applying irrigation water with electrical conductivity (EC) levels of 2 dS/m (control) and 8 dS/m (stress condition) using NaCl, applied after the seedlings reached the 8-leaf stage. The GEP model demonstrated the highest precision in predicting sunflower grain yield, with coefficient of determination (R²) values of 0.803 and 0.743, root mean squared error (RMSE) of 4.115 and 4.022, and mean absolute error (MAE) of 3.177 and 2.803 under normal conditions and salinity stress, respectively, during the testing phase. Sensitivity analysis using the GEP model identified LL, head diameter, HSW, and HDW as the most significant parameters influencing grain yield under salinity stress. Therefore, the GEP model provides a promising tool for predicting sunflower grain yield, potentially aiding in yield improvement programs under varying environmental conditions.
Article
Biochar is found to possess a large number of applications in energy and environmental areas. However, biochar could be produced from a variety of sources, showing that biochar yield and proximate analysis outcomes could change over a wide range. Thus, developing a high-accuracy machine learning-based tool is very necessary to predict biochar characteristics. In this study, a hybrid technique was developed by blending modern machine learning (ML) algorithms with cooperative game theory-based Shapley Additive exPlanations (SHAP). SHAP analysis was employed to help improve interpretability while offering insights into the decision-making process. In the ML models, linear regression was employed as the baseline regression method, and more advanced methodologies like AdaBoost and boosted regression tree (BRT) were employed. The developed prediction models were evaluated on a battery of statistical metrics, and all ML models were observed as robust enough. Among all three models, the BRT-based model delivered the best prediction performance with R2 in the range of 0.982 to 0.999 during the model training phase and 0.968 to 0.988 during the model test. The value of the mean squared error was also quite low (0.89 to 9.168) for BRT-based models. SHAP analysis quantified the value of each input element to the expected results and provided a more in-depth understanding of the underlying dynamics. The SHAP analysis helped to reveal that temperature was the main factor affecting the response predictions. The hybrid technique proposed here provides substantial insights into the biochar manufacturing process, allowing for improved control of biochar properties and increasing the use of this sustainable and flexible material in numerous applications.
Article
The Ganga-Brahmaputra moribund deltaic floodplain region hosts many socio-ecologically precious freshwater wetland ecosystems experiencing hydrological alteration. The present study aimed to model hydrological strength (HS) to show the spatial differences and account for the degree and direction of hydrological alteration of Indian moribund deltaic wetlands in three phases: (1) phase I (1988–1997), (2) phase II (1998–2007) and (3) phase III (2008–2017). Three key hydrological parameters, namely Water Presence Frequency (WPF), water depth, and hydro-period, were considered for hydrological strength modelling using two ensemble Machine Learning (ML) techniques (Random Forest (RF) and XGBoost). Image algebra was employed for phasal change detection. The hydrological strength models show that around 75% of the wetland area was lost between phases I and III, and the loss was found to be more intense in the moderate and weak HS zones. The existing wetland shows a clear spatial difference in HS between the wetland core and periphery and between river-linked and delinked or unlinked wetlands. Regarding the suitability of the ML models, both are acceptable; however, XGBoost outperformed with reference to the 15 applied statistical validation techniques and field evidence. HS models based on change detection clarified that more than 22% and 55% of the weak HS zone in phases II and III, respectively, turned into non-wetland. The degree of alteration revealed that about 40% of wetland areas experienced a negative alteration from phase I to II, and this proportion increased to 63% from phase II to III. Since the study figured out the spatial nature of HS and the degree and direction of alteration at a spatial scale, these findings would be instrumental for adopting rational planning towards wetland conservation and restoration.
Article
The article proposes the use of the permutation method for assessing the predictive ability of models based on artificial neural networks. To test this method, three models based on artificial neural networks were implemented: a multilayer perceptron, a radial basis function network, and a generalized regression neural network. For modeling, data on the spatial distribution of copper and iron in the topsoil (depth 0.05 m) in the territory of the subarctic city of Noyabrsk, Yamalo-Nenets Autonomous Okrug, Russia, were used. A total of 237 soil samples were collected. For modelling, the copper and iron concentration data were divided into two subsets: training and test. The modelled spatial datasets were compared with the observed values of the test subset. To assess the performance of the constructed models, three approaches were used: 1) calculation of correlation coefficients and error or agreement indexes, 2) a graphical approach (Taylor diagram), 3) randomization assessment of the probability of obtaining a divergence between the observed and modelled datasets, assuming that both of these datasets were taken from the same population. For the randomization algorithm, two statistics were used: the difference in means and the correlation coefficient. The permutation method proved its productivity, as it made it possible to assess the significance of the divergence between the observed and predicted datasets.
Article
Hybrid nanofluids are gaining popularity owing to the synergistic effects of nanoparticles, which provide them with better heat transfer capabilities than base fluids and ordinary nanofluids. The thermophysical characteristics of hybrid nanofluids are critical in shaping heat transmission properties. As a result, before using thermophysical properties in industrial applications, an in-depth investigation of them is required. In this paper, a metamodel framework is constructed to forecast the effect of nanofluid temperature and concentration on numerous thermophysical parameters of Fe3O4-coated MWCNT hybrid nanofluids. Evolutionary gene expression programming (GEP) and an adaptive neural fuzzy inference system (ANFIS) were employed to develop the prediction models. The model was trained using 70% of the datasets, with the remaining 15% used for testing and validation. A variety of statistical measurements and Taylor diagrams were used to assess the proposed models. Pearson's correlation coefficient (R) and the coefficient of determination (R2) were used as regression indices, and the error in the model was evaluated with the root mean squared error (RMSE). The model's comprehensive assessment additionally includes modern model efficiency indices such as the Kling–Gupta efficiency (KGE) and Nash–Sutcliffe efficiency (NSCE). The proposed models demonstrated impressive prediction capabilities. However, the GEP model (R > 0.9825, R2 > 0.9654, RMSE = 0.7929, KGE > 0.9188, and NSCE > 0.9566) outperformed the ANFIS model (R > 0.9601, R2 > 0.9218, RMSE = 1.495, KGE > 0.8015, and NSCE > 0.8745) for the majority of the findings. The generated metamodel was robust enough to replace the repetitive, expensive lab procedures required to measure thermophysical properties.
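The efficiency indices cited in this abstract have standard definitions; a minimal sketch of the Nash–Sutcliffe and Kling–Gupta efficiencies follows (the 2009 form of KGE is assumed here, and the sample data are illustrative):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means no better than
    predicting the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation r, variability ratio
    alpha and bias ratio beta; 1 is perfect."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
nse_biased = nse(obs, obs + 1.0)  # constant bias lowers NSE
kge_biased = kge(obs, obs + 1.0)  # only the bias ratio beta differs from 1
```

Unlike R or RMSE alone, KGE separates correlation, variability and bias errors, which is why such indices are reported alongside the Taylor diagram statistics.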
Article
Carbonation of concrete has significant influence on the service life of constructions and great effort has been made to establish an accurate and efficient model of carbonation that incorporates both internal and external factors. We present a hybrid machine learning (ML) approach that combined two single ML models: artificial neural network (ANN) and support vector machine (SVM). A literature survey generated a database containing 532 records of accelerated carbonation depth measurements for concrete mixtures inclusive of fly-ash blends. Six inputs comprising cement content, fly-ash replacement level, water–binder ratio (w/b), CO2 concentration, relative humidity and exposure time were selected for modeling, justified by grey relational analysis. The four ML models had excellent accuracy in predicting the carbonation depth of concrete, with the correlation coefficient ranging from 0.9788 to 0.9946, but the two hybrid ML models achieved superior performance to the single ANN and SVM models with characteristic higher correlation coefficients, lower mean value for the absolute error and lower standard deviation for its distribution. In addition, compared with other commonly known empirical carbonation models, the hybrid ML models showed more accurate carbonation depth prediction with smaller root mean square error. Furthermore, the weightings of the contributions of six selected factors to carbonation depth disclosed that CO2 concentration, w/b and binder content had higher relative importance to carbonation depth and should be given greater weighting in future carbonation model development.
Article
Full-text available
Vector quantities, e.g., vector winds, play an extremely important role in climate systems. The energy and water exchanges between different regions are strongly dominated by wind, which in turn shapes the regional climate. Thus, how well climate models can simulate vector fields directly affects model performance in reproducing the nature of a regional climate. This paper devises a new diagram, termed the vector field evaluation (VFE) diagram, which is a generalized Taylor diagram and able to provide a concise evaluation of model performance in simulating vector fields. The diagram can measure how well two vector fields match each other in terms of three statistical variables, i.e., the vector similarity coefficient, root mean square length (RMSL), and root mean square vector difference (RMSVD). Similar to the Taylor diagram, the VFE diagram is especially useful for evaluating climate models. The pattern similarity of two vector fields is measured by a vector similarity coefficient (VSC) that is defined by the arithmetic mean of the inner product of normalized vector pairs. Examples are provided, showing that VSC can identify how close one vector field resembles another. Note that VSC can only describe the pattern similarity, and it does not reflect the systematic difference in the mean vector length between two vector fields. To measure the vector length, RMSL is included in the diagram. The third variable, RMSVD, is used to identify the magnitude of the overall difference between two vector fields. Examples show that the VFE diagram can clearly illustrate the extent to which the overall RMSVD is attributed to the systematic difference in RMSL and how much is due to the poor pattern similarity.
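Following the definitions quoted in this abstract, the three VFE statistics can be computed directly; the sketch below assumes two-component (u, v) vectors stacked in arrays of shape (n, 2), with synthetic data:

```python
import numpy as np

def vfe_stats(a, b):
    """VFE diagram statistics, per the abstract's definitions:
    VSC   - mean inner product of unit-normalized vector pairs,
    RMSL  - root mean square length of each field,
    RMSVD - root mean square vector difference.
    a, b: arrays of shape (n, 2) holding (u, v) components."""
    vsc = np.mean(np.sum((a / np.linalg.norm(a, axis=1, keepdims=True))
                         * (b / np.linalg.norm(b, axis=1, keepdims=True)),
                         axis=1))
    rmsl_a = np.sqrt(np.mean(np.sum(a ** 2, axis=1)))
    rmsl_b = np.sqrt(np.mean(np.sum(b ** 2, axis=1)))
    rmsvd = np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))
    return vsc, rmsl_a, rmsl_b, rmsvd

rng = np.random.default_rng(1)
field = rng.normal(size=(500, 2))
vsc, la, lb, d = vfe_stats(field, 2.0 * field)  # same directions, doubled length
```

With the second field a scaled copy of the first, VSC equals 1 (identical patterns) while the two RMSL values expose the amplitude bias, mirroring the abstract's point that VSC alone does not reflect systematic differences in vector length.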
Article
Full-text available
Wave spectra were measured along a profile extending 160 km into the North Sea westward from Sylt for a period of ten weeks in 1969. Currents, tides, air-sea temperature differences and turbulence in the atmospheric boundary layer were also measured. The goal of the experiment (described in Part 1) was to determine the structure of the source function governing the energy balance of the wave spectrum, with particular emphasis on wave growth under stationary offshore wind conditions (Part 2) and the attenuation of swell in water of finite depth (Part 3). The source functions of wave spectra generated by offshore winds exhibit a characteristic plus-minus signature associated with the shift of the sharp spectral peak towards lower frequencies. The two-lobed distribution of the source function can be explained quantitatively by the nonlinear transfer due to resonant wave-wave interactions (second-order Bragg scattering). The evolution of a pronounced peak and its shift towards lower frequencies can also be understood as a self-stabilizing feature of this process. The decay rates determined for incoming swell varied considerably, but energy attenuation factors of two along the length of the profile were typical. This is in order-of-magnitude agreement with expected damping rates due to bottom friction. However, the strong tidal modulation predicted by theory for the case of a quadratic bottom friction law was not observed. Adverse winds did not affect the decay rate. Computations also rule out wave-wave interactions or dissipation due to turbulence outside the bottom boundary layer as effective mechanisms of swell attenuation. We conclude that either the generally accepted friction law needs to be significantly modified or that some other mechanism, such as scattering by bottom irregularities, is the cause of the attenuation. The dispersion characteristics of the swells indicated rather nearby origins, for which the classical delta-event model was generally inapplicable. A strong Doppler modulation by tidal currents was also observed.
Article
In this paper, a joint distribution of all relevant environmental parameters used in design of offshore structures including directional components is presented, along with a novel procedure for dependency modelling between wind and wind sea. Probabilistic directional models are rarely used for response calculation and reliability assessments of stationary offshore structures. However, very few locations have the same environment from all compass directions in combination with a rotationally symmetric structure. The scope of this work is to present a general environmental joint distribution with directional descriptions for long term design of stationary offshore structures such as offshore wind turbines. Wind, wind sea and swell parameters will be investigated for a chosen location in the central North Sea.
Article
In order to perform a more accurate analysis of marine structures, joint probability distributions of different metocean parameters have received an increasing interest during the last decade, facilitated by improved availability of reliable joint metocean data. There seems to be no general consensus with regard to the approach of estimating joint probability distributions of metocean parameters, and a general overview of recent studies exploring different joint models for metocean parameters is presented. The main objective of this article is twofold: first to establish a joint distribution of significant wave height and current speed, and then to assess the possible conservatism in the Norwegian design standard by applying this joint distribution in a simplified load case. Based on NORA10 wave data and simulated current data, a joint model for significant wave height and current speed at one location in the northern North Sea is presented. Since episodes of wind-generated inertial oscillations are governing the current conditions at this location, a joint conditional model with current speed conditional on significant wave height is suggested. A peak-over-threshold approach is selected. The significant wave height is found to be very well modelled by a 2-parameter Weibull distribution for significant wave height exceeding 8 m, while a log-normal distribution describes the current speed well. This model is used to Monte-Carlo simulate joint significant wave heights and current speeds for periods corresponding to the ultimate and accidental limit states (ULS and ALS), i.e. 100 and 10 000 years. The possible conservatism in the Norwegian design standard is assessed by a simplified case study. The results give a clear indication that the Norwegian design standard is not necessarily conservative, neither at ULS nor ALS level.
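The peak-over-threshold step described here can be sketched as follows. Everything in the sketch is an illustrative assumption: the data are synthetic (the paper uses NORA10 hindcast data), the threshold is set as a quantile rather than the paper's 8 m, and the 2-parameter Weibull is fitted to the excesses over the threshold, which is one of several reasonable modelling choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic stand-in for a long significant-wave-height record
hs = stats.weibull_min.rvs(c=1.4, scale=2.5, size=100_000, random_state=rng)

threshold = np.quantile(hs, 0.99)        # stand-in for the paper's 8 m threshold
excess = hs[hs > threshold] - threshold  # peak-over-threshold excesses
# 2-parameter Weibull fit: location fixed at zero, shape and scale estimated
shape, loc, scale = stats.weibull_min.fit(excess, floc=0)
```

A fitted tail model of this kind can then feed the Monte Carlo simulation of joint conditions for the ULS and ALS return periods outlined in the abstract.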
Article
A joint environmental model for long-term response calculations is proposed. The model is based on experience gained from measurements and hindcast data from the Norwegian Continental Shelf. The model is an extension of an existing joint environmental model for Haltenbanken (off central Norway). The present model includes the following environmental parameters: 1-hour mean wind speed, current speed, significant wave height, spectral peak period, main wave direction (wind and current are assumed to be collinear with the main wave direction), and water level. Additionally, a simultaneous description of wind sea and swell is also introduced. The model is expected to be a useful tool for long-term reliability calculations of offshore structures, especially as dynamics and/or other frequency dependent loading mechanisms become important. The model accounts for the lack of full correlation among the various environmental processes and it can be used for assessing the conservatism of the traditional design approach. Herein, as an illustration, the model is applied in connection with a reliability analysis of an idealized structural system, where the degree of dynamics is varied artificially. A first order reliability method (FORM) is used for this purpose.
Book
Data Analysis Methods in Physical Oceanography is a practical reference guide to established and modern data analysis techniques in earth and ocean sciences. This second and revised edition is even more comprehensive, with numerous updates and an additional appendix on 'Convolution and Fourier transforms'. Intended for both students and established scientists, the five major chapters of the book cover data acquisition and recording, data processing and presentation, statistical methods and error handling, analysis of spatial data fields, and time series analysis methods. Chapter 5 on time series analysis is a book in itself, spanning a wide diversity of topics from stochastic processes and stationarity, coherence functions, Fourier analysis, tidal harmonic analysis, spectral and cross-spectral analysis, wavelet and other related methods for processing nonstationary data series, digital filters, and fractals. The seven appendices include unit conversions, approximation methods and nondimensional numbers used in geophysical fluid dynamics, presentations on convolution, statistical terminology, and distribution functions, and a number of important statistical tables. Twenty pages are devoted to references.
Article
For several engineering applications, joint met-ocean probabilities are required. Recently, increasing attention has been given to the importance of including wind-sea and swell components in the joint description. The presence of wind sea and swell will affect the design and operability of fixed and floating offshore structures as well as LNG terminals. The study presents a joint met-ocean model which can be applied for the design and operation of marine structures, including LNG platforms. The model is fitted to hindcast data from four locations: the Southern North Sea, West Shetland, the Northwest Shelf of Australia and off the coast of Nigeria. Uncertainties related to the proposed fits are examined, focusing on location-specific features of the wave climate and an adopted partitioning procedure for the wave components. Implications of the uncertainties for design and operation criteria of LNG platforms are discussed and demonstrated by examples.
Conference Paper
The joint probabilistic models (JPM) of the environmental parameters of wave, wind and current are nowadays much needed in order to perform reliability analyses of offshore structures. These JPM are also essential steps for the design of offshore structures based on long-term statistics and for the dynamic response analysis of floating units that are strongly dependent on the directionality of the environmental actions, such as turret-moored FPSOs. Recently, some JPM have been proposed in the literature to represent the joint statistics of a reduced number of environmental parameters. However, it is difficult to find a practical and fully operational model taking into account the complete statistical dependence among all the environmental parameter intensities and their corresponding directions. This paper presents a straightforward methodology, based on the Nataf transformation, to create a JPM of the environmental parameters taking into account the dependence between the intensity and direction of all variables. The proposed model considers the statistical dependence of ten short-term variables: the significant wave height, peak period and direction of the sea waves; the significant wave height, peak period and direction of the swell waves; the amplitude and direction of the 1-h wind velocity; and, finally, the amplitude and direction of the surface current velocity. The statistical dependence between them is evaluated using concepts of linear-linear, linear-circular and circular-circular variable correlation. Some results of the proposed JPM methodology are presented based on simultaneous environmental data gathered at a location offshore Brazil.
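For the bivariate case, the Nataf-transformation construction mentioned in this abstract amounts to sampling correlated standard Gaussians and mapping them through the target marginals via the Gaussian CDF. A minimal sketch follows; the marginals, their parameters, and the Gaussian-space correlation are illustrative assumptions, and a full Nataf model additionally corrects the Gaussian correlation so that the target physical correlation is matched:

```python
import numpy as np
from scipy import stats

def nataf_sample(n, rho_z, marg_x, marg_y, seed=0):
    """Sample two dependent variables via the Nataf (Gaussian copula)
    construction: correlated standard normals -> uniforms -> target marginals."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho_z], [rho_z, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                            # Gaussian CDF -> uniforms
    return marg_x.ppf(u[:, 0]), marg_y.ppf(u[:, 1])  # inverse CDF -> physical

# Hypothetical marginals and Gaussian-space correlation, for illustration only
hs_marg = stats.weibull_min(c=1.5, scale=2.0)
tp_marg = stats.lognorm(s=0.25, scale=9.0)
hs, tp = nataf_sample(20_000, rho_z=0.7, marg_x=hs_marg, marg_y=tp_marg)
```

The same mechanism extends to the ten-variable case by replacing the 2x2 correlation matrix with the full correlation matrix of the underlying Gaussian vector.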
Article
This paper describes the construction and use of ‘modified Taylor diagrams’ and the comparison of three real-time assimilative ionospheric models. The paper expands on the work by McNamara et al. [2013] and serves as an addendum to that work. Modified Taylor diagrams provide an easy way of visualizing and comparing statistical information about a number of models, and for multiple parameters, simultaneously. Using modified Taylor diagrams has led to new conclusions about the models tested in McNamara et al. [2013]; also the comparison of the data ingestion version of NeQuick is included. It is shown that the modified NeQuick model performs comparably with the data assimilation models from McNamara et al. [2013] and in multiple cases also shows considerable improvement, such as in hmF2 at the Hermanus Digisonde station.
Article
Two multivariate distribution models consistent with prescribed marginal distributions and covariances are presented. The models are applicable to arbitrary number of random variables and are particularly suited for engineering applications. Conditions for validity of each model and applicable ranges of correlation coefficients between the variables are determined. Formulae are developed which facilitate evaluation of the model parameters in terms of the prescribed marginals and covariances. Potential uses of the two models in engineering are discussed.
Article
The wave climate off northern Norway is considered and the investigation is based on wave measurements made at Tromsøflaket by means of a waverider buoy during the years 1977–1981. Data quality of waverider measurements is briefly commented upon; however, more emphasis is given to an evaluation of the long-term representativity of the actual measuring period and to a procedure accounting approximately for a lack of representativity. The wave climate is presented in terms of a smoothed joint probability density function of the significant wave height, Hs, and the spectral peak period, Tp. Based on this distribution a consistent design curve in the Hs, Tp space is established.
Article
Several types of joint distribution function for significant wave height and zero-up-crossing period are compared with reference to measured wave data from the Norwegian Continental Shelf. The comparison is based on the utility of the distribution functions for predictions of extreme response of offshore structures. Logarithmic contour plots and contour plots of normalised deviations between the data and the fitted distribution are used in the comparison. A joint distribution combining a marginal 3-parameter Weibull distribution for significant wave height with a conditional log-normal distribution for zero-up-crossing period is recommended on the basis of this investigation.
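Sampling from the recommended structure (a marginal 3-parameter Weibull for Hs with a conditional log-normal for the zero-up-crossing period) can be sketched as below. Every numeric parameter and the assumed forms of the conditional mean and standard deviation are illustrative stand-ins, not values from the paper; in practice they are fitted to site data:

```python
import numpy as np
from scipy import stats

def sample_hs_tz(n, seed=0):
    """Draw (Hs, Tz) pairs: Hs from a 3-parameter Weibull, then Tz from a
    log-normal whose parameters depend on the drawn Hs."""
    rng = np.random.default_rng(seed)
    # Marginal 3-parameter Weibull: shape c, location (threshold) loc, scale
    hs = stats.weibull_min.rvs(c=1.5, loc=0.6, scale=1.8, size=n,
                               random_state=rng)
    mu = 1.6 + 0.4 * np.log(1.0 + hs)        # assumed E[ln Tz | Hs]
    sigma = 0.08 + 0.10 * np.exp(-0.3 * hs)  # assumed Std[ln Tz | Hs]
    tz = np.exp(rng.normal(mu, sigma))       # conditional log-normal draw
    return hs, tz

hs, tz = sample_hs_tz(10_000)
```

The increasing conditional mean ties longer periods to higher waves, the qualitative dependence that the contour-plot comparisons in the paper rely on.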
Article
A diagram has been devised that can provide a concise statistical summary of how well patterns match each other in terms of their correlation, their root-mean-square difference, and the ratio of their variances. Although the form of this diagram is general, it is especially useful in evaluating complex models, such as those used to study geophysical phenomena. Examples are given showing that the diagram can be used to summarize the relative merits of a collection of different models or to track changes in performance of a model as it is modified. Methods are suggested for indicating on these diagrams the statistical significance of apparent differences and the degree to which observational uncertainty and unforced internal variability limit the expected agreement between model-simulated and observed behaviors. The geometric relationship between the statistics plotted on the diagram also provides some guidance for devising skill scores that appropriately weight among the various measures of pattern correspondence.
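The three statistics the diagram summarizes (correlation, centered root-mean-square difference, and the standard deviations whose ratio gives the variance ratio) are linked by a law-of-cosines identity, which is what allows them to share one polar plot. A minimal sketch of computing them, on synthetic (hypothetical) reference and model series:

```python
import numpy as np

# Synthetic reference ("observed") series and a model series -- hypothetical data.
rng = np.random.default_rng(0)
ref = rng.normal(size=200)
model = 0.8 * ref + 0.3 * rng.normal(size=200)

def taylor_stats(ref, test):
    """Return (sigma_ref, sigma_test, correlation, centered RMS difference)."""
    ref_c = ref - ref.mean()
    test_c = test - test.mean()
    sigma_ref = ref_c.std()   # population standard deviation (ddof=0)
    sigma_test = test_c.std()
    corr = np.corrcoef(ref, test)[0, 1]
    crmsd = np.sqrt(np.mean((test_c - ref_c) ** 2))
    return sigma_ref, sigma_test, corr, crmsd

sr, st, r, e = taylor_stats(ref, model)
# Geometric identity behind the diagram: E'^2 = sr^2 + st^2 - 2*sr*st*R
assert np.isclose(e**2, sr**2 + st**2 - 2.0 * sr * st * r)
```

Because the identity holds exactly, any one of the three statistics is fixed by the other two, so each model reduces to a single point on the diagram.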
Article
This book describes the stochastic method for ocean wave analysis. This method provides a route to predicting the characteristics of random ocean waves - information vital for the design and safe operation of ships and ocean structures. Assuming a basic knowledge of probability theory, the book begins with a chapter describing the essential elements of wind-generated random seas from the stochastic point of view. The following three chapters introduce spectral analysis techniques, probabilistic predictions of wave amplitudes, wave height and periodicity. A further four chapters discuss sea severity, extreme sea state, directional wave energy spreading in random seas and special wave events such as wave breaking and group phenomena. Finally, the stochastic properties of non-Gaussian waves are presented. Useful appendices and an extensive reference list are included. Examples of practical applications of the theories presented can be found throughout the text. This book will be suitable as a text for graduate students of naval, ocean and coastal engineering. It will also serve as a useful reference for research scientists and engineers working in this field.
RP-C205 Environmental Conditions and Environmental Loads
  • DNV GL
DNV GL. "RP-C205 Environmental Conditions and Environmental Loads." Technical Report. (2017).
Petroleum and natural gas industries – Specific requirements for offshore structures. Part 1: Metocean design and operating considerations
  • BS
  • ISO
BS EN ISO 19901-1, 2013, Petroleum and natural gas industries – Specific requirements for offshore structures. Part 1: Metocean design and operating considerations, United Kingdom. (2015).
A joint probability model for environmental parameters
  • LVS Sagrilo
  • ECP Lima
  • A Papaleo