Article

Abstract

Review of J. Scott Armstrong's 1978 book


... His classification system is one of the more exhaustive systems and it would not take much redefining to incorporate newer techniques such as forecasting by role-playing into his scenario classification. Armstrong (1985) said that research for analyzing data has historically been organized along three continuums: subjective vs. objective, naive vs. causal, and linear vs. classification methods. He then placed five forecasting methods within these continuums to develop a methodology tree (Figure 1) that also provided guidance as to when various methods should be used. ...
... In addition, the models that result from bootstrapping might be viewed as econometric and/or segmentation models. Armstrong's (1985) classification scheme is concise, but is neither exhaustive nor exclusive. ...
... The earlier typologies (Cetron and Ralph, 1971; Martino, 1972) were most useful in determining what was, and was not, forecasting. The later ideas (Armstrong, 1985, 2001) took a step forward by also providing guidance as to when certain classifications should be used. But these typologies are neither exhaustive, exclusive nor concise (Table 1). ...
Article
Full-text available
Given the large variety of forecasting methods, researchers have developed different ways of classifying these methods. However, none of these methods meet the criteria of a good typology, i.e. concise, exclusive and exhaustive. Based on a review of the current classification methods, the paper proposes a forecasting classification grid based on two distinct dimensions, i.e. judgmental opinions and empirically evaluated ideas, and naive and causal forecasting. Being concise, exclusive and exhaustive, this new classification method provides a systematic way to organize different forecasting methods.
... The academic literature is represented by two distinct views on technology forecasting. The classical view treats technology forecasting in the context of a corporate plan [51]. Following this discourse, Watts and Porter showed that grounded forecasts could effectively synthesize a number of bibliometric methods, such as analysis of technological trends in combination with visualizations of technological interdependencies and competitive intelligence survey [52]. ...
... In corporate practice, technology forecasting often serves as part of strategic and technology management in its operationalized and simplified form – as technological roadmaps. Initially, roadmaps were meant as a planning tool that helps visualize technological developments and identify uncertainties and possibilities on the way to a target technology [40, 51]. Over time, roadmaps have evolved into a prediction and forecasting tool [54, 55] and were widely used to embed business and technological strategy into the front end of product development [56]. ...
Article
Full-text available
Technologies arising out of successful high-tech mergers and acquisitions (M&A) have significant innovation potential. However, forecasting the possible output is coupled with uncertainties caused by misleading or insufficient future-oriented analytics. The proposed framework draws on publicly available information and data to forecast potential innovation activities of the companies involved in high-tech M&As. A five-step scheme of analysis assesses the previous M&A record, the intellectual property (IP) portfolios of the focal companies as well as the relevant technological context, and constructs pathways of potential innovation activities using elements of a scenario technique and roadmapping. The framework has been tested on deals involving both large concerns and small and medium-sized enterprises (SMEs). We conclude by reflecting on the merits and limitations of the framework on the way to our objective – to provide grounded forecasting triggered by M&As to support decision-making.
... As the differences between actual and predicted values increase (i.e., as the information asymmetry increases), we expect the forecast error measurements to yield larger error values. Symmetric mean absolute percentage error (SMAPE or sMAPE) is an accuracy measure based on percentage (or relative) errors [43]. It is usually defined as follows: ...
... where A_t is the actual value and F_t is the forecast value [33,42,43]. The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of the accuracy of a method for constructing fitted time series values in statistics, specifically in trend estimation. ...
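For reference, a sketch of the standard textbook forms these snippets point to (the cited papers may use slightly different variants, e.g. with or without the factor of 1/2 in the sMAPE denominator):

\[
\mathrm{MAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right|,
\qquad
\mathrm{sMAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \frac{\lvert F_t - A_t \rvert}{(\lvert A_t \rvert + \lvert F_t \rvert)/2},
\]

where A_t is the actual value and F_t is the forecast for period t.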
Article
Full-text available
In this study, we examine whether the forecast errors obtained by ANN models affect the outbreak of financial crises. Additionally, we investigate how much the asymmetric information and forecast errors are reflected in the output values. In our study, we used the exchange rate of USD/TRY (USD), the Borsa Istanbul 100 Index (BIST), and the gold price (GP) as the output variables of our Artificial Neural Network (ANN) models. We observe that the predicted ANN model has a strong explanation capability for the 2001 and 2008 crises. Our calculations of several measures, such as the mean absolute percentage error (MAPE), the symmetric mean absolute percentage error (sMAPE), and Shannon entropy (SE), clearly demonstrate the degree of asymmetric information and the deterioration of the financial system prior to, during, and after a financial crisis. We found that the asymmetric information prior to a crisis is larger compared to other periods. This situation can be interpreted as an early warning signal before potential crises. This evidence seems to favor an asymmetric information view of financial crises.
... A summary of the model shows that there were 12,131 trainable parameters. The network model was compiled using the Adam optimizer, and the loss of the model was calculated using the mean square error (MSE) function of equation (2), MSE = (1/n) Σ_{i=1..n} (y_i − ŷ_i)², where (1/n) Σ_{i=1..n} denotes the mean and (y_i − ŷ_i)² is the squared difference between the predicted and the actual value. ...
... It is calculated as the absolute difference between the actual and the predicted value, divided by half the sum of their absolute values [35]. This equation evaluates the model by treating positive and negative errors symmetrically while limiting the effect of outliers and bias [2]. ...
Article
Full-text available
There have been many improvements and advancements in the application of neural networks in the mining industry. In this study, two advanced models, a recurrent neural network (RNN) and an autoregressive integrated moving average (ARIMA) model, were implemented in the simulation and prediction of limestone price variation. The RNN uses long short-term memory (LSTM) layers, dropout regularization, activation functions, mean square error (MSE), and the Adam optimizer to simulate the predictions. The LSTM stores previous data over time and uses it in simulating future prices based on defined parameters and algorithms. The ARIMA model is a statistical method that captures different time series based on the level, trend, and seasonality of the data. The auto-ARIMA function searches for the best parameters that fit the model. Different layers and parameters are added to the model to simulate the price prediction. The performance of both models is remarkable in terms of trend variability and factors affecting limestone price. The ARIMA model has an accuracy of 95.7% while the RNN has an accuracy of 91.8%. This shows that the ARIMA model outperforms the RNN model. In addition, the time required to train the ARIMA model is less than that of the RNN. Predicting limestone prices may help both investors and industries in making economical and technical decisions, for example, when to invest, buy, sell, increase, and decrease production.
... Using Eq. (1), one can run forward in time and get forecasts for job ad postings in the future. We measure the accuracy of the forecast using the Symmetric Mean Absolute Percentage Error (SMAPE) [23], [29]. SMAPE is formally defined as ..., where A_t denotes the actual value of jobs posted on day t, and F_t is the predicted value of job ads on day t. ...
... SMAPE is an alternative to MAPE that is (1) scale-independent and (2) able to handle actual or predicted zero values. SMAPE was first proposed by Armstrong [29] and then by Makridakis [23]. (The cross-validation method used is called cross_validation; for more information, see https://facebook.github.io/prophet/docs/diagnostics.html.) ...
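For context, a minimal sketch of the rolling-origin evaluation workflow that the linked Prophet documentation describes, with SMAPE computed on the held-out forecasts (the dataframe df and the cutoff settings below are illustrative assumptions, not taken from the paper):

# Illustrative sketch (assumed setup): df is a pandas DataFrame with columns
# "ds" (date) and "y" (daily count of job ads), as required by Prophet.
import numpy as np
from prophet import Prophet
from prophet.diagnostics import cross_validation, performance_metrics

model = Prophet()
model.fit(df)

# Rolling-origin cross-validation: fit on an initial window, forecast a fixed
# horizon ahead, then slide the cutoff forward and repeat.
cv_results = cross_validation(model, initial="730 days", period="90 days",
                              horizon="90 days")

# performance_metrics reports several error measures per horizon,
# including SMAPE in recent Prophet versions.
print(performance_metrics(cv_results)[["horizon", "smape"]].head())

# Equivalently, SMAPE can be computed directly from actuals (y) and forecasts (yhat):
smape = np.mean(2 * np.abs(cv_results["yhat"] - cv_results["y"])
                / (np.abs(cv_results["y"]) + np.abs(cv_results["yhat"])))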
Preprint
Full-text available
This research develops a data-driven method to generate sets of highly similar skills based on a set of seed skills using online job advertisements (ads) data. This provides researchers with a novel method to adaptively select occupations based on granular skills data. We apply this adaptive skills similarity technique to a dataset of over 6.7 million Australian job ads in order to identify occupations with the highest proportions of Data Science and Analytics (DSA) skills. This uncovers 306,577 DSA job ads across 23 occupational classes from 2012-2019. We then propose five variables for detecting skill shortages from online job ads: (1) posting frequency; (2) salary levels; (3) education requirements; (4) experience demands; and (5) job ad posting predictability. This contributes further evidence to the goal of detecting skills shortages in real-time. In conducting this analysis, we also find strong evidence of skills shortages in Australia for highly technical DSA skills and occupations. These results provide insights to Data Science researchers, educators, and policy-makers from other advanced economies about the types of skills that should be cultivated to meet growing DSA labour demands in the future.
... Although there are several variants of the measure, the following definition was employed: where the absolute difference between A and F is divided by the sum of A and F. The value of this calculation is again summed for every estimated unit and divided by the number of units. Armstrong (1985, p. 348) first introduced SMAPE and called it "adjusted MAPE." It was later modified and discussed by Flores (1986). ...
Article
Full-text available
Cultivar-specific adoption information is imperative for agricultural research organizations to make strategic research plans for crop-genetic development. However, such data are often unavailable in developing countries or obsolete and unreliable even when they exist. A budget-friendly and reliable method of tracking and monitoring varietal adoptions is highly desired. In this paper, we employ expert elicitation (EE) as a method to obtain estimates of modern variety (MV) adoption of rice in Bangladesh, Bhutan, India, Nepal, and Sri Lanka. The EE approach is evaluated by comparing information from the EE assessment with household surveys. We found that organized panels of agricultural experts can provide reliable estimates of the area planted to MVs. In addition, cultivar-specific adoption estimates are reliable for dominant varieties. To some extent, EE estimates are more precise when estimates are calculated by aggregating disaggregate-level elicitations than by directly obtaining aggregate-level elicitations. Furthermore, the household surveys reveal that it takes approximately a decade for a new variety to be adopted by a significant number of farmers.
... Scientists have studied forecasting since the 1930s; Armstrong (1978, 1985) provides summaries of important findings from the extensive forecasting literature. In the mid-1990s, Scott Armstrong established the Forecasting Principles Project to summarize all useful knowledge about forecasting. ...
Article
Full-text available
Calls to list polar bears as a threatened species under the United States Endangered Species Act are based on forecasts of substantial long-term declines in their population. Nine government reports were written to help U.S. Fish and Wildlife Service managers decide whether or not to list polar bears as a threatened species. We assessed these reports based on evidence-based (scientific) forecasting principles. None of the reports referred to sources of scientific forecasting methodology. Of the nine, Amstrup, Marcot, and Douglas (2007) and Hunter et al. (2007) were the most relevant to the listing decision, and we devoted our attention to them. Their forecasting procedures depended on a complex set of assumptions, including the erroneous assumption that general circulation models provide valid forecasts of summer sea ice in the regions that polar bears inhabit. Nevertheless, we audited their conditional forecasts of what would happen to the polar bear population assuming, as the authors did, that the extent of summer sea ice would decrease substantially during the coming decades. We found that Amstrup et al. properly applied 15 percent of relevant forecasting principles and Hunter et al. 10 percent. Averaging across the two papers, 46 percent of the principles were clearly contravened and 23 percent were apparently contravened. Consequently, their forecasts are unscientific and inconsequential to decision makers. We recommend that researchers apply all relevant principles properly when important public-policy decisions depend on their forecasts.
... To quantify the difference between the analytically approximated joint strengths and the corresponding experimental results, the mean absolute percentage error (MAPE) is determined (Armstrong, 1985). This value reflects the average deviation between model and experiments. ...
Article
Space frames and multi-material structures are innovative designs to reduce the weight of a vehicle. But both lightweight design concepts have complex demands on joining technologies, with the result that conventional processes are often pushed to their technological limits. Joining by electromagnetic crimping provides an interesting alternative to connect such structures without penetration or external heating. During electromagnetic crimping, pulsed magnetic fields are used to form a profile made out of an electrically conductive material into form-fit elements, like grooves, of the other joining partner. Thereby, an interlock is generated, which enables a load transfer. However, existing joint design methodologies require either extensive experimental studies or numerical modeling. To facilitate the connection design, an analytical approach for the prediction of the joining zone parameters with respect to the loads to be transferred is presented in this article. For the validation of the developed approach, experimental studies regarding the load transfer under quasi-static tension are performed. The major parameters considered in these investigations are the groove dimensions and shape. In order to reduce the mass of a structure, hollow mandrels can be applied. To analyze how the reduced compressive strength of such inner connection elements influences the joining behavior and the load transfer of electromagnetically crimped connections, experimental studies are performed subsequent to the studies on the general groove parameters. Based on the obtained results, design strategies and a process window for the manufacturing of such joints are developed. To show the potential of electromagnetic form-fit joining, example connections joined in accordance with the established design guidelines are tested under quasi-static and cyclic loading.
... This debate still occurred within the framework of predictive Futures Studies. In this approach, the future is discovered through extrapolation (Armstrong 1970). Through prediction, profits could be made and management optimized. ...
Chapter
This essay puts forward a theory of gesture in relationship to temporality. The essay explores gestures as ongoing body-jumping performances that have the potential to carry history in different directions, toward alternative futures and re-remembered pasts, with each re-irruptive singularity.
... Decisions are often taken under uncertainty about future periods; as a result, decisions depend on information that is estimated or projected for these future periods. Thus, good decisions will be the result of the accuracy of the forecasting models used by the decision-maker (Armstrong, 1985). Companies usually use forecasting models in order to develop different kinds of plans that improve their performance within their supply chains. ...
Article
Full-text available
The blood center is the focal unit of a blood supply chain, and it has among its main assignments the following activities: blood collection from donors, blood processing, and blood component distribution to hospitals and healthcare institutions. It is the nervous center of the chain, where planning, monitoring and controlling of these activities must be performed efficiently in order to prevent stockouts or outdating of blood components. The efficiency of a blood center depends on the quality of its planning process, which starts with accurate forecasting of the required blood supply and blood components. In this paper, a computational environment oriented toward forecasting of blood components is presented. The idea is to improve the planning of the inventory balance process of the blood supply chain.
... The methods used can range from simple univariate forecasting to very complicated multivariate forecasting. According to Armstrong (1985), one of the most popular methods of time series analysis is the Box-Jenkins approach. Despite its popularity, it is very complicated and very expensive to apply. ...
Article
Full-text available
This research explores the accuracy of economic forecasts by the Ministry of Finance of South Africa. The Ministry of Finance began to implement economic forecasts in 1997 in the time of Minister Trevor Manuel. These forecasts were presented every year since 1997 in the annual budget speech. The Minister of Finance (Pravin Gordhan), who succeeded Minister Manuel, continued to deliver economic forecasts in the annual budget speech from 2010 onwards. The research focused on the period 1997–2009, when Minister Manuel was the political head of the National Treasury, although data for 2010 and beyond is included. A mixed-methods approach was followed to compare the annual forecasts with the actual and official statistics presented by the South African Reserve Bank and the Department of Statistics. Economic forecasts are important for the business sector, the financial sector and the government sector. These forecasts of economic indicators are used by the fiscal authority to provide information regarding the possible outcome of output, revenue and debt for the new budget year. These forecasts can also be used by various institutions for budgeting purposes, the drafting of financial contracts and the planning of capital projects. Not all institutions can afford to maintain a highly technical business unit that delivers economic forecasts produced by various mathematical models and postgraduate economics staff. The research demonstrated that the Ministerial forecasts for real growth and inflation can be used successfully. DOI: 10.5901/mjss.2014.v5n21p265
... The growth rate of the field (as measured by the literature cited in JUU) over the last 30 years is in excess of 14 percent per year. This is substantially in excess of the growth rate for publications on forecasting, which I estimated, on the basis of my literature review (Armstrong, 1978), to be about 8 percent per year over this time period. This, in turn, exceeds the growth rate for papers in the social sciences of about 6 percent per year, based on data from the Social Science Citation Index from 1969-1979. ...
Article
This book provides a convenient collection of important papers relevant to a subset of judgmental forecasting. My review discusses: (i) the scope of the readings, (ii) the importance of the readings, (iii) what is new, (iv) how the book is organized, (v) advice on using the book, and (vi) who should read the book.
... Next, we use the validation data set to identify the most suitable neural network architecture. The error measure used to evaluate the performance of neural networks is the mean absolute percentage error (MAPE) [Armstrong, 1985] given by ...
Article
We present the Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm for automatic inference of the electron number density from plasma wave measurements made on board NASA's Van Allen Probes mission. A feedforward neural network is developed to determine the upper hybrid resonance frequency, f_uhr, from electric field measurements, which is then used to calculate the electron number density. In previous missions, the plasma resonance bands were manually identified, and there have been few attempts to do robust, routine automated detections. We describe the design and implementation of the algorithm and perform an initial analysis of the resulting electron number density distribution obtained by applying NURD to 2.5 years of data collected with the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite of the Van Allen Probes mission. Densities obtained by NURD are compared to those obtained by another recently developed automated technique and also to an existing empirical plasmasphere and trough density model.
... MAE = (1/n) Σ_{i=1..n} |y_i − ŷ_i|; MAE = 0 → perfect accuracy; 0 < MAE < ∞. Root Mean Square Error (RMSE): measures the average magnitude of the error, RMSE = sqrt( (1/n) Σ_{i=1..n} (y_i − ŷ_i)² ); RMSE > MAE → variation in the errors exists; 0 < RMSE < ∞. Mean Absolute Percentage Error (MAPE): a measure of accuracy expressed as a percentage, MAPE = (1/n) Σ_{i=1..n} |(y_i − ŷ_i) / y_i| [48] ...
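As a quick illustration of the three measures reconstructed above, a minimal NumPy sketch (the array values are made up for the example):

import numpy as np

def mae(y, y_hat):
    # Mean Absolute Error: 0 means perfect accuracy.
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    # Root Mean Square Error: always >= MAE; the gap widens as error variance grows.
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mape(y, y_hat):
    # Mean Absolute Percentage Error, in percent of the actual values.
    return 100 * np.mean(np.abs((y - y_hat) / y))

y = np.array([100.0, 120.0, 90.0])      # actual values
y_hat = np.array([110.0, 115.0, 80.0])  # forecasts
print(mae(y, y_hat), rmse(y, y_hat), mape(y, y_hat))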
Article
Full-text available
Quantitative structure-activity relationships are mathematical models constructed on the hypothesis that the structure of chemical compounds is related to their biological activity. A linear regression model is often used to estimate and predict the nature of the relationships between a measured activity and some measured or calculated descriptors. Linear regression helps to answer three main questions: does the biological activity depend on structural information; if so, is the nature of the relationship linear; and if yes, how good is the model at predicting the biological activity of new compounds. This manuscript presents the steps of linear regression analysis, moving from theoretical knowledge to an example conducted on sets of endocrine-disrupting chemicals.
... The log of the accuracy ratio, i.e., ln(p̂/p), has been introduced to select less biased models. However, this measure presents the same problem as MRAE: it is undefined when p is 0. Symmetric Mean Absolute Percentage Error (SMAPE) [Armstrong 1978] seems the best alternative considering all the above factors: it is a percentage, it is always defined, and its reliability for model selection purposes is comparable to that of ln(p̂/p) according to [Tofallis 2014]. However, SMAPE is less popular than MRAE; it only appears in [González et al. 2016]. ...
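A small illustration of the zero-prevalence point made in that snippet, using hypothetical values:

import numpy as np

p_true, p_hat = 0.0, 0.05  # a class that is absent from the test set
# The log accuracy ratio ln(p_hat / p_true) is undefined here
# (division by zero would raise an error), whereas SMAPE stays defined
# and bounded:
smape = np.abs(p_hat - p_true) / ((np.abs(p_true) + np.abs(p_hat)) / 2)
print(smape)  # 2.0, the maximum possible SMAPE value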
Article
Full-text available
The task of quantification consists in providing an aggregate estimation (e.g., the class distribution in a classification problem) for unseen test sets, applying a model that is trained using a training set with a different data distribution. Several real-world applications demand this kind of method that does not require predictions for individual examples and just focuses on obtaining accurate estimates at an aggregate level. During the past few years, several quantification methods have been proposed from different perspectives and with different goals. This article presents a unified review of the main approaches with the aim of serving as an introductory tutorial for newcomers in the field.
... The main advantage of using MAPE is that it is simple and that it does not depend on scale. A quirk of MAPE is that it is asymmetric, in that it will be higher if the forecast over-predicts rather than under-predicts (looking at extremes: a forecast of zero is at most 100% off, while an over-prediction can have an infinitely large error) [5]. In their review, Fildes et al. [13] use no fewer than four alternative methods: root mean square error (RMSE), geometric root mean square error (GRMSE), mean absolute scaled error (MASE), and geometric mean relative absolute error (GRelAE). ...
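To make the asymmetry concrete with a small worked example (numbers invented for illustration): with an actual value of 100, a forecast of 0 gives an absolute percentage error of |100 − 0| / 100 = 100%, the worst any under-prediction can do, whereas a forecast of 300 gives |100 − 300| / 100 = 200%, and the error keeps growing without bound as the forecast increases.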
Thesis
Full-text available
I create multiple models to explain global city-pair air travel demand and find that origin and destination fixed effects are essential to improve the explanatory power and forecasting accuracy of the model. Other findings coincide with those of other studies, showing that income and population are important drivers of demand and that distance impacts demand negatively. Fare elasticities are in the range of -0.6 to -0.8, slightly less than unit elastic and within the range found in the literature. The purpose of this analysis is to build an econometric forecasting model. The main limitation of such models is often data availability. I show in model 1 that data issues may be mitigated by using origin and destination fixed effects. Model 1 is also tested using a pseudo-maximum likelihood estimator, which is shown not to improve the forecast. Model 2 is built based on this result using fixed effects instead of geo-economic data. This allows the model coverage to be extended by more than 1100 cities, making this tool one of the most comprehensive econometric models of air transport in the literature. Model 2 is based on the service variables fare and frequency, both highly significant. The mean absolute percentage error of the chosen forecasting model is 5.8%.
Article
Full-text available
This dissertation estimates a time series of the natural rate of unemployment via competing methods, and evaluates such methods both within sample and for forecasting purposes. The methods used include the Kalman and Hodrick-Prescott filters. Evaluation proceeds along two lines. First, a Phillips curve is used in order to assess how well deviations of actual inflation from expected inflation are explained by deviations of the actual unemployment rate from estimated natural rates within a given sample period. The evaluative criterion is the overall fit of each regression. The analysis indicates that the in-sample fit of the Phillips curve changes little when different estimators of the natural rate of unemployment are considered. However, incorporating the 12-month-ahead forecast of inflation given by the Livingston survey improves the overall fit of the curve. Beyond the within-sample overall fit of a Phillips curve, analysis is conducted in order to assess the usefulness of various estimators of the natural rate for purposes of forecasting and policy-making. In order to test “out-of-sample,” recursive least squares is used in a Phillips curve context in order to estimate the natural rate and generate a corresponding forecast of inflation for next period. Additionally, estimates of the natural rate derived from a Kalman filter, the Hodrick-Prescott filter, and a method incorporating structural determinants of unemployment are included in a Phillips curve in order to generate a forecast of one-period-ahead inflation. Despite earlier results regarding the Livingston survey information, the analysis does not indicate that the Livingston survey information can aid policymakers in forecasting inflation. Further, even though the most common method of estimating natural rates is via a Phillips curve, the Phillips curve method of estimating the natural rate of unemployment does not appear superior–in terms of its forecasting power–to other methods of estimating the natural rate when the Phillips curve is utilized as the forecasting equation. Slight evidence is provided in support of the Hodrick-Prescott filter as an estimator of the natural rate of unemployment. Both the Kalman and Hodrick-Prescott filters feature the benefit of requiring only one series of data, and the Kalman approach is quite parsimonious.
Article
Modern lightweight concept structures are increasingly composed of several dissimilar materials. Due to the different material properties of the joining partners, conventional and widely used joining techniques often reach their technological limits when applied in the manufacturing of such multi-material structures. This leads to an increasing demand for appropriate joining technologies, like joining by die-less hydroforming (DHF) for connecting tubular workpieces. The present work introduces an analytical model to determine the achievable strength of form fit connections. This approach, taking into account the material parameters as well as the groove and tube geometry, is based on a membrane analysis assuming constant wall thicknesses. Besides a fundamental understanding of the load transfer mechanism, this analytic approach allows a reliable joining zone design. To validate the model, experimental investigations using aluminum specimens are performed. A mean deviation between the calculated and the measured joint strength of about 19 % was found. This denotes a good suitability of the analytical approach for the design process of the joining zone.
Book
Forecasting-the art and science of predicting future outcomes-has become a crucial skill in business and economic analysis. This volume introduces the reader to the tools, methods, and techniques of forecasting, specifically as they apply to financial and investing decisions. With an emphasis on "earnings per share" (eps), the author presents a data-oriented text on financial forecasting, understanding financial data, assessing firm financial strategies (such as share buybacks and R&D spending), creating efficient portfolios, and hedging stock portfolios with financial futures. The opening chapters explain how to understand economic fluctuations and how the stock market leads the general economic trend; introduce the concept of portfolio construction and how movements in the economy influence stock price movements; and introduce the reader to the forecasting process, including exponential smoothing and time series model estimations. Subsequent chapters examine the composite index of leading economic indicators (LEI); review financial statement analysis and mean-variance efficient portfolios; and assess the effectiveness of analysts' earnings forecasts. Using data from such firms as Intel, General Electric, and Hitachi, Guerard demonstrates how forecasting tools can be applied to understand the business cycle, evaluate market risk, and demonstrate the impact of global stock selection modeling and portfolio construction. © Springer Science+Business Media New York 2013. All rights are reserved.
Article
Full-text available
Rainfall erosivity is the power of rainfall to cause soil erosion by water. The rainfall erosivity index for a rainfall event, EI30, is calculated from the total kinetic energy and maximum 30 min intensity of individual events. However, these data are often unavailable in many areas of the world. The purpose of this study was to develop models that relate more commonly available rainfall data resolutions, such as daily or monthly totals, to rainfall erosivity. Eleven stations with one-minute temporal resolution rainfall data collected from 1961 through 2000 in the eastern water-erosion areas of China were used to develop and calibrate 21 models. Seven independent stations, also with one-minute data, were utilized to validate those models, together with 20 previously published equations. Results showed that the models in this study performed better than, or similarly to, models from previous research for estimating rainfall erosivity from these data. Prediction capabilities, as determined using symmetric mean absolute percentage errors and Nash–Sutcliffe model efficiency coefficients, were demonstrated for the 41 models, including those for estimating erosivity at event, daily, monthly, yearly, average monthly and average annual time scales. Prediction capabilities were generally better using higher resolution rainfall data as inputs. For example, models with rainfall amount and maximum 60 min rainfall amount as inputs performed better than models with rainfall amount and maximum daily rainfall amount, which performed better than those with only rainfall amount. Recommendations are made for choosing the appropriate estimation equation, which depend on objectives and data availability.
Article
Full-text available
The Resource-Based View of the firm attributes firms' superior performance to a sustained competitive advantage. Value to customers and brands are essential elements of competitive advantage, and marketing managers are those managing them. In this paper we propose that marketing managers can be a source of a firm's competitive advantage if they possess certain distinctive competencies. To identify them, we conduct a Delphi study. We found that the top-ten competencies are those related to long-term marketing planning and strategy, sales and marketing alignment, corporate image and reputation, and traditional managerial competencies. Furthermore, we classify them into knowledge, skills and attitudes. Half of the top-ten competencies were classified as attitudes. We conclude our paper by suggesting a more in-depth analysis of the top-ten and the remaining core competencies, assessing differences among different types of experts.
Article
Accuracy in forecasting expected loss costs may well be the most important determinant of the ultimate profitability of a cohort of property-liability insurance policies. The existing literature on claim cost forecasting focuses on the selection of the "best" forecasting model or method, discarding information provided by closely ranked alternatives. In this article, we emphasize the selection of a "good" forecast rather than a forecasting model, where goodness is defined using multiple criteria that may be vague or fuzzy. Fuzzy set theory is proposed as a mechanism for combining forecasts from alternative models using multiple fuzzy criteria. The fuzzy approach is illustrated using forecasts of automobile bodily injury liability pure premiums. We conclude that fuzzy set theory provides an effective method for combining statistical and judgmental criteria in actuarial decision making.
Thesis
Full-text available
As a way to reduce a vehicle’s weight, the application of space frame structures has been increasing. This innovative lightweight design concept is already commonly applied in the low volume production of cars. Due to the high stiffness and low mass, extruded aluminum profiles are particularly suitable for the manufacturing of such structures. But the potential for great weight reduction using space frames is curtailed by the difficulties associated with manufacturing the space frames. These structures have complex demands on joining technologies, and conventional processes often are pushed to their technological limits. A promising alternative to connect extruded aluminum profiles without heating or penetration is joining by electromagnetic crimping. Compared to adhesive bonding and welding, the process also requires a less extensive preparation of the joining zone. This technique is characterized by the use of pulsed magnetic fields to form a profile made of an electrically conductive material into form-fit elements, like grooves, of the other joining partner. Thereby, an interlock is generated which enables the load transfer. However, existing process and joint design methodologies require either sophisticated numerical modeling or extensive experimental studies. The influence of some major process and joining zone parameters, like the forming direction and the groove shape, on the joint strength is also still unknown. Additionally, it has not been analyzed how a mass reduction in the joining zone and the resulting change of the radial strength of the joining partners affects the crimping process and the transferable load. Therefore, a fundamental process understanding of the manufacturing and the load transfer of form-fit connections manufactured by electromagnetic crimping is developed in this thesis. Based on analytical, experimental, and numerical studies, major parameters are identified and their influence on the joining process and the achievable joint strength is analyzed. For the analytical investigations a continuous approach describing the manufacturing of the connections as well as the load transfer is introduced here. This model also facilitates the process and joining zone design of electromagnetically crimped connections. Furthermore, a process window considering the influence of a mass reduction in the joining zone on the connection strength is developed based on the experimental results and the analytical approach. http://hdl.handle.net/2003/33921
Article
This study aims to examine the forecasting accuracy of a combined method by using quarterly tourism arrival data in Hong Kong. The voluntary integration of statistical forecasts and experts’ judgmental revisions are achieved through a Delphi procedure in a Web-based Tourism Demand Forecasting System (TDFS). The forecasting performance is evaluated using the absolute percentage error (APE), mean absolute percentage error (MAPE) and root mean square percentage error (RMSPE). This study also compares the forecast performance of the combined method to a number of alternative forecasting models, i.e. the Naïve models, exponential smoothing and Box-Jenkins time-series models. The empirical results show that the combined forecasts consistently outperform the baseline forecasts obtained from vector autoregression (VAR) models over the forecasting period of 2008Q1–2011Q4, suggesting the value of adopting such an integration procedure. The findings indicate that some gains are obtained from integrating experts’ judgments into statistical forecasts, particularly for the short term. In addition, combined forecasting does not always lead to satisfactory forecasting performance, particularly when there is a lack of important contextual information.
Article
An experiment is reported where individual priority information was used to predict the priority information generated by a group of these individuals. The results suggest that knowledge of the individual priority information is useful in predicting the alternative most favored. The individual information is helpful in predicting various other arrangements of the priority rankings about half the time. All of the predictions outperform the chance prediction.
Article
Full-text available
The behaviour of poker players and sports gamblers has been shown to change after winning or losing a significant amount of money on a single hand. In this paper, we explore whether there are changes in experts’ behaviour when performing judgmental adjustments to statistical forecasts and, in particular, examine the impact of ‘big losses’. We define a big loss as a judgmental adjustment that significantly decreases the forecasting accuracy compared to the baseline statistical forecast. In essence, big losses are directly linked with wrong direction or highly overshooting judgmental overrides. Using relevant behavioural theories, we empirically examine the effect of such big losses on subsequent judgmental adjustments exploiting a large multinational data set containing statistical forecasts of demand for pharmaceutical products, expert adjustments and actual sales. We then discuss the implications of our findings for the effective design of forecasting support systems, focusing on the aspects of guidance and restrictiveness.
Conference Paper
An introduction to Geotypical Growth-based Load Forecasting (GGLF), long-term power distribution load forecasting based on biological concepts and segmented geographies, is presented. Using load data obtained from 165 substations in Southern Idaho and Southeastern Oregon, this document (1) describes the reasoning for using Living Systems Theory (LST) as a basis for long-term distribution load forecasting, (2) shows the relationship between the MW growth rates of the substations to their observed peak loads, (3) provides the rationale for segmenting the substations into their various geographical characteristics (geotypes), and (4) discusses a logistical regression curve-fitting model that represents the load characteristics of five example geotypes. Example geotypes discussed in the document are those common to a high plains geography, semi-arid climate type. Recommendations for additional research that applies GGLF to other climate types and to other MW load densities are also suggested.
Conference Paper
Private and public organizations use forecasts to inform a number of decisions, including decisions about product development, competition, and technology investments. We evaluated technological forecasts to determine how forecast methodology and eight other attributes influence accuracy. We found that, of the nine attributes assessed, only forecast methodology and time horizon had a statistically significant influence on accuracy. Forecasts using quantitative methods were more accurate than other forecasting methods, and forecasts predicting shorter time horizons were more accurate than those predicting longer time horizons.
Technical Report
Full-text available
In this study, the focus is on a systematic way to detect future changes in trends that may affect the dynamics of the agro-food sector, and on the combination of expert opinions. For the combination of expert opinions, the usefulness of multilevel models is investigated. Bayesian data analysis is used to obtain parameter estimates. The approach is illustrated by two case studies. The results are promising, but the procedures are just a first step toward an appropriate combination of expert opinions, which still has to address important issues, such as the identification of some well-known biases.
Article
Full-text available
Surveys show that the mean absolute percentage error (MAPE) is the most widely used measure of prediction accuracy in businesses and organizations. It is, however, biased: when used to select among competing prediction methods it systematically selects those whose predictions are too low. This has not been widely discussed and so is not generally known among practitioners. We explain why this happens. We investigate an alternative relative accuracy measure which avoids this bias: the log of the accuracy ratio, that is, log (prediction/actual). Relative accuracy is particularly relevant if the scatter in the data grows as the value of the variable grows (heteroscedasticity). We demonstrate using simulations that for heteroscedastic data (modelled by a multiplicative error factor) the proposed metric is far superior to MAPE for model selection. Another use for accuracy measures is in fitting parameters to prediction models. Minimum MAPE models do not predict a simple statistic and so theoretical analysis is limited. We prove that when the proposed metric is used instead, the resulting least squares regression model predicts the geometric mean. This important property allows its theoretical properties to be understood.
Article
This paper summarizes the key conditions under which the index method is valuable for forecasting and describes the procedures one should use when developing index models. The paper also addresses the specific concern of selecting inferior candidates when using the bio-index as a nomination helper. Political decision-makers should not use the bio-index as a stand-alone method but should combine forecasts from a variety of different methods that draw upon different information.
Chapter
The past 10 years have witnessed a significant fluctuation in China’s stock market: it went through a severe recession from 2004 to 2005, while other stock markets (such as the Hong Kong and US stock markets) had already entered a bull market over the corresponding time period. Afterwards, China’s stock market enjoyed unforeseeable prosperity. The growth rate of the Shanghai Composite Index reached a record high of 427 percent, far more volatile than other developed markets during the same period.
Technical Report
Full-text available
The validity of the manmade global warming alarm requires the support of scientific forecasts of (1) a substantive long-term rise in global mean temperatures in the absence of regulations, (2) serious net harmful effects due to global warming, and (3) cost-effective regulations that would produce net beneficial effects versus alternative policies, including doing nothing. Without scientific forecasts for all three aspects of the alarm, there is no scientific basis to enact regulations. In effect, the warming alarm is like a three-legged stool: each leg needs to be strong. Despite repeated appeals to global warming alarmists, we have been unable to find scientific forecasts for any of the three legs. We drew upon scientific (evidence-based) forecasting principles to audit the forecasting procedures used to forecast global mean temperatures by the Intergovernmental Panel on Climate Change (IPCC)—leg “1” of the stool. This audit found that the IPCC procedures violated 81% of the 89 relevant forecasting principles. We also audited forecasting procedures, used in two papers, that were written to support regulation regarding the protection of polar bears from global warming—leg “3” of the stool. On average, the forecasting procedures violated 85% of the 90 relevant principles. The warming alarmists have not demonstrated the predictive validity of their procedures. Instead, their argument for predictive validity is based on their claim that nearly all scientists agree with the forecasts. This count of “votes” by scientists is not only an incorrect tally of scientific opinion, it is also, and most importantly, contrary to the scientific method. We conducted a validation test of the IPCC forecasts that were based on the assumption that there would be no regulations. The errors for the IPCC model long-term forecasts (for 91 to 100 years in the future) were 12.6 times larger than those from an evidence-based “no change” model. Based on our own analyses and the documented unscientific behavior of global warming alarmists, we concluded that the global warming alarm is the product of an anti-scientific political movement. Having come to this conclusion, we turned to the “structured analogies” method to forecast the likely outcomes of the warming alarmist movement. In our ongoing study we have, to date, identified 26 similar historical alarmist movements. None of the forecasts behind the analogous alarms proved correct. Twenty-five alarms involved calls for government intervention and the government imposed regulations in 23. None of the 23 interventions was effective and harm was caused by 20 of them. Our findings on the scientific evidence related to global warming forecasts lead to the following recommendations: 1. End government funding for climate change research. 2. End government funding for research predicated on global warming (e.g., alternative energy; CO2 reduction; habitat loss). 3. End government programs and repeal regulations predicated on global warming. 4. End government support for organizations that lobby or campaign predicated on global warming.
Chapter
To date, forecasts produced with artificial neural networks (ANNs) have been largely confined to the short- and medium-term range. Using an example from the automotive industry, this contribution shows how the potential of ANNs can also be exploited for producing long-term forecasts. To this end, the ANN approach is combined with quantitative risk analysis in order to capture forecast uncertainties and present them graphically.
Article
Satellite precipitation data is an indirect measurement of rainfall over land and water areas across the globe. The information from these data is especially important in areas where direct rain gauge measurement is limited. However, the precipitation estimates require calibration and validation to assure their accuracy. Satellite rainfall estimations, together with comprehensive ground validation and calibration, have been found to offer good prospects for an accurate, global rainfall database, especially for remote areas and large bodies of water. This work presents a performance evaluation of the TRMM 3B43 V7 rainfall retrieval algorithm over Malaysia. Inter-comparison and validation of the TRMM 3B43 V7 rainfall product with ground measurements is analysed statistically. The continuous statistical evaluation shows good agreement, in which the best correlation for the 3B43 algorithm versus rain gauges is 0.9384. At lower percentage bias thresholds, the 2-by-2 categorical statistics of rain or no-rain occurrence for annual estimation reveal a lower probability of detection and a higher false alarm ratio; however, reverse results are shown at higher bias thresholds. The accuracy of the algorithm for a threshold of 1–10 %, which falls within the International Telecommunication Union Radiocommunication (ITU-R) recommendation for radio propagation to discriminate between rain and no rain, is 0.53, 0.49 and 0.48 for annual, monthly and wet-season estimates, respectively. From the analysis, the categorical statistical approach has been able to reveal the level of accuracy of the algorithm as applicable to the detection and estimation of rainfall.
Chapter
Telecommunications has become one of the most dynamic markets with short innovation cycles and a multitude of different players. “Time to market” and cost effective development of products meeting customers’ needs are not only buzzwords, but have become critical success factors, which determine the fate of any player in the industry. Equipment providers, service providers and operators have to claim their position in a global, highly competitive environment. The cost effective development and introduction of innovative products, which meet customer needs and wishes are a major competitive advantage. This stage is one of the most complex and most difficult situations in business. With the failure rate of innovations typically being as high as 70% or 80% (Cooper / Kleinschmidt, 1991), the decision maker has a good chance of error when deciding which products to develop and later launch in the market. This is true for all industries, but especially so for the telecommunications industry with its shortening product life cycles and resulting pressure to be increasingly innovative.
Article
Full-text available
Goal: This study aims at a systematic assessment of five computational models of a birdcage coil for magnetic resonance imaging (MRI) with respect to accuracy and computational cost. Methods: The models were implemented using the same geometrical model and numerical algorithm, but different driving methods (i.e., coil "defeaturing"). The defeatured models were labeled as: specific (S2), generic (G32, G16), and hybrid (H16, H16fr-forced). The accuracy of the models was evaluated using the "Symmetric Mean Absolute Percentage Error" ("SMAPE"), by comparison with measurements in terms of frequency response, as well as electric and magnetic field magnitude. Results: All the models computed the field magnitude within 35 % of the measurements; only the S2, G32, and H16 were able to accurately model the field magnitude inside the phantom with a maximum SMAPE of 16 %. Outside the phantom, only the S2 showed a SMAPE lower than 11 %. Conclusions: Results showed that assessing the accuracy of the field magnitude based only on a comparison along the central longitudinal line of the coil can be misleading. Generic or hybrid coils - when properly modeling the currents along the rings/rungs - were sufficient to accurately reproduce the fields inside a phantom, while a specific model was needed to accurately model the field in the space between coil and phantom. Significance: Computational modeling of birdcage body coils is extensively used in the evaluation of RF-induced heating during MRI. Experimental validation of numerical models is needed to determine whether a model is an accurate representation of a physical coil.
Conference Paper
When time series are generated by chaotic systems, a reasonable estimation over large prediction horizons is hard to obtain, but this may be required by some applications. Over the last few years, some researchers have focused on the use of ensembles and meta-learning as a strategy for improving prediction accuracy. This paper addresses the problem of selecting and combining models for the design of efficient long-term predictors of chaotic time series based on meta-learning and self-organization. We propose and evaluate the use of four heuristic rules for selecting models using a self-organizing map (SOM) neural network and meta-features. The meta-features are extracted from the performances of each involved model when applied to the training time series. A trained SOM map, which was generated using these meta-features, allows the selection of models with diverse behaviors. Two strategies for the combination of models are compared; one is based on the average and a second is based on the median of the forecasts of the selected models. The experiments were executed using four types of series: the time series dataset provided by the NN5 tournament and time series generated from the Mackey-Glass equation, from an ARIMA model and from a sine function. In most cases, the best results were obtained using a percentage of the models belonging to the group that contained the best model. Our results also showed that a combination using a median strategy obtained better results than using an average strategy.
Article
Full-text available
Econometric or time-series forecasts in the telecommunications industry (e.g., service demand, revenue) are an important element in a number of decision making processes, i.e., staffing, budgeting, tariff setting and other marketing strategies. Since these forecasts are only one of a number of inputs to decision making, no optimality criterion can be defined. The absence of an optimality criterion and the large number of series involved make the selection of models an even more difficult exercise. Usually, the selection process is subject to two validation procedures: first, statistical tests on historical data to ensure inclusion of meaningful explanatory variables and proper fit, and second, tests of the model’s ability to allow the evaluation of the impact of future (hypothetical) market conditions and/or internal or external (e.g., government) policies. In this paper, a two-stage ‘semi-automatic’ selection criterion, which produces a subset of feasible and ‘ranked’ models using an internal validation procedure, is suggested. The criterion used in the first stage is based on the performance of competing models in predicting observed data points (forward validation); the selected subset is then validated at the second stage through subjective assessments (scenario validation).
Chapter
A Self Organizing Map (SOM) projects high-dimensional feature vectors onto a low-dimensional space. If an appropriate feature vector is chosen, this ability may be used for measuring and adjusting different levels of diversity in the selection of models for building ensembles. In this paper, we present the results of using a SOM for selecting suitable models in ensembles used for long-term time series prediction. The temporal behavior of the predictors is represented by feature vectors built with a sequence of the errors achieved in each prediction step. Each neuron in the map represents a cluster of models with similar accuracy; the adjustment of diversity between models is achieved by measuring the distance between neurons on the map. Our experiments showed that this strategy generated ensembles with an appropriate level of diversity among their components, obtaining a better performance than just using a unique model.
Working Paper
Full-text available
This paper presents a quantitative framework for forecasting immigrant integration using immigrant density as the single driver. By comparing forecasted integration estimates based on data collected up to specific periods in time with observed integration quantities beyond the specified period, we show that our forecasts are prompt (readily available after a short period of time), accurate (with a small relative error), and robust (able to predict integration correctly for several years to come). The research reported here proves that the proposed model of integration and its forecast framework are simple and effective tools to reduce uncertainties about how integration phenomena emerge and how they are likely to develop in response to increased migration levels in the future.
Article
The widespread use of the Internet and online forecasting systems offers unprecedented opportunities to leverage collective intelligence to produce increasingly accurate forecasts. Forecast support systems also offer the opportunity to address one of the weakest aspects of expert forecasting methods, the identification of experts. In the published literature, significant criticism is addressed to the subjectivity of expert identification methods, as different methods can lead, mutatis mutandis, to significantly different results (for a review, see Baker et al., 2006; Larréché and Moinpur, 1983). This paper introduces an approach to objectively define levels of expertise within large groups in a panel setting. This information is used to fine-tune panel members’ contribution to the compound forecast, in an attempt to improve the accuracy of the aggregated forecast. Tested on prospects collected from the UN World Tourism Organization (UNWTO) Panel of Tourism Experts (probably the world’s most widely used and influential forecasts for the tourism sector), the proposed approach proves effective in identifying experts within large groups of individuals. Results also indicate that the method is promising in leveraging their collective knowledge to return more accurate forecasts compared to simpler methods.