Applied Stochastic Models in Business and Industry

Published by Wiley

Online ISSN: 1526-4025

·

Print ISSN: 1524-1904

Articles


Likelihood Adaptively Modified Penalties

Figure 2: Negative second derivative of the log-likelihood, the sigmoid penalty and MCP in logistic regression.
Table 3: Simulation results for Poisson regression under different penalties.
Table 4: Simulation results for probit regression under different penalties.
Tables report mean TP and FP under different penalties, with the tuning parameter selected by BIC.

August 2013

·

397 Reads

·

Zhiliang Ying
A new family of penalty functions, adaptive to the likelihood, is introduced for model selection in general regression models. It arises naturally from assuming certain types of prior distributions on the regression parameters. To study stability properties of the penalized maximum likelihood estimator, two types of asymptotic stability are defined. Theoretical properties, including parameter estimation consistency, model selection consistency, and asymptotic stability, are established under suitable regularity conditions. An efficient coordinate-descent algorithm is proposed. Simulation results and real data analyses show that the proposed method is competitive with existing approaches.
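The coordinate-descent idea referenced above can be illustrated with a much simpler penalty. The sketch below is a minimal illustration, not the paper's likelihood-adaptive penalty or algorithm: it runs cyclic coordinate descent for logistic regression with a plain lasso penalty, majorizing the logistic curvature by 1/4; the toy data, the penalty level `lam` and all function names are invented for the example.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_logistic_lasso(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for lasso-penalized logistic regression,
    using the curvature bound p(1 - p) <= 1/4 to majorize each coordinate."""
    n, p = X.shape
    beta = np.zeros(p)
    M = 0.25 * np.sum(X ** 2, axis=0)       # per-coordinate majorization constants
    for _ in range(n_sweeps):
        for j in range(p):
            prob = 1.0 / (1.0 + np.exp(-X @ beta))
            grad_j = -X[:, j] @ (y - prob)  # partial derivative of the negative log-likelihood
            beta[j] = soft_threshold(beta[j] - grad_j / M[j], lam / M[j])
    return beta

# toy data: two informative predictors, eight noise predictors
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
true_beta = np.concatenate(([1.5, -2.0], np.zeros(8)))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
print(np.round(cd_logistic_lasso(X, y, lam=5.0), 3))
```

Each coordinate update solves a one-dimensional penalized quadratic bound exactly, which is what makes coordinate descent attractive for penalized likelihoods of this kind.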

Stochastic Optimisation for Allocation Problems with Shortfall Risk Constraints

January 2006

·

26 Reads

The US economy is arguably following an unsustainable trajectory. The main indicators of this are a large current account deficit, a large federal budget deficit and steadily rising costs of Social Security and Medicare. In this chapter, we discuss these observations and the extent to which the financial and economic crisis may have changed the outlook. Before doing so, we need to define what we mean by sustainability. A commonly used definition is that the intertemporal budget constraint is satisfied.

Assessment and Propagation of Input Uncertainty in Tree-based Option Pricing Models

May 2009

·

41 Reads

This paper provides a practical example of the assessment and propagation of input uncertainty in option pricing with tree-based methods. Input uncertainty is propagated into output uncertainty, reflecting that option prices are as unknown as the inputs on which they are based. Option pricing formulas are tools whose validity depends not only on how closely the model represents reality, but also on the quality of the inputs they use, and those inputs are usually not observable. We provide three alternative frameworks for calibrating option pricing tree models, propagating parameter uncertainty into the resulting option prices. Finally, we compare our methods with classical calibration-based results under the assumption that no options market is established. These methods can be applied to price instruments for which no options market exists, and they also serve as a methodological tool to account for parameter and model uncertainty in theoretical option pricing.
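As a minimal sketch of how input uncertainty can be pushed through a tree model (not the paper's three calibration frameworks), the example below prices a European call on a Cox-Ross-Rubinstein binomial tree and then propagates an assumed sampling distribution for the volatility into a distribution of option prices; the contract numbers and the lognormal volatility distribution are hypothetical.

```python
import numpy as np

def crr_call(S0, K, r, sigma, T, n=200):
    """European call price by backward induction on a CRR binomial tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)                      # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(n + 1)
    values = np.maximum(S0 * u**j * d**(n - j) - K, 0.0)    # terminal payoffs
    for _ in range(n):                                      # roll back through the tree
        values = disc * (q * values[1:] + (1.0 - q) * values[:-1])
    return values[0]

rng = np.random.default_rng(1)
S0, K, r, T = 100.0, 105.0, 0.02, 0.5
# Hypothetical input uncertainty: volatility drawn around a point estimate of 25%.
sigmas = rng.lognormal(mean=np.log(0.25), sigma=0.15, size=2000)
prices = np.array([crr_call(S0, K, r, s, T) for s in sigmas])
print("option price percentiles (5, 50, 95):",
      np.round(np.percentile(prices, [5, 50, 95]), 2))
```

The output interval, rather than a single number, is the point of the exercise: the option price inherits the uncertainty of the volatility input.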

COPAR - Multivariate time series modeling using the COPula AutoRegressive model

June 2014

·

359 Reads

Analysis of multivariate time series is a common problem in areas like finance and economics. The classical tools for this purpose are vector autoregressive models; these, however, are limited to the modeling of linear and symmetric dependence. We propose a novel copula-based model which allows for non-linear and asymmetric modeling of serial as well as between-series dependencies. The model exploits the flexibility of vine copulas, which are built up from bivariate copulas only. We describe statistical inference techniques for the new model and demonstrate its usefulness in three relevant applications: we analyze time series of macroeconomic indicators, of electricity load demands and of bond portfolio returns.

The range inter-event process in a symmetric birth-death random walk

July 2001

·

23 Reads

This paper provides new results for the range inter-event process of a birth–death random walk. Motivations for determining and using the inter-range event distribution have two sources. First, the analytical results we obtain are simpler than those for the range process and therefore make it easier to use statistics based on the inter-range event process. Further, most of the results for the range process are based on long-run statistical properties, which limits their practical usefulness, while inter-range events are by their nature ‘short-term’ statistics. Second, in many cases, data on amplitude change are easier to obtain and calculate than range and standard deviation processes. As a result, the predicted statistical properties of the inter-range event process can provide an analytical foundation for the development of statistical tests that may be used in practice. Applications to outlier detection, volatility and time-series analysis are discussed. Copyright © 2001 John Wiley & Sons, Ltd.
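To make the object of study concrete, here is a small simulation (an illustration only, not the paper's analytical results) that records the epochs at which the range of a symmetric random walk increases and the waiting times between them.

```python
import numpy as np

rng = np.random.default_rng(2)
steps = rng.choice([-1, 1], size=100_000)          # symmetric +/-1 random walk
walk = np.concatenate(([0], np.cumsum(steps)))

running_max = np.maximum.accumulate(walk)
running_min = np.minimum.accumulate(walk)
range_process = running_max - running_min          # the range process R_n

event_times = np.flatnonzero(np.diff(range_process) > 0) + 1   # epochs where the range grows
inter_event_times = np.diff(event_times)                        # the range inter-event process

print("number of range events:", event_times.size)
print("mean / max inter-event time:", inter_event_times.mean(), inter_event_times.max())
```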

On estimating the conditional expected shortfall

September 2008

·

174 Reads

The paper focuses on satisfaction with income and proposes a utility model built on two value systems: the 'Ego' system, in which one's income is assessed relative to one's own past and future income, and the 'Alter' system, in which one's income is assessed relative to a reference group. We show how the union of these two value systems and the use of relative deprivation measures can lead to a model able to accommodate a wide range of theories on income and happiness. The model is then tested using the Consortium of Household Panels for European Socio-economic Research (CHER), a collection of 19 panel surveys comprising over 1.2 million individual observations. We find absolute income to sit at the intersection of the 'Ego' and 'Alter' systems and to play the most prominent role in explaining satisfaction with income. Relative deprivation is also found to be important for understanding the income-happiness nexus, while income expectations become less relevant once we control for absolute income. Overall, the 'Alter' system (the cross-sectional comparison with others) seems to be more relevant in valuing income than the 'Ego' system (the longitudinal self-comparison of income).

Construction of efficient experimental designs under multiple resource constraints

February 2014

·

137 Reads

For computing exact designs of experiments under multiple resource constraints, we developed a heuristic method related to the Detmax procedure. To illustrate the performance of the heuristic, we computed D-efficient designs for a block model with limits on the numbers of blocks, for a quadratic regression model with simultaneous marginal and cost constraints, and for a non-linear regression model with simultaneous direct and cost constraints. The numerical examples demonstrate that the proposed heuristic generates comparable or better results than competing algorithms, even in their specific domains of application.
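The ingredients can be illustrated with a toy version of the problem; the greedy forward augmentation below is only a stand-in, not the Detmax-related heuristic of the paper. It uses a quadratic regression on [-1, 1], a hypothetical per-trial cost, and a total budget acting as the resource constraint, ranking candidate points by the log-determinant of the information matrix (the D-criterion).

```python
import numpy as np

def model_matrix(x):
    """Quadratic regression in one factor: f(x) = (1, x, x^2)."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x**2])

def log_det_info(points):
    F = model_matrix(points)
    sign, logdet = np.linalg.slogdet(F.T @ F)   # D-criterion on the log scale
    return logdet if sign > 0 else -np.inf

def cost(x):
    return 1.0 + 0.5 * abs(x)                   # hypothetical per-trial cost

candidates = np.linspace(-1, 1, 21)
budget = 12.0                                   # total resource constraint

design = [-1.0, 0.0, 1.0]                       # nonsingular starting design
spent = sum(cost(x) for x in design)

while True:
    feasible = [x for x in candidates if spent + cost(x) <= budget]
    if not feasible:
        break
    best = max(feasible, key=lambda x: log_det_info(design + [x]))
    design.append(best)
    spent += cost(best)

print("greedy D-efficient design:", np.round(sorted(design), 2))
print("total cost:", round(spent, 2), "of", budget)
```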

Efficient Performance Evaluation of the Generalized Shiryaev–Roberts Detection Procedure in a Multi-Cyclic Setup

November 2014

·

85 Reads

We propose a numerical method to evaluate the performance of the emerging Generalized Shiryaev–Roberts (GSR) change-point detection procedure in a "minimax-ish" multi-cyclic setup where the procedure of choice is applied repetitively (cyclically) and the change is assumed to take place at an unknown time moment in a distant-future stationary regime. Specifically, the proposed method is based on the integral-equations approach and uses the collocation technique with the basis functions chosen so as to exploit a certain change-of-measure identity and the GSR detection statistic's unique martingale property. As a result, the method's accuracy and robustness improve, as does its efficiency, since, using the change-of-measure ploy, the Average Run Length (ARL) to false alarm and the Stationary Average Detection Delay (STADD) are computed simultaneously. We show that the method's rate of convergence is quadratic and supply a tight upper bound on its error. We conclude with a case study and confirm experimentally that the proposed method's accuracy and rate of convergence are robust with respect to three factors: (a) partition fineness (coarse vs. fine), (b) change magnitude (faint vs. contrast), and (c) the level of the ARL to false alarm (low vs. high). Since the method is not restricted to a particular data distribution or to a specific value of the GSR detection statistic's headstart, this work may help gain greater insight into the characteristics of the GSR procedure and aid a practitioner in designing the GSR procedure as needed while fully utilizing its potential.
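For readers unfamiliar with the statistic itself, the following minimal Monte Carlo sketch (not the paper's integral-equation and collocation method) runs the GSR recursion R_n = (1 + R_{n-1}) * LR_n with a headstart for a Gaussian mean-shift model and estimates the ARL to false alarm by brute force; the threshold, headstart and shift size are arbitrary.

```python
import numpy as np

def gsr_run_length(true_mean, theta, headstart, threshold, rng, max_n=1_000_000):
    """Run the GSR statistic R_n = (1 + R_{n-1}) * LR_n until it exceeds the threshold.
    Pre-change model N(0,1), post-change model N(theta,1); data have mean true_mean."""
    R = headstart
    for n in range(1, max_n + 1):
        x = rng.normal(true_mean, 1.0)
        lr = np.exp(theta * x - 0.5 * theta ** 2)   # likelihood ratio N(theta,1) vs N(0,1)
        R = (1.0 + R) * lr
        if R >= threshold:
            return n
    return max_n

rng = np.random.default_rng(3)
A, headstart, theta = 500.0, 10.0, 0.5
# ARL to false alarm: feed the procedure pre-change data only.
run_lengths = [gsr_run_length(0.0, theta, headstart, A, rng) for _ in range(200)]
print("estimated ARL to false alarm:", round(float(np.mean(run_lengths)), 1))
```

Brute-force simulation of this kind is exactly what the paper's integral-equation method is designed to replace with something faster and provably accurate.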

Efficient Prediction Designs for Random Fields

November 2014

·

166 Reads

For estimation and prediction of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of the true uncertainty. Experimental designs based on more elaborate criteria appropriate for empirical kriging are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion, inspired by an equivalence-theorem-type relation, to build designs that are quasi-optimal for the empirical kriging variance when space-filling designs become unsuitable. Two algorithms are proposed: the first relies on stochastic optimization to explicitly identify the Pareto front, while the second uses the surrogate criterion as a local heuristic to choose the points at which the (costly) true empirical kriging variance is effectively computed. We illustrate the performance of the proposed algorithms on both a simple simulated example and a real oceanographic dataset.
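As background for the design criterion, the snippet below computes the plug-in (simple) kriging variance over a 1-D grid for a candidate design under an assumed squared-exponential covariance. The empirical kriging variance studied in the paper additionally accounts for covariance-parameter estimation, which this sketch deliberately does not; all parameter values are invented.

```python
import numpy as np

def gaussian_cov(a, b, sill=1.0, length=0.3):
    """Squared-exponential covariance between two sets of 1-D locations."""
    d = np.subtract.outer(a, b)
    return sill * np.exp(-(d / length) ** 2)

def kriging_variance(x_new, x_obs, sill=1.0, length=0.3, nugget=1e-8):
    """Simple-kriging (plug-in) variance at x_new given observation sites x_obs."""
    K = gaussian_cov(x_obs, x_obs, sill, length) + nugget * np.eye(len(x_obs))
    k = gaussian_cov(x_new, x_obs, sill, length)
    return sill - np.einsum('ij,jk,ik->i', k, np.linalg.inv(K), k)

x_obs = np.array([0.05, 0.3, 0.55, 0.8, 0.95])        # a candidate design
grid = np.linspace(0, 1, 201)
print("max plug-in kriging variance:", kriging_variance(grid, x_obs).max())
```

A design criterion of the kind discussed in the paper would replace this plug-in quantity with one that also penalizes poorly estimated covariance parameters.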

Unspecified distribution in single disorder problem

March 2009

·

29 Reads

The purpose of this article is to present change-detection algorithms for models of the political business cycle. The political business cycle is interesting in the context of the current political situation in Europe, i.e., the progressive integration of the European Union countries and the wave of financial problems affecting states that had so far been regarded as economically stable. Monitoring of this phenomenon is characterized by the fact that we usually do not have full information about the behavior of business indexes before and after the change. It is assumed that we observe a stochastic sequence whose mathematical model predicts a sudden change; the process is Markovian when the change moment is given. The initial problem of disorder detection is transformed into the optimal stopping of the observed sequence. To construct an algorithm for estimating the moment of change, we transform the task into an equivalent optimal stopping problem based on the observed magnitudes and certain statistics. The analysis obtained from this transformation is the source of the change-point estimation algorithms, and a formula for the optimal decision functions is derived.

Generalized dynamic linear models for financial time series

February 2001

·

94 Reads

In this paper we consider a class of conditionally Gaussian state space models and discuss how they can provide a flexible and fairly simple tool for modelling financial time series, even in the presence of different components in the series or of stochastic volatility. Estimation can be carried out through recursive equations, which provide the optimal solution under rather mild assumptions. In more general models, the filter equations can still provide approximate solutions. We also discuss how some models traditionally employed for analysing financial time series can be viewed within the state-space framework. Finally, we illustrate the models with two applications to real data sets.
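As a concrete instance of the recursive equations mentioned here, the sketch below runs the standard Kalman filter for a Gaussian local level model, the simplest member of this model class; the variances and the simulated series are made up.

```python
import numpy as np

def kalman_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, P0=1e6):
    """Kalman filter for the local level model:
       y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t."""
    a, P = a0, P0
    filtered = np.empty(len(y))
    for t, yt in enumerate(y):
        P = P + sigma_eta2                  # prediction step
        F = P + sigma_eps2                  # innovation variance
        K = P / F                           # Kalman gain
        a = a + K * (yt - a)                # update step
        P = (1.0 - K) * P
        filtered[t] = a
    return filtered

# toy usage: noisy random-walk level
rng = np.random.default_rng(4)
level = np.cumsum(rng.normal(0, 0.1, 500))
y = level + rng.normal(0, 0.5, 500)
print(kalman_local_level(y, sigma_eps2=0.25, sigma_eta2=0.01)[-5:])
```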

On the Estimation of the Heavy-Tail Exponent in Time Series using the Max-Spectrum

May 2009

·

27 Reads

This paper addresses the problem of estimating the tail index α of distributions with heavy, Pareto-type tails from dependent data, a problem of interest in finance, insurance, environmental monitoring and teletraffic analysis. A novel approach based on the max self-similarity scaling behavior of block maxima is introduced. The method exploits the increasing lack of dependence of maxima over large blocks, which proves useful for time series data. We establish the consistency and asymptotic normality of the proposed max-spectrum estimator for a large class of m-dependent time series in the regime of intermediate block maxima. In the regime of large block maxima, we demonstrate the distributional consistency of the estimator for a broad range of time series models, including linear processes. The max-spectrum estimator is a robust and computationally efficient tool, which provides a novel time-scale perspective on the estimation of tail exponents. Its performance is illustrated on synthetic and real data sets.
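A rough numerical rendering of the block-maxima scaling idea is sketched below for i.i.d. Pareto data; it is a simplified stand-in for the paper's max-spectrum estimator, which uses a weighted regression over a carefully selected range of scales.

```python
import numpy as np

def max_spectrum_alpha(x, j_min=3, j_max=10):
    """Rough tail-index estimate from the scaling of log2 block maxima:
    for block size 2^j, E[log2(block max)] grows roughly like j / alpha."""
    x = np.abs(np.asarray(x, dtype=float))
    ys, js = [], []
    for j in range(j_min, j_max + 1):
        m = 2 ** j
        n_blocks = len(x) // m
        if n_blocks < 4:
            break
        block_max = x[: n_blocks * m].reshape(n_blocks, m).max(axis=1)
        ys.append(np.mean(np.log2(block_max)))
        js.append(j)
    slope = np.polyfit(js, ys, 1)[0]
    return 1.0 / slope

# toy usage: i.i.d. Pareto data with tail index alpha = 1.5
rng = np.random.default_rng(5)
x = rng.pareto(1.5, size=2**17) + 1.0
print("estimated alpha:", round(max_spectrum_alpha(x), 2))
```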

Estimation in integer-valued moving average models

November 2001

·

211 Reads

The paper presents new characterizations of the integer-valued moving average model. For four model variants we give moments and probability generating functions. Yule-Walker and conditional least squares estimators are obtained and studied by Monte Carlo simulation. A new generalized method of moments estimator based on probability generating functions is presented and shown to be consistent and asymptotically normal. Its small-sample performance is in some instances better than that of the alternative estimators. The techniques are illustrated on a time series of traded stocks.
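To fix ideas, here is one common Poisson-innovation INMA(1) variant with binomial thinning, simulated and then fitted through the moment (Yule-Walker-type) relations E[X] = λ(1 + β) and corr(X_t, X_{t-1}) = β/(1 + β). This is an illustration only, not the paper's four variants or its PGF-based GMM estimator.

```python
import numpy as np

def binomial_thin(counts, beta, rng):
    """Binomial thinning operator: beta ∘ X = number of successes in X Bernoulli(beta) trials."""
    return rng.binomial(counts, beta)

def simulate_inma1(n, lam, beta, rng):
    """INMA(1): X_t = beta ∘ eps_{t-1} + eps_t with Poisson(lam) innovations."""
    eps = rng.poisson(lam, n + 1)
    return binomial_thin(eps[:-1], beta, rng) + eps[1:]

rng = np.random.default_rng(6)
x = simulate_inma1(20_000, lam=2.0, beta=0.4, rng=rng)

# Moment-based (Yule-Walker-type) estimates
xbar = x.mean()
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]
beta_hat = rho1 / (1.0 - rho1)          # invert rho(1) = beta / (1 + beta)
lam_hat = xbar / (1.0 + beta_hat)       # invert E[X] = lam (1 + beta)
print(f"beta_hat = {beta_hat:.3f}, lam_hat = {lam_hat:.3f}")
```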

The Stochastic Unit Root Model And Fractional Integration: An Extension To The Seasonal Case

September 2007

·

21 Reads

This paper contributes empirically to our understanding of informed traders. It analyzes traders' characteristics in a foreign exchange electronic limit order market via anonymous trader identities. We use six indicators of informed trading in a cross-sectional multivariate approach to identify traders with high price impact. More information is conveyed by the trades of traders who simultaneously use medium-sized orders (i.e., practice stealth trading), have large trading volume, are located in a financial center, trade early in the trading session, trade at times of wide spreads, and trade when the order book is thin.

Option Hedging By An Influent Informed Investor

November 2011

·

39 Reads

This work extends the study of hedging problems in markets with asymmetric information: an agent is supposed to possess additional information on market prices, unknown to the common investor. The financial hedging problem for the influential and informed trader is modeled by a forward–backward stochastic differential equation, to be solved under an initial enlargement of the Brownian filtration. An existence and uniqueness theorem is proved under standard assumptions. The financial interpretation is derived in terms of the investment strategy of the informed and influential agent, together with conclusions concerning the general influenced market in terms of market completeness. An example of such an influenced and informed model is provided.

Implementing Loss Distribution Approach for Operational Risk

May 2009

·

1,005 Reads

To quantify the operational risk capital charge under the current regulatory framework for banking supervision, referred to as Basel II, many banks adopt the Loss Distribution Approach. Many modeling issues must be resolved to use the approach in practice. In this paper we review the quantitative methods suggested in the literature for implementing the approach. In particular, we discuss the use of Bayesian inference, which allows expert judgement and parameter uncertainty to be taken into account, as well as the modeling of dependence and the inclusion of insurance.
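A bare-bones Monte Carlo version of the Loss Distribution Approach looks as follows; the Poisson frequency and lognormal severity parameters are made up, and none of the Bayesian, dependence or insurance refinements discussed in the paper are included.

```python
import numpy as np

# Annual loss = sum over a Poisson number of lognormal severities;
# the capital charge is read off as a high quantile of the aggregate loss.
rng = np.random.default_rng(7)
lam = 25.0                 # expected number of loss events per year (assumed)
mu, sigma = 9.0, 2.0       # lognormal severity parameters (assumed)
n_years = 100_000

counts = rng.poisson(lam, n_years)
annual_loss = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])

var_999 = np.quantile(annual_loss, 0.999)          # 99.9% quantile, Basel II style
es_999 = annual_loss[annual_loss >= var_999].mean()
print(f"VaR 99.9%: {var_999:,.0f}   expected shortfall beyond it: {es_999:,.0f}")
```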

Percentile residual life orders

May 2011

·

64 Reads

With survival data there is often interest not only in the survival time distribution but also in the residual survival time distribution. In fact, regression models to explain residual survival time might be desired. Building upon recent work of Kottas & Gelfand ["J. Amer. Statist. Assoc." 96 (2001) 1458], we formulate a semiparametric median residual life regression model induced by a semiparametric accelerated failure time regression model. We utilize a Bayesian approach which allows full and exact inference. Classical work essentially ignores covariates and is either based upon parametric assumptions or is limited to asymptotic inference in non-parametric settings. No regression modelling of median residual life appears to exist. The Bayesian modelling is developed through Dirichlet process mixing. The models are fitted using Gibbs sampling. Residual life inference is implemented by extending the approach of Gelfand & Kottas ["J. Comput. Graph. Statist." 11 (2002) 289]. Finally, we present a fairly detailed analysis of a set of survival times with moderate censoring for patients with small cell lung cancer. Copyright 2003 Board of the Foundation of the Scandinavian Journal of Statistics.

Reply to the paper ‘Do not adjust coefficients in Shapley value regression’ by U. Gromping, S. Landau, Applied Stochastic Models in Business and Industry, 2009; DOI: 10.1002/asmb.773

March 2010

·

58 Reads

We consider some inference problems concerning the drift parameters of the multi-factor Vasicek model (or multivariate Ornstein–Uhlenbeck process). For example, in modeling interest rates, the Vasicek model asserts that the term structure of interest ...

Latent variable modelling of price‐change in 295 manufacturing industries

January 2003

·

18 Reads

In contrast to traditional regression analysis, latent variable modelling (LVM) can explicitly differentiate between measurement errors and other random disturbances in the specification and estimation of econometric models. This paper argues that LVM could be a promising approach for testing economic theories, because applied research in business and economics is based on statistical information that is frequently measured inaccurately. Considering the theory of industry-price determination, where the price variables involved are known to include large measurement error, a latent variable structural-equations model is constructed and applied to data on 7381 product categories classified into 295 manufacturing industries of the US economy. The estimates obtained, compared and evaluated against a traditional regression model fitted to the same data, show the advantages of the LVM analytical framework, which could bring a long-drawn-out conflict between empirical results and theory to a satisfactory reconciliation. Copyright © 2003 John Wiley & Sons, Ltd.


The application of neural networks to predict abnormal stock returns using insider trading data

October 2002

·

75 Reads

Until now, data mining statistical techniques have not been used to improve the prediction of abnormal stock returns using insider trading data. Consequently, an investigation using neural network analysis was initiated. The research covered 343 companies over a period of 4½ years. Study findings revealed that the prediction of abnormal returns could be enhanced in the following ways: (1) extending the time of the future forecast up to 1 year; (2) increasing the period of back-aggregated data; (3) narrowing the assessment to certain industries, such as electronic equipment and business services; and (4) focusing on small and mid-size rather than large companies. Copyright © 2002 John Wiley & Sons, Ltd.

Dividend payments in the classical risk model under absolute ruin with debit interest

May 2009

·

37 Reads

This paper studies dividend payments in a compound Poisson surplus process with debit interest. Dividends are paid to the shareholders according to a barrier strategy. An alternative assumption is that business can go on after ruin, as long as it is profitable: when the surplus is negative, debit interest is applied. First, we obtain the integro-differential equations satisfied by the moment-generating function and the moments of the discounted dividend payments, and we also prove their continuity at zero. Then, applying these results, we obtain explicit expressions for the moment-generating function and the moments of the discounted dividend payments for exponential claims. Furthermore, we discuss the optimal dividend barrier when the claim sizes have a common exponential distribution. Finally, we give numerical examples for exponential claims and Erlang(2) claims. Copyright © 2008 John Wiley & Sons, Ltd.
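A crude Euler-type simulation of the model (illustrative only; the paper works with integro-differential equations and closed forms for exponential claims) shows the mechanics: premiums flow in at rate c, dividends are the overflow above the barrier b, debit interest at rate δ accrues while the surplus is negative, and absolute ruin occurs once the surplus falls below −c/δ. All parameter values below are arbitrary.

```python
import numpy as np

def discounted_dividends(c, lam, claim_mean, b, delta, force, rng, dt=0.01, t_max=100.0):
    """One simulated path: discounted dividends paid until absolute ruin (surplus <= -c/delta)."""
    u, t, paid = b, 0.0, 0.0                       # start the surplus at the barrier
    while t < t_max:
        drift = c + (delta * u if u < 0 else 0.0)  # premium income, reduced by debit interest in debt
        u += drift * dt
        if rng.random() < lam * dt:                # a claim arrives in (t, t + dt]
            u -= rng.exponential(claim_mean)
        if u > b:                                  # barrier strategy: pay out the overflow
            paid += np.exp(-force * t) * (u - b)
            u = b
        if u <= -c / delta:                        # absolute ruin: the debt can no longer be repaid
            break
        t += dt
    return paid

rng = np.random.default_rng(8)
samples = [discounted_dividends(c=1.5, lam=1.0, claim_mean=1.0, b=4.0,
                                delta=0.08, force=0.05, rng=rng) for _ in range(300)]
print("estimated expected discounted dividends:", round(float(np.mean(samples)), 3))
```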

Mixed effect models for absolute log returns of ultra high frequency data

May 2006

·

54 Reads

Considering absolute log returns as a proxy for stochastic volatility, the influence of explanatory variables on absolute log returns of ultra-high-frequency data is analysed. The irregular time structure and time dependency of the data are captured by a continuous-time ARMA(p,q) process. In particular, we propose a mixed effect model class for the absolute log returns. Explanatory variable information is used to model the fixed effects, whereas the error is decomposed into a non-negative Lévy-driven continuous-time ARMA(p,q) process and a market microstructure noise component. The parameters are estimated in a state space approach. In a small simulation study the performance of the estimators is investigated. We apply our model to IBM trade data and quantify the influence of bid-ask spread and duration on a daily basis. To verify the correlation in irregularly spaced data we use the variogram, known from spatial statistics. Copyright © 2006 John Wiley & Sons, Ltd.
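The variogram check mentioned at the end can be illustrated directly with a generic empirical semivariogram for irregularly spaced observations; the timestamps and the stand-in "absolute log return" series below are simulated, not IBM data.

```python
import numpy as np

def empirical_variogram(t, z, bin_edges):
    """Empirical semivariogram for observations z at irregular times t:
       gamma(h) = 0.5 * mean of (z_i - z_j)^2 over pairs with |t_i - t_j| in each lag bin."""
    t = np.asarray(t, float)
    z = np.asarray(z, float)
    lags = np.abs(np.subtract.outer(t, t))
    sqdiff = 0.5 * np.subtract.outer(z, z) ** 2
    iu = np.triu_indices(len(t), k=1)                  # count each pair once
    lags, sqdiff = lags[iu], sqdiff[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (lags >= lo) & (lags < hi)
        gamma.append(sqdiff[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)

# toy usage with irregular observation times (e.g. trade timestamps in seconds)
rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0, 3600, 800))
z = np.abs(rng.standard_normal(800))                   # stand-in for absolute log returns
print(empirical_variogram(t, z, bin_edges=np.arange(0, 301, 60)))
```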

The proper interpretation of sales promotion effects: Supplement elasticities with absolute sales effects

July 2005

·

746 Reads

Sales promotions such as temporary price reductions are frequently used by managers to stimulate sales in the short run. Marketing academics and practitioners tend to rely on price elasticities to summarize sales promotion effects. Although elasticities have some attractive benefits such as the invariance to measurement units, they have led to three misinterpretations in the marketing literature, as described in this paper. The proper theoretical and managerial interpretation of sales promotion effects is obtained by expressing effects in terms of absolute sales. Copyright © 2005 John Wiley & Sons, Ltd.
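A small numeric illustration of the point, with invented brands and numbers: the same price elasticity translates into very different absolute sales effects depending on baseline sales.

```python
# Equal elasticities can hide very different absolute sales effects.
baseline_sales = {"brand A": 1_000, "brand B": 20_000}   # units per week (made up)
elasticity = -4.0                                        # same promotion elasticity for both
price_cut = 0.10                                         # 10% temporary discount

for brand, base in baseline_sales.items():
    lift_pct = -elasticity * price_cut                   # +40% relative lift for both brands
    extra_units = base * lift_pct                        # very different in absolute terms
    print(f"{brand}: +{lift_pct:.0%} relative lift = {extra_units:,.0f} extra units")
```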

A computer experiment application to the design and optimization of a capacitive accelerometer

March 2009

·

29 Reads

·

N. Gil-Negrete

·

[...]

·

A. Asensio
An accelerometer is a transducer that measures the acceleration acting on a structure. Physically, an accelerometer consists of a central mass suspended by thin, flexible arms, and its performance is highly dependent on the dimensions of both the mass and the arms. The two most important parameters when evaluating the performance of these devices are the sensitivity and the operating frequency range (or bandwidth), the latter being limited to a fraction of the resonance frequency. It is therefore very convenient to know how changes in the dimensions of the mass and arms affect the natural frequency of the accelerometer, as this provides guidelines for designing accelerometers that fulfil the frequency requirements of a specific application. A quadratic polynomial function of the natural logarithm of the frequency versus geometrical factors has been obtained using a response surface methodology approach. A face-centered cube design was used in the experimentation, and the data were obtained from computer simulations using finite element techniques. A better understanding of how these variables affect the frequency has been reached, which will be very useful for device design purposes. Copyright © 2009 John Wiley & Sons, Ltd.
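The response-surface step can be mimicked in a few lines; the sketch below is a hypothetical two-factor version with simulated "finite-element" responses, not the actual accelerometer study. It fits a quadratic polynomial for the log natural frequency on a face-centered design and uses it to predict the frequency at a new geometry.

```python
import numpy as np

# Two coded geometric factors (e.g. mass size x1, arm width x2) on the 3x3 grid,
# which for two factors coincides with a face-centered central composite design.
x1, x2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
x1, x2 = x1.ravel(), x2.ravel()

def quad_terms(x1, x2):
    """Full quadratic model matrix: 1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(10)
true_coef = np.array([9.2, -0.30, 0.15, 0.05, 0.08, -0.02])   # invented "true" surface
log_freq = quad_terms(x1, x2) @ true_coef + rng.normal(0, 0.01, x1.size)  # stand-in FE results

coef, *_ = np.linalg.lstsq(quad_terms(x1, x2), log_freq, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 3))

x_new = quad_terms(np.array([0.5]), np.array([-0.5]))
print("predicted natural frequency at (0.5, -0.5):",
      round(float(np.exp(x_new @ coef)[0]), 1), "Hz")
```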
