Article

Forecasting from others’ experience: Bayesian estimation of the generalized Bass model


Abstract

We propose a Bayesian estimation procedure for the generalized Bass model, which is used in product diffusion modeling. Our method forecasts product sales early based on previous similar markets; that is, we obtain pre-launch forecasts by analogy. We compare our forecasting proposal to traditional estimation approaches and to alternative new product diffusion specifications. We perform several simulation exercises, and use our method to forecast the sales of room air conditioners, BlackBerry handheld devices, and compressed natural gas. The results show that our Bayesian proposal provides better predictive performance than competing alternatives when little or no historical data are available, which is when sales projections are most useful.
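For reference, the generalized Bass model at the heart of this abstract is usually written as the Bass hazard scaled by a marketing-mix multiplier. The following is the standard textbook form, not necessarily the exact specification estimated in the paper:

```latex
% Generalized Bass model (Bass, Krishnan and Jain 1994), standard form:
%   F(t): cumulative fraction of adopters, f(t) = dF/dt,
%   p: coefficient of innovation, q: coefficient of imitation,
%   x(t): multiplier capturing current marketing effort (price, advertising, ...).
\frac{f(t)}{1 - F(t)} = \bigl(p + q\,F(t)\bigr)\, x(t)
% With x(t) \equiv 1 the GBM reduces to the original Bass (1969) model,
% and period sales are S(t) = m\, f(t) for a market potential of m adopters.
```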


... Yet, Bayesian models require more computing power, which is no longer problematic in most use cases today. In conclusion, these methodological improvements are desirable in order to obtain models with richer information, as Hassan and Blandon demonstrated in 2016 [13]. ...
... Bayesian models can offer a remedy for the difficulty of obtaining longitudinal timeline data by incorporating prior knowledge about the model coefficients, e.g., from previous diffusion models of similar technologies. This ability enables them to compensate for a weaker data basis, as demonstrated by Hassan and Blandon [13]. Therefore, besides providing richer information than traditional Bass models, the use of a Bayesian approach seems to be a methodological imperative not only for retrospectively researching the adoption and diffusion of established systems, but also for forecasting the development of newer, innovative systems such as clinical decision support systems (CDSS) in image processing, which promise to promote safer and better patient care. ...
Article
Full-text available
Radiology has a reputation for a high affinity for innovation, particularly with regard to information technologies. Designed to support the peculiarities of radiological diagnostic workflows, Radiology Information Systems (RIS) and Picture Archiving and Communication Systems (PACS) have developed into widely used information systems in hospitals and form the basis for advancing the field towards automated image diagnostics. RIS and PACS can thus serve as meaningful indicators of how quickly IT innovations diffuse in secondary care settings, an issue that requires increased attention in research and health policy in light of increasingly fast innovation cycles. We therefore conducted a retrospective longitudinal observational study of the diffusion dynamics of RIS and PACS in German hospitals between 2005 and 2017. Based upon data points collected within the "IT Report Healthcare" and building on Rogers' Diffusion of Innovation (DOI) theory, we applied a novel methodological technique by fitting Bayesian Bass diffusion models to past adoption rates. The Bass models showed acceptable goodness of fit to the data, and the results indicated similar growth rates for RIS and PACS implementations and suggest that market saturation has almost been reached. Adoption rates of PACS showed a slightly higher coefficient of imitation (q = 0.25) compared to RIS (q = 0.11). However, the diffusion process extends over approximately two decades for both systems, which points to the need for further research into how innovation diffusion can be accelerated effectively. Furthermore, the Bayesian approach to Bass modelling was shown to have several advantages over classical frequentist approaches and should encourage adoption and diffusion research to adopt similar techniques.
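To make the Bayesian Bass idea concrete, here is a minimal sketch assuming yearly adoption counts, Gaussian errors, a flat prior on the market potential m, and informative normal priors on p and q centred on values from analogous products. All data, prior settings, and tuning constants are illustrative, not those of the study:

```python
import numpy as np

# Illustrative yearly adoption counts (hypothetical data).
adopt = np.array([12., 30., 55., 80., 95., 90., 70., 45., 25., 12.])
t = np.arange(1, len(adopt) + 1)

def bass_increments(p, q, m):
    """Expected adoptions per period from the closed-form Bass F(t)."""
    F = (1 - np.exp(-(p + q) * t)) / (1 + (q / p) * np.exp(-(p + q) * t))
    return m * np.diff(np.concatenate(([0.0], F)))

def log_post(theta):
    p, q, m = theta
    if p <= 0 or q <= 0 or m <= adopt.sum():   # flat prior on m above observed total
        return -np.inf
    resid = adopt - bass_increments(p, q, m)
    loglik = -0.5 * np.sum(resid ** 2) / 25.0  # Gaussian errors, sd = 5 (illustrative)
    # Informative priors, e.g. centred on estimates from analogous products.
    logprior = (-0.5 * (p - 0.03) ** 2 / 0.01 ** 2
                - 0.5 * (q - 0.40) ** 2 / 0.10 ** 2)
    return loglik + logprior

rng = np.random.default_rng(1)
theta, draws = np.array([0.03, 0.4, 600.0]), []
for _ in range(20000):                         # random-walk Metropolis sampler
    prop = theta + rng.normal(0, [0.002, 0.02, 10.0])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta.copy())
post = np.array(draws[5000:])                  # discard burn-in
print("posterior means p, q, m:", post.mean(axis=0))
```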
... The former reflects our prior beliefs about the magnitude of the coefficient, and the latter reflects the strength of this belief, with a large V corresponding to less weight on the prior belief. This can be considered a 'subjective empirical' Bayesian approach, in which informative priors are built from previous studies to improve the precision of posterior estimates by combining sample and prior information (e.g., Ramírez-Hassan & Montoya-Blandón, 2020). ...
Article
Full-text available
Food price elasticities (PEs) are essential for evaluating the impacts of food pricing interventions to improve dietary and health outcomes. This paper makes innovative use of experimental purchasing data from a recent New Zealand virtual supermarket experiment to estimate PEs for a large set of disaggregated foods across major food groups relevant to food policies, within a Bayesian multistage demand framework. We propose the use of available prior information to elicit prior demand parameter assumptions that are consistent with published PEs and economic assumptions and are weighted according to expert knowledge, increasing precision in PE inference and policy predictions and yielding somewhat stronger price effects.
... She et al. [66] analysed wind power development factors using the generalized Bass model. In addition, a Bayesian estimation procedure for the generalized Bass model has been proposed to forecast product sales, such as those of room air conditioners, BlackBerry handheld devices, and compressed natural gas [63]. ...
Article
Full-text available
This study aimed to develop a new diffusion model for box-office forecasting by modifying the generalized Bass diffusion model with incorporation of search trend data and historical movie-audience data. To that end, first, movie-audience data (i.e., the number of moviegoers) and NAVER search trend data for each of the top 30 movies released in Korea in 2018 were collected by day. Then, the modified generalized Bass diffusion model, newly proposed in this paper, was applied in order to estimate the diffusion parameters. The results of our empirical case study on the Korean film market show that NAVER search trend data plays an important role in box-office forecasting after a movie is released. This study contributes to the extant literature by proposing a new diffusion model, which is a novel online big-data-driven methodology of box-office forecasting. In addition, comparison analysis with two other representative diffusion models was conducted, and the proposed model showed superior prediction power.
... Eq. (1) implies that the importance of p is greatest at the beginning of the diffusion process (because there are many non-adopters) but monotonically recedes with time, while q exerts increasing influence on potential adopters as the number of cumulative adopters grows. Many previous studies interpret q as a word-of-mouth effect [30,31]. For enterprises, q can be interpreted as inter-firm influence driven by competitive pressure [21,32], which is constrained by the inadequacy of information flows among enterprises. ...
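The "Eq. (1)" referred to in this snippet is the standard Bass specification; restated in the usual notation, the claim about p and q follows directly from the two terms of the hazard:

```latex
% Bass (1969) diffusion equation and its two-term decomposition:
\frac{dF(t)}{dt} = \bigl(p + q\,F(t)\bigr)\bigl(1 - F(t)\bigr)
                 = \underbrace{p\,\bigl(1 - F(t)\bigr)}_{\text{innovation}}
                 + \underbrace{q\,F(t)\bigl(1 - F(t)\bigr)}_{\text{imitation}}
% The innovation term declines monotonically as F(t) grows, while the
% imitation term rises, peaks at F(t) = 1/2, and then declines.
```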
Article
Full-text available
Energy efficiency technologies (EETs) are crucial for saving energy and reducing carbon dioxide emissions. However, the diffusion of EETs in small and medium-sized enterprises is rather slow. The literature shows that interactions between innovation adopters and potential adopters have significant impacts on innovation diffusion. Enterprises lack the motivation to share information, and EETs usually lack observability, both of which suppress inter-firm influence. Therefore, an information platform, together with proper policies encouraging or forcing enterprises to disclose EET-related information, should help harness inter-firm influence to accelerate EETs' diffusion. To explore whether and how such an information platform affects EETs' diffusion in small and medium-sized enterprises, this study builds an agent-based model to mimic EET diffusion processes. Based on a series of controlled numerical experiments, some counter-intuitive phenomena are discovered and explained. The results show that the information platform is a double-edged sword that notably accelerates EETs' diffusion by approximately 47% but may also allow negative information to diffuse even faster and delay massive adoption of EETs. Increasing network density and the intensity of inter-firm influence are effective in speeding EET diffusion, but their impacts diminish drastically after reaching certain critical values (0.05 and 0.15, respectively) and eventually harm the stability of the system. Hence, the findings imply that EET suppliers should carefully launch their promising but immature products; policies that reduce the risk perceived by enterprises, together with efforts to maintain an informative rather than judgmental information platform, can substantially mitigate the negative side effects of highly fluid information.
Article
Full-text available
Few past studies have tackled the relationship between live streamers' marketing strategies and revenue forecasts, let alone the influence of streamer heterogeneity. This study applies a Hierarchical Bayesian (HB) model to examine the predictive effects of viewers' comments and streamers' behaviors on viewers' gift-sending behavior in live streaming while accounting for streamer heterogeneity. In particular, we empirically analyze 38,183 time-stamped observations from 10 food live streams. We find that the effects of viewers' comment features and streamers' marketing strategies on viewers' gift-sending behavior are mainly shaped by the cross-level effect of streamer heterogeneity. These results suggest that existing live-streaming studies may have overlooked the impact of streamer heterogeneity, offering biased conclusions. Finally, the model proposed in this study has good predictive accuracy for live streamer revenue. Keywords: word-of-mouth, discrete emotion theory, live streamer's behavior and characteristics, gift-sending, Hierarchical Bayesian model
Article
Companies use celebrity accounts on microblogs to promote product information. Predicting the popularity of product information before publication is important for celebrity account selection and microblog account management. Previous studies mainly considered the characteristics of information content, context, and information sources while ignoring the impact of the heterogeneity of user retweeting decision-making processes on the popularity of information. In this study, we analyze the retweeting decision processes of two types of users with different information sources and build a two-stage process model of product information diffusion. Based on this model, we explore the interest decline rate of users and propose two improved Bass models: the exponential- and power-function improved models. The experiment results and model comparisons show that the exponential-function improved model outperforms the Bass, Gompertz and power-function improved models, and is suitable for the pre-release prediction of product information popularity. The interest decline rate of users in retweeting product information follows an exponential function, and product information diffusion on microblogs is mainly driven by the celebrity effect. Our research reveals the mechanism of the interest attenuation effect, the celebrity effect, and product information quality affecting product information diffusion on microblogs, which can aid further research on product information popularity.
Article
Intense innovation competition is increasing products’ complexity. Consumers have to trade off among multiple product attributes before purchasing. Information technologies boost frequent interactions among consumers and lead their opinions to high mutability, which poses challenges for product suppliers. To obtain an in-depth understanding of the diffusion mechanism of emerging technologies in a highly competitive, complex, and dynamic environment, this paper builds an agent-based multi-dimensional relative agreement model and uses smartwatches as a concrete example to analyze their diffusion processes. Three numerical experiments are conducted, respectively focusing on: (1) characteristics of consumer groups; (2) new media marketing strategy; (3) initial expectation management. The results demonstrate: (1) high connectivity with low information uncertainty thresholds benefits product diffusion rate; (2) early promotion and moderate publicity are effective when new media is involved to promote new products; (3) the same initial expectation management strategy may give rise to different diffusion patterns; (4) highly inconsistent consumer opinions hinder product diffusion.
Technical Report
Full-text available
Implementation of methods Extremum Surface Estimator (ESE), Extremum Distance Estimator (EDE) and their iterative versions BESE and BEDE in order to identify the inflection point of a curve.
Article
Full-text available
We put forward the Scaled Beta2 (SBeta2) as a flexible and tractable family for modeling scales in both hierarchical and non-hierarchical settings. Various sensible alternatives to the overuse of vague Inverted Gamma priors have been proposed, mainly for hierarchical models. Several of these alternatives are particular cases of the SBeta2 or can be well approximated by it. This family of distributions can be obtained in closed form as a Gamma scale mixture of Gamma distributions, just as the Student distribution can be obtained as a Gamma scale mixture of Normal variables. Members of the SBeta2 family arise as intrinsic priors and as divergence-based priors in diverse situations, hierarchical and non-hierarchical. The SBeta2 family unifies and generalizes different proposals in the Bayesian literature, and has numerous theoretical and practical advantages: it is flexible, its members can be lighter tailed than, as heavy tailed as, or heavier tailed than the half-Cauchy, and different behaviors at the origin can be modeled. It has the reciprocality property, i.e., if the variance parameter is in the family, so is the precision. It is easy to simulate from, and can be embedded in a Gibbs sampling scheme. Although not conjugate, it is also remarkably tractable: when coupled with a conditional Cauchy prior for locations, the marginal prior for locations can be found explicitly as proportional to known transcendental functions, and for integer values of the hyperparameters an analytical closed form exists. Furthermore, for specific choices of the hyperparameters, the marginal is found to be an explicit "horseshoe prior", which is known to have excellent theoretical and practical properties. To our knowledge this is the first closed-form horseshoe prior obtained. We also show that for certain values of the hyperparameters the mixture of a Normal and a Scaled Beta2 distribution also gives a closed-form marginal. Examples include robust normal and binomial hierarchical modeling and meta-analysis, with real and simulated data.
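A minimal simulation sketch of the SBeta2 family, using the Gamma-ratio representation of the Beta2 (beta-prime) distribution, which is equivalent to the Gamma scale mixture described above; the function name and parameterization here are ours, for illustration only:

```python
import numpy as np

def rsbeta2(p, q, b, size, rng=np.random.default_rng()):
    """Draw from Scaled Beta2(p, q, b): b times a beta-prime(p, q) variate,
    obtained as a ratio of independent Gamma variables."""
    g1 = rng.gamma(shape=p, scale=1.0, size=size)
    g2 = rng.gamma(shape=q, scale=1.0, size=size)
    return b * g1 / g2

# Example: p = q = 1/2 with scale b gives the distribution of a squared
# half-Cauchy scale; the right tail is heavy, so we summarize with quantiles.
draws = rsbeta2(0.5, 0.5, 1.0, 100000)
print("median:", np.median(draws))
```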
Article
Full-text available
Our task is to find time-efficient and statistically consistent estimators for revealing the true inflection point of a planar curve when we have only a possibly noisy set of points for it; thus we introduce the extremum surface estimator (ESE) and extremum distance estimator (EDE) methods. The analysis is based on the geometric properties of the inflection point of a smooth function. Iterative versions of the methods are also given and tested. Numerical experiments are performed for the class of sigmoid curves, and a comparison with other available procedures is carried out. Both methods prove to be quite fast in computational execution. Under a rather common noise type, EDE can give a 96% confidence interval, and it always provides estimates for data with more than a million cases at negligible execution time. An alternative way of computing the mode of a distribution by using its CDF is given as a real massive-data example.
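A rough Python rendering of the extremum-distance idea (a simplified illustration, not the authors' exact ESE/EDE algorithms): for a sigmoid, the signed distance between the curve and the chord joining its endpoints has extrema on either side of the inflection, and the midpoint of those two abscissas estimates the inflection point.

```python
import numpy as np

def ede_like(x, y):
    """Estimate a sigmoid's inflection point from the two extrema of the
    signed distance between the curve and the chord joining its endpoints
    (simplified sketch of the extremum-distance idea)."""
    x0, y0, x1, y1 = x[0], y[0], x[-1], y[-1]
    # Signed distance of each (x, y) point from the chord line.
    d = (y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0
    d /= np.hypot(x1 - x0, y1 - y0)
    return 0.5 * (x[np.argmax(d)] + x[np.argmin(d)])

x = np.linspace(0, 10, 501)
y = 1 / (1 + np.exp(-(x - 4.2))) + np.random.default_rng(0).normal(0, 0.01, x.size)
print(ede_like(x, y))   # should land near the true inflection at 4.2
```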
Article
Full-text available
The successful introduction of new durable products plays an important part in helping companies to stay ahead of their competitors. Decisions relating to these products can be improved by the availability of reliable pre-launch forecasts of their adoption time series. However, producing such forecasts is a difficult, complex and challenging task, mainly because of the non-availability of past time series data relating to the product, and the multiple factors that can affect adoptions, such as customer heterogeneity, macroeconomic conditions following the product launch, and technological developments which may lead to the product's premature obsolescence. This paper provides a critical review of the literature to examine what it can tell us about the relative effectiveness of three fundamental approaches to filling the data void: (i) management judgment, (ii) the analysis of judgments by potential customers, and (iii) formal models of the diffusion process. It then shows that the task of producing pre-launch time series forecasts of adoption levels involves a set of sub-tasks, which all involve either quantitative estimation or choice, and argues that the different natures of these tasks mean that the forecasts are unlikely to be accurate if a single method is employed. Nevertheless, formal models should be at the core of the forecasting process, rather than unstructured judgment. Gaps in the literature are identified, and the paper concludes by suggesting a research agenda so as to indicate where future research efforts might be employed most profitably.
Article
Full-text available
In this paper, we propose a new, wide class of hypergeometric heavy-tailed priors that is given as the convolution of a Student-t density for the location parameter and a Scaled Beta 2 prior for the squared scale parameter. These priors may have heavier tails than Student-t priors, and the variances have sensible behaviour both at the origin and in the tail, making them suitable for objective analysis. Since our proposal has a scale-mixture representation, it is well suited to detecting sudden changes in the model. Finally, we propose a Gibbs sampler using this new family of priors for modelling outliers and structural breaks in Bayesian dynamic linear models. We demonstrate, in a published example, that our proposal is more suitable than the Inverted Gamma assumption for the variances, which makes it very hard to detect structural changes.
Article
Full-text available
In this paper we review managerial applications of diffusion models in marketing. We first develop definitions for the basic Bass model and some of its key extensions. Following this, we briefly review a number of applications, from which we draw some lessons on building and using diffusion models for predictive and normative applications. We also provide Microsoft Excel-based software that implements the Bass and generalized Bass diffusion models, along with a tutorial illustrating how to use the software. Finally, we include a case on High Definition Television (HDTV) that allows the reader to explore how to use diffusion models (with our software) to address a business forecasting problem. The other chapters in this book have focused on analytic developments and extensions of diffusion models.
Article
Full-text available
We critically examine alternate models of the diffusion of new products and the turning points of the diffusion curve. On each of these topics, we focus on the drivers, specifications, and estimation methods researched in the literature. We discover important generalizations about the shape, parameters, and turning points of the diffusion curve and the characteristics of diffusion across early stages of the product life cycle. We point out directions for future research.
Article
Full-text available
High-Definition Television promises to be the next generation of television. This technology has broad implications for consumer markets, as well as the underlying manufacturing, technology development, and R&D activities of firms. Under increasing pressure from various groups, the U.S. government must make major policy and funding decisions based on its assessment of the likely demand for HDTV. Three published reports which forecast sales of HDTV after its scheduled introduction in the mid-1990s are available. Unfortunately, these forecasts offer widely differing perspectives on HDTV's potential. This paper presents an approach that links product segmentation (based on historical demand parameters, and marketing and manufacturing related variables) and demand forecasting for new products. The published HDTV forecasts are then assessed using this segmentation scheme. Differing from the Congressional Budget Office's earlier evaluation, this analysis indicates that one report is consistent with historical data from the home appliance industry.
Article
Full-text available
This paper develops a model and an associated estimation procedure to forecast and control the rate of sales for a new product. A repeat-purchase diffusion model is developed, incorporating the effect of marketing variables---detailing force effects in particular---as well as a word-of-mouth effect. Bayesian estimation, with priors developed from past products is used to estimate and update the parameters of the model. The procedure is used to develop marketing policies for new product introduction.
Article
Full-text available
An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data opens up the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household-level inferences for targeting purposes. We develop new econometric methods to implement a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boosts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today.
Article
Full-text available
Over a large number of new products and technological innovations, the Bass diffusion model (Bass 1969) describes the empirical adoption curve quite well. In this study, we generalize the Bass model to include decision variables such as price and advertising. The generalized model reduces to the Bass model as a special case and explains why the Bass model works so well without including decision variables. We compare our generalized Bass model to other approaches from the literature for including decision variables into diffusion models, and our results provide both theoretical and empirical support for the generalized Bass model. We also show how our generalized Bass model can be used for product planning purposes.
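A minimal discrete-time sketch (illustrative parameter values and coefficient names of our choosing) of how a decision variable such as price enters through the GBM multiplier x(t) as a relative change:

```python
import numpy as np

p, q, m = 0.01, 0.35, 1000.0        # illustrative Bass parameters
beta_price = -1.5                   # illustrative price-response coefficient
price = np.ones(20)
price[9:] = 0.8                     # a 20% permanent price cut in period 10

F = np.zeros(21)                    # cumulative penetration, F[0] = 0
for t in range(20):
    # x(t) amplifies the hazard via the relative price change this period.
    dprice = (price[t] - price[t - 1]) / price[t - 1] if t > 0 else 0.0
    x = 1.0 + beta_price * dprice
    F[t + 1] = F[t] + (p + q * F[t]) * (1 - F[t]) * x

sales = m * np.diff(F)
print("sales around the price cut:", sales[8].round(1), "->", sales[9].round(1))
```

With constant price, x(t) stays at 1 and the recursion reproduces the plain Bass path; the price cut produces a one-period jump in the hazard, which is the qualitative behavior the GBM was designed to capture.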
Article
Full-text available
One of the most important applications of the Bass model is “guessing by analogy” — the parameter estimates of the Bass model for analogous products can be used to predict the diffusion pattern of a new product. However, estimates based on left-hand truncated data will be biased unless care is taken to adjust for the bias. We demonstrate the prevalence of left truncation in historical sales data and present a method for dealing with the bias by the Virtual Bass Model. We use the model to develop a database of parameter estimates that may be used in “guessing by analogy.”
Article
Full-text available
The diffusion of renewable energy technologies (RETs) is driven by policies and incentives because of inherent characteristics such as high upfront costs and the lack of a level playing field, despite distinct advantages in terms of energy security and environmental and social considerations. Even after three decades of promotion, only 20–25% of their potential has been realized. The theory of diffusion modeling allows analysis of diffusion processes and the study of growth rates of different technologies and underlying diffusion factors. Its applications have focused on commercial and consumer products such as televisions, automobiles, and IT products, and applications to RETs have been limited. Diffusion analyses of RETs have been based on barriers to RET adoption and on techno-economic, learning, and experience curve approaches. These diffusion models, developed for commercial products, do not deal with policy influences, which are critical to RET diffusion. Since policies drive RET diffusion, models for analyzing RET diffusion should allow explicit relationships to be established between diffusion parameters and policies and their impact on diffusion rates. Given the potential of renewable energy technologies for sustainable development, the aim of this paper is to review different diffusion-theory-based models and their applicability to RET diffusion analysis.
Article
A meta-analysis of 213 applications of diffusion models from 15 articles relates model parameters to the nature of the innovation, the country under study, model specification, and estimation procedure. The effect of the use of the same data by several researchers is examined, as are weighting schemes for improving the efficiency of the meta-analysis. A Bayesian scheme is used to combine results from the meta-analysis with new data for estimation of parameters in a new situation.
Article
Since the publication of the Bass model in 1969, research on the modeling of the diffusion of innovations has resulted in a body of literature consisting of several dozen articles, books, and assorted other publications. Attempts have been made to reexamine the structural and conceptual assumptions and estimation issues underlying the diffusion models of new product acceptance. The authors evaluate these developments for the past two decades. They conclude with a research agenda to make diffusion models theoretically more sound and practically more effective and realistic.
Article
In this paper, we analyze the effect on posterior parameter distributions of four alternative prior distributions, namely Normal-Inverse Gamma, Normal-Scaled Beta two, Student's t-Inverse Gamma, and Student's t-Scaled Beta two. We show the effects of these prior distributions when there is apparent conflict between the sample information and the elicited hyperparameters. In particular, we show that there are no systematic differences among the posterior parameter distributions associated with these four priors, using data on piped water demand in a linear model with autoregressive errors. To test the hypothesis that this result is due to using a moderate sample size and a relatively high level of expert uncertainty, we perform some simulation exercises assuming smaller sample sizes and lower expert uncertainty. We obtain generally the same pattern, although the Student's t models are slightly less affected by prior information when there is a high level of expert certainty, and the Scaled Beta two models exhibit a higher level of posterior dispersion in the variance parameter.
Purpose – The emergence of a pharmaceutical drug as a late entrant in a homogeneous category is a relevant issue for strategy implementation in the pharmaceutical industry. This paper aims to suggest a methodology for making pre-launch forecasts for a late entrant under a complete lack of information. Design/methodology/approach – The diffusion process of the emerging entrant is estimated using the diffusion dynamics of pre-existing drugs, after an appropriate assessment of the drug's entrance point. The authors' methodology is applied to study the late introduction of a pharmaceutical drug in Italy within the category of ranitidine. Historical data on seven already active drugs in the category are used to assess and estimate ex ante the dynamics of a late entrant (Ulkobrin). Findings – Applying the procedure to the ranitidine market reveals a high degree of accuracy between the ex post observed values of the late entrant and its ex ante mean predicted trajectory. Moreover, the assessed launch date corresponds to the actual date. Research limitations/implications – The category has to be homogeneous to ensure a high degree of similarity among the existing drugs and the late entrant. For this reason, radical innovations cannot be forecast with this methodology. Originality/value – The proposed approach contributes to the still challenging research field of pre-launch forecasting by estimating the dynamic features of a homogeneous category and exploiting them for forecasting purposes. Keywords: Diffusion models, Pharmaceutical new product forecasting, Sales data
Article
We develop a simple method for forecasting and benchmarking consumer trial of new products. First, we extend the exponential trial growth models used in controlled test markets to the context of national product launches. This provides a marketing science benchmark against which our new approach can be compared.
Article
For mid-term demand forecasting, the accuracy, stability, and ease of use of the forecasting method are considered important user requirements. We propose a new forecasting method using a linearization of the hazard rate formula of the Bass model. In our proposal, a reduced nonlinear least squares method is used to determine the market potential estimate, after the estimates of the coefficient of innovation and the coefficient of imitation are obtained by ordinary least squares using the new linearization of the Bass model. Validation on 29 real data sets and 36 simulated data sets shows that the proposed method is accurate and stable. Considering the user requirements, our method is suitable for mid-term forecasting based on the Bass model: it has high forecasting accuracy and superior stability, is easy to understand, and can be programmed using software such as MS Excel and Matlab.
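A minimal sketch of the linearization idea, assuming discrete sales data (variable names are ours; the authors' exact procedure may differ): for a trial market potential m, the empirical hazard s_t / (m − S_{t−1}) is linear in F_{t−1} = S_{t−1} / m, so p and q come from OLS and only m needs a one-dimensional search.

```python
import numpy as np
from scipy.optimize import minimize_scalar

sales = np.array([10., 25., 50., 80., 100., 95., 70., 45., 25., 12.])  # illustrative
S_prev = np.concatenate(([0.0], np.cumsum(sales)[:-1]))  # cumulative through t-1

def fit_pq(m):
    """OLS of the hazard s_t / (m - S_{t-1}) on F_{t-1} = S_{t-1} / m."""
    h = sales / (m - S_prev)
    F = S_prev / m
    X = np.column_stack((np.ones_like(F), F))
    (p, q), *_ = np.linalg.lstsq(X, h, rcond=None)
    return p, q

def sse(m):
    p, q = fit_pq(m)
    fitted = (m - S_prev) * (p + q * S_prev / m)
    return np.sum((sales - fitted) ** 2)

total = sales.sum()
res = minimize_scalar(sse, bounds=(total * 1.01, total * 5), method="bounded")
print("m, p, q:", res.x, *fit_pq(res.x))
```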
Article
The Bass model has been one of the most popular and widely adopted diffusion models owing to its parsimony as well as its usefulness for interpreting diffusion dynamics. However, its inherent limitations have led various researchers to expand and modify the original model. In this research, we propose another modified version of the Bass model, namely the Bass model with integration constant (BMIC), which is not bound to the zero-initial-level condition, a property that leads to two issues, namely initial demand and left-truncated data. Regarding the initial demand issue, our model exhibits estimation performance equivalent to that of a more complex candidate model, and its relative parsimony improves the stability of parameter estimation. The proposed model also successfully handles left-truncated data. In contrast to the Virtual Bass model, the BMIC successfully estimates limited data even without knowing the length of the omitted data. Further, the true but latent launch time can also be estimated using the proposed model.
Article
Although empirical evidence shows that advertising wears out over time and that advertising pulsation may be superior to a strategy of constant spending, current advertising models neither represent the wearout phenomenon nor consider pulsation as an optimal strategy. In this article, the wearout phenomenon is represented in a model which distinguishes between level stimuli and differential stimuli of advertising. The model is applied to several brands, and both advertising stimuli are found to be significant. A pulsation strategy is shown to be optimal under both an unconstrained and a constrained advertising budget.
Article
This paper studies the estimation of the steady state mean of an output sequence from a discrete event simulation. It considers the problem of the automatic generation of a confidence interval of prespecified width when there is an initial transient present. It explores a procedure based on Schruben's Brownian bridge model for the detection of nonstationarity and a spectral method for estimating the variance of the sample mean. The procedure is evaluated empirically for a variety of output sequences. The performance measures considered are bias, confidence interval coverage, mean confidence interval width, mean run length, and mean amount of deleted data. If the output sequence contains a strong transient, then inclusion of a test for stationarity in the run length control procedure results in point estimates with lower bias, narrower confidence intervals, and shorter run lengths than when no check for stationarity is performed. If the output sequence contains no initial transient, then the performance measures of the procedure with a stationarity test are only slightly degraded from those of the procedure without such a test. If the run length is short relative to the extent of the initial transient, the stationarity tests may not be powerful enough to detect the transient, resulting in a procedure with unreliable point and interval estimates.
Article
Mathematical models are often used to describe the sales and adoption patterns of products in the years following their launch, and one of the most popular of these models is the Bass model. However, using this model to forecast sales time series for new products is problematic because there is no historic time series data with which to estimate the model's parameters. One possible solution is to fit the model to the sales time series of analogous products that were launched in an earlier time period and to assume that the parameter values identified for the analogy are applicable to the new product. In this paper, we investigate the effectiveness of this approach by applying four forecasting methods based on analogies (and variants of these methods) to the sales of consumer electronics products marketed in the USA. We found that all the methods tended to lead to forecasts with high absolute percentage errors, which is consistent with other studies of new product sales forecasting. The use of the means of published parameter values for analogies led to higher errors than the parameters we estimated from our own data. When using our own data, averaging the parameter values of multiple analogies, rather than relying on a single most-similar product, led to improved accuracy. However, there was little to be gained by using more than five or six analogies.
Article
Purpose ‐ The purpose of this study is to propose a systematic method for forecasting the diffusion of technology in the pre-launch stage. Design/methodology/approach ‐ The authors designed survey question items that are familiar to interviewees and algebraically transformable into the parameters of a logistic diffusion model. In addition, they developed a procedure that reduces inconsistency in interviewee responses, removes outliers, and verifies conformability, in order to reduce error and yield robust estimation results. Findings ‐ The results show that the authors' method performed better, in terms of sum of squared errors, than an existing survey-based method, a regression method, and the guessing-by-analogy method in the empirical cases of digital media broadcasting and internet protocol television. Specifically, the authors' method can reduce error by using the conformability and outlier tests, while the consistency factor contributes to determining the final estimate from the personal estimates. Research limitations/implications ‐ The procedure proposed in this study is confined to the presented logistic model. Future research should aim to extend its application to other representative diffusion models such as the Bass model and the Gompertz model. Practical implications ‐ The authors' method provides better-quality forecasts for innovative new products and services compared with the guessing-by-analogy method, and it contributes to managerial decisions such as those in production planning. Originality/value ‐ The authors introduce the concepts of conformability and consistency in order to reduce the error arising from personal biases and mistakes. Based on these concepts, they develop a procedure that yields robust estimation results with less error.
Article
Studies estimating the Bass model and other macro-level diffusion models with an unknown ceiling feature three curious empirical regularities: (i) the estimated ceiling is often close to the cumulative number of adopters in the last observation period, (ii) the estimated coefficient of social contagion or imitation tends to decrease as one adds later observations to the data set, and (iii) the estimated coefficient of social contagion or imitation tends to decrease systematically as the estimated ceiling increases. We analyze these patterns in detail, focusing on the Bass model and the nonlinear least squares (NLS) estimation method. Using both empirical and simulated diffusion data, we show that NLS estimates of the Bass model coefficients are biased and that they change systematically as one extends the number of observations used in the estimation. We also identify the lack of richness in the data compared to the complexity of the model (known as ill-conditioning) as the cause of these estimation problems. In an empirical analysis of twelve innovations, we assess how the model parameter estimates change as one adds later observations to the data set. Our analysis shows that, on average, a 10% increase in the observed cumulative market penetration is associated with, roughly, a 5% increase in estimated market size m, a 10% decrease in the estimated coefficient of imitation q, and a 15% increase in the estimated coefficient of innovation p. A simulation study shows that the NLS parameter estimates of the Bass model change systematically as one adds later observations to the data set, even in the absence of model misspecification. We find about the same effect sizes as in the empirical analysis. The simulation also shows that the estimates are biased and that the amount of bias is a function of (i) the amount of noise in the data, (ii) the number of data points, and (iii) the difference between the cumulative penetration in the last observation period and the true penetration ceiling (i.e., the extent of right censoring). All three conditions affect the level of ill-conditioning in the estimation, which, in turn, affects bias in NLS regression. In situations consistent with marketing applications, m can be underestimated by 20%, p underestimated by the same amount, and q overestimated by 30%. The existence of a downward bias in the estimate of m and an upward bias in the estimate of q, and the fact that these biases become smaller as the number of data points increases and the censoring decreases, can explain why systematic changes in the parameter estimates are observed in many applications. A reduced bias, though, is not the only possible explanation for the systematic change in parameter estimates observed in empirical studies. Not accounting for growth in the population, for the effect of economic and marketing variables, or for population heterogeneity is likely to result in increasing estimates of m and decreasing estimates of q as well. In an analysis of six innovations, however, we find that attempts to address possible model misspecification problems by making the model more flexible and adding free parameters result in larger rather than smaller systematic changes in the estimates. The bias and systematic change problems we identify are sufficiently large to make long-term predictive, prescriptive, and descriptive applications of Bass-type models problematic. Hence, our results should be of interest to diffusion researchers as well as to users of diffusion models, including market forecasters and strategic market planners.
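In the spirit of the paper's simulation evidence, here is a small sketch (illustrative noise and parameter values) that fits the closed-form Bass curve by NLS on increasingly long prefixes of the same series; the estimates typically drift as right censoring is relieved:

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cum(t, p, q, m):
    """Closed-form cumulative Bass adoptions."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

rng = np.random.default_rng(42)
t = np.arange(1, 21, dtype=float)
true = bass_cum(t, 0.02, 0.4, 1000.0)
obs = true + rng.normal(0, 15.0, t.size)           # noisy cumulative adoptions

p0 = (0.01, 0.3, 1.2 * obs.max())
for cut in (10, 14, 20):                           # decreasing right-censoring
    est, _ = curve_fit(bass_cum, t[:cut], obs[:cut], p0=p0, maxfev=20000)
    print(f"first {cut} obs -> p={est[0]:.3f}, q={est[1]:.3f}, m={est[2]:.0f}")
```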
Article
We conducted research for planning the launch of a satellite television product, leading to a prelaunch forecast of subscriptions of satellite television over a five-year horizon. The forecast was based on the Bass model. We derived parameters of the model in part from stated-intentions data from potential consumers and in part by guessing by analogy. The 1992 forecast of the adoption and diffusion of satellite television proved to be quite good in comparison with actual subscriptions over the five-year period from 1994 through 1999.
Article
Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-t family of conditionally conjugate priors for hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma family of "noninformative" prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-t family when the number of groups is small and in other settings where a weakly informative prior is desired. We also illustrate the use of the half-t family for hierarchical modeling of multiple variance parameters such as arise in the analysis of variance.
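A quick sketch contrasting draws of a hierarchical standard deviation under a half-Cauchy prior (half-t with 1 degree of freedom) and under an inverse-gamma(ε, ε) prior on the variance; the hyperparameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100000

# Half-Cauchy prior on the sd, scale 5 (a weakly informative half-t member).
half_cauchy_sd = np.abs(rng.standard_cauchy(n)) * 5.0

# "Noninformative" inverse-gamma(0.1, 0.1) prior on the variance:
# if V ~ IG(a, b) then 1/V ~ Gamma(a, rate=b); take sqrt for the sd.
inv_gamma_sd = 1.0 / np.sqrt(rng.gamma(0.1, 1 / 0.1, n))

for name, s in [("half-Cauchy(5)", half_cauchy_sd), ("IG(0.1, 0.1)", inv_gamma_sd)]:
    print(name, "sd quantiles (25/50/75%):", np.quantile(s, [0.25, 0.5, 0.75]).round(3))
```

The inverse-gamma draws exhibit an extreme spread of implied standard deviations, a numerical hint of the sensitivity problems the paper documents.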
Article
We generalize the method proposed by Gelman and Rubin (1992a) for monitoring the convergence of iterative simulations by comparing the between and within variances of multiple chains, in order to obtain a family of tests for convergence. We review methods of inference from simulations in order to develop convergence-monitoring summaries that are relevant for the purposes for which the simulations are used. We recommend applying a battery of tests for mixing based on the comparison of inferences from individual sequences and from the mixture of sequences. Finally, we discuss multivariate analogues for assessing the convergence of several parameters simultaneously.
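The between/within comparison can be sketched in a few lines for a scalar parameter; this follows the standard textbook form of the potential scale reduction factor, without the refinements discussed in the paper:

```python
import numpy as np

def rhat(chains):
    """Gelman-Rubin potential scale reduction factor.
    chains: array of shape (m_chains, n_draws) for one scalar parameter."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled posterior variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
chains = rng.normal(0, 1, size=(4, 1000))      # four well-mixed chains
print(rhat(chains))                            # close to 1 indicates convergence
```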
Book
Getting an innovation adopted is difficult; a common problem is increasing the rate of its diffusion. Diffusion is the communication of an innovation through certain channels over time among members of a social system. It is a communication whose messages are concerned with new ideas; it is a process where participants create and share information to achieve a mutual understanding. Initial chapters of the book discuss the history of diffusion research, some major criticisms of diffusion research, and the meta-research procedures used in the book. This text is the third edition of this well-respected work. The first edition was published in 1962, and the fifth edition in 2003. The book's theoretical framework relies on the concepts of information and uncertainty. Uncertainty is the degree to which alternatives are perceived with respect to an event and the relative probabilities of these alternatives; uncertainty implies a lack of predictability and motivates an individual to seek information. A technological innovation embodies information, thus reducing uncertainty. Information affects uncertainty in a situation where a choice exists among alternatives; information about a technological innovation can be software information or innovation-evaluation information. An innovation is an idea, practice, or object that is perceived as new by an individual or another unit of adoption; an innovation presents an individual or organization with a new alternative or new means of solving problems. Whether new alternatives are superior is not precisely known by problem solvers. Thus people seek new information. Information about new ideas is exchanged through a process of convergence involving interpersonal networks. Thus, the diffusion of innovations is a social process that communicates perceived information about a new idea; it produces an alteration in the structure and function of a social system, producing social consequences. Diffusion has four elements: (1) an innovation that is perceived as new, (2) communication channels, (3) time, and (4) a social system (members jointly problem-solving to accomplish a common goal). Diffusion systems can be centralized or decentralized. The innovation-development process has five steps, passing from recognition of a need, through R&D, commercialization, and diffusion and adoption, to consequences. Time enters the diffusion process in three ways: (1) the innovation-decision process, (2) innovativeness, and (3) the rate of the innovation's adoption. The innovation-decision process is an information-seeking and information-processing activity that motivates an individual to reduce uncertainty about the (dis)advantages of the innovation. There are five steps in the process: (1) knowledge for an adoption/rejection/implementation decision, (2) persuasion to form an attitude, (3) decision, (4) implementation, and (5) confirmation (reinforcement or rejection). Innovations can also be re-invented (changed or modified) by the user. The innovation-decision period is the time required to pass through the innovation-decision process. Rates of adoption of an innovation depend on (and can be predicted by) how its characteristics are perceived in terms of relative advantage, compatibility, complexity, trialability, and observability. The diffusion effect is the increasing, cumulative pressure from interpersonal networks to adopt (or reject) an innovation. Overadoption is an innovation's adoption when experts suggest its rejection.
Diffusion networks convey innovation-evaluation information to decrease uncertainty about an idea's use. The heart of the diffusion process is the modeling and imitation by potential adopters of their network partners who have adopted already. Change agents influence innovation decisions in a direction deemed desirable. Opinion leadership is the degree to which individuals influence others' attitudes.
Article
A broad class of normal and non-normal models for processes with non-negative and non-decreasing mean functions is presented. This class is called exponential growth models, and the inferential procedure is based on dynamic Bayesian forecasting techniques. The aim is to produce the analysis on the original variable, avoiding transformations and giving the practitioner the opportunity to communicate easily with the model. This class of models includes the well-known exponential, logistic and Gompertz models. Models for count data are compared with the Normal models using the appropriate variance law. In the examples, the novel aspects of this class of models are illustrated, showing improved performance over simple, standard linear models.
Article
Since growth curves are often used to produce medium- to long-term forecasts for planning purposes, it is obviously of value to be able to associate an interval with the forecast trend. The problems in producing prediction intervals are well described by Chatfield. The additional problems in this context are the intrinsic non-linearity of the estimation procedure and the requirement for a prediction region rather than a single interval. The approaches considered are a Taylor expansion of the variance of the forecast values, an examination of the joint density of the parameter estimates, and bootstrapping. The performance of the resultant intervals is examined using simulated data sets. Prediction intervals for real data are produced to demonstrate their practical value.
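Of the approaches considered, bootstrapping is the easiest to sketch; here is a minimal residual-bootstrap prediction interval for a fitted logistic trend (all data and values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, k, t0):
    """Logistic growth curve with ceiling m, rate k, and midpoint t0."""
    return m / (1 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(3)
t = np.arange(1, 16, dtype=float)
y = logistic(t, 100, 0.5, 8) + rng.normal(0, 2, t.size)   # synthetic history

theta, _ = curve_fit(logistic, t, y, p0=(90, 0.4, 7), maxfev=10000)
resid = y - logistic(t, *theta)

t_new, boot = 20.0, []
for _ in range(2000):
    # Refit on a pseudo-series built from resampled residuals, then forecast.
    y_star = logistic(t, *theta) + rng.choice(resid, t.size, replace=True)
    th_star, _ = curve_fit(logistic, t, y_star, p0=theta, maxfev=10000)
    boot.append(logistic(t_new, *th_star) + rng.choice(resid))

lo, hi = np.quantile(boot, [0.05, 0.95])
print(f"90% bootstrap prediction interval at t={t_new}: [{lo:.1f}, {hi:.1f}]")
```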
Article
The paper that I authored and that was published in Management Science in 1969 (Bass 1969) has become widely known as the "Bass Model" (see Morrison and Raju 2004). The model of the diffusion of new products and technologies developed in the paper is one of the most widely applied models in management science. It was especially gratifying for me to learn that INFORMS members have voted the "Bass Model" paper as one of the Top 10 Most Influential Papers published in the 50-year history of Management Science in connection with the 50th anniversary of the journal. In this commentary on the paper I shall discuss some background and history of the development of the paper, the reasons why the model has been influential, some important extensions of the model, some examples of applications, and some examples of the frontiers of research involving the Bass Model. In the current period, in which there is much discussion about the marketing of applications of management science methods and practice, I hope that this commentary will be useful in providing insights about some of the properties of models that will be applied.
Article
In this issue, we reproduce 10 of the most important papers published in Management Science from 1954 to 2003. Each paper is followed by a commentary by the author(s) and other scholars that offers insights into the background, creation, or subsequent impact of the paper.
Article
A method for obtaining early forecasts for the sales of new durable products based on Hierarchical Bayes procedures is presented. The Bass model is implemented within this framework by using a nonlinear regression approach. The linear regression model has been shown to have numerous shortcomings. Two stages of prior distributions use sales data from a variety of dissimilar new products. The first prior distribution describes the variation among the parameters of the products, and the second prior distribution expresses the uncertainty about the hyperparameters of the first prior. Before observing sales data for a new product launch, the forecasts are the expectation of the first stage prior distribution. As sales data become available, the forecasts adapt to the unique features of the product. Early forecasting and the adaptive capability are the two major payoffs from using Hierarchical Bayes procedures. This contrasts with other estimation approaches which either use a linear model or provide reasonable forecasts only after the inflection point of the time series of the sales. The paper also indicates how the Hierarchical Bayes procedure can be extended to include exogenous variables.
Article
Schmittlein and Mahajan (Schmittlein, D. C., V. Mahajan. 1982. Maximum likelihood estimation for an innovation diffusion model of new product acceptance. (Winter) 57–78.) made an important improvement in the estimation of the Bass (Bass, F. M. 1969. A new product growth model for consumer durables. (January) 215–227.) diffusion model by appropriately aggregating the continuous time model over the time intervals represented by the data. However, by restricting consideration to only sampling errors and ignoring all other errors (such as the effects of excluded marketing variables), their Maximum Likelihood Estimation (MLE) seriously underestimates the standard errors of the estimated parameters. This note uses an additive error term to model sampling and other errors in the Schmittlein and Mahajan formulation. The proposed Nonlinear Least Squares (NLS) approach produces valid standard error estimates. The fit and the predictive validity are roughly comparable for the two approaches. Although the empirical applications reported in this paper are in the context of the Bass diffusion model, the NLS approach is also applicable to other diffusion models for which cumulative adoption can be expressed as an explicit function of time.
Article
Contents: Essential Concepts from Distribution Theory; The Goal of Inference and Bayes' Theorem; Conditioning and the Likelihood Principle; Prediction and Bayes; Summarizing the Posterior; Decision Theory, Risk, and the Sampling Properties of Bayes Estimators; Identification and Bayesian Inference; Conjugacy, Sufficiency, and Exponential Families; Regression and Multivariate Analysis Examples; Integration and Asymptotic Methods; Importance Sampling; Simulation Primer for Bayesian Problems; Simulation from Posterior of Multivariate Regression Model.
Article
A consistent pattern observed for really new household consumer durables is a takeoff, or dramatic increase in sales, early in their history. The takeoff tends to appear as an elbow-shaped discontinuity in the sales curve, showing an average sales increase of over 400%. In contrast, most marketing textbooks as well as diffusion models generally depict the growth of new consumer durables as a smooth sales curve. Our discussions with managers indicate that they have little idea about the takeoff and its associated characteristics. Many managers did not even know that most successful new consumer durables had a distinct takeoff. Their sales forecasts tend to show linear growth. Yet, knowledge about the takeoff is crucial for managers to decide whether to maintain, increase, or withdraw support of new products. It is equally important for industry analysts who advise investors and manufacturers of complementary and substitute products. Although previous studies have urged researchers to examine the takeoff, no research has addressed this critical event. While diffusion models are commonly used to study new product sales growth, they do not explicitly consider a new product's takeoff in sales. Indeed, diffusion researchers frequently use data only from the point of takeoff. Therefore, nothing is known about the takeoff or models appropriate for this event. Our study provides the first analysis of the takeoff. In particular, we address three key questions: (i) How much time does a newly introduced product need to take off? (ii) Does the takeoff have any systematic patterns? (iii) Can we predict the takeoff? We begin our study by developing an operational measure to determine when the takeoff occurs. We found that when the base level of sales is small, a relatively large percentage increase could occur without signaling the takeoff. Conversely, when the base level of sales is large, the takeoff sometimes occurs with a relatively small percentage increase in sales. Therefore, we developed a "threshold for takeoff": a plot of percentage sales growth relative to a base level of sales, common across all categories. We define the takeoff as the first year in which an individual category's growth rate relative to base sales crosses this threshold. The threshold measure correctly identifies the takeoff in over 90% of our categories. We model the takeoff with a hazard model because of its advantages for analyzing time-based events. We consider three primary independent variables: price, year of introduction, and market penetration, as well as several control variables. The hazard model fits the pattern of takeoffs very well, with price and market penetration being strong correlates of takeoff. Our results provide potential generalizations about the time to takeoff and the price reduction, nominal price, and penetration at takeoff. In particular, we found that:
• On average, for 16 post-World War II categories:
  – the price at takeoff is 63% of the introductory price;
  – the time to takeoff from introduction is six years;
  – the penetration at takeoff is 1.7%.
• The time to takeoff is decreasing for more recent categories. For example, the time to takeoff is 18 years for categories introduced before World War II, but only six years for those introduced after World War II.
• Many of the products in our sample had a takeoff near three specific price points (in nominal dollars): $1000, $500, and $100.
In addition, we show how the hazard model can be used to predict the takeoff.
The model predicts takeoff one year ahead with an expected average error of 1.2 years. It predicts takeoff at a product's introduction with an expected average error of 1.9 years. Even against the simple mean time to takeoff of six years for recent categories, the model's performance represents a tremendous improvement in prediction. It represents an immeasurable improvement for managers who currently have no idea how long it takes for a new product to take off. The threshold rule for determining takeoff can be used to distinguish between a large increase in sales and a real takeoff. Some limitations of this study could provide fruitful areas for future research. Our independent variables suffer from endogeneity bias, so alternative variables or methods could address this limitation. Also, the takeoff may be related to additional variables such as relative advantage over substitutes and the presence of complementary products. Finally, examination of sales from takeoff to their leveling off could be done with an integrated model of takeoff and sales growth, or with the hazard model we propose. Generalizations about this period of sales growth could also be of tremendous importance to managers of new products.
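A minimal sketch of the threshold rule follows. The growth-threshold schedule used here is hypothetical (the paper derives its own threshold curve from the data), but it captures the logic that a small sales base demands a larger percentage jump before takeoff is declared.

```python
# A minimal sketch of the threshold-for-takeoff rule; the threshold schedule
# below is hypothetical, not the curve estimated in the paper.
import numpy as np

def growth_threshold(base_sales):
    """Hypothetical: required % growth falls as the sales base rises."""
    if base_sales < 10_000:
        return 4.0    # +400% needed at a small base
    if base_sales < 100_000:
        return 1.0    # +100% at a medium base
    return 0.5        # +50% at a large base

def takeoff_year(sales):
    """First year whose growth rate over the prior year crosses the threshold."""
    for year in range(1, len(sales)):
        growth = (sales[year] - sales[year - 1]) / sales[year - 1]
        if growth >= growth_threshold(sales[year - 1]):
            return year
    return None

sales = np.array([2_000, 3_000, 4_500, 30_000, 80_000, 150_000])
print(takeoff_year(sales))  # -> 3: the +567% jump crosses the threshold
```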
Article
A maximum likelihood approach is proposed for estimating an innovation diffusion model of new product acceptance originally considered by Bass (Bass, F. M. 1969. A new product growth model for consumer durables. Management Sci. 15 (January) 215–227.). The suggested approach allows: (1) computation of approximate standard errors for the diffusion model parameters, and (2) determination of the required sample size for forecasting the adoption level to any desired degree of accuracy. Using histograms from eight different product innovations, the maximum likelihood estimates are shown to outperform estimates from a model calibrated using ordinary least squares, in terms of both goodness-of-fit measures and one-step-ahead forecasts. However, these advantages are not obtained without cost. The coefficients of innovation and imitation are easily interpreted in terms of the expected adoption pattern, but individual adoption times must be assumed to represent independent draws from this distribution. In addition, instead of standard linear regression, another (simple) program must be employed to estimate the model. Thus, tradeoffs between the maximum likelihood and least squares approaches are also discussed.
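As a rough sketch of the MLE idea under the paper's independence assumption, grouped adoption counts can be treated as multinomial draws over the observation intervals, with a final cell for units that have not yet adopted. The toy counts, the assumed known market size N, and the starting values below are illustrative.

```python
# A rough sketch of MLE for grouped adoption data: interval counts treated as
# independent draws from the Bass distribution (multinomial likelihood over
# intervals plus a "not yet adopted" cell). Data and start values are toys.
import numpy as np
from scipy.optimize import minimize

def bass_F(t, p, q):
    e = np.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

counts = np.array([30, 55, 80, 95, 90, 70])  # adopters per interval (toy data)
N = 1000                                     # market size (assumed known here)
t = np.arange(1, counts.size + 1, dtype=float)

def neg_loglik(theta):
    p, q = np.exp(theta)                     # optimize on the log scale
    probs = bass_F(t, p, q) - bass_F(t - 1, p, q)
    tail = 1 - bass_F(t[-1], p, q)           # probability of not yet adopting
    if np.any(probs <= 0) or tail <= 0:
        return np.inf
    return -(counts @ np.log(probs) + (N - counts.sum()) * np.log(tail))

res = minimize(neg_loglik, x0=np.log([0.01, 0.3]), method="Nelder-Mead")
p_hat, q_hat = np.exp(res.x)
print(p_hat, q_hat)
```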
Article
Wind power technology is analyzed in terms of diffusion, with incentive effects introduced as exogenous dynamics in the Generalized Bass Model (GBM) framework. Estimates and short-term forecasts of the life-cycles of wind power are provided for the US and Europe, as they have similar geographic areas, as well as for some leading European countries. GBMs have the best performance in model selection, and are ranked first in terms of forecast accuracy over a set of different accuracy measures and forecasting horizons, relative to the Standard Bass, Logistic, and Gompertz models.
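A hedged sketch of the GBM mechanics: an intervention function x(t) rescales the diffusion clock through its cumulative sum, and x(t) = 1 everywhere recovers the standard Bass model. The parameter values and the incentive window below are invented, not the paper's wind-power estimates.

```python
# A hedged sketch of the Generalized Bass Model with one exogenous shock,
# e.g. an incentive programme; p, q, m and the shock window are made up.
import numpy as np

def gbm_sales(m, p, q, x):
    """Per-period GBM sales; x[t] >= 0 scales the diffusion 'clock'."""
    X = np.cumsum(x)                      # cumulative intervention function
    e = np.exp(-(p + q) * X)
    F = (1 - e) / (1 + (q / p) * e)       # GBM cumulative adoption fraction
    return m * np.diff(np.concatenate(([0.0], F)))

T = 25
x = np.ones(T)
x[5:10] += 0.8                            # incentive raises x(t) in years 6-10
sales = gbm_sales(m=50_000, p=0.005, q=0.35, x=x)
print(sales.round(1))                     # sales accelerate inside the window
```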
Article
We introduce a cross-population, adaptive diffusion model that can be used to forecast the diffusion of an innovation at early stages of the diffusion curve. In this model, diffusion patterns across the populations depend on each other. We extend the model presented by Putsis, Balasubramanian, Kaplan and Sen (1997) [Putsis, W.P., Balasubramanian, S., Kaplan, E.H., Sen, S.K., 1997. Mixing behavior in cross-country diffusion. Marketing Science, 16 (4), 354–369.] by introducing time-varying parameters. Furthermore, we apply the matching procedure as proposed by Dekimpe, Parker and Sarvary (1998) [Dekimpe, M.G., Parker, Ph.M., Sarvary, M., 1998. Staged estimation of international diffusion models: An application to global cellular telephone adoption. Technological Forecasting and Social Change, 57 (1–2), 105–132.]. We adaptively estimate the model parameters using an extension of the augmented Kalman filter with continuous states and discrete observations developed by Xie, Song, Sirbu and Wang (1997) [Xie, J., Song, M., Sirbu, M., Wang, Q., 1997. Kalman filter estimation of new product diffusion models. Journal of Marketing Research, 34 (3), 378–393.]. We apply the method to the diffusion of both Internet access at home and mobile telephony among households in 15 countries of the European Union. The results show that forecasts obtained from our model outperform those from independent diffusion models for each country separately, as well as forecasts from the mixing-behavior model by Putsis et al. (1997).
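The paper's augmented continuous/discrete filter is considerably richer than anything that fits here, but the adaptive idea can be caricatured with a scalar Kalman filter tracking a single time-varying coefficient under a random-walk state equation. Everything below (the drifting coefficient, regressors, and noise levels) is invented for illustration.

```python
# A much-simplified illustration of the adaptive idea: a scalar Kalman filter
# tracking one time-varying coefficient; the paper's filter is far richer.
import numpy as np

def kalman_track(y, H, Q=1e-4, R=0.05, x0=0.3, P0=1.0):
    """Track x_t in y_t = H_t * x_t + v_t, with x_t = x_{t-1} + w_t."""
    x, P, path = x0, P0, []
    for yt, Ht in zip(y, H):
        P = P + Q                          # predict (random-walk state)
        K = P * Ht / (Ht * P * Ht + R)     # Kalman gain
        x = x + K * (yt - Ht * x)          # update with the new observation
        P = (1 - K * Ht) * P
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(0)
true_q = np.linspace(0.2, 0.5, 40)         # an imitation coefficient drifting up
H = rng.uniform(0.5, 1.5, 40)              # known regressors (stand-in)
y = H * true_q + rng.normal(0, 0.05, 40)   # noisy observations
print(kalman_track(y, H)[-5:])             # estimates adapt toward ~0.5
```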
Article
Innovative markets are those which are undergoing rapid development due to changing customer needs or improving technological capability. Because these markets are so dynamic, new products are introduced frequently, and there is a high degree of uncertainty regarding their potential for success. We review literature relevant to firm decision-making, including such topics as timing the adoption of a technological innovation, determining optimal spending on an innovative technology, and predicting the success of a class of products which are based on a particular innovative technology. We consider these problems both at the micro (individual firm) and macro (aggregate) levels.
Article
The wealth of research into modelling and forecasting the diffusion of innovations is impressive and confirms its continuing importance as a research topic. The main models of innovation diffusion were established by 1970. (Although the title implies that 1980 is the starting point of the review, we allowed ourselves to relax this constraint when necessary.) Modelling developments in the period from 1970 onwards have modified the existing models by adding greater flexibility in various ways. The objective here is to review the research in these different directions, with an emphasis on their contribution to improving forecasting accuracy, or adding insight to the problem of forecasting. The main categories of these modifications are: the introduction of marketing variables in the parameterisation of the models; generalising the models to consider innovations at different stages of diffusion in different countries; and generalising the models to consider the diffusion of successive generations of technology. We find that, in terms of practical impact, the main application areas are the introduction of consumer durables and telecommunications. In spite of (or perhaps because of) the efforts of many authors, few research questions have been finally resolved. For example, although there is some convergence of ideas on the most appropriate way to include marketing-mix variables in the Bass model, there are several viable alternative models. Future directions of research are likely to include forecasting new product diffusion with little or no data, forecasting with multinational models, and forecasting with multi-generation models; work on normative modelling in this area has already been published.
Article
Diffusion processes of new products and services have become increasingly complex and multifaceted in recent years. Consumers today are exposed to a wide range of influences that include word-of-mouth communications, network externalities, and social signals. Diffusion modeling, the research field in marketing that seeks to understand the spread of innovations throughout their life cycle, has adapted to describe and model these influences. We discuss efforts to model these influences between and across markets and brands. In the context of a single market, we focus on social networks, network externalities, takeoffs and saddles, and technology generations. In the context of cross-markets and brands, we discuss cross-country influences, differences in growth across countries, and effects of competition on growth. On the basis of our review, we suggest that the diffusion framework, if it is to remain a state-of-the-art paradigm for market evolution, must broaden in scope from focusing on interpersonal communications to encompass the following definition: Innovation diffusion is the process of the market penetration of new products and services that is driven by social influences, which include all interdependencies among consumers that affect various market players with or without their explicit knowledge. Although diffusion modeling has been researched extensively for the past 40 years, we believe that this field of study has much more to offer in terms of describing and incorporating current market trends, which include the opening up of markets in emerging economies, web-based services, online social networks, and complex product–service structures.
Article
Since the 1960s, a number of new product diffusion models have been developed and applied in marketing. This paper reviews the theoretical origins, specifications, data requirements, estimation procedures, and pre-launch calibration possibilities for these aggregate models. Following a critical review of both the problems and the potential benefits of these models, a number of suggestions are made with respect to future academic and applied research involving new product diffusion forecasting.
Article
Innovation diffusion processes are generally described at the aggregate level with models like the Bass Model (BM) and the Generalized Bass Model (GBM). However, the recognized importance of communication channels between agents has recently suggested the use of agent-based models, like Cellular Automata. We argue that an adoption or purchase process is nested in a communication network that evolves dynamically and indirectly generates a latent non-constant market potential affecting the adoption phase. Using Cellular Automata, we propose a two-stage model of an innovation diffusion process. First we describe a communication network, an Automata Network, necessary for the "awareness" of an innovation. Then, we model a nested process depicting the proper purchase dynamics. Through a mean-field approximation we propose a continuous representation of the discrete-time equations derived from our nested two-stage model. This constitutes a special non-autonomous Riccati equation, not yet described in well-known international catalogues. The main results refer to the closed-form solution, which includes a general dynamic market potential, and to the corresponding statistical analysis for identification and inference. We discuss an application to the diffusion of a new pharmaceutical drug.
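A toy rendering of the nested two-stage idea (not the paper's mean-field model): awareness first spreads through a cellular-automata communication network, creating a latent, non-constant pool of potential adopters, and purchase then occurs only among the aware. The grid size and all rates below are invented.

```python
# A toy two-stage cellular-automata sketch: awareness must spread through a
# communication network before purchase can occur. All rates are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 50
aware = np.zeros((n, n), bool)
adopted = np.zeros((n, n), bool)
aware[n // 2, n // 2] = True               # a single initial seed of awareness

def neighbor_count(grid):
    """Number of active von Neumann neighbors for each cell."""
    s = np.zeros_like(grid, dtype=int)
    s[1:, :] += grid[:-1, :]; s[:-1, :] += grid[1:, :]
    s[:, 1:] += grid[:, :-1]; s[:, :-1] += grid[:, 1:]
    return s

for step in range(100):
    # stage 1: awareness diffuses between neighbors (latent market potential)
    p_aware = 1 - (1 - 0.2) ** neighbor_count(aware)
    aware |= rng.random((n, n)) < p_aware
    # stage 2: purchase nested in awareness (innovation + imitation effects)
    p_buy = 0.01 + 0.1 * neighbor_count(adopted) / 4
    adopted |= aware & ~adopted & (rng.random((n, n)) < p_buy)

print(aware.mean(), adopted.mean())        # awareness leads, adoption follows
```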