This paper describes a dynamic model of the New Product Development (NPD) process. The model emerged from best practices observed in our research across a range of settings. It helps to identify an IT company's NPD activities and place them within the frame of the overall NPD process[1]. It has proved to be a useful tool for organizing data on an IT company's NPD activities without imposing an excessively restrictive research methodology on the model of NPD. The framework underpinning the model supports research into the methods used within an IT company's NPD process, thus promoting understanding and improvement of the simulation process[2]. IT companies have tested many techniques and a variety of practices designed to improve the validity and efficacy of their NPD process[3]. Supported by the model, this research examines how widely accepted the stated tactics are and what impact these best tactics have on NPD performance. The main assumption of this study is that simulating the generation of new ideas[4] leads to greater NPD effectiveness and more successful products in IT companies. Under the model, practices concerning the implementation strategies of NPD (product selection, objectives, leadership, marketing strategy and customer satisfaction) are all more widely accepted than best practices related to controlling the application of NPD (process control, measurements, results). In linking simulation with impact, our results indicate that product success depends on developing strong products and ensuring organizational emphasis through proper project selection. Project activities strengthen both product and project success. The success of IT products and services also depends on monitoring the NPD procedure through project management and ensuring team consistency with group rewards. Sharing experience between projects can positively influence the NPD process.
Businesses from all Information Technology sectors use market segmentation[1] in their product development[2] and strategic planning[3]. Many studies have concluded that market segmentation is the norm of modern marketing. With the rapid development of technology, customer needs are becoming increasingly diverse; they can no longer be satisfied by a one-size-fits-all mass marketing approach. IT businesses can cope with this diversity by pooling customers[4] with similar requirements, buying behavior and purchasing power into segments. Better-informed choices about which segments are the most appropriate to serve can then be made, making the best use of finite resources.
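For illustration only, a minimal sketch of the segment-pooling idea described above, assuming synthetic customer data with made-up attributes and k-means clustering; this is an illustrative analogue, not the paper's simulation model.

```python
# Minimal sketch of pooling customers with similar requirements into segments.
# The attribute names and the choice of k-means are illustrative assumptions,
# not the segmentation model described in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical customer attributes: annual spend, purchase frequency, support tickets
customers = rng.normal(size=(200, 3)) * [500.0, 4.0, 2.0] + [2000.0, 12.0, 3.0]

X = StandardScaler().fit_transform(customers)          # put attributes on a common scale
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for s in range(4):
    members = customers[segments == s]
    print(f"segment {s}: {len(members)} customers, mean spend {members[:, 0].mean():.0f}")
```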
Despite the attention that segmentation attracts and the resources invested in it, growing evidence suggests that businesses have problems operationalizing segmentation[5]. These problems take various forms. It may have been assumed that the segmentation process necessarily results in homogeneous groups of customers for whom appropriate marketing programs and procedures can be developed; nevertheless, the segmentation process a company follows can fail. This raises concerns about what causes segmentation failure and how it might be overcome. To prevent such failure, we created a dynamic simulation model of market segmentation[6] based on the key factors driving segmentation.
The purpose of this paper is to examine the impact of knowledge [1] creation mode (e.g. goal-driven and goal-free) and organizational culture on knowledge creation and sharing performance in the context of high-technology (high-tech) companies [2], with the contribution of a Dynamic Simulation Model. Both the goal-free and the goal-framed creation modes are likely to support knowledge creation, while the goal-driven mode is not likely to be favorable for it. The paper leverages the system dynamics paradigm to conduct sustainable enterprise modelling and the iThink system to implement the models. High-tech companies that are frequently looking for new ideas for product design [3] and manufacturing technologies [4] are more likely to adopt the goal-free creation mode. High-tech companies that would like to emphasize goal achievement with respect to creation in manufacturing should form an organizational culture characterized by market competition [5]. Also, a company with goal-free and/or goal-framed creation modes is more likely to be willing to frame its strategic decisions (or goals) and then freely look for creative ways to reach them.
In recent years Singular Spectrum Analysis (SSA), a relatively novel but powerful technique in time series analysis, has been developed and applied to many practical problems across different fields. In this paper we review recent developments in the theoretical and methodological aspects of SSA from the perspective of analyzing and forecasting economic and financial time series, and also present some new results. In particular, we (a) show what are the implications of SSA for the frequently invoked unit root hypothesis of economic and financial time series; (b) introduce two new versions of SSA, based on the minimum variance estimator and based on perturbation theory; (c) discuss the concept of causality in the context of SSA; and (d) provide a variety of simulation results and real world applications, along with comparisons with other existing methodologies.
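As a point of reference for the discussion above, a minimal sketch of the basic SSA decomposition and reconstruction step (embedding, SVD of the trajectory matrix, grouping of leading components, diagonal averaging); the window length and the number of retained components are illustrative choices, not values prescribed by the paper.

```python
# Minimal sketch of basic SSA: embedding, SVD of the trajectory matrix,
# grouping of the leading components, and diagonal averaging (Hankelisation).
# Window length L and the number of retained components r are illustrative.
import numpy as np

def ssa_reconstruct(x, L, r):
    """Reconstruct the series from the r leading SSA components."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # Embedding: L x K trajectory (Hankel) matrix
    X = np.column_stack([x[i:i + L] for i in range(K)])
    # SVD and rank-r approximation
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]
    # Diagonal averaging back to a series
    recon = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        recon[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1.0
    return recon / counts

t = np.arange(300)
series = np.sin(2 * np.pi * t / 50) + 0.01 * t + 0.3 * np.random.default_rng(1).normal(size=300)
trend_plus_cycle = ssa_reconstruct(series, L=100, r=3)
```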
Singular spectrum analysis is a natural generalization of principal component methods to time series data. In this paper we propose an imputation method, to be used with singular spectrum-based techniques, which is based on a weighted combination of the forecasts and hindcasts yielded by the recurrent forecasting method. Despite its ease of implementation, the results obtained suggest an overall good fit of our method, which achieves adjustment ability similar to that of the alternative method according to several measures of predictive performance.
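A rough sketch of the forecast/hindcast combination idea, assuming basic recurrent SSA forecasting, equal weights, and arbitrary window/rank defaults; the authors' exact weighting scheme may differ.

```python
# Rough sketch: fill a gap as a weighted combination of a recurrent-SSA
# forecast (from the data before the gap) and a hindcast (from the reversed
# data after the gap). Equal weights and the default L, r are assumptions.
import numpy as np

def ssa_rforecast(x, L, r, steps):
    """Recurrent SSA forecast of `steps` values ahead of series x."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]
    pi = Ur[-1, :]                        # last coordinates of the eigenvectors
    nu2 = np.sum(pi ** 2)
    R = (Ur[:-1, :] @ pi) / (1.0 - nu2)   # coefficients of the linear recurrence
    # Reconstruct the signal part, then continue it with the recurrence
    Xr = (Ur * s[:r]) @ Vt[:r, :]
    recon = np.zeros(N); counts = np.zeros(N)
    for j in range(K):
        recon[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1.0
    y = list(recon / counts)
    for _ in range(steps):
        y.append(float(np.dot(R, y[-(L - 1):])))
    return np.array(y[N:])

def impute_gap(x, gap_start, gap_len, L=36, r=2, w=0.5):
    """Weighted combination of forward forecast and backward hindcast over a gap."""
    forward = ssa_rforecast(x[:gap_start], L, r, gap_len)
    backward = ssa_rforecast(x[gap_start + gap_len:][::-1], L, r, gap_len)[::-1]
    return w * forward + (1.0 - w) * backward
```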
In this paper we introduce a new method for robust principal component analysis. Classical PCA is based on the empirical covariance matrix of the data and hence it is highly sensitive to outlying observations. In the past, two robust approaches have been developed. The first is based on the eigenvectors of a robust scatter matrix such as the MCD or an S-estimator, and is limited to relatively low-dimensional data. The second approach is based on projection pursuit and can handle high-dimensional data. Here, we propose the ROBPCA approach which combines projection pursuit ideas with robust scatter matrix estimation. It yields more accurate estimates at noncontaminated data sets and more robust estimates at contaminated data. ROBPCA can be computed fast, and is able to detect exact fit situations. As a byproduct, ROBPCA produces a diagnostic plot which displays and classifies the outliers. The algorithm is applied to several data sets from chemometrics and engineering.
In addition to the well-known warming of ~0.5 °C since the middle of the nineteenth century, global-mean surface temperature records1-4 display substantial variability on timescales of a century or less. Accurate prediction of future temperature change requires an understanding of the causes of this variability; possibilities include external factors, such as increasing greenhouse-gas concentrations5-7 and anthropogenic sulphate aerosols8-10, and internal factors, both predictable (such as El Niño11) and unpredictable (noise12,13). Here we apply singular spectrum analysis14-20 to four global-mean temperature records1-4, and identify a temperature oscillation with a period of 65-70 years. Singular spectrum analysis of the surface temperature records for 11 geographical regions shows that the 65-70-year oscillation is the statistical result of 50-88-year oscillations for the North Atlantic Ocean and its bounding Northern Hemisphere continents. These oscillations have obscured the greenhouse warming signal in the North Atlantic and North America. Comparison with previous observations and model simulations suggests that the oscillation arises from predictable internal variability of the ocean-atmosphere system.
Surface winds and surface ocean hydrography in the subpolar North Atlantic appear to have been influenced by variations in solar output through the entire Holocene. The evidence comes from a close correlation between inferred changes in production rates of the cosmogenic nuclides carbon-14 and beryllium-10 and centennial to millennial time scale changes in proxies of drift ice measured in deep-sea sediment cores. A solar forcing mechanism therefore may underlie at least the Holocene segment of the North Atlantic's "1500-year" cycle. The surface hydrographic changes may have affected production of North Atlantic Deep Water, potentially providing an additional mechanism for amplifying the solar signals and transmitting them globally.
In this paper, we investigate the possibility of using multivariate singular spectrum analysis (SSA), a nonparametric technique in the field of time series analysis, for mortality forecasting. We consider a real data application with 9 European countries: Belgium, Denmark, Finland, France, Italy, Netherlands, Norway, Sweden, and Switzerland, over the period 1900 to 2009, and a simulation study based on the data set. The results show the superiority of multivariate SSA over univariate SSA in terms of forecasting accuracy.
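For illustration, a minimal sketch of the horizontal stacking behind one common multivariate SSA variant, where the trajectory matrices of several series are decomposed jointly so that common structure is shared; the stacking choice, window length and rank are assumptions rather than the settings used in the study.

```python
# Minimal sketch of a horizontal multivariate SSA reconstruction: trajectory
# matrices of several series (e.g. mortality series of different countries)
# are stacked side by side and decomposed jointly. L, r and the horizontal
# (rather than vertical) stacking are illustrative assumptions.
import numpy as np

def mssa_reconstruct(series_list, L, r):
    series_list = [np.asarray(x, dtype=float) for x in series_list]
    Ks = [len(x) - L + 1 for x in series_list]
    blocks = [np.column_stack([x[i:i + L] for i in range(K)])
              for x, K in zip(series_list, Ks)]
    X = np.hstack(blocks)                        # L x (K1 + K2 + ...)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]
    out, col = [], 0
    for x, K in zip(series_list, Ks):
        block = Xr[:, col:col + K]; col += K
        recon = np.zeros(len(x)); counts = np.zeros(len(x))
        for j in range(K):                       # diagonal averaging per series
            recon[j:j + L] += block[:, j]
            counts[j:j + L] += 1.0
        out.append(recon / counts)
    return out
```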
It has been fifteen years since the articulation of the agile manifesto in 2001, which brought great changes in software application development (Dingsøyr et al. 2012). According to the Manifesto for Agile Software Development (agileAlliance.org), agile methods value "(1) individuals and interactions over processes and tools, (2) working software over comprehensive documentation, (3) customer collaboration over contract negotiation, and (4) responding to change over following a plan" (Wadhwa and Sharma 2015; Salo and Abrahamsson 2007; Mushtaq and Qureshi 2012).
Agility is the ability to detect and respond to changing business needs, so as to remain inventive and competitive in a volatile and rapidly changing business environment. The continuous evolution of twenty-first-century technology has forced companies' environments to become increasingly dynamic and organizations to constantly modify their software requirements to adjust to the new environment (Moniruzzaman and Hossain 2013).
A software development process is defined as an aggregation of methods, practices, and techniques that are used to develop and validate software and the resulting product (Wadhwa and Sharma, Adv Comput Sci Inf Technol 2:370–374, 2015). The majority of agile methods promote teamwork, cooperation, and adaptability throughout the life cycle of the development process (Moniruzzaman and Hossain, Global J Comput Sci Technol 13, 2013). The first, and perhaps best known, agile methods, among others, are Scrum and eXtreme Programming (XP) (Salo and Abrahamsson, Softw Process: Improv Pract 12, 2007). Scrum is more focused on management practices for software development, whereas XP emphasizes the activities of implementing software (Mushtaq and Qureshi, Inf Technol Comput Sci 6, 2012). In this paper we explore the principles of the two agile methodologies, XP and Scrum, and propose the appropriate methodology for the development of collaboration tools.
Purpose
– In general, the purpose of a library's communication plan is to create and deliver scientific events aimed, first of all, at covering the extensive demand for scientific conferences. Their primary objective is to raise the brand-name prestige of the organisation acting as the organizing authority. At the same time, this authority, notwithstanding its non-profit charitable profile, aims at financial gains by attracting participants to secure its sustainability. Furthermore, such academic events have contributed to the wide dissemination of the library's brand name to an expanding audience, to the extent of attracting new visitors (Broady-Preston and Lobo, 2011). Among these are academic-nature events whose uptake, from the new visitors' viewpoint, is blocked by a multitude of financial barriers. Considering the economic crisis, the purpose of this paper is the creation of interesting online events in library science, such as online conferences (Broady-Preston and Swain, 2012).
Design/methodology/approach
– This paper highlights the advantages of dynamic systems modelling aimed at developing a successful online conference. In this research, the authors have used design science research methodology for testing the modelling concept.
Findings
– This paper examines the interplay among several dimensions in the development of dynamic models. The validity and usefulness of such models in decision-making has been confirmed by their use in various sectors.
Originality/value
– This paper applies the systems approach and the concepts of dynamic modelling, which are pioneering elements in terms of their nature and evolution.
We briefly describe the methodology of the Singular Spectrum Analysis (SSA), some versions and extensions of the basic version of SSA as well as connections between SSA and subspace-based methods in signal processing. We also briefly touch upon some history of SSA and mention some areas of application of SSA.
Most of the existing time series methods of feature extraction involve complex algorithms, and the extracted features are affected by sample size and noise. In this paper, a simple time series method for bearing fault feature extraction using singular spectrum analysis (SSA) of the vibration signal is proposed. The method is easy to implement and the fault feature is noise immune. SSA is used to decompose the acquired signals into an additive set of principal components. A new approach for the selection of the principal components is also presented. Two methods of feature extraction based on SSA are implemented. In the first method, the singular values (SVs) of the selected SV numbers are adopted as the fault features, and in the second method, the energy of the principal components corresponding to the selected SV numbers is used as the feature. An artificial neural network (ANN) is used for fault diagnosis. The algorithms were evaluated using two experimental datasets: one from a motor bearing subjected to different fault severity levels at various loads, with and without noise, and the other with bearing vibration data obtained in the presence of a gearbox. The effect of sample size, fault size and load on the fault feature is studied. The advantages of the proposed method over the existing time series methods are discussed. The experimental results demonstrate that the proposed bearing fault diagnosis method is simple, noise tolerant and efficient.
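An illustrative sketch of the two SSA-based feature options the abstract describes (the leading singular values, or the energies of the reconstructed components) feeding a small neural-network classifier; the window length, number of retained components and network architecture are assumptions, not the authors' settings.

```python
# Sketch of SSA-based fault features: (i) leading singular values, or
# (ii) energies of the reconstructed elementary components, fed to an MLP.
# L, r and the MLP size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def ssa_features(signal, L=50, r=5, use_energy=False):
    x = np.asarray(signal, dtype=float)
    K = len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if not use_energy:
        return s[:r]                               # singular-value features
    # Energy of each reconstructed series component (after diagonal averaging)
    energies = []
    for k in range(r):
        Xk = s[k] * np.outer(U[:, k], Vt[k, :])
        recon = np.zeros(len(x)); counts = np.zeros(len(x))
        for j in range(K):
            recon[j:j + L] += Xk[:, j]
            counts[j:j + L] += 1.0
        energies.append(float(np.sum((recon / counts) ** 2)))
    return np.array(energies)

# Hypothetical usage: `segments` holds vibration segments, `labels` the fault classes.
# features = np.vstack([ssa_features(seg) for seg in segments])
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(features, labels)
```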
Singular spectrum analysis (SSA) is a new non-parametric technique of time series analysis, based on principles of multivariate statistics, that decomposes a given time series into a set of independent additive time series. Fundamentally, the method projects the original time series onto a vector basis obtained from the series itself, following the procedure of principal component analysis. In the present work, SSA is applied to the analysis of the vibration signals acquired in a turning process in order to extract information correlated with the state of the tool. That information has been presented to a neural network for determination of tool flank wear. The results showed that SSA is well-suited to the task of signal processing. Thus, it can be concluded that SSA is quite encouraging for future applications in the area of tool condition monitoring (TCM).
Thin sputtered indium oxide films with an additive deposited on half of their surface are considered as Seebeck-effect devices for sensing methane and ethanol. The electron concentration of the oxide film is controlled by a voltage applied perpendicular to the film through a buried gate beneath it, in the same way that the channel conductance of a MOS device is controlled by the gate voltage. Due to the different chemisorption mechanisms of methane and ethanol, a gate voltage modulating the free electron concentration of the oxide film enhances the sensitivity to methane, whereas it does not influence the sensitivity to ethanol.
Data outliers or other data inhomogeneities lead to a violation of the assumptions of traditional statistical estimators and methods. Robust statistics offers tools that can reliably work with contaminated data. Here, outlier detection methods in low and high dimension, as well as important robust estimators and methods for multivariate data are reviewed, and the most important references to the corresponding literature are provided. Algorithms are discussed, and routines in R are provided, allowing for a straightforward application of the robust methods to real data.
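The paper provides routines in R; as an illustrative analogue, here is a short Python sketch of robust multivariate outlier detection using an MCD covariance estimate and robust Mahalanobis distances. The 97.5% chi-square cutoff is a common convention, not the paper's prescription.

```python
# Sketch of multivariate outlier detection with a robust (MCD) covariance
# estimate and robust Mahalanobis distances, using scikit-learn's MinCovDet.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
p = 4
X = rng.normal(size=(200, p))
X[:10] += 6.0                                   # plant a few outliers

mcd = MinCovDet(random_state=0).fit(X)
rd2 = mcd.mahalanobis(X)                        # squared robust distances
cutoff = chi2.ppf(0.975, df=p)                  # conventional 97.5% chi-square cutoff
outliers = np.where(rd2 > cutoff)[0]
print(f"flagged {len(outliers)} observations as outliers")
```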
A theoretical investigation of the dependence of gas sensitivity of nanostructured semiconductor gas sensors on cluster size is presented. The clusters are represented as spheres and the adsorbed gas as a surface state density. The sensitivity is calculated as a change in conductivity over a change of surface state density. The results show that there is a critical cluster size, which is material dependent, at which the sensitivity is maximal. The cluster size for maximum sensitivity of several metal oxide gas sensors of practical interest, such as ZnO, SnO2, TiO2 and In2O3, is predicted and discussed.
During the experimental study of the CO sensitivity of SnO2 resistive-type gas sensors in the presence of water vapor, a transient effect was observed which elucidates the CO sensing mechanism in tin oxide. More precisely, after the removal of CO, an increase of the measured conductance was observed, depending on the substrate temperature. An explanation of this phenomenon is proposed, based on the conductance modulation due to three different mechanisms: formate desorption, occupation of lattice sites by oxygen molecules, and diffusion of lattice oxygen vacancies to the sensor's bulk.
Li and Chen (J. Amer. Statist. Assoc. 80 (1985) 759) proposed a method for principal components using projection-pursuit techniques. In classical principal components one searches for directions with maximal variance, and their approach consists of replacing this variance by a robust scale measure. Li and Chen showed that this estimator is consistent, qualitatively robust and inherits the breakdown point of the robust scale estimator. We complete their study by deriving the influence function of the estimators for the eigenvectors, eigenvalues and the associated dispersion matrix. Corresponding Gaussian efficiencies are presented as well. Asymptotic normality of the estimators has been treated in a paper of Cui et al. (Biometrika 90 (2003) 953), complementing the results of this paper. Furthermore, a simple explicit version of the projection-pursuit based estimator is proposed and shown to be fast to compute, orthogonally equivariant, and to have the maximal finite-sample breakdown point property. We illustrate the method with a real data example.
The results of a standard principal component analysis (PCA) can be affected by the presence of outliers. Hence robust alternatives to PCA are needed. One of the most appealing robust methods for principal component analysis uses the projection-pursuit principle. Here, one projects the data onto a lower-dimensional space such that a robust measure of variance of the projected data is maximized. The projection-pursuit-based method for principal component analysis has recently been introduced in the field of chemometrics, where the number of variables is typically large. In this paper, it is shown that the currently available algorithm for robust projection-pursuit PCA performs poorly in the presence of many variables. A new algorithm is proposed that is more suitable for the analysis of chemical data. Its performance is studied by means of simulation experiments and illustrated on some real data sets.
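A minimal sketch of the projection-pursuit principle itself: search candidate directions, keep the one maximizing a robust scale of the projected data, deflate, and repeat. The random-direction search and the use of the MAD are simplifying assumptions, not the refined algorithm proposed in the paper.

```python
# Minimal projection-pursuit PCA sketch: maximize a robust scale (MAD) of the
# projections over random candidate directions, then deflate and repeat.
import numpy as np

def mad(v):
    """Median absolute deviation, scaled for consistency at the normal model."""
    return 1.4826 * np.median(np.abs(v - np.median(v)))

def pp_pca(X, n_components=2, n_candidates=2000, seed=0):
    rng = np.random.default_rng(seed)
    X = X - np.median(X, axis=0)                 # robust coordinatewise centring
    p = X.shape[1]
    components, scales = [], []
    for _ in range(n_components):
        dirs = rng.normal(size=(n_candidates, p))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        scores = np.array([mad(X @ d) for d in dirs])
        best = dirs[np.argmax(scores)]
        components.append(best)
        scales.append(scores.max())
        X = X - np.outer(X @ best, best)         # deflate: remove the found direction
    return np.array(components), np.array(scales)
```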
A new model is proposed that takes into account oxygen vacancies in tin oxide resistive-type sensors, assuming that lattice oxygen modifies the rate of oxygen adsorption. Applying this hypothesis in a Monte Carlo simulation, effects observed in thick-film samples are explained. Moreover, computational techniques have been used to simulate different thick-film structures, and the role of both surface coverage and reduction in the sensing mechanism is investigated. The simulation results are in good qualitative agreement with the experimental results obtained from our samples. In particular, phenomena such as undershoot and overshoot of the sample's resistance, very long recovery times and poisoning of the sensor surface are discussed.
Phase fitting has been extensively used during the last years to improve the behaviour of numerical integrators on oscillatory problems. In this work, the benefits of the phase fitting technique are embedded in discrete Lagrangian integrators. The results show improved accuracy and total energy behaviour in Hamiltonian systems. Numerical tests on the long term integration (100000 periods) of the 2-body problem with eccentricity even up to 0.95 show the efficiency of the proposed approach. Finally, based on a geometrical evaluation of the frequency of the problem, a new technique for adaptive error control is presented.
In this work, we present a new approach to the construction of variational integrators. In the general case, the estimation of the action integral in a time interval is used to construct a symplectic map. The basic idea here is that only the partial derivatives of the estimation of the action integral of the Lagrangian are needed in the general theory. The analytic calculation of these derivatives gives rise to a new integral which depends not on the Lagrangian but on the Euler-Lagrange vector, which in the continuous and exact case vanishes. Since this new integral can only be computed through a numerical method based on some internal grid points, we can locally fit the exact curve by demanding that the Euler-Lagrange vector vanish at these grid points. Thus the integral vanishes, and the process dramatically simplifies the calculation of high order approximations. The new technique is tested for high order solutions in the two-body problem with high eccentricity (up to 0.99) and in the outer solar system.
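For orientation, the classical Störmer-Verlet scheme, itself the simplest variational (symplectic) integrator, applied to the planar Kepler problem used in the tests above; this is only a low-order baseline under the assumption GM = 1, not the high-order construction of the paper.

```python
# Reference sketch: the Stormer-Verlet scheme (a low-order variational
# integrator) on the planar two-body (Kepler) problem with GM = 1.
import numpy as np

def kepler_acceleration(q):
    r3 = np.linalg.norm(q) ** 3
    return -q / r3

def stormer_verlet(q, v, h, steps):
    for _ in range(steps):
        v_half = v + 0.5 * h * kepler_acceleration(q)
        q = q + h * v_half
        v = v_half + 0.5 * h * kepler_acceleration(q)
    return q, v

e = 0.95                                   # high eccentricity, as in the tests above
q0 = np.array([1.0 - e, 0.0])              # start at perihelion (semi-major axis a = 1)
v0 = np.array([0.0, np.sqrt((1.0 + e) / (1.0 - e))])
q, v = stormer_verlet(q0, v0, h=1e-3, steps=200_000)
energy = 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(q)
print(f"energy drift from -0.5: {energy + 0.5:.2e}")
```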
In situ continuous monitoring of radioactivity in the water environment has many advantages compared to sampling and analysis techniques, but a few shortcomings as well. Apart from the problems encountered in the assembly of the carrying autonomous systems, continuous operation sometimes alters the response function of the detectors. For example, the continuous operation of a photomultiplier tube results in a shift of the measured spectrum towards lower energies, thus making re-calibration of the detector necessary. In this work it is proved that, when measuring radioactivity in seawater, a photopeak around 50 keV will always be present in the measured spectrum. This peak is stable, depends only on the scattering rates of photons in seawater and, when it is detectable, can be used in conjunction with other peaks (40K and/or 208Tl) as a reference peak for the continuous calibration of the detector.
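A toy sketch of the two-point recalibration the abstract implies, pairing the stable ~50 keV scattering peak with the 40K photopeak (1460.8 keV) to fix a linear channel-to-energy relation; the channel positions below are hypothetical.

```python
# Two-point linear energy calibration from the stable ~50 keV reference peak
# and the 40K photopeak. Channel positions are made-up illustrative values.
import numpy as np

known_energies = np.array([50.0, 1460.8])      # keV: scattering peak and 40K
measured_channels = np.array([38.0, 1105.0])   # hypothetical fitted peak positions

gain, offset = np.polyfit(measured_channels, known_energies, deg=1)
print(f"E(keV) = {gain:.3f} * channel + {offset:.1f}")
```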
A nonlinear forecast system for the sea surface temperature (SST) anomalies over the whole tropical Pacific has been developed using a multi-layer perceptron neural network approach, where sea level pressure and SST anomalies were used as predictors to predict the five leading SST principal components at lead times from 3 to 15 months. Relative to the linear regression (LR) models, the nonlinear (NL) models showed higher correlation skills and lower root mean square errors over most areas of the domain, especially over the far western Pacific (west of 155 degrees E) and the eastern equatorial Pacific off Peru at lead times longer than 3 months, with correlation skills enhanced by 0.10-0.14. Seasonal and decadal changes in the prediction skills in the NL and LR models were also studied.
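An illustrative sketch of the general setup described above (a multi-layer perceptron mapping predictor principal components to the leading SST principal components at a fixed lead time), using synthetic stand-in data; the network size and preprocessing are assumptions, not the study's configuration.

```python
# Sketch of a nonlinear (MLP) forecast of leading SST principal components
# from lagged predictor PCs. Data are synthetic stand-ins; sizes are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_months, n_gridpoints = 480, 300
fields = rng.normal(size=(n_months, n_gridpoints))       # stand-in for SLP + SST anomaly fields

lead = 6                                                  # months ahead
X_pred = PCA(n_components=10).fit_transform(fields)[:-lead]
y_sst_pcs = PCA(n_components=5).fit_transform(fields)[lead:]  # stand-in for the 5 SST PCs

X_tr, X_te, y_tr, y_te = train_test_split(X_pred, y_sst_pcs, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```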
Robust singular value decomposition technical report number