Article

Comparing Experimental Design Strategies for Quality Improvement with Minimal Changes to Factor Levels


Abstract

The "small factor change" problem, where an experimental design strategy is used to find a certain amount of improvement in a response while changing the factor levels as little as possible, is addressed. Using a recently developed test bed for response surfaces, we have simulated a broad range of response surface functions and collected empirical results on the performance of seven experimental design strategies when confronted with this problem. I. Introduction In this research, a set of experimental design strategies is applied to a situation that we call the small factor change problem to determine which of these strategies performs best on selected measures. The goal of experimentation in the small factor change problem is to gain a specific amount of improvement in a response while changing the factor levels as little as possible. As an example, consider an automobile design problem where there is a specified miles per gallon (MPG) rating desired. Some of the primary factors th...


... In general, the relationship between the studies of the simulation and the design of experiments continues to evolve. McDaniel and Ankenman [1,2] have developed a 'test bed' set of assumptions that permits them to use simulation to compare experimental design approaches. In this paper, for perhaps the first time, experimental design methods are proposed that are derived from simulation optimization. ...
... The subject of the implied assumptions of response surface methods has received relatively little attention until recently, see, for example, [1,2,9]. As simulation optimization begins to play a greater role in experimental design, we predict that this will change because the criteria and the assumptions will completely determine the design and modeling strategy through direct optimization. ...
... Therefore, the final model form and associated experimental design cannot be known before the experiments are performed and these criteria would need to be generalized to be applied. For this reason and because it has the simple, intuitive interpretation of being the 'plus or minus' prediction errors, we base the comparison on the square root of the expected integrated mean-squared error, sqrt(EIMSE), where the formula for the EIMSE is given in (2). This formula depends on the parameters L and β_c. ...
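Equation (2) is not reproduced in these excerpts. For orientation only, a standard way to write the expected integrated mean-squared error (our own notation; the cited paper defines the exact role of L and β_c) is

\[
\mathrm{EIMSE} \;=\; \mathbb{E}_{\beta_c,\,\varepsilon}\!\left[\frac{1}{\operatorname{Vol}(R)}\int_{R}\bigl(\hat{y}(\mathbf{x})-f(\mathbf{x})\bigr)^{2}\,d\mathbf{x}\right],
\]

where R is the region of interest, \(\hat{y}\) is the fitted model, \(f\) is the true mean response, and the expectation averages over both the random errors ε and the distribution assumed for the true coefficients β_c, so the criterion captures both variance and bias contributions to prediction error.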
Conference Paper
Full-text available
We propose “low cost response surface methods” (LCRSM) that typically require half the experimental runs of standard response surface methods based on central composite and Box Behnken designs (G. Box and D.W. Behnken, 1960) but yield comparable or lower modeling errors under realistic assumptions. In addition, the LCRSM methods have substantially lower modeling errors and greater expected savings compared with alternatives with comparable numbers of runs, including small composite designs and computer-generated designs based on popular criteria such as D-optimality. Therefore, when simulation runs are expensive, low cost response surface methods can be used to create regression meta-models for queuing or other system optimization. The LCRSM procedures are also apparently the first experimental design methods derived as the solution to a simulation optimization problem. For these reasons, we say that LCRSM are “for and from” simulation optimization. We compare the proposed LCRSM methods with a large number of alternatives based on six criteria. We conclude that the proposed methods offer an attractive alternative to current methods in many relevant situations
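As a rough, self-contained illustration of the run-count comparison (our own NumPy sketch, not the LCRSM construction), the following builds a three-factor Box-Behnken design and a central composite design in coded units and counts their runs:

```python
from itertools import combinations
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design for k factors in coded (+/-1, 0) units."""
    runs = []
    for i, j in combinations(range(k), 2):   # every pair of factors
        for a in (-1, 1):
            for b in (-1, 1):
                row = np.zeros(k)
                row[i], row[j] = a, b        # the pair at +/-1, the rest at 0
                runs.append(row)
    runs += [np.zeros(k)] * n_center         # centre points
    return np.array(runs)

def central_composite(k, n_center=6):
    """Central composite design: 2^k cube + 2k axial (star) points + centre points."""
    alpha = (2 ** k) ** 0.25                 # rotatable axial distance
    cube = np.array(np.meshgrid(*([[-1, 1]] * k))).T.reshape(-1, k)
    axial = np.vstack([s * alpha * np.eye(k)[i] for i in range(k) for s in (-1, 1)])
    return np.vstack([cube, axial, np.zeros((n_center, k))])

bb, cc = box_behnken(3), central_composite(3)
print(len(bb), "Box-Behnken runs vs", len(cc), "CCD runs")   # 15 vs 20
```

For three factors this gives 15 Box-Behnken runs versus 20 CCD runs; the LCRSM designs described in the abstract aim for roughly half of such run counts while keeping comparable modeling errors.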
Article
Full-text available
We propose low cost response surface methods (LCRSM) that typically require half the experimental runs of standard response surface methods based on central composite and Box Behnken designs but yield comparable or lower modeling errors under realistic assumptions. In addition, the LCRSM methods have substantially lower modeling errors and greater expected savings compared with alternatives with comparable numbers of runs, including small composite designs and computer-generated designs based on popular criteria such as D-optimality. Therefore, when simulation runs are expensive, low cost response surface methods can be used to create regression meta-models for queuing or other system optimization. The LCRSM procedures are also apparently the first experimental design methods derived as the solution to a simulation optimization problem. For these reasons, we say that LCRSM are for and from simulation optimization. We compare the proposed LCRSM methods with a large number of alternatives based on six criteria. We conclude that the proposed methods offer an attractive alternative to current methods in many relevant situations.
... OFAT, which can effectively screen for significant main factors and their levels [31], was used to screen for optimum culture conditions and medium compositions that affect the biomass yield of the candidate Bacillus strain. The culture conditions included temperature, pH, agitation speed, and inoculation quantity, with gradient ranges of 22 to 42 °C (in 3 °C increments), 6.0 to 9.5 (in 0.5-unit increments), 90 to 230 rpm (in 20 rpm increments), and 0.5 to 4.0% (in 0.5% increments), respectively. ...
... Additionally, the optimal C source, N source, and inorganic salt of CGMCC 17603 were 25 g/L glucose, 15 g/L yeast extract, and 5 g/L MgSO4•7H2O, respectively (Figure 4). The results revealed that the OFAT optimization could effectively screen out the optimal levels of key factors, and further confirmed the rationality of factor gradients and the validity of these results [31]. Importantly, industrial-grade glucose and yeast extract are inexpensive and meet our low-cost requirements for culture medium. ...
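To make the OFAT procedure concrete, here is a minimal Python sketch of a one-factor-at-a-time sweep over the gradient ranges quoted above. The measure_biomass surrogate and the baseline settings are our own placeholders, not data from the study; in practice each evaluation is a wet-lab culture experiment.

```python
import numpy as np

def measure_biomass(s):
    # Made-up smooth surrogate peaked near the optimum reported in the abstract
    # (31 deg C, pH 8.5, 230 rpm, 3 %); replace with the actual experiment.
    return (-(((s["temperature_C"] - 31) / 10) ** 2)
            - ((s["pH"] - 8.5) / 1.5) ** 2
            - ((s["agitation_rpm"] - 230) / 70) ** 2
            - ((s["inoculation_pct"] - 3.0) / 2) ** 2)

# Candidate levels, taken from the gradient ranges quoted in the excerpt above.
levels = {
    "temperature_C":   np.arange(22, 43, 3.0),
    "pH":              np.arange(6.0, 9.51, 0.5),
    "agitation_rpm":   np.arange(90, 231, 20.0),
    "inoculation_pct": np.arange(0.5, 4.01, 0.5),
}

# Assumed baseline setting for every factor (our own choice).
best = {"temperature_C": 31.0, "pH": 7.0, "agitation_rpm": 150.0, "inoculation_pct": 2.0}

# One factor at a time: sweep one factor with the others held at their current
# best values, fix the winning level, then move on to the next factor.
for factor, candidates in levels.items():
    scores = [measure_biomass({**best, factor: v}) for v in candidates]
    best[factor] = float(candidates[int(np.argmax(scores))])

print("OFAT-selected conditions:", best)
```

Each factor is fixed at its winning level before the next factor is swept, which is what makes the procedure quick and easy to interpret but also blind to interactions, the main criticism noted in the other excerpts.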
Article
Full-text available
In arid and semi-arid desert ecosystems, physical, chemical, and vegetative measures were used to prevent wind erosion. However, studies on the utilization of microbial resources for sand fixation are still limited. To fill this gap, a new strain of Bacillus tequilensis CGMCC 17603 with high productivity of exopolysaccharide (EPS) was isolated from biological soil crusts, and its high-density culture technology and sand-fixing ability were studied. The one-factor-at-a-time approach (OFAT) and Box–Behnken design of CGMCC 17603 showed that the optimum culture conditions were pH 8.5, temperature 31 °C, agitation speed 230 rpm, and inoculation quantity 3%, and the optimum medium was 27.25 g/L glucose, 15.90 g/L yeast extract, and 5.61 g/L MgSO4•7H2O. High-density culture showed that the biomass and EPS yield of CGMCC 17603 increased from 9.62 × 107 to 2.33 × 109 CFU/mL, and from 8.01 to 15.61 g/L, respectively. The field experiments showed that CGMCC 17603 could effectively improve the ability of sand fixation and wind prevention. These results indicated that B. tequilensis, first isolated from cyanobacterial crusts, can be considered as an ideal soil-fixing agent to combat desertification in arid and semi-arid areas.
... Chi and Cheng [2] also applied the OFAT method in their study of the coagulation of milk processing plant wastewater using chitosan. The main advantages of the OFAT method are that it allows rapid identification of the influence of the factors and that the experimental outcomes can be readily understood [3,4]. However, the use of OFAT is discouraged for the following reasons [5][6][7][8]: ...
... Therefore, research into other objectives for kriging model estimation such as cross-validation, as suggested by one of the reviewers, is an important topic for investigation. Third, the comparison methodology could be extended to include small random errors and a broader set of test functions, as described by McDaniel and Ankenman (2000). ...
... Koita (1994) showed that an OFAT method was effective in identifying selected interactions after running fractional factorial designs as part of an overall approach to sequential experimentation. McDaniel and Ankenman (2000) provided empirical evidence that for "small factor change problems," a strategy using OFAT and Box–Behnken designs often worked better than a comparable strategy using fractional factorial designs when there is no error in the response. Qu and Wu (2005) used OFAT techniques to construct resolution V designs with economical run size. ...
Article
This article concerns adaptive experimentation as a means for making improvements in design of engineering systems. A simple method for experimentation, called "adaptive one-factor-at-a-time," is described. A mathematical model is proposed and theorems are proven concerning the expected value of the improvement provided and the probability that factor effects will be exploited. It is shown that adaptive one-factor-at-a-time provides a large fraction of the potential improvements if experimental error is not large compared with the main effects and that this degree of improvement is more than that provided by resolution III fractional factorial designs if interactions are not small compared with main effects. The theorems also establish that the method exploits two-factor interactions when they are large and exploits main effects if interactions are small. A case study on design of electric-powered aircraft supports these results. KEY WORDS: Adaptive experimentation; Design of experiments; Fractional factorial design; One factor at a time.
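A minimal simulation sketch of the adaptive one-factor-at-a-time idea described above (our own illustration, assuming two-level factors in ±1 coding and a toy response with one two-factor interaction):

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_ofat(observe, k, start=None):
    """Adaptive one-factor-at-a-time over k two-level factors in +/-1 coding.

    observe(x) returns a noisy measurement of the response at setting x.
    Each factor is toggled once, in order; the toggle is kept only if the
    observed response improves, so later factors are tuned around the
    settings already chosen for earlier ones.
    """
    x = np.full(k, -1) if start is None else np.array(start)
    best_obs = observe(x)
    for i in range(k):
        trial = x.copy()
        trial[i] *= -1                 # flip factor i
        y = observe(trial)
        if y > best_obs:               # keep the change only if it helped
            x, best_obs = trial, y
    return x, best_obs

# Toy system (our own choice): main effects, one two-factor interaction, noise.
def observe(x, sigma=0.5):
    true = 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[2] + 1.5 * x[0] * x[1]
    return true + rng.normal(0.0, sigma)

x_best, y_best = adaptive_ofat(observe, k=3)
print("chosen setting:", x_best, "observed response:", round(y_best, 2))
```

Because each decision is based on an observed (noisy) comparison, the fraction of potential improvement captured depends on how large the error is relative to the main effects, which is exactly the condition the theorems above formalize.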
Article
Full-text available
We used three test functions to compare all combinations of five experimental design classes with either second-order response surface (RS) or kriging modeling methods. The findings included the following: 1) conclusions about which method performed best, even for a single case study, greatly depended on the specific experimental designs used to represent each class of designs; 2) unavoidable bias errors constituted the largest source of prediction errors when regression modeling was used with designs generated to address bias errors; and 3) estimation errors, which could be attributed to the use of the likelihood estimation objective, dominated prediction errors in kriging modeling. We tentatively conclude that, for cases in which the number of runs is comparable to the number of terms in a quadratic polynomial model, similar prediction errors can be expected from both kriging and regression modeling procedures as long as regression is used in combination with experimental designs generated to address bias errors.
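The kind of comparison described above can be reproduced in miniature with scikit-learn, using a Gaussian process as a stand-in for kriging and a full quadratic fit as the second-order response surface. The test function, design, and run count below are our own toy choices, not those used in the article.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# A simple two-factor test function (our own choice, not one from the article).
def f(X):
    return 1 + X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 1] + 0.5 * X[:, 0] ** 2

# A small design: 9 random points in [-1, 1]^2, i.e. a run count comparable to
# the 6 terms of a full quadratic model, as discussed in the excerpt.
X_train = rng.uniform(-1, 1, size=(9, 2))
y_train = f(X_train)

# Second-order response surface: full quadratic polynomial fit by least squares.
rs = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rs.fit(X_train, y_train)

# Kriging stand-in: Gaussian process with an RBF kernel, likelihood-fitted.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Compare root-mean-squared prediction error over a dense grid.
g = np.linspace(-1, 1, 21)
X_test = np.array([(a, b) for a in g for b in g])
for name, model in (("quadratic RS", rs), ("kriging (GP)", gp)):
    rmse = np.sqrt(np.mean((model.predict(X_test) - f(X_test)) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```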
Article
This paper attempts to explain the empirically demonstrated phenomena that, under some conditions, one-at-a-time experiments outperform orthogonal arrays (on average) in parameter design of engineering systems. Five case studies are presented, each based on data from previously published full factorial experiments on actual engineering systems. Computer simulations of adaptive one-at-a-time plans and orthogonal arrays were carried out with varying degrees of pseudo-random error added to the data. The average outcomes are plotted for both approaches to optimization. For each of the five case studies, the main effects and interactions of the experimental factors are presented and analyzed to explain the observed simulation results. It is shown that, for some types of engineering systems, "one-at-a-time" designs consistently exploit interactions despite the fact that these designs lack the resolution to estimate interactions. It is also confirmed that orthogonal arrays are adversely affected by confounding of main effects and interactions.
Article
This dissertation documents a meta-analysis of 113 data sets from published factorial experiments. The study quantifies regularities observed among main effects and multi-factor interactions. Such regularities are critical to efficient planning and analysis of experiments, and to robust design of engineering systems. Three previously observed properties are analyzed - effect sparsity, hierarchy, and heredity. A new regularity on effect synergism is introduced and shown to be statistically significant. It is shown that a preponderance of active two-factor interaction effects are synergistic, meaning that when main effects are used to increase the system response, the interactions provide an additional increase and that when main effects are used to decrease the response, the interactions generally counteract the main effects. Based on the investigation of system regularities, a new strategy is proposed for evaluating and comparing the effectiveness of robust parameter design methods. A hierarchical probability model is used to capture assumptions about robust design scenarios. A process is presented employing this model to evaluate robust design methods.
Conference Paper
This paper concerns the role of experimentation in engineering design, especially the process of making improvements through parameter design. A simple mathematical model is proposed for studying experimentation including a model of adaptive one-factor-at-a-time experimentation. Theorems are proven concerning the expected value of the improvement provided by adaptive experimentation. Theorems are also proven regarding the probability that factor effects will be exploited by the process. The results suggest that adaptive one-factor-at-a-time plans tend to exploit two-factor interactions when they are large or otherwise exploit main effects if interactions are small. As a result, the adaptive process provides around 80% of the improvements achievable via parameter design while exploring a small fraction of the design alternatives (less than 20% if the system has more than five variables).
Article
Under certain conditions, adaptive one-factor-at-a-time experiments outperform fractional factorial experiments in improving the performance of mechanical engineering systems. Five case studies are presented, each based on data from previously published full factorial physical experiments at two levels. Computer simulations of adaptive one-factor-at-a-time and fractional factorial experiments were carried out with varying degrees of pseudo-random error. For each of the five case studies, the average outcomes are plotted for both approaches as a function of the strength of the pseudo-random error. The main effects and interactions of the experimental factors in each system are presented and analyzed to illustrate how the observed simulation results arise. The case studies show that, for certain arrangements of main effects and interactions, adaptive one-factor-at-a-time experiments exploit interactions with high probability despite the fact that these designs lack the resolution to estimate interactions. Generalizing from the case studies, four mechanisms are described and the conditions are stipulated under which these mechanisms act. DOI: 10.1115/1.2216733
Article
Thesis (Ph. D.)--Massachusetts Institute of Technology, Engineering Systems Division, 2007. Includes bibliographical references (p. 111-118). This thesis considers the problem of achieving better system performance through adaptive experiments. For the case of a discrete design space, I propose an adaptive One-Factor-at-A-Time (OFAT) experimental design, study its properties, and compare its performance to saturated fractional factorial designs. The rationale for adopting the adaptive OFAT design scheme becomes clear when it is embedded in a Bayesian framework: OFAT is an efficient response to the step-by-step accrual of sample information. The Bayesian predictive distribution for the outcome of implementing OFAT, and the corresponding principal moments when a natural conjugate prior is assigned to parameters that are not known with certainty, are also derived. For the case of a compact design space, I expand the treatment of OFAT by removing two restrictions imposed in the discrete design space. The first is that the selection of the input level at each iteration depends only on the observed best response and does not depend on other prior information. In most real cases, domain experts possess knowledge about the process being modeled that, ideally, should be treated as sample information in its own right, and not simply ignored. Treating the design problem in a Bayesian way provides a logical scheme for the incorporation of expert information. The second removed restriction is that the model is restricted to be linear with pair-wise interactions, implying that the model considers a relatively small design space. I extend the Bayesian analysis to the case of a generalized normal linear regression model within the compact design space. With the concepts of c-optimum experimental design and Bayesian estimation, I propose an algorithm for achieving the optimum through a sequence of experiments. I prove that the proposed algorithm generates a consistent Bayesian estimator in its limiting behavior. Moreover, I also derive the expected step-wise improvement achieved by this algorithm for the analysis of its intermediate behavior, a critical criterion for determining whether to continue the experiments. By Hungjen Wang, Ph.D.
Article
This dissertation documents a meta-analysis of 113 data sets from published factorial experiments. The study quantifies regularities observed among main effects and multi-factor interactions. Such regularities are critical to efficient planning and analysis of experiments, and to robust design of engineering systems. Three previously observed properties are analyzed - effect sparsity, hierarchy, and heredity. A new regularity on effect synergism is introduced and shown to be statistically significant. It is shown that a preponderance of active two-factor interaction effects are synergistic, meaning that when main effects are used to increase the system response, the interactions provide an additional increase and that when main effects are used to decrease the response, the interactions generally counteract the main effects. Based on the investigation of system regularities, a new strategy is proposed for evaluating and comparing the effectiveness of robust parameter design methods. A hierarchical probability model is used to capture assumptions about robust design scenarios. A process is presented employing this model to evaluate robust design methods. This process is then used to explore three topics of debate in robust design: 1) the relative effectiveness of crossed versus combined arrays; 2) the comparative advantages of signal-to-noise ratios versus response modeling for analysis of crossed arrays; and 3) the use of adaptive versus "one shot" methods for robust design. For the particular scenarios studied, it is shown that crossed arrays are preferred to combined arrays regardless of the criterion used in selection of the combined array. It is shown that when analyzing the data from crossed arrays, signal-to-noise ratios generally provide superior performance, although response modeling should be used when three-factor interactions are absent. Most significantly, it is shown that using an adaptive inner array design crossed with an orthogonal outer array resulted in far more improvement on average than other alternatives. Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006. Includes bibliographical references (p. 151-155).
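A hierarchical probability model of the kind mentioned above can be sketched as follows (our own simplified illustration, not the dissertation's model): main effects are active with some probability, an interaction's activity probability depends on how many of its parent main effects are active, and active effects are drawn with a larger standard deviation than inactive ones.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)

def sample_effects(k, p_main=0.4, p_int=(0.01, 0.10, 0.33), scale=(0.1, 1.0)):
    """Draw main effects and two-factor interactions from a hierarchical model.

    p_main : probability that a main effect is active (effect sparsity).
    p_int  : probability that an interaction is active given 0, 1, or 2 active
             parent main effects (effect hierarchy / heredity).
    scale  : standard deviations of inactive and active effects, respectively.
    """
    main_active = rng.random(k) < p_main
    main = rng.normal(0.0, np.where(main_active, scale[1], scale[0]))
    interactions = {}
    for i, j in combinations(range(k), 2):
        parents = int(main_active[i]) + int(main_active[j])
        active = rng.random() < p_int[parents]
        interactions[(i, j)] = rng.normal(0.0, scale[1] if active else scale[0])
    return main, interactions

main, twofi = sample_effects(k=5)
print("main effects:", np.round(main, 2))
print("largest |interaction|:", max(twofi.items(), key=lambda kv: abs(kv[1])))
```

Repeatedly sampling systems from such a model and simulating a design method on each sample is one way to evaluate robust design strategies under explicitly stated assumptions.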
Article
A method is presented for creating randomly generated polynomial functions to be used as a test bed of simulated response surfaces. The need for the test bed to perform empirical comparisons of experimental design strategies is discussed and the methods used to create the surfaces are explained. An important feature of the test bed is that the user can control some of the characteristics of the surfaces without directly controlling the surface functions. This allows the user to choose the types of surfaces on which a simulation study is run while preserving the random nature of the surfaces needed for a valid simulation study.

I. Introduction

The experimental study of a response surface for finding optimal or at least desirable settings for the factors is known as Response Surface Methodology (RSM) (see Myers and Montgomery, 1995). Many classes of experimental designs have been developed for RSM, such as factorials, fractional factorials, Box-Behnken designs, and central composite de...
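The idea of a randomly generated polynomial test bed can be illustrated with a short sketch (our own simplification, not McDaniel and Ankenman's construction): the user controls the typical magnitudes of main-effect, interaction, and curvature coefficients, but not the individual surfaces, which are drawn at random.

```python
from itertools import combinations
import numpy as np

def random_quadratic_surface(k, rng, sd_main=1.0, sd_int=0.5, sd_quad=0.5):
    """Return a random second-order polynomial f(x) over k factors.

    The sd_* arguments let the user control the typical size of main-effect,
    interaction, and curvature coefficients without choosing the surface
    itself, so each call yields a new random surface with similar character.
    """
    pairs = list(combinations(range(k), 2))
    b0 = rng.normal()
    b_main = rng.normal(0.0, sd_main, size=k)
    b_int = rng.normal(0.0, sd_int, size=len(pairs))
    b_quad = rng.normal(0.0, sd_quad, size=k)

    def f(x):
        x = np.asarray(x, dtype=float)
        inter = sum(b * x[i] * x[j] for b, (i, j) in zip(b_int, pairs))
        return b0 + b_main @ x + inter + b_quad @ (x ** 2)

    return f

rng = np.random.default_rng(3)
test_bed = [random_quadratic_surface(k=4, rng=rng) for _ in range(100)]
print(test_bed[0](np.zeros(4)), test_bed[0](np.ones(4)))
```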
Article
Traditionally, Plackett-Burman (PB) designs have been used in screening experiments for identifying important main effects. The PB designs whose run sizes are not a power of two have been criticized for their complex aliasing patterns, which according to conventional wisdom give confusing results. This paper goes beyond the traditional approach by proposing an analysis strategy that entertains interactions in addition to main effects. Based on the precepts of effect sparsity and effect heredity, the proposed procedure exploits the designs' complex aliasing patterns, thereby turning their 'liability' into an advantage. Demonstration of the procedure on three real experiments shows the potential for extracting important information available in the data that has, until now, been missed. Some limitations are discussed, and extensions to overcome them are given. The proposed procedure also applies to more general mixed level designs that have become increasingly popular.
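A toy version of an analysis that entertains interactions under effect heredity (our own sketch, not the procedure proposed in the article) is a forward selection in which a two-factor interaction becomes eligible only once at least one of its parent main effects is already in the model:

```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression

def heredity_forward_selection(X, y, max_terms=3):
    """Forward selection over main effects and two-factor interactions in which
    an interaction xi*xj becomes eligible only after at least one of its parent
    main effects has entered the model (a rough sketch of effect heredity)."""
    k = X.shape[1]
    candidates = {f"x{i}": X[:, i] for i in range(k)}
    for i, j in combinations(range(k), 2):
        candidates[f"x{i}*x{j}"] = X[:, i] * X[:, j]

    selected = []
    for _ in range(max_terms):
        best_name, best_r2 = None, -np.inf
        for name, col in candidates.items():
            parents = name.split("*")
            eligible = len(parents) == 1 or any(p in selected for p in parents)
            if name in selected or not eligible:
                continue
            Z = np.column_stack([candidates[s] for s in selected] + [col])
            r2 = LinearRegression().fit(Z, y).score(Z, y)
            if r2 > best_r2:
                best_name, best_r2 = name, r2
        selected.append(best_name)
    return selected

# Toy data (our own): the response is driven by x0, x2, and the x0*x2 interaction.
rng = np.random.default_rng(4)
X = rng.choice([-1.0, 1.0], size=(16, 4))
y = 2 * X[:, 0] + X[:, 2] + 1.5 * X[:, 0] * X[:, 2] + rng.normal(0, 0.3, 16)
print(heredity_forward_selection(X, y))
```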
Book
COMPREHENSIVE COVERAGE OF NONLINEAR PROGRAMMING THEORY AND ALGORITHMS, THOROUGHLY REVISED AND EXPANDED. Nonlinear Programming: Theory and Algorithms, now in an extensively updated Third Edition, addresses the problem of optimizing an objective function in the presence of equality and inequality constraints. Many realistic problems cannot be adequately represented as a linear program owing to the nature of the nonlinearity of the objective function and/or the nonlinearity of any constraints. The Third Edition begins with a general introduction to nonlinear programming with illustrative examples and guidelines for model construction. Concentration on the three major parts of nonlinear programming is provided:
• Convex analysis, with discussion of topological properties of convex sets, separation and support of convex sets, polyhedral sets, extreme points and extreme directions of polyhedral sets, and linear programming
• Optimality conditions and duality, with coverage of the nature, interpretation, and value of the classical Fritz John (FJ) and the Karush-Kuhn-Tucker (KKT) optimality conditions; the interrelationships between various proposed constraint qualifications; and Lagrangian duality and saddle point optimality conditions
• Algorithms and their convergence, with a presentation of algorithms for solving both unconstrained and constrained nonlinear programming problems
Important features of the Third Edition include:
• New topics such as second interior point methods, nonconvex optimization, nondifferentiable optimization, and more
• Updated discussion and new applications in each chapter
• Detailed numerical examples and graphical illustrations
• Essential coverage of modeling and formulating nonlinear programs
• Simple numerical problems
• Advanced theoretical exercises
The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering disciplines that deal with analytical optimization techniques. The logical and self-contained format uniquely covers nonlinear programming techniques with a great depth of information and an abundance of valuable examples and illustrations that showcase the most current advances in nonlinear problems.
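For reference, the Karush-Kuhn-Tucker conditions covered in the book can be stated compactly (standard form, our notation): for minimizing f(x) subject to g_i(x) ≤ 0 (i = 1, …, m) and h_j(x) = 0 (j = 1, …, l), a feasible point x* satisfying a suitable constraint qualification is a KKT point if there exist multipliers u_i ≥ 0 and v_j such that

\[
\nabla f(x^{*}) + \sum_{i=1}^{m} u_i \nabla g_i(x^{*}) + \sum_{j=1}^{l} v_j \nabla h_j(x^{*}) = 0,
\qquad u_i\, g_i(x^{*}) = 0 \;\; (i = 1,\dots,m).
\]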
Article
The Small Factor Change problem, where an experimental design strategy is used to find a certain amount of improvement in a response while changing the factors as little as possible, is addressed. This research uses simulated response surfaces to compare the performance of seven experimental design strategies when confronted with the Small Factor Change Problem. A method is presented for creating randomly generated polynomial functions to be used as a test bed of simulated response surfaces. The need for the test bed to perform empirical comparisons of experimental design strategies is discussed and the methods used to create the surfaces are explained. Some results and examples are presented demonstrating the types of surfaces created by the test bed and how the user can control the characteristics of the simulated surfaces. A user's guide is included to explain how response surface functions may be created with the response surface test bed. A broad range of these simulated response surfaces is created and empirical results are collected on the performance of the seven experimental design strategies when confronted with the Small Factor Change problem. Results are presented demonstrating that traditional response surface methods are more successful than the other design strategies tested at finding acceptable solutions to the Small Factor Change problem with the least change to the factor levels. Source: Dissertation Abstracts International, Volume: 60-12, Section: B, page: 6305. Adviser: Bruce Ankenman. Thesis (Ph.D.)--Northwestern University, 1999.
Article
Experiments using designs with complex aliasing patterns are often performed - for example, two-level nongeometric Plackett-Burman designs, multilevel and mixed-level fractional factorial designs, two-level fractional factorial designs with hard-to-control factors, and supersaturated designs. Hamada and Wu proposed an iterative guided stepwise regression strategy for analyzing the data from such designs that allows entertainment of interactions. Their strategy provides a restricted search in a rather large model space, however. This article provides an efficient methodology based on a Bayesian variable-selection algorithm for searching the model space more thoroughly. We show how the use of hierarchical priors provides a flexible and powerful way to focus the search on a reasonable class of models. The proposed methodology is demonstrated with four examples, three of which come from actual industrial experiments.
Follow-Up Designs That Make the Most of an Initial Screening Experiment
  • R Mee
Mee, R. (1997), "Follow-Up Designs That Make the Most of an Initial Screening Experiment," presented at the 41st Annual Fall Technical Conference, Baltimore, MD, American Society for Quality.
S-PLUS User's Manual, Version 3.3 for Windows
Statistical Sciences, Inc. (1995), S-PLUS User's Manual, Version 3.3 for Windows, Seattle, Statistical Sciences, Inc.