Quality by Design - Building Quality into Products and Processes
Published in:
Non-Clinical Statistics for Pharmaceutical and Biotechnology Industries,
L. Zhang, Editor, Chapter 18, Springer Publishing, New York, NY (2016)
Ronald D. Snee, PhD
Snee Associates, LLC
Newark, Delaware 19711
Ron@SneeAssociates.Com
Abstract
The US Food and Drug Administration introduced the pharmaceutical and biotech industries to
Quality by Design (QbD) in 2004. While new to these industries QbD had been used and found
effective in many other industries for more than 40 years. This chapter discusses the “What”,
“Why” and “How” of QbD. The focus is on the building blocks of QbD and the statistical
concepts, methods and tools that enable the effective implementation of the approach. Numerous
case studies from pharma and biotech are used to illustrate the approaches.
It's About Quality, Stupid
During the 1992 presidential election, James Carville, campaign strategist for the Clinton election team, recognized that "It's the Economy, Stupid," suggesting that the key issue on the minds of Americans was the U.S. economy. Clinton revised the thrust of his campaign and, as they say, the rest is history.
Similarly, today in the 21st century, the key issue on the minds of the US Food and Drug Administration (FDA) and pharmaceutical and biotech companies is quality. Driven by greater
global competition and the growing impact of information technology, the pharmaceutical
industry faces a need to improve its performance. Speed to market, product quality, regulatory
compliance, cost reduction, waste, and cycle time are among the concerns that must be addressed
in a systematic, focused, and sustainable manner. Quality by design (QbD), an approach
introduced by the US Food and Drug Administration, provides an effective tool for addressing
these concerns.
This chapter discusses the key steps for implementing QbD and the associated statistical
concepts, methods and tools that can be used to implement the different steps. A focus is placed
on how to develop the process understanding that leads to a useful design space, process-control
methods, and characterization of process risk. Product and process life-cycle model validation is
also addressed. Integrating these concepts provides a holistic approach for effectively designing
and improving products and processes.
Quality by Design: Its Origins and Building Blocks
Janet Woodcock, director of FDA's Center for Drug Evaluation and Research, defined the
desired state of pharmaceutical manufacturing as "a maximally efficient, agile, flexible
pharmaceutical manufacturing sector that reliably produces high-quality drug products without
extensive regulatory oversight". QbD has been suggested as the route to achieving Woodcock’s
vision. ICH Q8 (R1) Step 2 defines QbD as a
“Systematic approach to development that begins with predefined objectives, emphasizes
product and process understanding and process control, and is based on sound science
and quality risk management” (ICH 2005).
QbD is not new. The story begins in the 1920s with Sir Ronald A. Fisher. Fisher founded the
field of statistical design of experiments (DOE) while working on agricultural and biological
research studies at Rothamsted Experimental Station in England. Fisher was frustrated that
existing data collected without any structured plan or design did not yield any useful results. He
published the first book on the subject (Fisher 1935). DOE is, of course, a critical building block
of QbD.
Fast forward some 25 years to the late 1940s and early 1950s, when DOE as a discipline was beginning to gain acceptance by industry. The need in this case was how to experiment effectively with industrial processes. The big advance, a paradigm shift in modern terms, came
with the publication of the paper, “On the Experimental Attainment of Optimum Conditions” by
Box and Wilson (1951). This paper addressed the problem (the need) of optimization of
processes, which led to the idea of an operating window, which in pharma parlance is called
the design space. Box and Wilson were working at Imperial Chemical Industries researching
how to optimize chemical production processes. Their approach later became known as response
surface methodology (RSM).
So how did the design space idea reach pharma? In the late 1960s and early 1970s, Joe Schwartz,
a scientist at Merck, saw the value of using RSM and process optimization in the development of
formulation processes. He continued his research at the University of the Sciences in
Philadelphia and educated several graduates on the subject who went to work in the pharma
industry (Schwartz et al. 1973).
But the RSM approach didn’t catch on in pharma like it did in the chemical and other process
industries. Apparently the process improvement need wasn’t yet identified in pharma. In the
early part of this century, however, the FDA saw the need for pharma to improve its processes
and developed the idea of QbD as the overarching system to aid the process.
One of the central leaders of the effort at the FDA was Ajaz Hussain, who was aware of
Schwartz’s work and deepened his knowledge of the subject by communicating with Schwartz to
learn about the approach from the master himself (Hussain 2009). So we see that QbD, like many useful approaches, is not completely new, building on the ideas of others. Indeed we stand on the shoulders of giants; in this case, Fisher, Box, Schwartz and Hussain. Clearly QbD, with its design space, is built on a sound foundation.
From an operational perspective, QbD is a systematic and scientific approach to product and
process design and development that uses the following:
• Multivariate data acquisition and modeling to identify and understand the critical
sources of variability
• Process-control techniques to ensure product quality and accurate and reliable
prediction of patient safety and product efficacy
• Product and process design space established for raw-material properties, process
parameters, machine parameters, environmental factors, and other conditions to enable
risk management
• Control space for formulation and process factors that affect product performance.
QbD is useful for improving existing products, developing and improving analytical methods,
and developing new products. The crux is implementing QbD in a cost-effective manner. The
following issues are critical in that assessment:
• Recognition that the end results of successful implementation of QbD are the design space, the process-control methodology, and estimates of risk levels
• A strategy for identifying the critical process parameters that define the design space
• The creation of robust products and processes that sustain the performance of the
product and process over time
• The use of change-management techniques to enable the cultural change required for
success and long-term sustainability.
A lack of understanding of QbD in its entirety is a large stumbling block to its use. Stephen
Covey, chairman of the Covey Leadership Center, points out that a successful strategy for any
endeavor is to "begin with the end in mind" (Covey 1989). Following Covey's advice, the first
step of QbD is to understand the critical outputs of QbD and then identify the critical building
blocks of QbD, namely, improving process understanding and control to reduce risk. The outputs
of design space, process-control procedures, and the risk level (both quantitative and qualitative
risk assessment) are consistent with this approach (ICH 2005).
Before results can be realized, however, the building blocks of QbD (Snee 2009a, 2009b) need to
be assembled (see Figure 1 and Table 1). The definition of QbD stated above calls for
“predefined objectives” which is referred to as the “Quality Target Product Profile” (QTPP).
Operationally the QTPP is defined as “a prospective summary of the quality characteristics of a
drug product that will be ideally achieved to assure the desired quality, taking into account safety
and efficacy of the drug product” (ICH 2005).
The QbD building blocks that enable the QTPP to be realized are outlined below:
• Identify critical quality attributes (CQAs)
• Characterize raw-material variation
• Identify critical process parameters (CPPs)
• Characterize the design space
• Ensure process capability, control, and robustness
• Ensure analytical method capability, control, and robustness
• Create process-model monitoring and maintenance
• Offer risk analysis and management
• Implement life-cycle management: continuous improvement and continued process verification.
Attention must be paid to the product formulation, manufacturing process, and analytical
methods. Measurement is a process that needs to be designed, improved, and controlled just as
any other process. The QbD building blocks provide a picture of the critical elements of the
roadmap. It is critical to success to recognize how the building blocks are linked and sequenced over time. Figure 1 provides a roadmap for implementing QbD, telling us that QbD builds as the product and process are developed; hence QbD is a sequential approach. The building blocks are created and assembled using the principles of Statistical Engineering (Hoerl and Snee 2010), which provides a framework for approaching large initiatives such as QbD.
Process Understanding: Critical to Process Development, Operation and Improvement
Process understanding is fundamental to the QbD approach. Indeed process understanding is an
integral part of the definition of QbD. Regulatory flexibility comes from showing that a given
process is well understood. According to FDA (2004), a process is generally considered to be
well understood when the following conditions are met:
1. All critical sources of variability are identified and explained
2. Variability is managed by the process
3. Product-quality attributes can be accurately and reliably predicted within the design space
established for the materials used, process parameters, manufacturing, environmental and
other conditions.
Process understanding is needed not only for product and process development, but also for
successful technology transfer from development to manufacturing and from site-to-site, which
includes transfer to contract manufacturing organizations (Alaedini et al. 2007; Snee 2006; Snee et al. 2008). It is very difficult, if not impossible, to successfully and effectively create, operate,
improve, or transfer a process that is not understood.
The importance of process understanding is illustrated by the following case. A new solid-dose,
24-hour controlled-release product for pain management had been approved but not yet validated
because it had encountered wide variations in its dissolution rate. The manufacturer did not know
whether the dissolution problems were related to the active pharmaceutical ingredient (API), the
excipients, or to variables in the manufacturing process - or to some combination of these
factors.
Frustrated with the lack of process understanding, the manufacturer narrowed the range of
possible causes of the unacceptable dissolution rate to nine potential variables: four properties
of the raw material and five process variables. The team used a designed experiment (DOE) to
screen out irrelevant variables and to find the best operating values for the critical variables
(Snee et al. 2008).
The analysis showed that one process variable exerted the greatest influence on dissolution and
that other process and raw material variables and their interactions also played a key role. The
importance of the process variable with the largest effect had been unknown prior to this
experiment even after more than eight years of development work. This enhanced process
understanding enabled the company to define the design space and the product was successfully
validated and launched.
This example illustrates the criticality of process understanding. The FDA noted the importance
of process understanding when they released “Guidance for Industry: PAT – A Framework for
Pharmaceutical Development, Manufacturing and Quality Assurance”, (FDA 2004). The FDA
was responding to the realities of the pharmaceutical and biotech industries; namely, that pharma and biotech need to improve operations and speed up product development. Compliance continues
to be an issue and risks must be identified, quantified and reduced. The root causes of many
compliance issues relate to processes that are neither well understood nor well controlled.
Prediction of process and product performance requires some form of a model, Y=f(X). In this
conceptual model, Y is the process outputs such as Critical Quality Attributes (CQAs) of the
product and X denotes the various process and environmental variables that have an effect on the
process outputs, often referred to as Critical Process Parameters (CPPs). Models may be
empirical, developed from data, or mechanistic, based on first principles.
In developing process understanding it is helpful to create a process schematic such as the one
for a compression process shown in Figure 2. Here we see the process outputs (Ys), process inputs (Xs), process control variables (Xs), and environmental variables (Xs). The goal is to produce a process model of the form Y = f(X) that will accurately predict process performance
as measured by the Ys (CQAs).
McCurdy et al. (2010) provide an example of such models developed for a roller compaction
process. Among the models reported was a model in which tablet potency relative standard
deviation (RSD) was increased by increasing mill screen size (SS) and decreased with increasing
roller force (RF) and gap width (GW). They reported a quantitative model for the relationship:
Log (Tablet Potency RSD) = -0.15 - 0.08 (RF) - 0.06 (GW) + 0.06 (SS)
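As a quick numerical illustration of how such a model is used for prediction, the equation can be evaluated at different factor settings. The minimal Python sketch below assumes the factors are expressed in coded (-1 to +1) units and that the logarithm is base 10; neither detail is stated in the excerpt above.

    # Model quoted above from McCurdy et al. (2010); coded -1/+1 factor units and a
    # base-10 logarithm are assumed here for illustration.
    def predict_log_rsd(rf, gw, ss):
        return -0.15 - 0.08 * rf - 0.06 * gw + 0.06 * ss

    # Favorable settings: high roller force, wide gap width, small mill screen size.
    favorable = predict_log_rsd(rf=+1, gw=+1, ss=-1)      # log10(RSD) = -0.35
    # Unfavorable settings: low roller force, narrow gap width, large mill screen size.
    unfavorable = predict_log_rsd(rf=-1, gw=-1, ss=+1)    # log10(RSD) = +0.05

    print(f"Predicted potency RSD, favorable settings:   {10 ** favorable:.2f}%")
    print(f"Predicted potency RSD, unfavorable settings: {10 ** unfavorable:.2f}%")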
The process model, conceptually represented as Y=f(X), summarizes and codifies process understanding and can contain any number of variables (Xs). These models typically include
linear, interaction and curvature terms as well as other types of mathematical functions.
At a strategic level, a way to assess process understanding is to observe how the process is
operating. When process understanding is adequate the following will be observed:
• Stable processes (in statistical control) are capable of producing product that meets specifications
• Little firefighting and few heroic efforts are required to keep the process on target
• Processes are running at the designed speed with little waste
• Processes are operating with the expected efficiency and cost structure
• Employee job satisfaction and customer satisfaction are high
• Process performance is predictable
To assess the state of process understanding at an operational level we need a list of desired
characteristics. A list for assessing process understanding is discussed in the following sections
along with the identification of process problems that frequently result from lack of process
understanding and how to develop process understanding including what tools to use.
Assessing Process Understanding
The FDA definition of process understanding is useful at a high level, but a more descriptive definition is needed: a definition that can be used to determine whether a process is understood at an operational level.
Table 2 lists the characteristics that are useful in determining when process understanding exists
for a given process. First it is important that the critical variables (Xs) that drive the process are
known. Such variables are typically called critical process parameters (CPP). It is helpful to
broaden this definition to include both input and environmental variables as well as process variables, which are sometimes referred to as the "knobs" on the process.
It is important to know that critical environmental variables (uncontrolled noise variables), such as ambient conditions and raw-material lot variation, can have a major effect on the process outputs (Ys). Designing the process to be insensitive to these uncontrolled variations results in a
“robust” process.
Measurement systems are in place and the amount of measurement repeatability and
reproducibility is known for both output (Y) and input (X) parameters. The measurement
systems need to be robust to minor and inevitable variations in how the procedures are used to
implement the methods on a routine basis. This critical aspect of process understanding is often
overlooked in the development process. Gage Repeatability and Reproducibility studies and
method robustness investigations are essential to proper understanding of the measurement
systems.
Process capability studies involving the estimation of process capability and process
performance indices (Cp, Cpk, Pp and Ppk) are useful in establishing process capability. Sample
size is a critical issue here. From a statistical perspective, 30 samples is the minimum for assessing process capability; much more useful indices are developed from samples of 60-90 observations. In Chapter 20 the authors recommend sample sizes of 100 to 200 to obtain confidence intervals of reasonable width.
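A rough sense of why such large samples are needed can be had from a large-sample approximation for the confidence interval of a capability index. The sketch below uses one commonly quoted approximation for the standard error of an estimated Cpk/Ppk; the formula is an assumption made for illustration, not something taken from this chapter or from Chapter 20.

    import math

    def approx_cpk_ci(cpk_hat, n, z=1.96):
        # A commonly used large-sample approximation for a 95% confidence interval
        # on an estimated capability index (an assumption for illustration).
        se = math.sqrt(1.0 / (9.0 * n) + cpk_hat ** 2 / (2.0 * (n - 1)))
        return cpk_hat - z * se, cpk_hat + z * se

    for n in (30, 60, 100, 200):
        lo, hi = approx_cpk_ci(1.33, n)
        print(f"n = {n:3d}: approximate 95% CI ({lo:.2f}, {hi:.2f}), width {hi - lo:.2f}")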
In assessing the various sources of risk in the process, it is essential that the potential process
failure modes be known. This is greatly aided by performing a failure modes and effects analysis
at the beginning of the development process and as part of the validation of the product
formulation and process selected for commercialization.
Process control procedures and plans should be in place. This will help assure that the process
remains on target at the desired process settings. This control procedure should also include a
periodic verification of the process model, Y=f(X), used to develop the design space. This is also
recommended by the FDA’s Process Validation Guidance (FDA 2011).
Process Problems are Typically Due To Lack of Process Understanding
Although it is "a blinding flash of the obvious," it is often overlooked that when you have a process problem it is due to a lack of process understanding. When a process problem occurs you often hear "Who did it? Who do we blame?" or "How do we get it fixed as soon as possible?" Juran
emphasized that 85% of the problems are due to the process and the remaining 15% are due to
the people who operate the process (Juran and DeFeo 2010).
While a sense of urgency in fixing process problems is appropriate, some better questions to ask are "How did the process fail?" and "What do we know about this process; do we have adequate understanding of how this process works?"
Table 3 summarizes some examples of process problems and how new process understanding led to significant improvements, sometimes in unexpected areas. Note that these examples
cover a wide range of manufacturing and non-manufacturing issues including capacity shortfalls,
defective batches, process interruptions, batch release time and report error rates. All were
significant problems in terms of both financial and process performance. The increased process
understanding resulted in significant improvements.
How Do We Develop Process Understanding?
Consistent with the FDA (2004) definition of process understanding noted previously in this
chapter, we see in Figure 3 that a critical first step in developing process understanding is to
recognize that process understanding is related to process variation. As you analyze process
variation and identify root causes of the variation, you increase your understanding of the
process. Process risk is an increasing function of process variation and a decreasing function of
process understanding. Increasing process understanding reduces process risk and increases
compliance.
In Figure 4 we see that the process is analyzed by combining process theory and data (measurements and observations, experiments, and tribal knowledge in the form of what the organization knows about the process). Science and engineering theory, when interpreted in the light of data, enhances process understanding and results in more science and engineering being used in understanding, improving, and operating the process.
The integration of theory and data produces a process model, Y=f(X), and identifies the critical
variables that have a major effect on process performance. Fortunately there are typically only 3-
6 critical variables. This finding is based on the Pareto principle (80% of the variation is due to 20% of the causes) and the experience of analyzing numerous processes in a variety of
environments by many different investigators (Juran and DeFeo 2010).
What Tools Do We Use to Develop Process Understanding?
Process analysis is strongly data based, creating the need for data-based tools for the collection
and analysis of data and knowledge-based tools that help us collect information on process
knowledge (Figure 5, Table 4). We are fortunate that all the tools needed to develop process
understanding described above are provided by QbD and Process Analytical Technology (FDA
2004) and Lean Six Sigma methodologies (Snee and Hoerl 2003, Snee 2007).
It all starts with a team that includes a variety of skills, including formulation science, process engineering, data management, and statistics. In my experience, improvement teams often have limited formulation science and data management skills. Process knowledge tools include the
process flow chart, value stream map, cause and effect matrix and failure modes and effects
analysis (FMEA).
The data-based tools include design of experiments, regression analysis, analysis of variance,
measurement system analysis and statistical process control. The DMAIC (Define, Measure,
Analyze, Improve and Control) process improvement framework and its tools are particularly
useful for solving process problems. A natural by-product of using DMAIC is the development
of process knowledge and understanding, which flow from the linking and sequencing of the
DMAIC tools. Development of process understanding is built into the method (Figure 5).
Design Space
Although all the building blocks of QbD are important, the creation and use of the design space
is arguably the most important aspect of QbD. The design space is the
“Multidimensional combination and interaction of input variables (e.g., material
attributes) and process parameters that have been demonstrated to ensure quality”.
The relation between the knowledge, design, and control spaces is shown schematically in Figure 6. The control space is the region or point in the design space at which the process is operated. This space is also sometimes referred to as the Normal Operating Region (NOR). A process can have more than one control space within the design space.
A key question is how to create the design space, particularly when products are often locked
into a design space before the process is well understood. The following two-phase approach is
recommended:
• Create the design space during the development phase by focusing on minimizing risk
and paying close attention to collecting the data that are most critically needed to speed
up development and to understand the risk levels involved
• After the process has been moved into manufacturing, collect data during process
operation to refine the process model, design space, and control space as additional data
become available over time.
Continued Process Verification (CPV) from Stage 3 of the FDA Process Validation Guidance
(FDA 2011) is very effective in implementing the second phase of this approach. CPV and
process monitoring is an important building block of QbD and will be discussed in greater detail
later in this chapter.
The following examples illustrate the concepts behind the design space. Fundamental to the
construction of the design space is having a quantitative model, Y=f(X), for the product or
process being studied. Figure 7 shows a contour plot for dissolution (spec > 80%) and friability
(spec < 2%) as a function of two process parameters. We find the combination of process
parameters that will satisfy both the dissolution and friability specifications simultaneously by
overlaying the contour plots as shown in Figure 8. This approach, referred to as the Overlapping Means Approach (OMA) by Peterson and Lief (2010), will be discussed later in further detail.
The location of the desired control space can also be found using mathematical optimization
techniques (Derringer and Suich 1980).
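The logic behind these overlaid contour plots can be sketched in a few lines of code: evaluate each fitted mean response over a grid of the process parameters and keep the region in which every predicted mean satisfies its specification. The response models and coefficients below are hypothetical, chosen only to mirror the dissolution (> 80%) and friability (< 2%) specifications mentioned above; they are not the models behind Figures 7 and 8.

    import numpy as np

    # Hypothetical fitted response-surface models in coded units (illustration only).
    def dissolution(x1, x2):
        return 82 + 4 * x1 + 3 * x2 - 2 * x1 * x2 - 1.5 * x2 ** 2

    def friability(x1, x2):
        return 1.2 + 0.5 * x1 - 0.4 * x2 + 0.3 * x1 ** 2

    # Evaluate both predicted means over a grid of the two process parameters and keep
    # the points where every predicted mean satisfies its specification.
    x1, x2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
    meets_both_specs = (dissolution(x1, x2) > 80) & (friability(x1, x2) < 2)

    print(f"Fraction of the studied region in the overlapped (OMA) design space: "
          f"{meets_both_specs.mean():.0%}")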
Figure 9 shows another example of overlaid contour plots being used to identify the
combinations of particle size and excipient concentration that will meet the dissolution
specifications. This plot makes it easy to see how the design space (white area) relates to the
variation in the process variables. The design space has more flexibility with respect to excipient
concentration than particle size.
Finding the Critical Variables
Product and process understanding are fundamental to QbD and the development of the model
Y = f (x1, x2, …, xp)
which is used to create the design space and the process-control methodology. The
question is how to create the process model quickly without missing any important variables.
Getting the right set of variables (i.e., critical process parameters, input variables such as raw-
material characteristics and environmental variables) in the beginning is critical. Sources of
variability and risk can be obtained in several ways. Interactions between raw-material
characteristics and process variables are ever-present and more easily understood with the use of
statistically designed experiments.
Figure 10 contains the critical elements of the approach. Identifying the critical variables often
begins with what is called "tribal knowledge," meaning what the organization knows about the
product and process under study. This information is combined with the knowledge gained in
development and scale-up, a mechanistic understanding of the chemistry involved, literature
searches, and historical experience. The search for critical variables is a continuing endeavor
throughout the life of the product and process. Conditions change, and new knowledge is
developed, thereby potentially creating a need to refine the process model and its associated
design and control spaces.
The resulting set of variables is subsequently analyzed using a process map to round out the list
of candidate variables, the cause-and-effect matrix to identify the high-priority variables, and the
FMEA to identify how the process can fail. This work will identify those variables that require
measurement system analysis and those variables that require further experimentation (Hulbert,
et al. 2008).
Identifying potential variables typically results in a long list of candidate variables, so a strategy
for prioritizing the list is needed. In the author's experience and that of others, the DOE-based
strategy-of-experimentation approach (see Figure 11), developed at DuPont Company in
Wilmington, DE, is a very effective approach (Pfeifer 1988). Developing an understanding of the
experimental environment and matching the strategy to the environment is fundamental to this
approach. A three-phase strategy (i.e., screening, characterization, and optimization) and two-
phase strategy (i.e., screening followed by optimization and characterization followed by
optimization) are the most effective. In almost all cases, an optimization experiment is run to
develop the model for the system that will be used to define the design space and the control
space.
The confirmation (i.e., validity check), through experimentation, of the model used to construct the design space and control space is fundamental to this approach. Confirmation experiments are
conducted during the development phase. The model is confirmed periodically as the process
operates over time. This ongoing confirmation is essential to ensure that the process has not
changed and that the design and control spaces are still valid. The ongoing confirmation of the
model happens during the second phase of the development process, as previously described.
The screening-characterization-optimization (SCO) strategy is illustrated by the work of Yan and Le-he (2007), who describe a fermentation optimization study that used the screening-followed-by-optimization strategy. In this investigation 12 process variables were optimized. The first experiment used a 16-run Plackett-Burman (1946) screening design to study the effects of
the 12 variables. The four variables with the largest effects were studied subsequently in a 16-run
optimization experiment. The optimized conditions produced an enzyme activity that was 54%
higher than the operations produced at the beginning of the experimentation work.
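The screening step of such a strategy can be sketched as follows: run a small orthogonal two-level design in many factors, fit a main-effects model, and carry the factors with the largest estimated effects into the optimization experiment. The design, responses, and active factors below are simulated for illustration only; the sketch does not reproduce the Yan and Le-he data.

    import numpy as np

    # Build a 16-run orthogonal two-level array from a Sylvester-type Hadamard matrix;
    # dropping the all-ones column leaves contrast columns, 12 of which are used as
    # factor settings (a stand-in for a Plackett-Burman-style screening layout).
    H = np.array([[1.0, 1.0], [1.0, -1.0]])
    H16 = H
    for _ in range(3):
        H16 = np.kron(H16, H)           # 16 x 16 Hadamard matrix
    X = H16[:, 1:13]                    # 16 runs x 12 factors, coded -1/+1

    # Simulated responses for illustration only: four factors are truly active.
    rng = np.random.default_rng(1)
    y = 50 + 6 * X[:, 0] - 4 * X[:, 3] + 3 * X[:, 6] + 2 * X[:, 10] + rng.normal(0, 1, 16)

    # Fit a main-effects model, y = b0 + sum(b_i * x_i), by least squares.
    A = np.column_stack([np.ones(16), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Rank the factors by the size of their estimated effects; the largest few would
    # be carried forward into the subsequent optimization experiment.
    order = np.argsort(-np.abs(coef[1:]))
    for i in order[:4]:
        print(f"Factor {i + 1}: estimated coefficient {coef[i + 1]:+.2f}")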
The SCO Strategy in fact embodies seven strategies developed using single and multiple
combinations of the screening, characterization and optimization phases. The end result of each
of these sequences is a completed project. There is no guarantee of success in a given instance,
only that the SCO strategy will "raise your batting average" (Snee 2009c). It enables the user to get
the right data in the right amount at the right time.
The SCO Strategy works because of its underlying theory that has been tested and enhanced
through use as summarized below:
• The Pareto principle applies; the majority of the variation is due to a few causes (Xs). The general wisdom is that most systems are driven by 3-6 variables
• Plan for more than one experiment, as process knowledge builds over time through sequential experimentation
• The experimental environment defines the appropriate design. Understand the environment, and the appropriate design will be easier to create
• Multiple variables (inputs Xs and outputs Ys) must be studied simultaneously to develop deep process knowledge
• Most response functions can be adequately approximated within the region of interest by 1st- and 2nd-order polynomial models (Taylor series expansion). Complex effects such as extreme curvature, cubic functions, and interactions involving more than two variables rarely exist
• Effects of process instability can be reduced by using randomization and blocking
The SCO Strategy helps minimize the amount of data collected by recognizing the phases of
experimentation. Utilizing the different phases of experimentation results in the total amount of
experimentation being done in “bites”. These bites allow subject matter expertise and judgment
to be utilized more frequently and certainly at the end of each phase.
Formulation Studies
Attempting to optimize formulations involving multiple components by varying one component at a time is, at best, a low-yield strategy. A formulation scientist's experience is typical. He reported that "an 11-component formulation was studied. A formulation that worked was found.
Unfortunately it was a very painful experience; it took a long time filled with uncertainty,
anxiety and lots of stress. What was worse is we were never sure that we were even close to the
best formulation”.
There must be a better way and fortunately there is (Cornell 2002, Snee and Piepel 2013). The
DOE and Strategy of Experimentation approaches described above can also be used effectively
in the development of formulations where the formula ingredients are expressed on a percent
basis and add to 100% (or 1.0 when the ingredients are expressed in proportions).
The same concepts apply, including:
• Statistical designs (called mixture designs) are used to collect the data
o Screening experiments are used to identify the ingredients that have the largest effects on formulation performance
o Optimization designs are used to identify the design space
• Graphical analyses are performed to study the effects of the components
• Models are fit to the data to describe the response surface and construct contour plots
• Design spaces are constructed using contour plots and mathematical optimization
• Confirmation experiments are conducted to verify the performance of the selected formulations
There is no limit on the number of components that can be investigated and almost any
combination of component (wide and narrow) ranges can be studied.
As when experimenting with process variables, the approach of creating formulations using designed experiments (mixture designs in the case of formulations) enables the user to obtain the right data in the right amount at the right time.
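A minimal sketch of the mixture-design workflow, using a {3, 2} simplex-lattice design for a hypothetical three-component formulation and a second-order Scheffé model fit by least squares. The design points, response values, and coefficients are illustrative assumptions, not data from any cited study.

    import itertools
    import numpy as np

    # {3,2} simplex-lattice design for a three-component formulation: all blends with
    # proportions in {0, 1/2, 1} that sum to 1, plus the overall centroid.
    levels = (0.0, 0.5, 1.0)
    points = [p for p in itertools.product(levels, repeat=3) if abs(sum(p) - 1.0) < 1e-9]
    points.append((1 / 3, 1 / 3, 1 / 3))
    x1, x2, x3 = np.array(points).T

    # Simulated response (e.g., dissolution) for illustration only.
    rng = np.random.default_rng(7)
    y = 70 * x1 + 55 * x2 + 60 * x3 + 30 * x1 * x2 + 10 * x1 * x3 - 20 * x2 * x3
    y = y + rng.normal(0, 1, len(x1))

    # Second-order Scheffe mixture model: linear blending terms plus the three
    # two-component nonlinear blending terms, with no intercept.
    A = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    for name, c in zip(["b1", "b2", "b3", "b12", "b13", "b23"], coef):
        print(f"{name} = {c:+.1f}")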
Bayesian Design Space Predictive Approach
The design space concept is a major step forward beyond the “One-Factor-at-a-Time” approach
to the development and improvement of products and processes. Development of a design space
requires a deep understanding of the product and process involved. It is this understanding that
the FDA looks for in regulatory QbD filings. Construction of the design space also requires the
creation of quantitative models based on data collected from appropriately designed experiments.
Peterson and Lief (2010) emphasized a major limitation of the design space construction process as it is commonly practiced, known as the Overlapping Means Approach (OMA). The approach has
been widely used since at least the 1960s. OMA, which was described above, is based on the use
of regression models to construct contour plots (overlapping when more than one response is
involved) and mathematical optimization to find the combination of predictor variables that will
produce product within specifications.
Predictions of the OMA are based on the predicted mean responses. As a result, at the edge of the
design space, approximately 50% (or more) of the time, the product is predicted to be outside
specifications, assuming the process variation is normally distributed. Moving the process target
closer to the center of the design space certainly reduces this probability. Peterson and Lief
(2010) illustrate a subtle but important risk issue with the OMA. For a design space based upon
the OMA involving multiple response types, it is possible for such a design space to contain
points near the boundary of intersecting mean response surface contours that correspond to
probabilities much less than 0.5 of meeting all specifications.
To avoid this limitation, Peterson (2008) and Peterson and Lief (2010) recommend a Bayesian
based approach which calculates the probability that product specifications will be met at
different combinations of the predictor variables. The resulting approach is quantitative, risk
based and utilizes prior product and process information. These are desirable characteristics
mentioned quite often in regulatory guidance. The Bayesian predictive approach uses design of experiments to collect the needed data, just as the OMA does. The difference is in the method of
design space construction, not in the experimentation involved.
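The difference in viewpoint can be sketched with a simple Monte Carlo calculation: rather than asking whether the predicted means fall inside the specifications, simulate future responses around the predictions and estimate the probability that all specifications are met jointly. The models, residual standard deviations, and operating points below are hypothetical, and parameter uncertainty is ignored for brevity, which a full Bayesian predictive analysis would include.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical fitted mean models and residual standard deviations for two
    # responses (illustration only; not the Peterson and Lief models).
    def dissolution_mean(x1, x2):
        return 82 + 4 * x1 + 3 * x2 - 2 * x1 * x2

    def friability_mean(x1, x2):
        return 1.2 + 0.5 * x1 - 0.4 * x2

    SD_DISS, SD_FRIA = 2.0, 0.25   # assumed residual standard deviations

    def prob_all_specs_met(x1, x2, n_sim=20_000):
        # Simulate future responses at (x1, x2) and estimate the joint probability of
        # meeting both specifications. A full Bayesian predictive approach would also
        # draw the model parameters from their posterior; only residual variation is
        # simulated here, for brevity.
        diss = rng.normal(dissolution_mean(x1, x2), SD_DISS, n_sim)
        fria = rng.normal(friability_mean(x1, x2), SD_FRIA, n_sim)
        return np.mean((diss > 80) & (fria < 2))

    # A point whose predicted means barely satisfy the specifications can still carry
    # a low probability of conforming product -- the risk the OMA does not quantify.
    print(f"Edge of the OMA region:   P(meet both specs) = {prob_all_specs_met(-0.5, 0.0):.2f}")
    print(f"Interior operating point: P(meet both specs) = {prob_all_specs_met(0.8, 0.8):.2f}")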
Peterson and Lief (2010) discuss an example which illustrates the Bayesian approach and shows
how it compares to the OMA. The experiment involved three responses (with specification limits): Y1 = disintegration time (< 15 min), Y2 = friability (< 0.8%), and Y3 = hardness (8-14 kp). The six predictor variables involved were: X1 = Amount Water Added, X2 = Water Addition Rate, X3 = Wet Massing Time, X4 = Main Compression Force, X5 = Main Compression/Precompression Ratio, and X6 = Speed.
Factor X6 was found to have no effect on any response, and the effect of Factor X5 was small, so X5 was set at the center point for the design space calculations. The resulting design space is shown in Figure 12, in which the OMA design space is shown in yellow and the predictive design space, with a minimum reliability value of 0.8, is shown in black. As expected, the predictive space is smaller than
the OMA design space. The worst probability of meeting all specifications for this OMA design
space was found to be 0.15.
One of the practical challenges of the Bayesian approach is that the computational capabilities
have not as yet been implemented in widely available software packages such as JMP, Design
Expert and Minitab. This situation will likely improve over time. Peterson and Lief (2010)
discuss several alternative computational approaches that are available in other software
packages and provide references to additional information and uses of the Bayesian approach.
In the meantime there are two things that should be considered when using the OMA. First, the selected control space and associated process target(s) should be verified with confirmation
experiments. This is a critical step in good experimental practice. The confirmation experiments
should be designed to estimate the probability of exceeding specifications. This is a characteristic
of both the OMA and the Bayesian approach.
Second, Stage 3 of the FDA Process Validation Guidance (FDA 2011) calls for “Continued
Process Verification” which is discussed in the next section. Stage 3 recommends a lifecycle
approach to process monitoring. Such an approach will quickly detect when product is out of
specification and should be designed to provide an early warning regarding when such a situation
might occur.
Continued Process Verification
Routine, ongoing assessment of process performance and product quality is crucial to ensuring
that high quality pharmaceuticals reach the patient in a timely fashion. This need is addressed in
Stage 3 of the FDA Process Validation Guidance (FDA 2011) which calls for Continued Process
Verification (CPV). A systems approach to CPV, including key challenges, the role of quality by
design (QbD), and how to operate the system effectively is discussed in this section. A detailed
discussion of the FDA Process Validation Guidance is the subject of Chapter 19.
In traditional solid-dosage pharmaceutical manufacturing, process data are routinely analyzed at
two points in time to assess process stability and capability. During the production of each batch,
process operators and quality-control departments collect data to ensure stability and capability
and take appropriate remedial actions when needed. On a less frequent basis (i.e., monthly or
quarterly), batch-to-batch variation is analyzed based on product parameters to assess the long-
term stability and capability of the process.
Process Monitoring, Control and Capability. The industry and regulatory focus on QbD
places an even greater emphasis on the quality of pharmaceutical products and the performance
of the pharmaceutical-manufacturing processes. Process and product control are major building
blocks of QbD (Snee 2009a, b). The strategic structure of the International Society for Pharmaceutical Engineering's (ISPE) Product Quality Lifecycle Implementation (PQLI) initiative lists the "process performance and product quality monitoring system" as one of its critical elements (Berridge et al. 2009).
Process stability and capability are central to assessing the performance of any process.
Manufacturing processes that are stable and capable over time can be expected to consistently
produce product that is within specifications and thereby cause no harm to patients due to
nonconforming product. Stability and capability are described as follows (Montgomery 2013): A
stable manufacturing process is a process that is in a state of statistical control as each batch is
being produced and as batches are produced over time. A process in a state of statistical control
consistently produces product that varies within the process control limits; typically set at the
process average (X-Bar) plus and minus three standard deviations (SD) of the process variation
for the parameter of interest. Separate control limits are set for each parameter (e.g., tablet
thickness and hardness). Any sample value that falls outside of these limits indicates that the
process may not be in a state of statistical control.
A capable process is one that consistently produces tablets that are within specifications for all
tablet parameters (Montgomery 2013). A process capability analysis compares the process
variation to the lower and upper specification limits for the product. A broadly used measure of
process capability is the Ppk index, or process performance index, which is discussed in greater
detail later in this chapter.
A production process can include any one of the four combinations of stability and capability
(Hoerl and Snee 2012): stable and capable (desired state), stable and incapable, unstable and
capable, and unstable and incapable (worst possible situation).
Process stability and capability are typically evaluated at least twice:
1) During the production of each batch to ensure that the process is in control and to identify when process adjustments are needed. Some key questions that need to be addressed during this analysis include:
• Is the batch production process stable during the production of the batch with no trends, shifts, or cycles present?
• Is the process capable of meeting specifications (i.e., are the process-capability indices acceptable)?
• Is the within-batch sampling variation small, indicating a stable production process?
2) Monthly or quarterly to ensure batch-to-batch control throughout a given year and between years. Some important questions that should be addressed during this analysis include:
• Is the batch-to-batch variation stable from year to year and within years with no shifts, trends, or cycles present?
• Is the batch-to-batch variation small?
These two analyses also help to assess the robustness of the process.
Control limit versus specification limit. Control limits are calculated from process data and
applied to the process. Control limits are used to assess the stability of the process and to
determine the need for process adjustments when out of control samples are detected. On the
other hand, specification limits apply to the product. Specification limits are used to assess the
capability of the process to produce a product that has the desired properties and characteristics.
A Systems Approach for Continued Process Verification
It is generally agreed in industry that a systems-based approach enables operations to be more
efficient and sustainable. Schematics of such systems are shown in Figure 13 for monitoring
individual batches and in Figure 14 for monitoring batch-to-batch variation (Snee and Hoerl
2003, 2005; Snee and Gardner 2008). The systems underlying Figures 13 and 14 have the
following characteristics:
• Data are periodically collected from the process. Pharmaceutical manufacturing processes are often monitored using 30-60 minute samples.
• These data are used to monitor processes for stability and capability using control charts, process capability indices, analysis of variance, time plots, boxplots, and histograms.
• The analysis identifies when process adjustments are needed to get the process back on target.
• Records are kept on the types of problems identified. As significant problems are identified or problems begin to appear on a regular basis, the resulting issues and documentation are incorporated into process-improvement activities to develop permanent solutions.
Process improvement can be effectively completed using the DMAIC (define, measure, analyze,
improve, control) problem-solving and process-improvement framework (Snee and Hoerl 2003,
2005). The use of the tools in this framework is discussed in the next section.
Assessment tools
As a general principle, it is rare that a manufacturing process that is stable and capable will
produce a product that is out of specification. The primary purpose of a process monitoring
system is to address the question: Is this process capable of consistently producing product that is
within specifications over time? The statistical analyses conducted to answer this question are
briefly described below. These methods are generally accepted and well documented in the
literature (Montgomery 2009).
Control-chart analysis. A control-chart analysis is used to assess the stability of a process over
time. The Shewhart chart has been widely used to assess process stability since the 1930s. Other
types of control charts are also useful for monitoring processes (Montgomery 2009).
A stable process is a predictable process; a process whose product will vary within a stated set of
limits. A stable process is sometimes referred to as being in "a state of statistical control"
(Montgomery 2009). A stable process has no sources of special-cause variation, that is, effects of variables that are outside the process but have an effect on the performance of the process (e.g., process operators, ambient temperature and humidity, raw material lot).
The most commonly used indicator of special-cause variation is a process that has product
measurements outside of the control limits which are typically set at X-Bar plus and minus three
SD of the process variation for the parameter of interest.
For example, a process may be producing tablets with an average hardness of 4.0 kp and a standard deviation of 0.3 kp. The control limits are thus 4.0 +/- 3(0.3), for a range of 3.1-4.9 kp. Any tablet sample outside of that range is an indication that the process average may have changed and a process adjustment may be needed. Separate control limits are set for each parameter.
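A minimal sketch of this control-limit calculation, using simulated hardness data in place of real process measurements (the upward shift added at the end is artificial, included only so that the out-of-control check has something to flag):

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated tablet hardness samples (kp); the last five drift upward to mimic a shift.
    hardness = np.concatenate([rng.normal(4.0, 0.3, 25), rng.normal(4.9, 0.3, 5)])

    # Control limits as described in the text: process average +/- 3 standard deviations.
    # In routine use the limits come from historical, in-control data; here the first
    # 25 samples play that role.
    center = hardness[:25].mean()
    sd = hardness[:25].std(ddof=1)
    lcl, ucl = center - 3 * sd, center + 3 * sd
    print(f"Control limits: {lcl:.2f} to {ucl:.2f} kp")

    # Flag out-of-control samples -- a signal that the process average may have changed.
    for i, value in enumerate(hardness, start=1):
        if value < lcl or value > ucl:
            print(f"Sample {i}: hardness {value:.2f} kp is outside the control limits")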
Figure 15 shows a control chart for a total weight of 10 tablets manufactured by two tablet
presses. The graphic demonstrates that both presses are stable and in control and producing
tablets with the same average weight and variation.
Figure 16 shows a control chart for assay values for batches produced over a 3-year period. The
process is stable through the middle of Year 2 and begins to decrease in Year 3. When a process
adjustment is made, the batch assay values return close to the values observed in Year 1. This
example is interesting because a process shift is shown, but none of the batch assay values are
close to the assay specifications of 90-110%.
Out-of-specification (OOS) and out-of-control (OOC) values require attention, and sometimes an "investigation" is needed. These values are not always caused by manufacturing problems, and
may be caused by sampling errors, testing errors, or human administration errors such as
recording or data keying. The causes of OOS and OOC measurements should be carefully
considered when interpreting the OOC and OOS values and deciding on appropriate action.
Process-capability analysis. A process-capability analysis is conducted to determine the ability
of the process to meet product specifications. The details of process-capability analysis are
provided in Chapter 20. A brief discussion is included here to provide context. The Ppk value
represents the ratio of the difference between the process average and the nearest specification
divided by three times the process standard deviation (SD).
Ppk = Min(A, B) / (3 × SD)
where A is the upper specification minus the process average and B is the process average minus
the lower specification. Two main statistics are used to measure process capability: percent of
the measurements OOS and the process Ppk value. Some general guidelines for interpretation of
the Ppk value are summarized in Table 5. More specific interpretation may be created for each
application considering patient and process risk levels.
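A minimal sketch of the Ppk calculation defined above, with the percent out of specification estimated directly from the data; the specification limits and tablet-weight data are hypothetical:

    import numpy as np

    def ppk(data, lsl, usl):
        # Ppk = min(USL - mean, mean - LSL) / (3 * SD), as defined in the text.
        mean, sd = np.mean(data), np.std(data, ddof=1)
        return min(usl - mean, mean - lsl) / (3 * sd)

    # Hypothetical tablet weights (mg) with specifications of 285-315 mg.
    rng = np.random.default_rng(5)
    weights = rng.normal(301, 2.4, 120)
    lsl, usl = 285, 315

    oos_pct = 100 * np.mean((weights < lsl) | (weights > usl))
    print(f"Ppk = {ppk(weights, lsl, usl):.2f}")
    print(f"Percent of measurements out of specification = {oos_pct:.1f}%")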
Process capability indices of Ppk of 2.0 and higher are consistent with high-performance
processes or robust processes. Figure 17 shows an example of process capability for tablet
weight. In this case, the Ppk value is 1.91 (in the excellent category), which is based on the
distribution of tablet weights being a considerable distance from the lower and upper
specifications for tablet weight.
When a process is robust, small process upsets will not create OOS product. Accordingly, small OOC signals do not result in OOS product. A process is said to be "robust" if its performance is not significantly influenced by variations in process inputs (e.g., raw material lot), process
variables (e.g., press force and speed), and environmental variables (e.g., ambient temperature
and humidity).
Analyzing Process Variation. Another way to assess process stability is to study the variation in
process performance that is caused by potential special-cause variation (e.g., tablet presses, raw-
material lots, and process operating teams). Analysis of variance (ANOVA) enables one to
identify variables that can increase variation in tablet parameters and that may produce OOS
product.
The boxplot in Figure 18 shows the distribution of tablet hardness values for a batch of tablets
produced by two different tablet presses (X and Y). Press X has a wider hardness distribution
than Press Y, yet none of the hardness values are outside the hardness specification of 1-6 kp.
With this data in mind, the process operators can determine whether to make process
adjustments.
After statistical significance of a comparison (e.g., average of Tablet Press X versus average of
Tablet Press Y) is established, the practical significance of the difference in average values must
be considered. This assessment is frequently carried out by expressing the observed difference in
average values as a percentage of the overall process average. Subject-matter expertise is used to
evaluate the practical importance of the observed percent difference.
Nested analysis of variance is another form of ANOVA used to assess process stability. Nested
ANOVA can estimate the portion of the total variation in the data attributed to various sources of
variation (Montgomery 2013). Typically, the larger the percent of the total variation attributed to
a source of variation, the more important is the source of variation. Low amounts (< 30%) of
long-term variation as determined by a nested ANOVA indicate a stable process (Snee and Hoerl
2012).
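A minimal sketch of this kind of variance decomposition, using the standard one-way random-effects ANOVA expected mean squares to split simulated assay data into batch-to-batch (long-term) and within-batch components; the data and the underlying variance settings are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(11)

    # Simulated assay data: 20 batches with 5 tablets per batch (illustration only).
    n_batches, n_per = 20, 5
    batch_means = rng.normal(100, 0.8, n_batches)                          # batch-to-batch
    data = batch_means[:, None] + rng.normal(0, 1.5, (n_batches, n_per))   # within-batch

    # One-way random-effects ANOVA variance components (method-of-moments estimates).
    grand_mean = data.mean()
    ms_between = n_per * np.sum((data.mean(axis=1) - grand_mean) ** 2) / (n_batches - 1)
    ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n_batches * (n_per - 1))
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n_per, 0.0)

    total = var_within + var_between
    print(f"Batch-to-batch (long-term) share of total variation: {100 * var_between / total:.0f}%")
    print(f"Within-batch share of total variation:               {100 * var_within / total:.0f}%")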
Creating Process Monitoring Systems
There are important considerations to take into account when designing, implementing, and
operating process monitoring systems. The first of these is process understanding, that is, a
deep knowledge of the variables that drive the process, which enables the accurate prediction of
process performance. Effective application of QbD will result in better process understanding.
As mentioned previously, it is crucial to understand the sources and magnitudes of measurement
variation, in particular the repeatability and reproducibility of the measurement of the process
parameters. Gage R&R studies are an effective method for measuring the repeatability and
reproducibility of the measurement methods used (Montgomery 2009). Ruggedness studies are
effective for determining method robustness for measuring typical variations that occur during
the routine use of the method (Schweitzer et al. 2011).
A systematic method is necessary to keep track of special-cause variation and to determine
whether a systemic problem exists. This information can then be used to improve the process.
Two problems often observed related to monitoring systems include data not being analyzed
routinely, and not taking action when significant sources of variation are identified. Regular
management review and accountability can address both problems.
When a systematic approach is used, including regular review and action, the result is effective
process monitoring. More importantly, high-quality pharmaceuticals are provided to the patient.
Process and Product Robustness
Process and product robustness are critical to the performance of the process over time. Design
space should incorporate the results of robustness experiments. In general terms, robust means
that the performance of the entity is not affected by the uncontrolled variation it encounters.
Accordingly, a product is robust if its performance is not affected by uncontrolled variations in
raw materials, manufacture, distribution, use, and disposal (PQRI 2006). Examples of robust
products are user-friendly computers and software, pharmaceuticals that have no side effects
regardless of how or when they are administered and medical instruments for home use. In all
three of these instances it is desirable to have the product robust to the inexperience of the user.
A process is robust if its performance is not affected by uncontrolled variation in process inputs,
process variables, and environmental variables. Robust processes are those processes that
perform well when faced by large variations in input variables such as raw-material
characteristics and variation due to environmental variables such as differences in ambient
conditions, operating teams and equipment. It is also important that the process be insensitive to
variation in the levels of the critical process parameters (CPP) such as equipment speed, material
flow rates and process temperature.
Robustness studies can be carried out at various times during the development and operation of
the process. Some examples are summarized in Table 6.
PQRI (2006) describes a robustness study associated with the development of a tablet press. The
effects of two variables are evaluated: compression pressure and press speed. The goal is to
maximize the average tablet dissolution and minimize the variation in tablet dissolution at a fixed
combination of compression pressure and press speed. The results for the 10-run central
composite design (one point was replicated) are shown in Table 7. The average and standard
deviation are based on 20 tablets collected at each point in the design. In Figure 19 we see that the desired settings are found to be in the area of a compression pressure of 250 and a press speed of approximately 150-180. These settings are based on a requirement of a relative standard
deviation (RSD = 100(standard deviation/average)) less than 4%.
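The RSD criterion itself is a simple calculation; a minimal sketch with hypothetical dissolution results for 20 tablets at one design point:

    import numpy as np

    # Hypothetical dissolution results (% dissolved) for 20 tablets at one design point.
    rng = np.random.default_rng(2)
    dissolution = rng.normal(86, 2.5, 20)

    average = dissolution.mean()
    sd = dissolution.std(ddof=1)
    rsd = 100 * sd / average   # RSD = 100 * (standard deviation / average), as in the text

    print(f"Average = {average:.1f}%, SD = {sd:.2f}, RSD = {rsd:.1f}%")
    print("Meets the RSD < 4% requirement" if rsd < 4 else "Does not meet the RSD < 4% requirement")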
This is a somewhat unusual situation in that average dissolution is affected only by compression pressure, while dissolution variation (measured by the standard deviation and relative standard deviation) is affected only by press speed.
Humphries et al. (1979) describe a process development study, the goal of which was to create an automated procedure for measuring ammonia in body fluids. The method sensitivity was not only maximized, but it was also found that the manufacturing process was robust to the control
variables. A 15-run face-centered-cube design was used to study the three critical process
parameters: pH, buffer molarity and enzyme concentration. The response surface analysis
showed that buffer molarity had no effect over the range studied in the experiment.
The sensitivity of the method was maximized within the experimental region and the area around
the maximum was found to be very flat. This flatness showed that the sizeable variations in pH
and enzyme concentration had little effect on the sensitivity of the method, indicating that
variations in pH and enzyme concentration could be tolerated during the manufacturing of the
reagents associated with the method.
Kelly et al. (1997) describe a robustness study of an existing fermentation purification process.
The effects of small variations in five manufacturing process variables on the product recovery
(%) and purity (%) were studied using a 16-run half-fraction of a five-factor factorial design. It
was concluded that, over the ranges of the variables studied, the process was robust with respect
to purity. Recovery was a different story with tighter control needed for three variables in order
for the process to have the required performance.
An example of test method robustness is discussed in the following section. Robustness studies
involving environmental variables are discussed by Box, Hunter and Hunter (2005).
QbD in Test Method Development and Validation
A characteristic of good science is good data. Quality data are arguably more important today
than ever before. Data are used to develop products and processes, control our manufacturing
processes (Snee 2010) and improve products and processes when needed. Quality data also
reduce the risk of poor process performance and defective pharmaceuticals reaching patients.
Unfortunately, test method evaluation appears to be an overlooked opportunity. In my experience, a large number of measurement systems are inadequate, resulting in poor decisions regarding
product quality, process performance, costs, etc.
Measurement is a process that is developed, controlled and improved just like a manufacturing process. Indeed, quality data are the product of measurement processes (Snee 2005). Quality by Design (QbD), introduced by the FDA in 2004, is focused on the development, control and improvement of processes. Data are central to QbD, and in turn QbD concepts, methods and tools can be used to develop, control and improve measurement processes (Borman et al. 2007; Schweitzer et al. 2010). As a result, QbD and test methods have a complementary relationship; each can be used to improve the other.
This section discusses the concepts, methods and tools of QbD that have been successfully used to design, control and improve measurement systems. Earlier it was noted that using QbD in product and process development begins with the Quality Product Target Profile. Similarly, using QbD in the development of measurement systems begins with the development of the “Analytical Target Profile”. The specific approaches used are summarized in Table 8 and discussed in the following paragraphs. The concepts and methods involved are introduced and illustrated with pharmaceutical and biotech case studies and examples.
Design of Experiments, an effective QbD tool, is used in the development of test methods to create the operability region for the method. A screening design is run first to test the effects of the candidate test method variables; a large number of variables is typically included in the screening design to reduce the risk of missing an important variable. The variables found to have the largest effects (both positive and negative) are then studied in a subsequent optimization experiment, the output of which is an operating window for the method that serves the same function as the design space for a product or process. As a result, we refer to it as the “Test Method Design Space”.
In a recent test method development project, 11 variables were studied in 24 runs using a Plackett-Burman screening design. The four variables with the largest effects were evaluated further in a 28-run optimization experiment, producing the design space for the method. The next step in the development was to assess the effects of raw material variations.
Test Method Repeatability and Reproducibility is an important assessment once the method has been developed. It is carried out using a Gage Repeatability and Reproducibility study, commonly referred to as a Gage R&R study.
In such a study, 5-10 samples are evaluated by 2-4 analysts using 2-4 repeat tests, sometimes involving 2-4 test instruments. The output of the study is a set of quantitative measures of repeatability, reproducibility and measurement resolution. These statistics are then used to evaluate the suitability of the method for product release and process improvement. The variance estimates obtained are also often used to design sampling plans for monitoring the performance of the process going forward.
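The variance components behind these measures can be computed with a standard ANOVA decomposition. The sketch below is a minimal illustration, assuming a balanced, crossed study held in a pandas DataFrame with hypothetical columns 'part', 'analyst' and 'result'; it is not tied to any particular commercial software output.

```python
import pandas as pd

def gage_rr(df):
    # Balanced crossed Gage R&R: every analyst tests every part r times
    p = df['part'].nunique()
    o = df['analyst'].nunique()
    r = len(df) // (p * o)                # repeat tests per part/analyst cell
    grand = df['result'].mean()

    part_means = df.groupby('part')['result'].mean()
    anal_means = df.groupby('analyst')['result'].mean()
    cell_means = df.groupby(['part', 'analyst'])['result'].mean()

    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_anal = p * r * ((anal_means - grand) ** 2).sum()
    ss_cell = r * ((cell_means - grand) ** 2).sum()
    ss_inter = ss_cell - ss_part - ss_anal
    ss_error = ((df['result'] - grand) ** 2).sum() - ss_cell

    ms_part = ss_part / (p - 1)
    ms_anal = ss_anal / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_error = ss_error / (p * o * (r - 1))   # requires r >= 2

    repeatability = ms_error
    analyst_var = max((ms_anal - ms_inter) / (p * r), 0.0)
    inter_var = max((ms_inter - ms_error) / r, 0.0)
    part_var = max((ms_part - ms_inter) / (o * r), 0.0)
    grr = repeatability + analyst_var + inter_var
    return {'repeatability': repeatability,
            'reproducibility': analyst_var + inter_var,
            'gage_rr': grr,
            'pct_contribution': 100 * grr / (grr + part_var)}
```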
Test Method Ruggedness. Sometimes we find that, as a test method is used, the observed variation in the test results becomes too large. What can be done then? One possibility is to evaluate the measurement process and procedure for ruggedness (ASTM 1989; Wernimont 1985). A measurement method is “rugged” if it is immune to modest (and inevitable) departures from the conditions specified in the method (Youden 1961).
Ruggedness (sometimes called robustness) tests study the effects of small variations in how the method is used. A measurement method has sources of variation beyond the instruments and analysts that are typically the subject of Gage R&R studies; such variables include raw material sources and method variables such as time and temperature. Ruggedness can be evaluated using two-level fractional-factorial designs, including Plackett-Burman designs (Box et al. 2005; Montgomery 2013).
A test method is said to be rugged if none of the variables studied has a significant effect. When significant effects are found, a common fix is to revise the SOP to restrict the variation in those variables to ranges over which they do not have a large effect on the performance of the test method.
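Judging whether a variable matters starts from the estimated main effects. As a minimal sketch, for a coded -1/+1 design matrix and a response vector, each main effect is simply the average response at the high level minus the average at the low level:

```python
import numpy as np

def main_effects(X, y):
    # X: runs x factors matrix of coded -1/+1 settings; y: measured responses
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.array([y[X[:, j] > 0].mean() - y[X[:, j] < 0].mean()
                     for j in range(X.shape[1])])
```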
Lewis, Mathieu and Phan-Tan-Luu (1999) conducted a robustness study of a dissolution test method to determine the sensitivity of the dissolution measurement procedure to small changes in its execution. A large batch of tablets was used to assure uniformity of dissolution. Eight factors involved in the test procedure (five method variables, plus filter position, instrument and analyst) were varied over small ranges using a 12-run Plackett-Burman design.
The statistical analysis of the data showed that none of the variables had a significant effect on the dissolution measurement, and it was concluded that the test method was robust. The observed dissolution time had a standard deviation of 1.9 minutes and a relative standard deviation of 10.7%.
This example illustrates the flexibility of two-level designs (Plackett-Burman and fractional factorial) in conducting test method robustness studies. The eight variables studied were a mixture of quantitative variables (the five method variables) and qualitative variables (filter position, instrument and analyst). Studying a combination of qualitative and quantitative variables is a common occurrence in test method robustness studies.
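A 12-run Plackett-Burman design can be written down directly from the standard cyclic generator. The sketch below (numpy, coded -1/+1 units) builds the full 12 x 11 array; any eight of its columns could be assigned to the factors in a study like the one above.

```python
import numpy as np

# Standard first-row generator for the 12-run Plackett-Burman design
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
rows = [np.roll(gen, shift) for shift in range(11)]     # 11 cyclic shifts
design = np.vstack(rows + [-np.ones(11, dtype=int)])    # plus a row of all -1
print(design.shape)        # (12, 11)
print(design.sum(axis=0))  # every column is balanced: six +1 and six -1
```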
Process Variation Studies. When process variation is perceived to be too high, it is not uncommon to assume that the measurement is the root cause. Sometimes this is the case, but often it is not. In such situations there are typically three sources of variation that may contribute to the problem: the manufacturing process, the sampling procedure and the test method (Snee 1983).
In two instances that I am aware of, the sampling method was the issue. In one case the variation was too high because the sampling procedure was not being followed; when the correct method was used, the sampling variance dropped by 30%. In another case each batch was sampled three times. When the process variance study was run, sampling contributed only 6% of the total variance. The standard operating procedures were changed immediately to reduce sampling to two samples per batch, thereby cutting sampling and testing costs by one-third. A study was also initiated to determine whether one sample per batch would be sufficient.
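The logic behind such decisions follows from how the variance of a reported batch result depends on the number of samples and repeat tests. The sketch below uses hypothetical variance components (chosen so that sampling is about 6% of the total, as in the case cited; the numbers are illustrative only) to show why dropping from three samples per batch to two costs very little precision.

```python
def batch_result_variance(var_process, var_sampling, var_test,
                          n_samples, n_tests_per_sample):
    # Batch-to-batch (process) variation is not reduced by more sampling or testing;
    # sampling and test variation average down with the number of samples and tests.
    return (var_process
            + var_sampling / n_samples
            + var_test / (n_samples * n_tests_per_sample))

# Hypothetical components: sampling is 0.6 / (8.0 + 0.6 + 1.4) = 6% of the total
var_process, var_sampling, var_test = 8.0, 0.6, 1.4
for n in (3, 2, 1):
    print(n, "samples/batch ->",
          round(batch_result_variance(var_process, var_sampling, var_test, n, 1), 2))
```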
Test Method Continued Verification. The FDA Process Validation Guidance calls for continued process verification, which includes the test methods. An effective way to assess the long-term stability of a test method is to periodically submit “blind control” samples (also referred to as reference samples) from a common source for analysis along with routine production samples, in such a way that the analyst cannot distinguish the control samples from the production samples. Nunnally and McConnell (2007) conclude that “…there is no better way to understand the true variability of the analytical method”.
The control samples are typically tested 2-3 times (depending on the test method) at a given point in time. The sample averages are plotted on a control chart to evaluate the stability (reproducibility) of the method, and the standard deviations of the repeat tests are plotted on a control chart to assess the stability of the repeatability of the test method.
Another useful analysis is an analysis of variance of the control sample data to compute the percent long-term variation, which measures the stability of the test method over time. A long-term variance component of less than 30% is generally considered good; larger values suggest the method may have reproducibility issues (Snee and Hoerl 2012).
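A minimal sketch of these calculations is given below (numpy; it assumes a balanced layout with one row of blind control results per time point and at least two repeat tests per point). The time-point means and standard deviations feed the two control charts, and a one-way variance components analysis gives the percent long-term variation.

```python
import numpy as np

def control_sample_summary(results):
    # results: 2D array, rows = time points, columns = repeat tests on the control sample
    k, r = results.shape
    means = results.mean(axis=1)              # plot on an Xbar chart (reproducibility)
    sds = results.std(axis=1, ddof=1)         # plot on an S chart (repeatability)
    ms_within = (sds ** 2).mean()             # within-time-point (repeatability) mean square
    ms_between = r * means.var(ddof=1)        # between-time-point mean square
    var_long_term = max((ms_between - ms_within) / r, 0.0)
    pct_long_term = 100 * var_long_term / (var_long_term + ms_within)
    return means, sds, pct_long_term          # < 30% is generally considered good
```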
Using QbD concepts, methods and tools improves test method performance and reduces the risk of poor manufacturing process performance and of defective pharmaceuticals reaching patients. Risk is reduced as accuracy, repeatability and reproducibility increase. Reduced variation is a critical characteristic of good data quality, and reduced variation results in reduced risk.
Screening experiments followed by optimization studies are an effective way to design effective test methods. A measurement process can be controlled using control samples together with control chart and analysis of variance techniques. Measurement quality can be improved using Gage Repeatability and Reproducibility studies. Robust measurement systems can be created using statistical design of experiments. Process variation studies that separate sampling and process variation from test method variation are an effective way to determine the root cause of process variation problems.
Getting Started
Using QbD creates a paradigm shift from the usual approach to development (Snee et al. 2008). QbD thus represents a cultural change that must be addressed with change-management techniques such as the eight-stage change model developed by Kotter (1996) and associated change-management tools (Kotter 1996; Holman et al. 2007). The importance of change management is not recognized by many organizations that deploy QbD and other improvement techniques. As a result, the promise of the associated change initiative is often only partially realized, if at all.
The first step is to recognize that people, when first exposed to QbD, will have legitimate concerns. Typically, three types of barriers are encountered: technical (“It won’t work here”), financial (“We can’t afford it”) and psychological (“It’s too painful to change; the organization is not ready for this”).
These legitimate concerns must be acknowledged and addressed. It is helpful to enable people to recognize the value of QbD, which includes faster speed to market, reduced manufacturing costs, reduced regulatory burden and better allocation of resources.
People must also understand the trends in the worldwide business of pharma and biotech: competition is tougher than ever before, products are more complex, and the use of QbD is expanding throughout the industry. Organizations are recognizing the value of a focus on quality.
One of the best approaches to dealing with these concerns is to design the deployment of QbD so that it produces important results quickly. Nothing succeeds like success. Kotter’s change model tells us that we need to “generate short-term wins” (Step 6 of Kotter’s model). Short-term wins demonstrate two things: 1) QbD works in our environment, and 2) QbD can produce significant results quickly.
The eight stages in Kotter’s change model and some illustrative suggestions for implementation are shown in Table 9. Careful thought about how QbD is introduced to an organization can increase the speed and effectiveness of the deployment. First, it is better to “start small and think big”: introduce a few QbD building blocks at the beginning and add others as the organization is ready to consider them.
A good place to start is the use of the “design space”, followed by the “continued process verification” system. In every case, identify demonstration projects that will yield important results quickly. Provide training to build the needed skills and create the supporting infrastructure, including strategy, plans and goals; definitions of roles and responsibilities; and communication processes.
Of critical importance is a system of periodic management reviews of the QbD initiative (what is working and what is not) and measurement of results. It is also important to balance development speed and risk by getting the right data at the right time in the right amount.
New Directions in Statistical Research on QbD
The Bayesian approach to design space construction proposed by Peterson (2008) and discussed
in Peterson and Lief (2010) is a major opportunity for additional statistical research. Design
space construction is very important to the fast and effective development of pharmaceutical and
biotech products and processes. Emphasis can be effectively placed on the development of
roadmaps for using the Bayesian approach. This is particularly important for pharmaceutical
scientists who are not familiar with the various statistical methods and software approached
involved. The concepts, methods and tools of Statistical Engineering should be helpful in the
development of such an approach (Hoerl and Snee 2010).
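To illustrate the idea (this is a simplified sketch, not Peterson's implementation), the Bayesian approach declares a setting to be in the design space when the posterior predictive probability that all critical quality attributes meet specification exceeds a chosen reliability level. The example below uses a hypothetical single-response linear model with normally distributed posterior draws; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws for (intercept, effect of x1, effect of x2, residual sd)
draws = np.column_stack([
    rng.normal(90.0, 1.0, 5000),
    rng.normal(2.0, 0.3, 5000),
    rng.normal(-1.0, 0.3, 5000),
    np.abs(rng.normal(2.0, 0.2, 5000)),
])

def prob_in_spec(x1, x2, lo=85.0, hi=100.0):
    # Posterior predictive simulation of the CQA at candidate setting (x1, x2)
    mean = draws[:, 0] + draws[:, 1] * x1 + draws[:, 2] * x2
    y = rng.normal(mean, draws[:, 3])
    return np.mean((y >= lo) & (y <= hi))

# Design space = {(x1, x2): prob_in_spec(x1, x2) >= R}, e.g. a reliability level R = 0.90
print(round(prob_in_spec(1.0, 0.5), 3))
```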
Another important topic for further research is scale-up, that is, going from lab scale to commercial scale. Statistical model results typically represent lab scale, and projecting the results of statistical models to large-scale commercial processes can be risky because of the inherent dangers of extrapolating empirical models. Scale-up is often based in large part on engineering fundamentals and process understanding. Emphasis can be effectively placed on developing roadmaps for successfully completing scale-up, including the data required and the issues and pitfalls such work should be prepared to address.
Conclusion
As shown in other industries and more recently in the pharmaceutical industry, QbD is an
effective method for developing new products and processes. QbD enables effective technology
transfer and the optimization and improvement of existing processes. QbD works because it fosters process understanding that is fundamental to the creation of the design and control spaces and to sustaining performance. The design space is critical to the success of QbD because it
produces the following: performance on target and within specification at minimal cost with
fewer defective batches and deviations; greater flexibility in process operation; and the ability to
optimize manufacturing operations without facing additional regulatory filings or scrutiny.
This process understanding enables the reduction of process variation and the creation of
process-control systems that are based on sound science and quality risk-management systems.
QbD extends beyond development and manufacturing to include functions such as technology transfer, change control, deviation reduction, and analytical method development and improvement. All work is a process, and QbD is an effective method for improving processes.
The use of QbD will no doubt broaden in the future, and its application in the life sciences is
almost without bounds.
References
Alaedini, P., R.D. Snee, and B.W. Hagen (2007) "Technology Transfer by Design – Using Lean Six Sigma to Improve the Process," Contract Pharma 9 (4), 49 (June 2007).
ASTM (1989) Standard Guide for Conducting Ruggedness Tests, E1169. American Society for
Testing and Materials, Philadelphia, PA.
Box, G. E. P. and K. B. Wilson (1951) “On the Experimental Attainment of Optimum Conditions”, J. Royal Statistical Society, Series B, 13, 1-45.
Berridge, J. C. (2009) “PQLI – What Is It?” Pharmaceutical Engineering, May/June 2009, 36-39.
Borman, P., et al. (2007) “Application of Quality by Design to Analytical Methods”, Pharmaceutical Technology, October 2007, 142-152.
Box, G. E. P. , J. S. Hunter and W. G. Hunter (2005), Statistics for Experimenters, 2nd Edition,
John Wiley and Sons, New York, NY, 345-353
Cornell, J. A. (2002) Experiments with Mixtures: Designs, Models and Analysis of Mixture Data,
3rd Edition, John Wiley and Sons, New York, NY.
Covey, S. R. (1989) The 7 Habits of Highly Effective People, Simon and Schuster, New York.
Derringer, G. and R. Suich (1980) “Simultaneous Optimization of Several Response Variables”,
J. Quality Technology, 12, 214-219, 251-253.
FDA (2004) PAT – A Framework for Pharmaceutical Development, Manufacturing and Quality Assurance (Rockville, MD), 2004.
FDA (2011). “Guidance for Industry: Process Validation: General Principles and Practices“,
US Food and Drug Administration, Rockville, MD, January 2011.
Fisher, R. A. (1935) The Design of Experiments, Oliver and Boyd, London.
Humphries, B. A., R. D. Snee, M. Melnychuk, and E. J. Donegan (1979) “Automated Enzymatic
Assay for Plasma Ammonia”, Clinical Chemistry, 25, 26-30 (1979).
Hoerl, R. W. and R. D. Snee (2010) “Statistical Thinking and Methods in Quality Improvement:
A Look to the Future”, Quality Engineering, Vol. 22, No. 3, July-September 2010, 119-139.
Hoerl R. W. and R.D. Snee (2012) Statistical Thinking: Improving Business Performance,
2nd Edition, John Wiley and Sons, Hoboken, NJ.
Holman, P., T. Devane, and S. Cady (2007) The Change Handbook – The Definitive Resource on Today's Best Methods for Engaging Whole Systems (Berrett-Koehler Publishers, San Francisco).
Hulbert, M. H., et al. (2008) "Risk Management in Pharmaceutical Product Development – White Paper Prepared by the PhRMA Drug Product Technology Group," J. Pharm. Innov. 3 (1), 227-248.
Hussain, A. S. (2009) “Professor Joseph B. Schwartz’s Contributions to the FDA’s PAT and
QbD Initiatives”, Presented at the Joseph B. Schwartz Memorial Conference, University of the
Sciences, Philadelphia, PA, March 2009.
ICH (2005) Q8(R1) ICH Harmonized Tripartite Guideline: Pharmaceutical Development (Geneva, Switzerland, Nov. 10, 2005).
Juran, J. M. and J. A. DeFeo (2010) Quality Control Handbook, 6th ed., McGraw-Hill, New York, NY.
Kelly, B., P. Jennings, R. Wright and C. Briasco (1997) “Demonstrating Process Robustness for
Chromatographic Purification of a Recombinant Protein”, BioPharm International, October 1997,
36-47.
Kotter, J. P. (1996). Leading Change, Harvard Business School Press, Boston, MA.
Lewis, G. A., D. Mathieu and R. Phan-Tan-Luu (1999) Pharmaceutical Experimental Design
(Marcel Dekker, New York).
McCurdy, V., M. T. am Ende, F. R. Busch, J. Mustakis, R. Rose, and M. R. Berry (2010).
“Quality by Design Using Integrated Active Pharmaceutical Ingredient – Drug Product
Approach to Development”, Pharmaceutical Engineering, July/August 2010, 28-29.
Montgomery, D. C. (2013) Introduction to Statistical Quality Control, 7th Edition
John Wiley and Sons, New York, NY.
Montgomery, D. C. (2013), Design and Analysis of Experiments, 8th Edition,
John Wiley and Sons, New York, NY, Chapter 13.
Nunnally, B. K. and J. S. McConnell (2007) Six Sigma in the Pharmaceutical Industry: Understanding,
Reducing, and Controlling Variation in Pharmaceuticals and Biologics,
CRC Press, Boca Raton, FL
Peterson, J. J. (2008), "A Bayesian Approach to the ICH Q8 Definition of Design Space",
Journal of Biopharmaceutical Statistics, 18, 959-975.
Peterson, J. J. and K. Lief (2010) “The ICH Q8 Definition of Design Space: A Comparison of the Overlapping Means and the Bayesian Predictive Approaches”, Statistics in Biopharmaceutical Research, Vol. 2, No. 2, 249-259.
Pfeiffer, C. G. (1988) "Planning Efficient and Effective Experiments," Mater. Eng., 35-39,
May 1988.
Plackett, R. L. and J. P. Burman (1946) "The Design of Optimum Multifactorial Experiments",
Biometrika 33 (4), pp. 305-25.
PQRI (2006) Product Quality Research Institute (PQRI) Robustness Workgroup,
"Process RobustnessA PQRI White Paper," Pharm. Eng. 26(6), 111 (2006).
Schwartz, J. B., J. R. Flamholz and R. H. Press (1973) “Computer Optimization of Pharmaceutical Formulations I: General Procedure; II: Application in Troubleshooting”, J. Pharmaceutical Sciences, July 1973, 1165-70; September 1973, 1518-19.
Schweitzer, M., M. Pohl, M. Hanna-Brown, P. Nethercote, P. Borman, G. Hansen, K. Smith
and J. Larew (2010) “Implications and Opportunities of Applying QbD Principles to
Analytical Measurements”, Pharma Tech, Feb 2010, 52-59.
Snee, R. D. (2005) “Are We Making Decisions in a Fog? The Measurement Process
Must Be Continually Measured, Monitored and Improved”, Quality Progress, December 2005, 75-77.
Snee, R. D. (2006) “Lean Six Sigma and Outsourcing – Don't Outsource a Process You Don't Understand”, Contract Pharma 8 (8), 4-10.
Snee, R. D. (2007) “Use DMAIC to Make Improvement Part of How We Work”,
Quality Progress, September 2007, 52-54.
Snee, R.D. (2009a), "Quality by Design: Four Years and Three Myths Later,"
Pharma Processing 26 (1), 14-16 (2009).
Snee, R. D. (2009b) “Building a Framework for Quality by Design”, Pharm. Technol. 33 (10),
web exclusive, (October 2009).
Snee, R. D. (2009c) “Raising Your Batting Average – Remember the Importance of Strategy in Experimentation”, Quality Progress, December 2009, 64-68.
Snee, R. D. (2010) “Crucial Considerations in Monitoring Process Performance and Product Quality”,
Pharmaceutical Technology, October 2010, 38-40.
Snee, R. D. , W. J. Reilly, and C.A. Meyers (2008) "International Technology Transfer By Design,"
Int. Pharm. Ind. 1 (1), 410.
Snee, R. D. and E. C. Gardner, (2008) “Putting It All Together – Continuous Improvement is Better than
Postponed Perfection”, Quality Progress, October 2008, 56-59.
Snee, R. D. and R. W. Hoerl (2012) “Going on Feel: Monitor and Improve Process Stability to
Make Customers Happy”, Quality Progress, May 2012, 39-41
Snee, R. D. and R.W. Hoerl (2003) Leading Six Sigma – A Step by Step Guide (FT Press, Prentice Hall,
New York, NY).
Snee, R. D. and R.W. Hoerl (2005) Six Sigma Beyond the Factory Floor – Deployment Strategies for
Financial Services, Health Care, and the Rest of the Real Economy, Prentice Hall, New York.
Snee, R. D., P. Cini, J. J. Kamm and C. Meyers (2008). “Quality by Design - Shortening the Path to
Acceptance”, Pharmaceutical Processing, February 2008, 20-24.
Snee, R. D. and G. F. Piepel (2013) “Assessing Component Effects in Formulation Systems”,
Quality Engineering, Vol. 25:1, 46-53.
Wernimont, G. (1985) “Evaluation of Ruggedness of an Analytical Process”, In Use of Statistics to
Develop and Evaluate Analytical Methods, W. Spendley, Ed., AOAC, Arlington, VA, pp 78-82.
Yan, L. and M. Le-he (2007) “Optimization of Fermentation Conditions for P450 BM-3
Monooxygenase Production by Hybrid Design Methodology”, Journal of Zhejiang University
SCIENCE B, 8(1): 27-32.
Youden, W. J. (1961) “Systematic Errors in Physical Constants”, Physics Today, 14, No. 9, 32-42.
Tables
Table 1: Descriptions of the Building Blocks of Quality by Design
Table 2. Characteristics of Process Understanding
- Critical Process Parameters (Xs) that drive the process are known and used to construct the process design space and process control approach.
- Critical environmental and uncontrolled (noise) variables that affect the Critical Quality Attributes (Ys) are known and used to design the process to be insensitive to these uncontrolled variations (robustness).
- Robust measurement systems are in place and the measurement repeatability and reproducibility are known for all Critical Quality Attributes (Ys) and Critical Process Parameters (Xs).
- Process capability is known.
- Process failure modes are known and removed or mitigated.
- Process control procedures and plans are in place.
Table 3: Descriptions of Tools Used for Developing Process Understanding
Table 4. Process Understanding Leads to Improved Process Performance: Some Examples

Problem | New Process Understanding | Result of Improvements Based on New Process Understanding
Batch release takes too long | Batch record review system flow improved; source of review bottleneck identified. | Batch release time reduced 35-55%, resulting in inventory savings of $5MM and a $200k/yr cost reduction
Low capacity not able to meet market demand | Yield greatly affected by media lot variation; new raw material specifications needed. | Yield increased 25%
Batch defect rate too high | Better mixing operation needed, including methods and rate of ingredient addition, revised location of mixing impeller, tighter specs for mixing speeds and times, and greater consistency in blender set-up. | Defect rate significantly reduced, saving $750k/yr
Process interruptions too frequent | Root cause was inadequate supporting systems, including lack of spare parts, missing batch record forms and lack of standard operating procedures. | Process interruptions reduced 67%, saving $1.7MM/yr
Report error rate too high | Report developer not checking spelling, fact accuracy and grammar. | Error rate reduced 70%
Table 5: Summary of Process Performance Index (Ppk) Interpretation

Rating | Capability Index
Excellent | More than 1.50
Good | 1.33 to 1.50
Acceptable | 1.00 to 1.33
Poor | Less than 1.00
Table 6: Opportunities for Using Process Robustness Studies

Stage of Development and Operation | Robustness Study
Process Development | Assessment of the robustness of the process to variations in control variables (Xs)
Manufacturing Process Operation | Assessing robustness of the process to minor variations in control variables (Xs) in the region of the process control point (target); can be done for a new process or an existing process
Environmental Variable Robustness | Assessing robustness of the process to variables outside the process such as ambient conditions, process inputs and use of the product by the customer (business or end user)
Test Method Robustness | Evaluating effects of small variations in how the method is used
Table 7: Tablet Press Robustness Study to Identify Compression Pressure and Press Speed that Will Maximize Tablet Dissolution and Minimize Tablet Dissolution Variation

Run Order | Compression Pressure | Press Speed | Dissolution (Avg) | Dissolution (Std Dev) | Dissolution RSD (%)
1 | 350 | 160 | 83.12 | 2.14 | 2.57
2 | 150 | 160 | 81.54 | 2.40 | 2.94
3 | 250 | 280 | 96.05 | 3.73 | 3.88
4 | 150 | 260 | 80.38 | 6.18 | 7.68
5 | 390 | 210 | 69.32 | 6.08 | 8.77
6 | 250 | 140 | 94.81 | 1.14 | 1.20
7 | 250 | 210 | 96.27 | 3.59 | 3.73
8 | 250 | 210 | 94.27 | 6.37 | 6.76
9 | 110 | 210 | 70.76 | 4.03 | 5.70
10 | 350 | 260 | 83.71 | 7.10 | 8.48
Table 8. Quality by Design Methods for Creating and Improving Test Methods

Methods and Tools | Analysis Purpose
QbD Approach | Speed Up Method Development and Reduce Risk
Design of Experiments Using Screening and Optimization Experiments | Method Development, Including Creation of the Test Method Design Space
Gage Repeatability and Reproducibility Studies | Improve Measurement Quality
Method Robustness Studies | Create Methods Robust to Small Variations in How the Method Is Used
Blind Control Samples | Continued Verification of Method Repeatability and Reproducibility Over Time
Process Variation Studies | Assess Process Variation to Determine the Relative Contributions of the Manufacturing Process, Sampling Procedure and Test Method to the Observed Process Variation
Table 9. Kotter’s Eight Stages of Successful Change

Stage | Activity | Illustrative Example for XYZ Pharma
1 | Establish a sense of urgency | QbD is essential for XYZ Pharma to get ahead and stay ahead
2 | Create a guiding coalition | Champions for QbD are identified and working together
3 | Develop a vision and strategy | What QbD will look like at XYZ Pharma and what we will do to be successful
4 | Communicate the change vision | Plan using a variety of media
5 | Empower employees for broad-based action | Aggressive, results-oriented QbD skill development at all organizational levels
6 | Generate short-term wins | Complete successful projects in 2-6 months
7 | Consolidate gains and produce more change | Start next wave of projects and create an annual plan
8 | Anchor new approaches in the culture | Champions, skilled practitioners and infrastructure are developed
Figures
Figure 1: Building Blocks of Quality by Design
Figure 2: Process Schematic Showing Process Inputs, Control Variables, Environmental Variables and Outputs. Developing the Model Y = f(X) Enables Prediction of Future Process Performance
Figure 3. Developing and Using Process Understanding
Figure 4. Routes to Process Understanding
Figure 5: Tools for Developing Process Understanding
Figure 6: Predictor Variable Spaces Knowledge, Design and Control
Figure 7: Contour Plots of Dissolution and Friability as a Function of Process Parameters 1
and 2
Figure 8: Design Space Comprised of the Overlap Region of Ranges for Friability and Dissolution
Figure 9: Design Space Region Where Product Will Be in Specifications
Figure 10: Developing the List of Candidate Variables (Xs)
Figure 11: Comparison of Experimental Environments
Figure 12: Design Space – Yellow Region Based on OMA; Black Region Based on Bayesian Predictive Approach
Figure 13: Framework example for monitoring process stability and capability for
Individual Batches.
Figure 14: Framework example for monitoring batch-to-batch variation over time.
Figure 15: Control chart showing batch tablet weight produced using two presses (A and
B). UCL is upper control limit and LCL is lower control limit
Figure 16: Control chart showing assay values of batches produced over three years. UCL
is upper control limit and LCL is lower control limit.
Figure 17: Process capability for a tablet weight of Ppk = 1.91. LSL is lower specification
limit and USL is upper specification limit.
Figure 18: Boxplots showing tablet hardness for two tablet presses (X and Y).
Figure 19: Tablet Press Robustness Study Contour Plot Shows that the Desired Settings Are a Compression Pressure of 250 and a Press Speed of 150-180.
About the author:
Ronald D. Snee, PhD is founder and president of Snee Associates, a firm dedicated to the successful
implementation of process and organizational improvement initiatives. He provides guidance to senior
executives in their pursuit of improved business performance using Quality by Design, Lean Six Sigma,
and other improvement approaches that produce bottom line results. He worked at the DuPont Company
for 24 years prior to initiating his consulting career. While at DuPont he served in a number of positions
including Statistical Consultant Manager and Manager of Clinical Statistics. He also serves as an Adjunct
Professor in the Pharmaceutical programs at Temple University and at Rutgers University. Ron received
his BA from Washington and Jefferson College and MS and PhD degrees from Rutgers University. He is
an academician in the International Academy for Quality and a fellow of the American Society for Quality, the American Statistical Association, and the American Association for the Advancement of
Science. He has been awarded ASQ’s Shewhart and Grant Medals, and ASA’s Deming Lecture Award
and Dixon Statistical Consulting Excellence Award as well as more than 30 other awards and honors. He
is a frequent speaker and has published five books and more than 280 papers in the fields of performance
improvement, quality, management, and statistics. He can be reached at Ron@SneeAssociates.com
Copyright: 2014 Ronald D. Snee