Article

Risk-based Methodology for Validation of Pharmaceutical Batch Processes

Author: Frederick Wiles

Abstract

In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that is required to demonstrate that a process is operating in a validated state. Instead, the guidance emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control.

Lay abstract: The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to address this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired, based on risk assessment and calculation of process capability.
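A minimal sketch of the acceptance check this abstract describes: a PFMECA risk ranking selects statistical confidence and coverage levels, which feed a normal tolerance interval that must fall inside the specification limits before a Stage 2 milestone is declared met. The risk-to-confidence table and the use of Howe's tolerance-factor approximation are illustrative assumptions; the paper's actual tables and formulas are not reproduced here.

```python
from statistics import NormalDist, mean, stdev

# Hypothetical mapping from PFMECA risk ranking to the (confidence, coverage)
# pair used in the capability calculation; the paper's values may differ.
RISK_TABLE = {
    "low":    (0.90, 0.950),
    "medium": (0.95, 0.975),
    "high":   (0.99, 0.990),
}

def k2_factor(n, confidence, coverage):
    """Two-sided normal tolerance factor via Howe's approximation."""
    nd = NormalDist()
    z_p = nd.inv_cdf((1 + coverage) / 2)
    df = n - 1
    # Wilson-Hilferty approximation to the lower chi-square quantile
    z_a = nd.inv_cdf(1 - confidence)
    chi2 = df * (1 - 2 / (9 * df) + z_a * (2 / (9 * df)) ** 0.5) ** 3
    return z_p * ((df * (1 + 1 / n)) / chi2) ** 0.5

def within_specs(data, lsl, usl, risk):
    """True if the risk-derived tolerance interval sits inside the specs."""
    confidence, coverage = RISK_TABLE[risk]
    k2 = k2_factor(len(data), confidence, coverage)
    m, s = mean(data), stdev(data)
    return (m - k2 * s) >= lsl and (m + k2 * s) <= usl
```

Used this way, a process passes only when the data are both centered and tight enough for the interval, widened according to risk, to clear both specification limits.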


... Frederick Wiles categorized PPQ into two phases, phase I and phase II [10]. The purpose of phase I batches is to establish a preliminary estimate of process control and capability, and these batches are not released for commercial distribution until successful completion of this phase. ...
... [Risk-assessment summary omitted: a QbD case-study table listing control-strategy CQAs (disintegration time, dissolution, content uniformity), preformulation factors (drug particle size, solubility, polymorphism, intrinsic dissolution, impurities), excipient and process selection rationale, DOE-explored material and process attributes, and low biopharmaceutics and formulation risk rankings.] ... equation (1), in which a multiple of sigma (±3σ) is replaced with a coverage factor (k₂) [10]. ...
... where USL and LSL are the upper and lower specification limits, respectively. The author used the Wald and Wolfowitz equation to calculate k₂ and proposed a Microsoft Excel-based method to estimate the statistical coverage (p) [10]. ...
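The snippet above does not reproduce the Excel method itself, so the following is a hypothetical stand-in for the coverage estimate: under a normality assumption, the coverage p of the specification interval can be estimated from the sample mean and standard deviation.

```python
from statistics import NormalDist, mean, stdev

def estimated_coverage(data, lsl, usl):
    """Point estimate of the fraction of the population inside [lsl, usl],
    assuming the quality characteristic is normally distributed.
    (Illustrative stand-in for the Excel-based method cited above.)"""
    m, s = mean(data), stdev(data)
    nd = NormalDist(m, s)
    return nd.cdf(usl) - nd.cdf(lsl)
```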
Article
Purpose: The FDA's 2011 process validation guidance has rightly resulted in discontinuing the "one size fits all" practice. The guidance aligns process validation with quality-by-design and quality risk management guidelines. However, it poses a challenge with respect to determining the statistically appropriate number of batches for the process performance qualification (PPQ) stage. This study reviews various approaches for estimating the number of PPQ batches, along with their merits and limitations. Additionally, a knowledge-factor-based method is proposed in which the residual risk level is related to the knowledge factor by a probability scale.

Methods and Results: The risk-based methods assign a confidence level to unit processes based on the risk posed to critical quality attributes of the product. The level of product understanding and residual risk determine the number of PPQ batches required for process validation. The knowledge-factor-based method, like other Bayesian methods, provides an opportunity to incorporate knowledge gained during product/process development and scale-up studies when estimating PPQ batch numbers. The numbers of batches required using this method are 6, 12, and 15, respectively, for low-, moderate-, and high-risk processes, with corresponding knowledge factors of 0.1, 0.5, and 0.9.

Conclusions: Greater understanding and knowledge of the product remarkably reduce the required number of PPQ batches. Conversely, a higher residual risk level indicates knowledge gaps in product understanding; consequently, a higher number of PPQ batches is required to gain confidence in the product and the process before commercialization.
... This statement indicates a need to understand both within- and between-batch variability. Several recent articles discuss the challenge of justifying a statistical model for determining a sufficient number of batches (3)(4)(5). Bryder et al. provide an excellent overview of the issue and raise a call for discussion in their ISPE discussion paper (3). Wiles provides an example of a statistically sound method for determining when a valid number of batches has been acquired, based on risk assessment and a calculation of process capability (5). The method determines the total number of samples required to attain a pre-determined confidence. ...
Article
Full-text available
The approach documented in this article reviews data from earlier process validation lifecycle stages with a described statistical model to provide the "best estimate" of the number of process performance qualification (PPQ) batches that should generate sufficient information to make a scientific, risk-based decision on product robustness. This approach is based on estimation of statistical confidence from current product knowledge (Stage 1), historical variability for similar products/processes (batch-to-batch), and label-claim specifications such as strength. The analysis determines the confidence level associated with measurements of the product quality attributes and compares them with the specifications. The projected minimum number of PPQ batches will vary depending on the product, process understanding, and attributes, which are critical input parameters for the statistical model. This approach considers the critical finished-product CQAs (assay, dissolution, and content uniformity), primarily because assay/content uniformity and dissolution, as well as strength, are components of the label claim. The key CQAs determine the number of PPQ batches. This approach ensures that sufficient scientific data are generated to demonstrate process robustness, as desired by the 2011 FDA guidance.
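One small piece of this approach can be sketched: estimating, from a Stage 1 mean and historical batch-to-batch variability, the confidence that a batch result for a label-claim CQA falls within specification. The normal model and the example numbers below are illustrative assumptions; the article's full statistical model is not reproduced here.

```python
from statistics import NormalDist

def confidence_within_spec(mu, sigma_batch, lsl, usl):
    """Probability, under a normal batch-to-batch model, that a batch
    result for the attribute lands inside the specification limits."""
    nd = NormalDist(mu, sigma_batch)
    return nd.cdf(usl) - nd.cdf(lsl)

# e.g. assay with Stage 1 mean 100.2 %LC, batch-to-batch SD 0.8,
# and label-claim specifications of 95-105 %LC (illustrative values)
conf = confidence_within_spec(100.2, 0.8, 95.0, 105.0)
```

A low confidence from historical variability would argue for more PPQ batches before concluding robustness; a high one supports the projected minimum.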
Article
Full-text available
The aim of the present work is to evaluate critical process parameters and techniques in order to improve product quality and reduce cost and scale-up time. Pharmaceutical industry R&D refers to the process of progressing successfully from drug discovery to product development: the needs of the target market are identified, alternative product concepts are generated and evaluated, and a single concept is selected for further development. The concept is a description of the form, function, and features of a product and its usefulness. Scale-up is defined as the process of increasing the batch size or increasing different physical parameters of the output volume. Older scale-up techniques did not involve studying the critical process parameters of the raw process. Studying these parameters helps industry improve product quality when moving from scale-up/exhibit batches to validation batches. Transformation of small-scale laboratory batches into commercial scale has traditionally depended on experience and probability, so the probability of success is lower. Understanding the critical process parameters enables control of critical process steps during manufacturing and a successful transformation from laboratory scale to exhibit batches and commercial batches.
... In addition, to date there have been at least four research papers published on this subject. Using a frequentist approach, Wiles (2013) proposes continual recalculation of a process capability index after each new PPQ batch, proceeding until a lower confidence bound on the index exceeds a minimum capability criterion. This method does not prescribe a requisite sample size prospectively, nor does it offer a level of assurance that Stage II manufacturing will yield a "positive outcome." ...
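The stopping rule described in this snippet can be sketched as follows: pool samples batch by batch, recompute a lower confidence bound on Ppk, and stop once it clears the criterion. The bound uses Bissell's common approximation, and the 1.33 criterion and 95% confidence are illustrative defaults rather than values taken from the paper.

```python
from statistics import NormalDist, mean, stdev

def ppk(data, lsl, usl):
    """Overall capability index Ppk from pooled samples."""
    m, s = mean(data), stdev(data)
    return min(usl - m, m - lsl) / (3 * s)

def ppk_lower_bound(data, lsl, usl, confidence=0.95):
    """Approximate one-sided lower confidence bound on Ppk (Bissell)."""
    n = len(data)
    p = ppk(data, lsl, usl)
    z = NormalDist().inv_cdf(confidence)
    return p - z * (1 / (9 * n) + p * p / (2 * (n - 1))) ** 0.5

def batches_until_capable(batches, lsl, usl, criterion=1.33, confidence=0.95):
    """Pool batch data sequentially; return the batch count at which the
    lower bound on Ppk first meets the criterion (None if it never does)."""
    pooled = []
    for i, batch in enumerate(batches, start=1):
        pooled.extend(batch)
        if len(pooled) >= 3 and ppk_lower_bound(pooled, lsl, usl, confidence) >= criterion:
            return i
    return None
```

As the snippet notes, this sequential rule has no prospective sample size: the number of batches is only known once the bound crosses the criterion.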
Article
Validation of pharmaceutical manufacturing processes is a regulatory requirement and plays a key role in the assurance of drug quality, safety, and efficacy. The FDA guidance on process validation recommends a life-cycle approach which involves process design, qualification, and verification. The European Medicines Agency makes similar recommendations. The main purpose of process validation is to establish scientific evidence that a process is capable of consistently delivering a quality product. A major challenge faced by manufacturers is the determination of the number of batches to be used for the qualification stage. In this paper we present a Bayesian assurance and sample size determination approach where prior process knowledge and data are used to determine the number of batches. An example is presented in which potency uniformity data is evaluated using a process capability metric. By using the posterior predictive distribution, we simulate qualification data and make a decision on the number of batches required for a desired level of assurance.
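A toy version of the Bayesian assurance idea in this abstract: draw a process mean and standard deviation from a posterior, simulate PPQ potency data from the posterior predictive, score each simulated qualification against a capability criterion, and pick the smallest number of batches whose pass rate (assurance) meets a target. All distributions, specification limits, and pass rules below are illustrative assumptions, not the paper's actual model.

```python
import random
from statistics import mean, stdev

def simulate_assurance(n_batches, samples_per_batch=10, sims=2000,
                       lsl=95.0, usl=105.0, criterion=1.0, seed=0):
    """Fraction of simulated PPQ campaigns whose pooled Ppk meets the criterion."""
    rng = random.Random(seed)
    passes = 0
    for _ in range(sims):
        mu = rng.gauss(100.0, 0.5)         # assumed posterior draw of process mean
        sigma = abs(rng.gauss(1.0, 0.2))   # assumed posterior draw of process SD
        data = [rng.gauss(mu, sigma) for _ in range(n_batches * samples_per_batch)]
        m, s = mean(data), stdev(data)
        ppk = min(usl - m, m - lsl) / (3 * s)
        passes += ppk >= criterion
    return passes / sims

def batches_for_assurance(target=0.80, max_batches=10, **kw):
    """Smallest batch count (from 3 up) whose assurance meets the target."""
    for n in range(3, max_batches + 1):
        if simulate_assurance(n, **kw) >= target:
            return n
    return None
```

Note the Bayesian contrast with the frequentist stopping rule above: here the batch count is fixed prospectively, before any PPQ data exist, from the assurance the sponsor wants.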
Article
Full-text available
The pharmaceutical market has transformed into a global industry within the span of just a few decades. Many companies have sites across the globe, presenting challenges when it comes to harmonizing process validation approaches among sites. There are numerous solutions available to ensure effective process validation at sites within a global network. No matter the solution, a comprehensive process validation strategy is necessary to achieve success.
Article
A control chart is a graphical display of a product quality characteristic that has been measured or computed from a process at a defined frequency. Control charts were developed by Walter Shewhart in the 1920s and are still widely used in various industries. In this paper, we discuss the use of control charts to evaluate pharmaceutical manufacturing process variability. We first discuss different types of control charts, followed by some key considerations for constructing a control chart for pharmaceutical manufacturing processes. We also share several illustrative case studies in which both variable (continuous numeric data) control charts and attribute (categorical or discrete numeric data) control charts are used to monitor pharmaceutical manufacturing process variation. Control charts are effective tools for detecting the presence of special-cause variation in the manufacturing process and for ascertaining whether the process has reached a state of statistical control. They are also useful for monitoring routine commercial production and continually confirming the state of statistical control. When a control chart detects special-cause variation, continual improvements can be initiated to correct and/or prevent potential failures so that the process remains in a state of statistical control and the product consistently complies with regulatory standards. In turn, this can greatly facilitate transforming pharmaceutical manufacture from a reactive troubleshooting paradigm to a proactive failure-reduction or prevention paradigm.
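As a minimal sketch of the variable-chart case, the following computes limits for an individuals/moving-range (I-MR) chart, the chart pair typically applied to one-result-per-batch data such as batch release assays. The constants 1.128 (d2) and 3.267 (D4) are the standard Shewhart values for subgroups of size 2; the example data are illustrative.

```python
from statistics import mean

def imr_limits(data):
    """Center lines and 3-sigma control limits for an I-MR chart."""
    mrs = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = mean(mrs)
    x_bar = mean(data)
    sigma_hat = mr_bar / 1.128          # d2 for moving ranges of size 2
    return {
        "x_center": x_bar,
        "x_ucl": x_bar + 3 * sigma_hat,
        "x_lcl": x_bar - 3 * sigma_hat,
        "mr_center": mr_bar,
        "mr_ucl": 3.267 * mr_bar,       # D4 * MR-bar; the MR LCL is 0
    }

def out_of_control(data):
    """Individual values beyond the I-chart limits (special-cause signals)."""
    lim = imr_limits(data)
    return [x for x in data if x > lim["x_ucl"] or x < lim["x_lcl"]]
```

Points beyond the limits flag special-cause variation; a chart with no signals supports the claim that the process is in a state of statistical control.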
Book
The authoritative classic, revised and updated for today's Six Sigma practitioners. Whether you want to further your Six Sigma training to achieve a Black or Green Belt or you are totally new to the quality-management strategy, you need reliable guidance. The Six Sigma Handbook, Third Edition shows you, step by step, how to integrate this profitable approach into your company's culture. Co-written by an award-winning contributor to the practice of quality management and a successful Six Sigma trainer, this hands-on guide features: cutting-edge Lean Six Sigma concepts integrated throughout; completely revised material focused on project objectives; updated and expanded problem-solving examples using Excel and Minitab; and a streamlined format that puts proven practices at your fingertips. The Six Sigma Handbook, Third Edition is the only comprehensive reference you need to make Six Sigma work for your company. The book explains how to organize for Six Sigma, how to use customer requirements to drive strategy and operations, how to carry out successful project management, and more. Learn all the management responsibilities and actions necessary for a successful deployment, as well as how to: dramatically improve products and processes using DMAIC and DMADV; use Design for Six Sigma to create innovative products and processes; incorporate lean, problem-solving, and statistical techniques within the Six Sigma methodology; and avoid common pitfalls during implementation. Six Sigma has evolved with the changing global economy, and The Six Sigma Handbook, Third Edition is your key to ensuring that your company realizes significant gains in quality, productivity, and sales in today's business climate.
Article
The problem of constructing tolerance limits for a normal universe is considered. The tolerance limits are required to be such that the probability is equal to a preassigned value $\beta$ that the tolerance limits include at least a given proportion $\gamma$ of the population. A good approximation to such tolerance limits can be obtained as follows: Let $\bar{x}$ denote the sample mean and $s^2$ the sample estimate of the variance. Then the approximate tolerance limits are given by $$\bar{x} - \sqrt{\tfrac{n}{\chi^2_{n,\beta}}}\, r s \quad \text{and} \quad \bar{x} + \sqrt{\tfrac{n}{\chi^2_{n,\beta}}}\, r s,$$ where $n$ is one less than the number $N$ of observations, $\chi^2_{n,\beta}$ denotes the number for which the probability that $\chi^2$ with $n$ degrees of freedom will exceed this number is $\beta$, and $r$ is the root of the equation $$\frac{1}{\sqrt{2\pi}} \int_{1/\sqrt{N}-r}^{1/\sqrt{N}+r} e^{-t^2/2}\, dt = \gamma.$$ The number $\chi^2_{n,\beta}$ can be obtained from a table of the $\chi^2$ distribution and $r$ can be determined with the help of a table of the normal distribution.
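The formula above can be computed directly instead of from tables: solve the integral equation for $r$ by bisection and evaluate the chi-square quantile numerically. To keep the sketch standard-library only, the chi-square quantile uses the Wilson-Hilferty approximation, so the result is approximate on top of the paper's own approximation.

```python
from statistics import NormalDist, mean, stdev

def _chi2_quantile(q, df):
    """Wilson-Hilferty approximation to the chi-square q-quantile."""
    z = NormalDist().inv_cdf(q)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def _solve_r(N, gamma):
    """Find r with P(1/sqrt(N) - r < Z < 1/sqrt(N) + r) = gamma by bisection."""
    nd, c = NormalDist(), 1 / N ** 0.5
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if nd.cdf(c + mid) - nd.cdf(c - mid) < gamma:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def tolerance_limits(data, beta=0.95, gamma=0.99):
    """Approximate Wald-Wolfowitz two-sided normal tolerance limits."""
    N = len(data)
    n = N - 1
    r = _solve_r(N, gamma)
    # chi-square value exceeded with probability beta = its (1 - beta) quantile
    chi2 = _chi2_quantile(1 - beta, n)
    k = (n / chi2) ** 0.5 * r
    m, s = mean(data), stdev(data)
    return m - k * s, m + k * s
```

The factor $\sqrt{n/\chi^2_{n,\beta}}\,r$ is exactly the coverage factor k₂ referred to in the citing snippets above: it always exceeds the plain normal quantile for coverage $\gamma$, and shrinks toward it as $N$ grows.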
FDA. Guidance for Industry, Process Validation: General Principles and Practices. Office of Communications, Division of Drug Information, Silver Spring, MD, 2011.
Automotive Industry Action Group, American Society for Quality Control, Supplier Quality Requirements Task Force. Fundamental Statistical Process Control: Reference Manual. AIAG: Southfield, MI, 1991.