Preprint

Validating a Project Management Maturity Framework Based on the Emerging Best Practices for Successful Human Factors Projects that Require FDA Approval - Summary of Key Findings

Authors:
  • Successful Human Factors™

Abstract

Previously, this author pioneered exploratory research regarding the concerns, underlying variables, key success factors, and best practices pertaining to Human Factors (HF) projects for medical devices and combination products that seek Food and Drug Administration (FDA) review. Through that research, it was observed that the great majority of HF submissions to the FDA fail, and an industry-focused project management (PM) maturity assessment tool was proposed to enable much-needed alignment and improvement. On this occasion, the overall architecture of the developed framework is briefly introduced for testing and validation purposes, and to answer the last of the several questions that prompted this multiphase research: what is the average (and what would be the ideal) maturity level for more successful HF projects? The resulting average was 2.65, which corresponds to “Level 2 – Childhood” of the framework and indicates a lack of standardization of the practices assessed. According to participants’ feedback, the tool was useful and could help improve the success of HF validation projects. Other noteworthy findings, along with recommendations for practice (adoption) and future research, are discussed. On that last note, the author hopes (and suggests) that key stakeholders leverage this work to move past the exploratory stage in which they have been stuck for the last decade; the whys and wherefores have already been asked and answered. The results of this work indicate that the next urgent step is to begin developing metrics around the adoption and standardization of the identified set of emerging best practices and key success factors. We have heard it before: “You can’t improve what you don’t measure.” Well, here is an industry-focused, feasible starting point to do just that.


References
Article
Full-text available
Despite the widespread use of exploratory factor analysis in psychological research, researchers often make questionable decisions when conducting these analyses. This article reviews the major design and analytical decisions that must be made when conducting a factor analysis and notes that each of these decisions has important consequences for the obtained results. Recommendations that have been made in the methodological literature are discussed. Analyses of 3 existing empirical data sets are used to illustrate how questionable decisions in conducting factor analyses can yield problematic results. The article presents a survey of 2 prominent journals that suggests that researchers routinely conduct analyses using such questionable methods. The implications of these practices for psychological research are discussed, and the reasons for current practices are reviewed. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
"This paper advocates a validational process utilizing a matrix of intercorrelations among tests representing at least two traits, each measured by at least two methods. Measures of the same trait should correlate higher with each other than they do with measures of different traits involving separate methods. Ideally, these validity values should also be higher than the correlations among different traits measure by the same method." Examples from the literature are described as well as problems in the application of the technique. 36 refs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
Conference Paper
Since the FDA published guidance on the application of human factors engineering to medical devices and combination products, concerns about the quality and success of human factors validation projects have put a strain on key stakeholders. Failed HF validation submissions can have a serious negative impact not only on manufacturers and HF service providers, but also on the regulatory system and patients. Previously, we remarked on the need for alignment between key stakeholders and for strategies that increase the quality and success of HF validation projects. Leveraging the application of project management was recommended for that purpose. However, there is currently no research about the characteristics, practices, and critical success factors of these projects. An online survey instrument, tailored to this specific context, was developed to inform the development of an industry-focused project management maturity assessment tool (which will be Phase II of this research). Here, the high-level, preliminary findings are presented and briefly discussed. This effort contributes much-needed literature regarding the current practices and factors that influence the quality and success of FDA HF validation projects.
Conference Paper
This work is Phase II of a research theme on the topic of human factors validation projects for medical devices and combination products. Initially, a review and analysis of the persisting concerns, and of the implications of failed FDA HF validation projects, took place. One main problem delineated was that key stakeholders (namely procurers and providers of HF services) lack the necessary tools to adapt to current and future demands of a changing and evolving quality system regulation (QSR). Under the QSR, manufacturers are responsible for the assessment and control of critical suppliers, such as HF service providers. However, there is a need for tools that enable integration and alignment so that stakeholders can develop the necessary capabilities. To increase the quality and success of HF validation projects and help HF service providers meet the QSR, an industry-focused project management (PM) maturity assessment tool was proposed. Phase I consisted of a survey that gathered information to help understand practices and key success factors in FDA HF validation projects. This Phase II summarizes the method and process followed to develop the PM maturity assessment tool. An overview and description of the tool and its resulting components is also presented.
Article
Recognizing the role of human factors engineering (HFE) in the development of medical devices and combination products that involve devices, the Food and Drug Administration (FDA) now requires human factors (HF) validations before market approval. Manufacturers are responsible for ensuring their products are safe and effective through the application of HFE. However, key stakeholders are still learning and developing capabilities to adapt to the regulatory component. The lack of the corresponding HF capabilities hinders compliance with the FDA’s expectations, and thus ultimate success. No known previous work has looked into FDA HF validation projects to assess the underlying factors and implications of failed submissions. Applying system dynamics (SD), a causal loop diagram (CLD) was developed. CLDs are useful for the exploration of the causal interactions among factors or variables, as well as the underlying feedback structure of a complex system. This work can help manufacturers better understand the FDA’s HF requirement to enable overall product success. Further, with patient safety as a common goal, HF service providers (HFSPs) and regulators should be aware of the need to ensure the consistent quality of the HF element in premarket submissions.
Article
As part of a comprehensive Quality System Regulation (QSR), the human factors (HF) validation requirement by the Food and Drug Administration (FDA) is a relatively recent topic. Multiple issues and bottlenecks have emerged since the publication of the draft guidance in 2011. The scientific literature on the topic of ‘FDA HF validation requirement’ is mostly focused on HF methods to ensure success from that perspective. However, the development of across-the-board strategies that can address other critical factors is necessary. No previous scientific research has outlined and addressed the problems considering the QSR and the needs of key stakeholders. For that purpose, this effort presents a narrative review of how the HF requirement for medical devices and combination products developed, as well as the issues and the interventions that have taken place to address the bottlenecks. Some essential considerations, such as notable knowledge-based and process-based gaps, are discussed. Similarly, because of the demands of a changing QSR, attention is brought to the need to align key stakeholders, namely manufacturers and HF service providers (HFSPs). Also, the development of an industry (HFSPs) maturity assessment tool and future research for that purpose are proposed.
Book
Despite criticism for their serious shortcomings, maturity models are widely used within organizations. The appropriate applications of these models can lead to organizational and corporate success. Developing Organizational Maturity for Effective Project Management is a critical scholarly publication that explores the successes and failures of maturity models and how they can be applied competently to leadership within corporations. Featuring coverage on a wide array of topics such as project management maturity, agile maturity, and organizational performance, this publication is geared toward professionals, managers, and students seeking current research on the application of maturity models to corporate success.
Article
As part of the development of a comprehensive strategy for structural equation model building and assessment, a Monte Carlo study evaluated the effectiveness of different exploratory factor analysis extraction and rotation methods for correctly identifying the known population multiple‐indicator measurement model. The exploratory methods fared well in recovering the model except in small sample sizes with highly correlated factors, and even in those situations most of the indicators were correctly assigned to the factors. Surprisingly, the orthogonal varimax rotation did as well as the more sophisticated oblique rotations in recovering the model, and generally yielded more accurate estimates. These results demonstrate that exploratory factor analysis can contribute to a useful heuristic strategy for model specification prior to cross‐validation with confirmatory factor analysis.
Article
Investigation of the structure underlying variables (or people, or time) has intrigued social scientists since the early origins of psychology. Conducting one's first factor analysis can yield a sense of awe regarding the power of these methods to inform judgment regarding the dimensions underlying constructs. This book presents the important concepts required for implementing two disciplines of factor analysis: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The book may be unique in its effort to present both analyses within the single rubric of the general linear model. Throughout the book canons of best factor analytic practice are presented and explained. The book has been written to strike a happy medium between accuracy and completeness versus overwhelming technical complexity. An actual data set, randomly drawn from a large-scale international study involving faculty and graduate student perceptions of academic libraries, is presented in Appendix A. Throughout the book different combinations of these variables and participants are used to illustrate EFA and CFA applications. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
A general formula (α) of which a special case is the Kuder-Richardson coefficient of equivalence is shown to be the mean of all split-half coefficients resulting from different splittings of a test. α is therefore an estimate of the correlation between two random samples of items from a universe of items like those in the test. α is found to be an appropriate index of equivalence and, except for very short tests, of the first-factor concentration in the test. Tests divisible into distinct subtests should be so divided before using the formula. The index $\bar{r}_{ij}$, derived from α, is shown to be an index of inter-item homogeneity. Comparison is made to the Guttman and Loevinger approaches. Parallel split coefficients are shown to be unnecessary for tests of common types. In designing tests, maximum interpretability of scores is obtained by increasing the first-factor concentration in any separately-scored subtest and avoiding substantial group-factor clusters within a subtest. Scalability is not a requisite.
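For context, the familiar computational form of coefficient α (a standard expression stated here for reference, not quoted from the abstract above) is, for a test of $k$ items with item variances $\sigma_i^2$ and total-score variance $\sigma_X^2$:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)$$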
Article
An analytic criterion for rotation is defined. The scientific advantage of analytic criteria over subjective (graphical) rotational procedures is discussed. Carroll's criterion and the quartimax criterion are briefly reviewed; the varimax criterion is outlined in detail and contrasted both logically and numerically with the quartimax criterion. It is shown that the normal varimax solution probably coincides closely to the application of the principle of simple structure. However, it is proposed that the ultimate criterion of a rotational procedure is factorial invariance, not simple structure—although the two notions appear to be highly related. The normal varimax criterion is shown to be a two-dimensional generalization of the classic Spearman case, i.e., it shows perfect factorial invariance for two pure clusters. An example is given of the invariance of a normal varimax solution for more than two factors. The oblique normal varimax criterion is stated. A computational outline for the orthogonal normal varimax is appended.
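As a point of reference (a standard formulation, not taken verbatim from the abstract above), the raw varimax criterion seeks the orthogonal rotation that maximizes the sum, across the $m$ factors, of the variance of the squared loadings, where $\lambda_{ij}$ is the loading of variable $i$ on factor $j$ and $p$ is the number of variables:

$$V = \sum_{j=1}^{m}\left[\frac{1}{p}\sum_{i=1}^{p}\lambda_{ij}^{4} - \left(\frac{1}{p}\sum_{i=1}^{p}\lambda_{ij}^{2}\right)^{2}\right]$$

The “normal” variant referenced in the abstract applies the same criterion after dividing each loading by the square root of its variable’s communality.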
Article
Issues related to the validity and reliability of measurement instruments used in research are reviewed. Key indicators of the quality of a measuring instrument are the reliability and validity of the measures. The process of developing and validating an instrument is in large part focused on reducing error in the measurement process. Reliability estimates evaluate the stability of measures, internal consistency of measurement instruments, and interrater reliability of instrument scores. Validity is the extent to which the interpretations of the results of a test are warranted, which depends on the particular use the test is intended to serve. The responsiveness of the measure to change is of interest in many of the applications in health care where improvement in outcomes as a result of treatment is a primary goal of research. Several issues may affect the accuracy of data collected, such as those related to self-report and secondary data sources. Self-report of patients or subjects is required for many of the measurements conducted in health care, but self-reports of behavior are particularly subject to problems with social desirability biases. Data that were originally gathered for a different purpose are often used to answer a research question, which can affect the applicability to the study at hand. In health care and social science research, many of the variables of interest and outcomes that are important are abstract concepts known as theoretical constructs. Using tests or instruments that are valid and reliable to measure such constructs is a crucial component of research quality.
Introduction to the Architecture of the CMMI® Framework
CMMI Architecture Team. (2007). Introduction to the Architecture of the CMMI® Framework. In Technical Note. Carnegie Mellon University. https://apps.dtic.mil/dtic/tr/fulltext/u2/a471060.pdf
Understanding the main phases of developing a maturity assessment model
  • T De Bruin
  • M Rosemann
  • R Freeze
  • U Kulkarni
de Bruin, T., Rosemann, M., Freeze, R., & Kulkarni, U. (2005). Understanding the main phases of developing a maturity assessment model. Maturity Assessment Model, 11. https://eprints.qut.edu.au/25152/
Project Management Maturity & Value Benchmark
  • PM Solutions
PM Solutions. (2014). Project Management Maturity & Value Benchmark. In PM Solutions Research. https://www.pmsolutions.com/articles/PM_Maturity_2014_Research_Report_FINAL.pdf
Ahead of the Curve: Forging a Future-Focused Culture
  • PMI
PMI. (2020). Ahead of the Curve: Forging a Future-Focused Culture. In Pulse of the Profession.
Current Portfolio, Programme, and Project Management Practices
  • PwC
PwC. (2012). Current Portfolio, Programme, and Project Management Practices. In Insights and Trends. https://www.pwc.com.tr/en/publications/arastirmalar/pages/pwc-global-project-management-report-small.pdf
Exploratory or Confirmatory Factor Analysis
  • D D Suhr
Suhr, D. D. (2006). Exploratory or Confirmatory Factor Analysis [Paper 200-31]. Proceedings of the Thirty-First Annual SAS Users Group International Conference. https://doi.org/10.1002/da.20406
Using multivariate statistics
  • B G Tabachnick
  • L S Fidell
Tabachnick, B. G., & Fidell, L. S. (2012). Using multivariate statistics (6th ed.). New York: Harper and Row. https://doi.org/10.1037/022267