The use of product information along its entire lifecycle: a
practical framework for continuous development
O. Borgia*, N. Fanciullacci*, S. Franchi*, M. Tucci*
*Department of Industrial Engineering, University of Florence, Viale Morgagni 40-44, 50134, Firenze, Italy
in collaboration with Ferrari S.p.a., Via Abetone Inferiore, 19, 41053 Maranello (MO), Italy
(orlando.borgia@unifi.it, nelson.fanciullacci@unifi.it, serena.franchi@stud.unifi.it, mario.tucci@unifi.it)
Abstract: With the development of IT technologies, a large amount of information can be collected during the product
lifecycle. This information can be useful and can inspire the redesign of a product in order to increase its
performance and reliability. Nevertheless, most companies trace their products only during the design and production
phases, without any effort to collect data during the middle and end of life (e.g. use, maintenance, service, disposal).
The aim of this study is to develop an analytic framework to integrate and analyze data collected along all phases of
a product lifecycle, in order to give continuous feedback to the designers and thus enhance product performance and
reliability. A common database format was defined to collect data from different departments. A systematic graphical
and statistical procedure to analyze the data was implemented; the use of well-known statistical inference
methodologies such as individual value plots, scatterplots, boxplots, distribution fitting, analysis of variance,
regression and control charts makes up a framework able to give important feedback to designers. All of these
techniques have been integrated into a software tool that allows a well-defined and systematic exchange of
information in compliance with a product lifecycle management approach. This tool is currently used by the
manufacturer and allows correlations to be found between various experimental measurements, and product
criticalities, anomalous behaviour and out-of-control processes to be identified. The transformation of the proposed
framework into a software tool led to time savings and to higher-quality, standardized reports.
Keywords: Reliability, PLM, closed loop PLM, single item PLM
1. Introduction
The constant demand for efficiency in the production and
commercialization phases has led companies to invest in
methodologies and frameworks aimed at continuous
development along the whole product lifecycle.
The process of constantly upgrading and redesigning a
product is fundamental, especially for companies that: (i)
operate in highly competitive sectors where the product
must outperform that of competitors, (ii) produce goods
that need to evolve constantly in order to bridge the
technology gap and increase performance, and (iii) follow
the philosophy of reliability growth (O’Connor et al., 2012).
For these types of companies it is fundamental to collect
data relative to the product not only during the design and
production phases (e.g. BOM, CAD and CAM, SPC
reports, etc.) but also during the rest of its lifecycle.
According to Jun et al. (2007), design and production
belong to the product beginning of life (BOL); logistics,
distribution, use, service and maintenance belong to the
product middle of life (MOL); while reverse logistics,
disassembly, failure diagnosis, remanufacturing and
disposal make up the product end of life (EOL).
The aim of product lifecycle management (PLM) is to
connect “various product stakeholders over the entire
lifecycle of the product from concept to retirement”.
Moreover, PLM “establishes a set of tools and
technologies that provide a shared platform for
collaboration among product stakeholders and streamlines
the flow of information along all the stages of product life
cycle” (Ameri and Dutta, 2005). In recent years, PLM
has also become established thanks to the development
and the decreasing price of IT resources, such as embedded
systems, wireless connections, remote control, etc. These
technologies make it possible to collect, elaborate and
broadcast information during the whole product lifecycle
(Saaksvuori and Immonen, 2008; Stark, 2011).
PLM has been widely discussed in the literature and there
are many case studies in the industrial field. The majority
of these papers, however, refer to mass-produced products
where information is mainly concentrated in the BOL. In
most cases, in fact, companies are not able to collect data
from MOL and EOL (Niemann et al., 2008) or they cannot
find the right methodology to efficiently analyze and
extract useful information from them (Ameri and Dutta,
2005). Few case studies have been published where data
relative to the final stages of product life (maintenance and
disposal) are also used (Yang et al., 2007). This
information is particularly relevant to determine the
in-field product reliability performance (Yang, 2007). In
fact, the difference between in-house and in-field
reliability can be significant; fault data collected directly
in the field are fundamental to grasp the real reliability
performance (De Carlo et al., 2013; Persona et al., 2009;
Pham, 2005). In addition, for products that involve
mechanical systems it is useful to gather information
concerning wear, operating hours, state of the machine,
etc. while the product is in use, in order to implement
operations such as diagnostics, prognostics and
condition-based maintenance (Gulledge et al., 2010;
Venkatasubramanian, 2005). The disposal phase also
yields useful information (failure times, failure modes,
number of faults, etc.), which is nevertheless rarely used
by the manufacturer (Yang et al., 2007). This information
is needed not only for reliability estimation but also to
extract wear indicators and to develop and validate damage
models. Other parameters that can give information about
the state of health of the product at the end of its life can
be extracted too (Parlikad and McFarlane, 2007). All this
information about system health and failures is used to
better organize maintenance operations (Kiritsis et al.,
2003; Lee et al., 2008).
Igba et al. (2013), Jun et al. (2007) and Kiritsis et al. (2003)
show how a closed-loop PLM can help create a systematic
data flow that redirects information from MOL and EOL in
order to provide knowledge to redesign or upgrade a
product. In the literature, little attention has been given to
case studies where feedback information from MOL and
EOL is systematically acquired and converted into
knowledge useful to the design of a new product release.
The aim of this study was to create a set of analytic
procedures (a framework) able to analyze MOL and EOL
data and to give feedback supporting the redesign phase. A
case study of a continuously evolving small-series product,
whose redesign is intended to improve its reliability, will
be introduced. To implement the systematic analysis of
data from MOL and EOL, a software tool was developed
to integrate data collected during the whole product
lifecycle.
2. Case study
In this paper we analyze the case study of a company that
produces complex mechanical systems in small series. The
manufacturer belongs to the type identified at point (ii) in
chapter 1, whose products need to be continuously
upgraded and redesigned in order to increase their
reliability and performance. The product is composed of
electrical and thermal subsystems and includes: a high-power
battery, control units, hot and cold fluids (water,
air, oil) and their cooling systems, pressurized circuits,
rotating parts and bearings. The peculiarity of this case
study is that each product has its own signature and can
therefore be considered different from previous releases
(an individual, or single item).
In the context where the manufacturer operates, fault
occurrence has disastrous effects (similarly to the aerospace
industry). Once a new component is introduced into the
project, it is fundamental to test its behaviour with bench
tests in order to obtain approval for in-field use. If
a certain number of tests with the new product release are
successful, it becomes the new product reference. If tests
give negative results (a failure occurs, or performance is not
satisfactory), the new component is redesigned and tested
again until it passes the approval test.
Therefore, experimental data need to be exchanged
between the testing and validation phase and the design
phase through a continuous, systematic and bidirectional
data flow. The information coming from the field is, in
this case study, abundant and of high quality because of the
high number of sensors employed (temperature, pressure,
flow, vibration, etc.). In addition, the manufacturer already
has a well-structured measurement procedure, applied
during the BOL, in which quality is checked before and after
the assembly of the final product. Many measurements are
also performed at the end of life (whether the product has
stopped due to failure or to retirement) in order to monitor
the wear of the components and their mechanical and
electrical properties after use.
Faults are monitored and diagnostic analyses such as
geometric measurements, penetration tests, materials
analysis, etc. are performed.
Each single component of the bill of materials is tracked
from the beginning to the end of its life.
All these experimental data are collected by separate
company departments and stored in different databases
with different formats.
With such data abundance and quality, it is necessary to
create a well-structured and systematic framework that
integrates analytic tools in order to extract information
and transform it into knowledge useful to increase the
performance and reliability of the product. The
framework proposed in the next chapter integrates
statistical analysis and an automated procedure to analyze
data from all the company’s departments.
Figure 1: The information flow along the whole product lifecycle. In red, the information loop that leads to the redesign of the new prototype. [Diagram blocks: Concept; Basic design; Final engineering and manufacturing; Operate (test bench, end user); Testing and validation; Design review (Reliability & Performance); On-line monitoring; Failure reporting, lack of performance; Product conformity; Innovation; Prototyping engineering; Detailed design; Disposal (disassembly); Hard/soft failure reporting.]
3. Framework
In order to support the data analysis and monitoring
requested by designers, a framework is introduced. The
aim of the framework is to collect and streamline a series
of graphical and statistical tools for data analysis, which
can be used in series or in parallel according to the
analyst’s needs.
The described process allows the loops between BOL,
MOL and EOL to be fed with the right data in order to
support the redesign of the product. From another
perspective, the conversion of feedback data into
knowledge leads the EOL phase of a prototype to coincide
with the BOL of its new release, as shown in Figure 1.
The proposed framework consists of several steps that
will be explained below.
The first phase consists of the automatic standardization
of the file format shared by all the stakeholders; an
interchangeable and easy-to-share format was designed
using Visual Basic® and MSSQL®.
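As a purely illustrative sketch of this standardization step (the actual implementation uses Visual Basic® and MSSQL®, not shown here), the following Python snippet maps hypothetical department exports onto a common schema and stores them in a shared table; all file names and column names are invented.

```python
import sqlite3
import pandas as pd

# Hypothetical mapping from each department's column names to the shared schema
COLUMN_MAPS = {
    "quality_dept.csv": {"PartID": "component_id", "Meas": "value", "Date": "timestamp"},
    "test_bench.csv":   {"id": "component_id", "reading": "value", "time": "timestamp"},
}

def load_department_file(path: str) -> pd.DataFrame:
    """Read one department export and rename its columns to the common schema."""
    df = pd.read_csv(path).rename(columns=COLUMN_MAPS[path])
    df["source"] = path  # keep track of where each record came from
    return df[["component_id", "value", "timestamp", "source"]]

def build_shared_database(paths, db_path="lifecycle_data.db"):
    """Merge all department exports into a single standardized table."""
    merged = pd.concat([load_department_file(p) for p in paths], ignore_index=True)
    with sqlite3.connect(db_path) as conn:
        merged.to_sql("measurements", conn, if_exists="replace", index=False)
    return merged

if __name__ == "__main__":
    build_shared_database(list(COLUMN_MAPS))  # the department files are hypothetical
```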
In the second phase, the analyst can plot the data using
several types of graphs. The aim is to highlight any possible
abnormal behaviour, deviation from baselines,
out-of-control processes, outliers, etc. Graphical tools are
an easy but effective way for any type of operator
(designers, workers, engineers, managers, etc.) to identify
non-standard states of the system, of the production line
or of the measurement procedure, sensor malfunctions,
causal relations among variables, etc.
The third phase allows any anomaly detected in the
previous phase to be statistically quantified, and correlation
models among variables to be sought (e.g. the relationship
between the material of a component and its state of wear
at the end of its life). The tools used in this phase are both
graphical and analytical. The latter are statistical inference
techniques by which hypotheses on the mean, standard
deviation, median, etc. are tested at the chosen confidence
level (in this case 95%). Dependency and causal relations
can be investigated through the aforementioned
methodologies and the outcomes can be used to identify
design directions to improve the product.
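For instance, a hypothesis test on the mean of a measured dimension against its nominal design value, at the 95% confidence level adopted by the framework, could be sketched as follows (a minimal Python example with invented data; the paper's tool performs its analyses in MatLab®).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
measurements = rng.normal(12.06, 0.05, 25)  # invented wear measurements [mm]
nominal = 12.00                             # hypothetical design target [mm]

# Two-sided one-sample t-test at the 95% confidence level
t_stat, p_value = stats.ttest_1samp(measurements, nominal)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("the mean deviates significantly from the nominal value")
else:
    print("no significant deviation from the nominal value detected")
```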
These types of analysis can be time consuming because of
the sample size and the number of variables involved.
Moreover, it can be very hard to find the optimal model in
a limited time. To solve this problem, a heuristic
algorithm was also implemented in the framework, and
the operator can choose which methodology to use for the
correlation analysis. In these cases, methodologies typical
of optimization problems are preferred: they do not
always converge to the best result, but they return the
best solution among those analysed up to the moment the
algorithm is stopped.
The final step consists of organizing a clear,
standardized and intelligible output that can be easily used
to produce a report of the analysis performed. This
document is ready to be forwarded to the other product
stakeholders to disseminate the knowledge and the results
that emerged.
4. Results
To obtain the feedback needed to implement a closed-loop
strategy, it is necessary to create a framework designed and
shared with all the actors that may look at analysis reports
and results (H.-B. Jun et al., 2007; Hong-Bae Jun et al.,
2007).
For this purpose, a data analysis software tool that includes
and organizes the various stages of the framework described
in the previous chapter was implemented (figure 2). The
software was developed using MatLab® and was divided
into different sections. In the main window, the left area
contains the definition of the variables, while in the right
area the user can choose which type of analysis to
perform.
Figure 2: Interface of the software that includes the
proposed framework.
The acquisition process implemented in a section of the
software generates a sheet file that contains discrete and
continuous numerical data (sample size, geometrical
measures, product layouts, IDs, etc.). The sheet file is
stored in a local database created to standardize the data.
To perform the graphical analysis described in phase two,
control charts (figure 3), individual value plots (figure 4)
and box plots (figure 5) were implemented. They can be
used to monitor the temporal trend of variables,
highlighting possible deviations or out-of-control
processes. When using control charts, the lower and upper
control limits can be determined by default by MatLab®
rules or defined by the user. An example of a control chart
is shown in figure 3. In this chart, the horizontal axis shows
the subgroup temporal sequence, while the vertical axis
shows the value of the measurement for each subgroup.
Points are connected by a blue line representing the
analysed data, while the red lines represent the upper and
lower control limits.
Individual value plots with box plots (figure 4) are also
used to monitor measurement trends. In this case, red
dots represent the individual values of the elements of the
subgroups. Tolerance limits relating to the analysed
measures are shown in green.
Figure 3: Example of a control chart.
Figure 4: Example of an individual value plot used to
monitor the trend of subgroup measurements.
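As a minimal, illustrative sketch of how such control limits can be computed (in Python with simulated data; the authors' tool relies on MatLab® charting functions), the snippet below builds an X-bar chart with approximate three-sigma limits, omitting the unbiasing constants used in standard SPC tables.

```python
import numpy as np
import matplotlib.pyplot as plt

def xbar_chart(subgroups):
    """Plot subgroup means with approximate three-sigma control limits.

    subgroups: 2D array, one row per subgroup of repeated measurements.
    """
    subgroups = np.asarray(subgroups, dtype=float)
    means = subgroups.mean(axis=1)                       # one point per subgroup
    grand_mean = means.mean()
    sigma_within = subgroups.std(axis=1, ddof=1).mean()  # average within-subgroup spread
    n = subgroups.shape[1]
    ucl = grand_mean + 3 * sigma_within / np.sqrt(n)     # upper control limit
    lcl = grand_mean - 3 * sigma_within / np.sqrt(n)     # lower control limit

    plt.plot(means, "b-o", label="subgroup mean")
    plt.axhline(ucl, color="r", label="UCL")
    plt.axhline(lcl, color="r", label="LCL")
    plt.xlabel("subgroup (temporal sequence)")
    plt.ylabel("measured value")
    plt.legend()
    plt.show()
    return ucl, lcl

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(10.0, 0.5, size=(20, 5))  # 20 simulated subgroups of 5 measurements
    xbar_chart(data)
```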
If some values show an abnormal behaviour compared to
the rest of the population, it is necessary to investigate the
source of the deviation using statistical methodologies. To
analyse elements or subgroups that have distinctive
characteristics compared to the rest of the population,
graphical tools such as box plots (figure 5), individual
value plots (figures 4 and 6), 2D or 3D histograms, and
statistical tests such as one-way or n-way ANOVA were
implemented. In figures 4, 5 and 6, the horizontal and
vertical axes represent factors/treatments and the value of
the monitored variable, respectively.
In the individual value plots, values are drawn in red and
the size of each dot depends on its frequency. The dashed
line connects the average value of each factor. In particular,
analysis of variance (ANOVA), together with other
hypothesis tests, makes it possible to check whether there
is a statistically significant difference between subgroup
measurements at a certain confidence level, typically 95%.
These tests are introduced and used to explore the nature
of the data and explain the variance highlighted in each
subgroup. ANOVA hypothesis statements and assumptions
are tested and a concise and easy-to-interpret output is
produced. Using this format, each user is able to
understand which model fits the data best.
Figure 5: Example of a box plot, which can highlight the
presence of sub-populations.
Figure 6: Example of an individual value plot, which can
highlight the presence of sub-populations.
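A minimal example of such a test, a one-way ANOVA at the 95% confidence level comparing three measurement subgroups, is sketched below in Python with invented data (the authors' software performs the equivalent analysis in MatLab®).

```python
import numpy as np
from scipy import stats

# Invented example: the same dimension measured on three production batches
rng = np.random.default_rng(1)
batch_a = rng.normal(5.00, 0.02, 30)
batch_b = rng.normal(5.00, 0.02, 30)
batch_c = rng.normal(5.03, 0.02, 30)  # this batch is deliberately shifted

f_stat, p_value = stats.f_oneway(batch_a, batch_b, batch_c)

alpha = 0.05  # 95% confidence level used throughout the framework
if p_value < alpha:
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}: subgroup means differ significantly")
else:
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}: no significant difference detected")
```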
Another software section performs the search for linear
and nonlinear correlations. All possible regression models
are generated and classified using the corrected AIC as the
goodness-of-fit estimator. This parameter is preferred to
the adjusted R-squared since it helps overcome the
overfitting problem (Akaike, 1974). The tool satisfies the
need to obtain clear results in very short times, presenting
them graphically through summary tables and 2D and 3D
scatterplots (figure 7). These graphs help the user identify
the presence of a correlation between two analysed
variables. The experimental data are displayed in red and
the estimated regression line, calculated with the
least-squares method, is displayed in blue.
Figure 7: Regression plot with the linear interpolation line.
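To illustrate the ranking criterion, the sketch below computes the corrected AIC (AICc) from the residual sum of squares of each least-squares fit and ranks all predictor subsets; it is a simplified Python example with simulated data, not the authors' MatLab® implementation, and it ignores nonlinear terms and interactions for brevity.

```python
import numpy as np
from itertools import combinations

def aicc(rss, n, k):
    """Corrected Akaike Information Criterion for a least-squares fit
    with n observations and k estimated parameters."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def rank_models(X, y):
    """Fit every subset of predictors by least squares and rank them by AICc."""
    n, p = X.shape
    results = []
    for size in range(1, p + 1):
        for cols in combinations(range(p), size):
            design = np.column_stack([np.ones(n), X[:, cols]])
            beta, res, *_ = np.linalg.lstsq(design, y, rcond=None)
            rss = res[0] if res.size else np.sum((y - design @ beta) ** 2)
            results.append((aicc(rss, n, design.shape[1]), cols))
    return sorted(results)  # lowest AICc first

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(80, 4))
    y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.3, size=80)
    best_aicc, best_cols = rank_models(X, y)[0]
    print("best predictor subset:", best_cols, "AICc =", round(best_aicc, 2))
```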
Also, to check the assumptions of ANOVA and regression,
plots relative to residual distributions, histograms,
homoscedasticity, etc. are produced (figure 8). The
top-left panel of the figure shows the distribution of the
residuals, while the bottom-left panel shows their
histogram. The top-right panel shows the residuals as a
function of the estimated variable, and the bottom-right
panel shows the residuals as a function of the data sample.
Figure 8: Example of Residual Plot.
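A compact way to reproduce such a four-panel residual diagnostic (distribution of residuals, histogram, residuals versus fitted values, residuals versus observation order) is sketched below; this is an illustrative Python layout with invented data, mirroring the panel arrangement described above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def residual_panel(residuals, fitted):
    """Four standard residual diagnostics arranged as in Figure 8."""
    fig, ax = plt.subplots(2, 2, figsize=(8, 6))
    stats.probplot(residuals, plot=ax[0, 0])   # distribution of residuals
    ax[0, 1].scatter(fitted, residuals, s=10)  # residuals vs fitted values
    ax[0, 1].axhline(0, color="r")
    ax[1, 0].hist(residuals, bins=15)          # histogram of residuals
    ax[1, 1].plot(residuals, "o-")             # residuals vs observation order
    ax[1, 1].axhline(0, color="r")
    plt.tight_layout()
    plt.show()

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.uniform(0, 10, 60)
    y = 2.0 * x + rng.normal(scale=1.0, size=60)  # invented linear data
    beta = np.polyfit(x, y, 1)
    fitted = np.polyval(beta, x)
    residual_panel(y - fitted, fitted)
```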
As anticipated, two ways to seek the best regression model
were implemented in the software. The first one
enumerates all the possible linear and nonlinear
combinations of the predictors, including variable
interactions. This procedure can be highly time consuming.
To make the analysis effective and to obtain the results in
a shorter time, genetic algorithms were also implemented.
In particular, each regression model can be easily
represented with a chromosome composed of a string of
binary digits. Within the chromosome, a true value
indicates the presence of a coefficient in the regression
model, while a false value represents its absence.
Combining two chromosomes always yields a feasible
solution, so no constraint handling is needed. In this way,
both methodologies are available in the software within a
user-friendly interface where just a few parameters need to
be set (regression order and number of coefficients).
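A minimal sketch of this chromosome encoding is given below (in Python; the actual tool uses MatLab® genetic algorithm facilities): each bit switches one candidate regression term on or off, the fitness is the AICc of the resulting least-squares fit, and crossover and mutation are deliberately simplified.

```python
import numpy as np

def fitness(chromosome, terms, y):
    """AICc of the least-squares model built from the active terms only."""
    active = [t for bit, t in zip(chromosome, terms) if bit]
    n = len(y)
    design = np.column_stack([np.ones(n)] + active) if active else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    k = design.shape[1]
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def evolve(terms, y, pop_size=30, generations=50, p_mut=0.05, rng=None):
    """Search for the chromosome (term subset) with the lowest AICc."""
    rng = rng or np.random.default_rng()
    n_terms = len(terms)
    pop = rng.integers(0, 2, size=(pop_size, n_terms))
    for _ in range(generations):
        scores = np.array([fitness(c, terms, y) for c in pop])
        parents = pop[np.argsort(scores)][: pop_size // 2]   # keep the best half
        cut = rng.integers(1, n_terms, size=len(parents))
        children = np.array([np.concatenate([parents[i, :c], parents[-i - 1, c:]])
                             for i, c in enumerate(cut)])    # one-point crossover
        mutate = rng.random(children.shape) < p_mut
        children = np.where(mutate, 1 - children, children)  # bit-flip mutation
        pop = np.vstack([parents, children])
    return min(pop, key=lambda c: fitness(c, terms, y))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.normal(size=(100, 3))
    # candidate terms: linear, quadratic and one interaction term (all invented)
    terms = [X[:, 0], X[:, 1], X[:, 2], X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]]
    y = 2 * X[:, 0] - 1.5 * X[:, 0] * X[:, 1] + rng.normal(scale=0.2, size=100)
    print("selected terms (bit mask):", evolve(terms, y, rng=rng))
```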
In order to understand which methodology is more
efficient as a function of the number of variables considered
and of the number of records, a comparison of the
processing times was performed. Results (figure 9) show
that genetic algorithms (blue line) are faster (ordinate)
than the enumerative solution (red line) regardless of
the complexity of the model (abscissa).
Figure 9: Comparison between processing time of genetic
algorithms (blue) and the enumeration method (red).
5. Conclusions
The purpose of this paper was to analyse data from all the
phases of the product lifecycle so that they can be used as
knowledge to redesign the product in order to improve its
performance and its reliability. The case study of a
company that produces complex mechanical systems was
considered. The manufacturer follows a rigorous
measurement procedure before and after the assembly
phase, during use, and at the end of the single-item
life. Therefore, the data are abundant and of high quality,
and are stored in different databases with different formats.
For this purpose, a framework was developed; its format
was discussed with all the operators involved during the
product lifecycle. The framework establishes a series of
standardized analytical procedures to be implemented in
accordance with closed-loop product lifecycle
management (H.-B. Jun et al., 2007; Hong-Bae Jun et al.,
2007; Kiritsis, 2011).
Furthermore, the data flow can be used to improve the
product reliability performance, proactively (designing for
reliability) or reactively (using field feedback as baseline
information to solve technical problems). The
development of the proposed systematic approach is
necessary to ensure the correct timing between the design
and test phases, and to have the proper reactivity to manage
and implement corrective actions (for example, after the
abnormal behaviour of a product).
To support the proposed framework, a software tool that
can automate and speed up all the analyses specified in the
framework was developed.
With the proposed framework and the developed
software it was possible to standardize data from various
departments and use them effectively to monitor and
redesign the product.
This methodology was successfully introduced along the
product lifecycle in accordance with the requirements of
the company and with the actors involved.
The tool brought benefits in terms of result quality
because of the structured format of the output produced
by the software.
The time spent obtaining feedback from a different
product stakeholder and the time needed to exchange
information between stakeholders were reduced by about
60% with respect to the previous procedure.
Bibliography
Akaike, H. (1974). A new look at the statistical model identification. IEEE Trans. Autom. Control 19, 716-723. doi:10.1109/TAC.1974.1100705
Ameri, F., Dutta, D. (2005). Product Lifecycle Management: Closing the Knowledge Loops. Comput.-Aided Des. Appl. 2, 577-590. doi:10.1080/16864360.2005.10738322
De Carlo, F., Tucci, M., Borgia, O., Fanciullacci, N. (2013). Service demand forecasting through the systemability model: a case study. Int. J. Eng. Technol. 5(5), 4312-4319.
Gulledge, T., Hiroshige, S., Iyer, R. (2010). Condition-based maintenance and the product improvement process. Comput. Ind. 61, 813-832. doi:10.1016/j.compind.2010.07.007
Igba, J.E., Alemzadeh, K., Gibbons, P.M., Friis, J. (2013). A Framework for Optimizing Product Performance Through Using Field Experience of In-Service Products to Improve the Design and Manufacture Stages of the Product Lifecycle. In: Azevedo, A. (Ed.), Advances in Sustainable and Competitive Manufacturing Systems, Lecture Notes in Mechanical Engineering. Springer International Publishing, pp. 15-27.
Jun, H.-B., Kiritsis, D., Xirouchakis, P. (2007). Research issues on closed-loop PLM. Comput. Ind. 58, 855-868. doi:10.1016/j.compind.2007.04.001
Jun, H.-B., Shin, J.-H., Kiritsis, D., Xirouchakis, P. (2007). System architecture for closed-loop PLM. Int. J. Comput. Integr. Manuf. 20, 684-698. doi:10.1080/09511920701566624
Kiritsis, D. (2011). Closed-loop PLM for intelligent products in the era of the Internet of things. Comput.-Aided Des. 43, 479-501. doi:10.1016/j.cad.2010.03.002
Kiritsis, D., Bufardi, A., Xirouchakis, P. (2003). Research issues on product lifecycle management and information tracking using smart embedded systems. Adv. Eng. Inform. 17, 189-202. doi:10.1016/j.aei.2004.09.005
Lee, S.G., Ma, Y.-S., Thimm, G.L., Verstraeten, J. (2008). Product lifecycle management in aviation maintenance, repair and overhaul. Comput. Ind. 59, 296-303. doi:10.1016/j.compind.2007.06.022
Niemann, J., Tichkiewitch, S., Westkämper, E. (2008). Design of Sustainable Product Life Cycles. Springer Science & Business Media.
O’Connor, P.D.T., Kleyner, A. (2012). Practical Reliability Engineering. John Wiley & Sons.
Parlikad, A.K., McFarlane, D. (2007). RFID-based product information in end-of-life decision making. Control Eng. Pract. 15, 1348-1363. doi:10.1016/j.conengprac.2006.08.008
Persona, A., Sgarbossa, F., Pham, H. (2009). Systemability function to optimisation reliability in random environment. Int. J. Math. Oper. Res. 1, 397-417.
Pham, H. (2005). A new generalized systemability model. Int. J. Perform. Eng. 1, 145.
Saaksvuori, A., Immonen, A. (2008). Product Lifecycle Management. Springer Science & Business Media.
Stark, J. (2011). Product Lifecycle Management. Springer London.
Venkatasubramanian, V. (2005). Prognostic and diagnostic monitoring of complex systems for product lifecycle management: Challenges and opportunities. Comput. Chem. Eng. 29, 1253-1263. doi:10.1016/j.compchemeng.2005.02.026
Yang, G. (2007). Life Cycle Reliability Engineering. John Wiley & Sons.
Yang, X., Moore, P.R., Wong, C.-B., Pu, J.-S., Kwong Chong, S. (2007). Product lifecycle information acquisition and management for consumer products. Ind. Manag. Data Syst. 107, 936-953.