DATA-DRIVEN APPROACH TO ESTIMATE MAINTENANCE LIFE CYCLE COST OF ASSETS
Sarah Lukens
GE Digital
Roanoke, VA, USA
Matt Markham
GE Digital
Roanoke, VA, USA
Manjish Naik
GE Digital
Roanoke, VA, USA
Marc Laplante
GE Digital
Roanoke, VA, USA
ABSTRACT
Different participants in the supply chain of an industrial asset, from original equipment manufacturer (OEM) to
owner/operator (O/O), know more than others about significant aspects of the asset. Sharing of information between these
participants is necessary to most effectively manage a product or asset for all stakeholders involved. In particular, one type
of data generated about an asset during its lifecycle is maintenance data. Field maintenance data collected over the usage
of a product provides valuable information about its failure patterns and performance in different operating contexts that can
benefit all. However, maintenance data by itself typically has data quality issues and needs to be understood and processed
in order for information of value to be extracted and used. In this article, we present a case study of how maintenance data
from the CMMS/EAM can be processed to return information that can be used to benefit everyone in the supply chain.
INTRODUCTION
Lifecycle costs for industrial assets can mean different things from different perspectives of the supply chain. Product lifecycle
management (PLM) is the activity of effectively managing products across the product lifecycle (which spans phases from design, to
production, logistics, and maintenance to disposal/obsolescence of a product) (1). For the users of an industrial asset, lifecycle cost
analysis (LCC) measures the total cost of ownership (TCO), taking many different factors into account from the stages of an asset’s
lifespan, such as initial costs, annual operating and maintenance costs, and decommissioning costs. These two perspectives are
complementary, but how information is used depends on the stakeholder.
Different stakeholders in the supply chain include the original equipment manufacturer (OEM), the owner/operator (O/O) of the
asset, as well as middle parties such as dealers and suppliers. In practice, there is often an asymmetry of information between the
different participants, which is characterized by each of the participants knowing more than others about significant aspects of the asset.
For instance, the OEM knows more about the design characteristics and performance capabilities of the equipment it manufactures. Dealers know more about parts and services, local and regional dynamics that affect the sale of new whole goods, and overhaul records. The O/O knows more about how the fleet was operated, for how long, and the conditions under which it was operated, including dispatch and production data, utilization rates, and scheduled and unscheduled downtime. Additionally, the O/O knows more about how the fleet was serviced and maintained, including parts consumption and labor.
Effective PLM depends on collaboration among the different participants in the supply chain and the sharing of information from
the data. OEMs can benefit from understanding the gap between how a product is intended to be used, and how it is actually used by
the O/O. Information about an individual asset in its operating context throughout its useful life can inform the product lifecycle for the
OEM such as for improving designs in new products or versions, improving the quality of product production, and for creating and
validating pricing structures. From an O/O perspective, benefits of collectively using information include reduced unplanned downtime, better-optimized planned downtime, incremental capacity utilization, and improved certainty about the TCO. From a dealer perspective, benefits include increased parts and services revenue as well as increased opportunities for managed services.
Maintenance work order data contains information about failure patterns and maintenance activities of an asset through its lifecycle.
This data has the potential to generate actionable intelligence as well as field usage information which can be useful for all members of
the supply chain. However, simply sharing raw data will not by itself deliver these rewards. Adaptable work processes in which actionable information can be shared effectively across the supply chain need to be developed (2) (3). To extract actionable information, the gap between the raw collected maintenance data and actionable insights needs to be addressed.
This paper discusses challenges in maintenance data and proposes data-driven analytical approaches for addressing these challenges in order to extract relevant information from maintenance work orders that can be shared between the different participants of
the supply chain. This paper focuses specifically on estimating the annual costs around an asset from historical data and understanding
the costs and reliability from different failure modes from observed field data. We illustrate these concepts with a simple case study
comparing simulated life cycle costs between two comparable asset models in a system, which demonstrates both the challenges that
need to be considered to extract actionable insights as well as showing what this information could look like and how it could benefit
all stakeholders.
BACKGROUND - FIELD DATA FOR ANNUAL MAINTENANCE LIFECYCLE COSTS
Annual lifecycle costs include the costs of maintenance (both corrective and proactive), production losses from downtime, and other
regularly occurring activities that could incur costs or lost opportunity to produce. Maintenance data from Enterprise Asset Management
(EAM) or Computerized Maintenance Management systems (CMMS) contains information about work tasks such as planning,
scheduling, and reporting (4). The information in the CMMS/EAM contains records of all maintenance activities and costs across asset
fleets, but challenges to using this information directly arise due to data quality and consistency issues. Different data quality challenges are reviewed in (5) (6) (7) (8) (9) (10) (11). Some key data quality challenges relevant to this study include missing breakdown indicators (so it is unknown which events are failure events), missing and inconsistent failure modes, and the unstructured nature of manufacturer and model nomenclature across a large asset registry.
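To make these issues concrete, the sketch below shows one way such data quality checks might be profiled in Python. It is only an illustration, not the process used in this study; the file name and column names (work_orders.csv, breakdown_indicator, failure_mode, manufacturer) are hypothetical placeholders rather than any CMMS/EAM or Asset Answers schema.

    import pandas as pd

    # Load an exported work order history (hypothetical file and column names).
    wo = pd.read_csv("work_orders.csv")

    # Share of records missing the breakdown indicator (is the event a failure?).
    missing_breakdown = wo["breakdown_indicator"].isna().mean()

    # Share of records missing a coded failure mode.
    missing_failure_mode = wo["failure_mode"].isna().mean()

    # Unstructured nomenclature: count raw spelling variants of the manufacturer field.
    variants = wo["manufacturer"].str.strip().str.upper().value_counts()

    print(f"Missing breakdown indicator: {missing_breakdown:.1%}")
    print(f"Missing failure mode:        {missing_failure_mode:.1%}")
    print("Most common manufacturer spellings:")
    print(variants.head(10))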
In our case study, we show how historical maintenance data can be used to estimate annual lifecycle costs and the considerations
and assumptions made along the way. The GE Asset Answers database aggregates work history data from many industrial facilities
around the world by asset type, manufacturer, as well as many other characteristics. This data is anonymized and made available to
subscribers who can compare themselves against peer data. This case study is part of a broader effort to develop ways that the maintenance data in Asset Answers can be made more valuable to all participants and to show how information sharing benefits all. We specifically compare two similar makes and models of centrifugal pumps from different peers in the data, evaluate the data quality, and use natural language processing to predict which events are failures and to structure the unstructured text. We use this information to estimate metrics to
inform a system reliability model and run a Monte Carlo simulation to compare annual lifecycle costs by risk events. A similar workflow
was used in (12) to estimate the contribution of a certain category of component failures to system reliability. To protect proprietary
information, all variables have been anonymized and age has been scaled.
CASE STUDY
Out of over 65,000 repair events for over 6,200 centrifugal pumps at 22 different companies over a 4-year period in the Asset Answers database, we identified 2 comparable makes and models, AIC pumps and RELIABLE pumps, with 8,000 repair records against these two models. The next step is to identify which of these repair events are failures. This process and its considerations are described in (13), where we use a classifier in the GE Digital APM commercial software package that predicts whether a repair event was a failure or not. Of the 8,000 repair events, about 5,800 were predicted to be failure events. For this dataset, common themes among the repairs that were not predicted as failure events were either indeterminable text or routine procedures such as inspections and servicing. A few examples are shown in Table 1.
Table 1 Example work order descriptions demonstrating failure classification of repair events for centrifugal pumps

Work description                            | Is A Failure?
------------------------------------------- | -------------
Seal is leaking badly                       | True
Block valve is broken open and inoperable   | True
00120-Pump 1 Work Request                   | False
Check impeller size                         | False
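The failure classifier itself is part of the GE Digital APM package and is not reproduced here. Purely as an illustrative stand-in, the sketch below shows one common way such a text classifier could be built, assuming scikit-learn and a small labeled sample in the spirit of Table 1; the labels and new descriptions are hypothetical.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Small labeled sample in the spirit of Table 1 (hypothetical labels).
    descriptions = [
        "Seal is leaking badly",
        "Block valve is broken open and inoperable",
        "00120-Pump 1 Work Request",
        "Check impeller size",
    ]
    is_failure = [True, True, False, False]

    # Character n-gram TF-IDF is tolerant of the misspellings and abbreviations
    # common in transactional maintenance text.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(descriptions, is_failure)

    # Predict whether new repair events look like failures.
    print(model.predict(["pump bearing seized, replaced bearing",
                         "routine inspection of pump"]))

In practice the training set would contain thousands of labeled work orders rather than a handful, but the structure of the task is the same.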
Once we had isolated which maintenance events corresponded to failures, text mining was utilized to characterize the failure mode
information. From this dataset, we characterized failure events by maintainable item and failure mechanisms and used text matching to
extract the information. Different approaches for structuring unstructured text are described in detail in (14) and have been compared and studied in (15) (16). Challenges that arise with extracting failure information from maintenance work orders include naturally occurring class imbalance (certain components or failure events occur at greater frequency than others), the possibility of multiple correct labels per observation, and the challenges of transactional text such as misspellings and abbreviations.
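A minimal sketch of the kind of dictionary-based text matching described above is shown below. The keyword dictionary is a hypothetical fragment rather than the one used in this study; a production dictionary would need to cover jargon and abbreviations (e.g., "brg" for bearing) to address the challenges listed.

    import re

    # Hypothetical keyword dictionary mapping terms to failure mode tags.
    TAGS = {
        "seal": "seal failure",
        "bearing": "bearing failure",
        "valve": "valve failure",
        "impeller": "impeller failure",
    }

    def tag_failure_modes(description):
        """Return every failure mode tag whose keyword appears in the text."""
        text = description.lower()
        return [tag for kw, tag in TAGS.items() if re.search(r"\b" + kw, text)]

    print(tag_failure_modes("Seal is leaking badly"))      # ['seal failure']
    print(tag_failure_modes("replace brg and mech seal"))  # misses the 'brg' abbreviation

Note that a single work order can legitimately receive multiple tags, which is the multi-label challenge mentioned above.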
In our case study, the objective is to use the structured data to estimate annual life cycle costs from field data by different risk events. We use a system reliability simulation to estimate the annual lifecycle costs over a 10-year period. The simulation tool used is a Monte Carlo system reliability simulation in the GE APM Reliability commercial software package. We build a simple system reliability (reliability-availability-maintainability, or RAM) model to illustrate how these processes can be used as part of a larger manufacturing process; other, more detailed factors can also be incorporated. The block diagrams are shown in Figure 1.
Figure 1 System reliability scenario models for the two pumps. The system is simplistic but illustrates the framework, and we assume all risks are on the Pump subsystem.
Reliability factors are incorporated using the failure mode information extracted from the data. Availability factors come from the unplanned downtime over the course of the 10-year simulation due to these failure modes. Maintainability is modeled using the time-to-repair (TTR) distribution estimated from the field data. The Society for Maintenance & Reliability Professionals (SMRP) defines TTR as the time needed to restore an asset to its full operational capabilities after a failure (17). We estimate TTR as the difference between the maintenance start and maintenance completion dates on the work order, as an approximation of the time during which the maintenance work was done. The TTR distribution is typically right-skewed, and we model it using the lognormal distribution.
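As an illustration, TTR could be estimated from work order dates and fit to a lognormal distribution as in the following sketch. The column names are assumed placeholders, and the estimates reported in this paper were produced with GE APM tooling rather than this code.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Work order export with start/completion dates (hypothetical column names).
    wo = pd.read_csv("work_orders.csv", parse_dates=["maint_start", "maint_complete"])

    # TTR (days) estimated as completion date minus start date.
    ttr_days = (wo["maint_complete"] - wo["maint_start"]).dt.total_seconds() / 86400.0
    ttr_days = ttr_days[ttr_days > 0]  # drop zero or negative durations from bad records

    # Two-parameter lognormal fit (location fixed at zero).
    sigma, loc, scale = stats.lognorm.fit(ttr_days, floc=0)
    mu = np.log(scale)  # mean of the underlying normal distribution

    print(f"MTTR = {ttr_days.mean():.2f} days, lognormal mu = {mu:.2f}, sigma = {sigma:.2f}")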
To map the RAM simulation results to their financial implications, we use costs per repair event estimated from the data and assume a user-specified production loss quantity. We use the average repair cost from the maintenance work orders as the measure of unplanned fixed corrective costs. We make the user-specified assumption that any loss of pump function interrupts production valued at $5,000 per 24-hour day to estimate the production losses.
We assume both the cost to repair and the time to repair do not vary between the two pump models but do vary by failure mode. That is, we assume these factors depend more on site-specific maintenance and reliability practices than on the asset make and model, but do depend on the nature of the failure. Make- and model-independent estimated parameters used in our simulation are in Table 2(a).
We identified the 3 most common failure mode groupings to use as risk events. These risk events were seal failures, valve failures, and bearing failures, accounting for 46% of the total failures. For each population (for the 2 makes and models), 2-parameter Weibull distribution parameters were estimated using the probability distribution fitting tool in GE APM Reliability Analytics with maximum likelihood estimation. The estimated shape β and scale η parameters are summarized in Table 2(b).
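For illustration, a 2-parameter Weibull fit by maximum likelihood can be sketched with scipy as follows. The times between failures below are made-up values rather than the case study data, and the paper's estimates come from the GE APM Reliability Analytics fitting tool.

    import numpy as np
    from scipy import stats

    # Made-up operating days between successive seal failures for one pump model.
    tbf_days = np.array([12.0, 35.0, 90.0, 150.0, 220.0, 400.0, 610.0, 950.0])

    # Two-parameter Weibull: fix the location at zero, estimate shape and scale by MLE.
    beta, loc, eta = stats.weibull_min.fit(tbf_days, floc=0)

    print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} days")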
Table 2 Extracted parameters from field data for use in estimating 10-year asset lifecycle costs for two comparable pump models. (a) Maintainability measures depend on failure mode but are assumed independent across the different asset makes and models; (b) comparison of estimated reliability (Weibull distribution) parameters between the two asset makes and models.

Measure/parameter                  | Seal failure | Valve failure
---------------------------------- | ------------ | -------------
Average Corrective Work Cost (USD) | $4,096       | $2,557
MTTR (days)                        | 1.15         | 1.13
TTR Distribution - μ               | 1.26         | 0.82
TTR Distribution - σ               | 1.6          | 1.8

(a) Make and model independent estimated metrics and distribution parameters for RAM simulation
Failure mode    | Parameter      | AIC PUMP | RELIABLE PUMP
--------------- | -------------- | -------- | -------------
Seal failure    | Shape β        | 0.68     | 0.58
                | Scale η (days) | 397      | 213
Bearing failure | Shape β        | 0.88     | 1.24
                | Scale η (days) | 582      | 400
Valve failure   | Shape β        | 0.71     | 0.71
                | Scale η (days) | 424      | 633

(b) Make and model dependent estimated metrics and distribution parameters for RAM simulation
Simulating the RAM model over 10 years produces an estimate of the cost of unreliability per year for each vendor scenario. We ran 1,000 iterations. The cost of unreliability in this scenario is a combination of the unplanned corrective costs and the lost production costs (Table 3). We apply the Net Present Value (NPV) function using an initial investment of $0 and a discount rate of 10%. The NPV represents the sequence of cash flows in today's dollars and shows that AIC pumps are projected to incur lower costs over a 10-year period than RELIABLE pumps. Estimated annual costs can be used alongside other information, such as the purchase price. For example, if AIC pumps have a higher acquisition price, this information can be used to justify the purchase to the O/O.
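The simulation itself was run in GE APM Reliability. As a rough, simplified stand-in, the sketch below shows how a Monte Carlo estimate of the annual cost of unreliability and its NPV could be assembled from the Table 2 parameters, assuming an independent renewal process per failure mode; bearing failures are omitted only because Table 2(a) lists repair cost and TTR values for seal and valve failures.

    import numpy as np

    rng = np.random.default_rng(42)
    HORIZON_YEARS = 10
    PROD_LOSS_PER_DAY = 5_000.0   # user-specified production loss (USD per 24-hour day)
    DISCOUNT_RATE = 0.10
    N_ITER = 1_000

    # Failure mode cost/TTR parameters from Table 2(a).
    MODE_COST_TTR = {
        # mode:  (repair cost USD, TTR lognormal mu, TTR lognormal sigma)
        "seal":  (4096.0, 1.26, 1.6),
        "valve": (2557.0, 0.82, 1.8),
    }
    # Weibull (shape beta, scale eta in days) per model from Table 2(b).
    WEIBULL = {
        "AIC PUMP":      {"seal": (0.68, 397.0), "valve": (0.71, 424.0)},
        "RELIABLE PUMP": {"seal": (0.58, 213.0), "valve": (0.71, 633.0)},
    }

    def simulate_once(weibull_params):
        """One iteration: yearly cost of unreliability over the horizon, returned as NPV."""
        yearly_cost = np.zeros(HORIZON_YEARS)
        for mode, (beta, eta) in weibull_params.items():
            repair_cost, mu, sigma = MODE_COST_TTR[mode]
            t = 0.0  # elapsed days
            while True:
                t += eta * rng.weibull(beta)         # next time to failure (renewal process)
                if t >= HORIZON_YEARS * 365:
                    break
                downtime = rng.lognormal(mu, sigma)  # days out of service
                year = min(int(t // 365), HORIZON_YEARS - 1)
                yearly_cost[year] += repair_cost + downtime * PROD_LOSS_PER_DAY
                t += downtime
        # NPV of the yearly cash flows with a $0 initial investment.
        return sum(c / (1 + DISCOUNT_RATE) ** (y + 1) for y, c in enumerate(yearly_cost))

    for model, params in WEIBULL.items():
        npv = np.mean([simulate_once(params) for _ in range(N_ITER)])
        print(f"{model}: mean NPV of 10-year cost of unreliability ~ ${npv:,.0f}")

This sketch ignores system-level interactions between blocks, planned maintenance, and partial production losses, all of which the commercial RAM model can represent.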
Table 3 Annual cost of unreliability as a sum of unplanned corrective costs and production losses
Year  | AIC PUMP   | RELIABLE PUMP
----- | ---------- | -------------
2018  | $7,088     | $10,057
2019  | $130,462   | $160,024
2020  | $118,755   | $148,669
2021  | $125,740   | $158,307
2022  | $127,608   | $157,620
2023  | $129,818   | $142,474
2024  | $121,852   | $141,726
2025  | $120,593   | $138,163
2026  | $115,824   | $146,259
2027  | $121,992   | $129,293
2028  | $108,041   | $134,457
TOTAL | $1,227,773 | $1,467,049
NPV   | $764,159   | $919,263
In this simulation, the costs are driven by the lost production, which is determined by the availability. We compare the corrective costs as well as the total downtime by failure mode for the two scenarios in Figure 2. Figure 2 shows that the unreliability and unavailability incurred by seal failures are worse for RELIABLE pumps. The corrective cost from bearing failures is worse for AIC pumps, but the total unplanned downtime is greater for RELIABLE pumps. Which pump model will incur the most cost annually depends on the production losses.
Figure 2 Comparison of 10-year risks from the simulation model for the 2 pump models: (a) comparison of corrective costs over 10 years by failure mode, (b) total unplanned downtime. Production losses are driven by unplanned downtime. AIC pumps have greater corrective costs for valve and seal failures than RELIABLE pumps, but RELIABLE pumps have more downtime associated with seal and bearing failures. The cost trade-off of unreliability is determined largely by the production losses.
CONCLUSIONS
The case study in this paper illustrates steps and processes that can be used to process field maintenance data into actionable
information. The outputs from the study were comparative metrics for different failure modes for different models, as well as the output
from simulation models that can be used to make decisions.
Decisions are often made based on upfront costs, but they have long-lasting impacts on the life cycle cost of the system. RAM modeling has been available to support these decisions for many years, but it has traditionally relied on the information available within organizations or from published reference materials. Asset Answers provides a wealth of data that can be used to build high-quality reliability models of operating cost based on actual field performance data.
BIBLIOGRAPHY
1. Stark, J. (2015). Product lifecycle management. Springer, Cham, 1-29.
2. Prajogo, D., & Olhager, J. (2012). Supply chain integration and performance: The effects of long-term relationships,
information technology and sharing, and logistics integration. International Journal of Production Economics, 514-522(1).
3. Li, J., Tao, F., Cheng, Y., & Zhao, L. (2015). Big data in product lifecycle management. The International Journal of
Advanced Manufacturing Technology, 667-684.
4. Gulati, R., & Smith, R. (2013). Maintenance and Reliability Best Practices, Second Edition. Industrial Press, Inc., New York.
5. Lukens, S., Naik, M., Hu, X., Doan, D. S., & Abado, S. (2017). The role of transactional data in prognostics and health
management work processes. Proceedings of the Annual Conference of the Prognostics and Health Management Society. St.
Petersburg, FL, 517-528.
6. Meeker, W. Q., & Hong, Y. (2014). Reliability meets big data: opportunities and challenges. Quality Engineering, 102-116.
7. Hodkiewicz, M., Kelly, P., Sikorska, J., & Gouws, L. (2006). A framework to assess data quality for reliability variables.
Engineering Asset Management. Springer, London. 137-147.
8. Koronios, A., Lin, S., & Gao, J. (2005). A data quality model for asset management in engineering organisations. Proceedings of the 10th International Conference on Information Quality (ICIQ), Cambridge, MA, 27-51.
9. Lin, S., Gao, J., Koronios, A., & Chanana (2007). Developing a data quality framework for asset management in engineering
organisations. International Journal of Information Quality, 100-126.
10. Lukens, S. & Markham, M. (2018). Data science approaches for addressing RCM challenges. SMRP Conference
Proceedings, Orlando, FL.
11. Naik, M. & Saetia, K. (2018). Improving Data Quality By Using Best Practices And Cognitive Analytics. SMRP Conference
Proceedings, Orlando, FL.
12. Hodkiewicz, Melinda, Batsioudis, Z., Radomiljac, T., and Ho, Mark T.W. (2017). Why autonomous assets are good for
reliability - the impact of 'operator-related component' failures on heavy mobile equipment reliability. PHM Society Conference, St.
Petersburg, FL.
13. Lukens, S. & Markham, M. (2018). Data-driven application of PHM to asset strategies. Proceedings of the Annual
Conference of the Prognostics and Health Management Society, Philadelphia, PA.
14. Hodkiewicz, M., & Ho, M. T. W. (2016). Cleaning historical maintenance work order data for reliability analysis. Journal
of Quality in Maintenance Engineering, 146-163(2).
15. Sexton, T., Hodkiewicz, M., Brundage, M. P., & Smoker, T. (2018). Benchmarking for Keyword Extraction Methodologies in
Maintenance Work Orders. PHM Society Conference, Philadelphia, PA.
16. Sexton, T., Brundage, M. P., Hoffman, M., & Morris, K. C. (2017). Hybrid datafication of maintenance logs from AI-assisted
human tags. Big Data (Big Data), 2017 IEEE International Conference on.
17. SMRP Best Practices. (2017). Society for Maintenance & Reliability Professionals (SMRP) Atlanta, GA.