HAMSTER METHOD OF PROCESS STATE EVALUATION
Agnieszka Kujawińska, Adam Hamrol,
Maria Piłacińska, Michał Rogalewicz
Poznań University of Technology
Institute of Mechanical Technology
Abstract. The subject of the paper is the diagnosis and evaluation of the
capability of a production process to meet the quality requirements placed on
the products resulting from this process.
The article presents the theoretical assumptions of a method for evaluating
the state of a production process on the basis of a so-called set of process
state measurements (e.g. process parameters, diagnostic signals and events).
The Hamster method makes it possible to construct a certain set of affiliation
functions for a specific group of process state measurements, and then to
express the evaluation of process quality by a single indicator.
The paper presents the scheme of actions for the one-dimensional case.
Keywords. Process state evaluation, quality, Hamster method.
1. INTRODUCTION
In production management, as in every decision-making activity of a human
being, it is important to be able to discover, understand and solve problems.
Managers point out that they have to pay increasing attention to managing
decision making, covering both the operational and strategic activities of a
company. The success of the actions undertaken as a result of these decisions
depends on the following factors: knowledge of the rules and mechanisms which
govern production processes; the ability to react quickly to a specific process
state and anticipate its future states; and the choice of the best decisions for
a given set of criteria.
When making decisions one needs to adopt certain assumptions. According to
the GIGO rule (garbage in, garbage out), adopting wrong assumptions leads to
wrong considerations, wrong conclusions and wrong actions. Low effectiveness of
decisions is always harmful to an enterprise because it may disorganize its
functioning and undermine workers' trust. It is therefore important what tools
for supporting the decision-making process are at one's disposal, and what data
are taken into account when analyses are made.
These tools are broadly understood quantitative methods whose theoretical
foundations were provided by different branches of mathematics. They allow for
analysis and, often, evaluation of a situation on the basis of incomplete
knowledge about a phenomenon (a wide spectrum of statistical methods). They
include: forecasting methods, discriminant analysis, factor analysis,
cluster analysis, multi-dimensional regression, multi-dimensional scaling,
queuing methods and many others. These methods are also helpful in describing
relations between observed phenomena; they often allow for constructing
forecasts with different time horizons [3].
Well-known solutions from the field of production management pay a lot of
attention to various techniques and statistical tools serving to reduce the risk
of making wrong decisions. The statistical methods most frequently used in
managing production processes are: Statistical Process Control (SPC),
Measurement System Analysis (MSA) and Design of Experiments (DOE). In the
literature, one can also find descriptions of analyses using data coming from
production workstations, as well as of inference, and consequently decision
making, with artificial intelligence (AI) techniques (e.g. artificial neural
networks, genetic algorithms, fuzzy logic, etc.). On the one hand, the narrow
specialization of particular applications of AI techniques makes it difficult
to work out a uniform model of production process state evaluation; on the
other hand, using a given technique (for instance neural networks) requires
knowledge from a particular field for the correct training of networks and
interpretation of their results. Implementing theoretically developed solutions
in a dynamic production environment is also difficult [3, 4, 6].
2. PROCESS STATE EVALUATION
The question of evaluating the state of production processes with respect to
quality criteria is often discussed in the scientific literature. For processes
of making machine parts, the quality requirements mostly boil down to
requirements concerning the precision of a product with respect to its size and
shape as well as its outer layer. These features must ensure failure-free
functioning of a product over the period determined by its designer and builder.
One of the means of effectively fulfilling these requirements while maintaining
the economic effectiveness of production is controlling quality over the whole
product life cycle. Controlling quality is based on utilizing data created
during broadly understood quality control. It involves active and dynamic
(adaptive) control of production processes at all production stages: product
concept, design, technical preparation of production, making, and usage. Until
recently, the approach to the problem of ensuring quality involved monitoring
and undertaking controlling actions after completing consecutive stages of production
(e.g. technical control carried out after finishing operations). Modern quality
control is carried out continuously, already during the realization of
production processes. It has the character of emergency actions and
interventions, and its aim is the operational assurance of the required quality
of making. These actions may involve, among others, replacing a tool, correcting
process parameters, tightening control criteria, etc. One of the basic issues
conditioning the correctness of quality control in a company is the ability to
utilize the data generated at the various positions where decisions are made.
Figure 1. Sources of information for actions related to quality control (own preparations).
In general, these decisions may be made on the basis of information obtained
(Fig. 1): from direct measurement of processed parts (measurement made after
completing an operation), from measurement of signals coming from phenomena
accompanying a process (e.g. force signal, temperature, vibrations; measurement
realized on-line), and from observations of events taking place during the
realization of a process (process operator's failure, machine defect).
Information from the measurement of produced parts may be obtained within the
framework of: 100% control, statistical acceptance control, or statistical
process control.
Despite the availability of advanced measurement techniques and of friendlier,
more extensive software for processing control results, the realization of
efficient information feedback between the process of making and the other
elements of the system still poses large practical problems. In many cases,
data from control carried out in the course of production are wasted: they only
serve for the current regulation of a process, and are often collected merely
to satisfy consignees' requirements concerning so-called quality records. In
this respect, statistical techniques of quality control arouse high
expectations; however, they have their weaknesses. For example, process control
charts and other indexes of quality potential do not provide information on the
causes of process disturbances, nor tips concerning corrective actions. This is
caused by the fact that the potential capabilities of statistical process
control tools are not fully utilized. In the literature, there is also no
information indicating implemented, effective solutions that would allow for
using information from the control of parameters, features, events and signals
in the evaluation of process state. According to the authors, this results from
the lack of an effective method of inferring about a process on the basis of
the rich and varied data set which is required for a complex description of a
production process.
3. RESEARCH PROBLEM
The problem of inferring about the process on the basis of a set of known
features is a classification problem. Methods of classifying objects have been
developed for many years by scientists representing many disciplines, such as
signal recognition (e.g. speech, ECG), image recognition (e.g. writing, faces),
diagnostic systems, control systems, learning systems, and systems for
eliciting knowledge from databases. The analysis of the literature in this area
made it possible for the authors to draw the following conclusions [2, 3, 4]:
The first one is the observation that the foundation of all classification
methods is a learning set, needed to build a correct classifier, which is often
incomplete, uncertain, and approximate. A wrong choice (selection) of a
learning set may lead to an incorrect evaluation of the process state. This
results in the necessity to develop methods resistant to incompleteness of
information, and even to inconsistencies occurring in a description.
Learning in the context of artificial intelligence and automatic control is not
understood in the traditional way. The learning process of a system is supposed
to achieve results based on fragmentary knowledge, facilitate improvement,
create new concepts and infer inductively. For example, according to Herbert
Simon (1983): "Learning means changes in the system which are of adaptive
character in the sense that next time they allow the system to perform the same
task or similar tasks more effectively". On the other hand, Donald Michie
(1991) understands the learning system in the following way: "The learning
system utilizes external empirical data in order to create and update the
foundations for improved operation on similar data in the future, expressing
these foundations in an understandable and symbolic form". The creators of the
machine learning discipline highlight that it should lead to specific aims such
as: creating new concepts, detecting unknown regularities in data, forming
decision rules, adopting new concepts and structures with the help of
generalization and analogy, modifying, generalizing and specifying data,
acquiring knowledge through interaction with the environment, and forming
knowledge which is understandable to the human being.
Another observation is that a problem in methods of process evaluation is the
right choice of the language for describing a process, i.e. the data set on the
basis of which an evaluation is made. Most frequently, process evaluation
methods utilize data of a specific type, either qualitative or quantitative.
The last observation is that a problem in methods of classifying a process
security state is that knowledge about the process changes over time: a
description generated on the basis of a given learning sample may, after some
time, prove inefficient and require revision in order to adapt the classifying
algorithm to changing data. Hence, a method should provide the ability to adapt
to the environment through dynamic modification, allowing for correct
functioning in changing conditions.
It is necessary to say that in spite of the rapid development of classification
methods and systems, they are still to some extent dependent on a human being.
The very process of designing a classifying system requires a human being to
determine the ways of acquiring knowledge and how to represent it. Apart from
the stage of creating a model, the following problems occur:
too small or too great a dependence of the system on the environment in which
it is placed, which might lead to incomplete analysis of data or wrong
interpretations,
reliability and correctness of generated conclusions,
incomplete or partially inconsistent data,
lack of identification of domain limitations, which may lead to far-fetched
generalizations and wrong conclusions.
The results of theoretical work and practical research prove the impossibility
of building a fully universal and optimal evaluation (classification) system,
and indicate that the same research methods may differ in efficiency when
applied to different practical problems.
The aim of the authors' research is to work out a method of classifying the
production process state. It was assumed that the developed method will:
generate results and messages which are understandable for a human, i.e.
expressible in the description and mental model assumed by him,
have the ability to learn constantly and adapt to constantly changing
production conditions,
give the possibility to analyze qualitative and quantitative data,
irrespective of their distribution type,
be applicable to practically every production process,
be capable of explaining every new case,
be effective for large data sets.
4. HAMSTER METHOD
The Hamster method is part of an expert system being created within a research
project financed by the Polish Ministry of Science and Higher Education,
no. 1988/T02/2009/37. The method makes it possible to determine the so-called
process security function, which allows for evaluating the security level of a
process as well as forecasting its state.
Figure 2. The concept of the method of process state evaluation with the use of process
security function (own preparations).
The concept of the method of process security state evaluation with the use of
the affiliation function is presented in Fig. 2. Activities connected with
looking for the process security function generally consist of 4 stages:
reducing data obtained at a production workplace and its surroundings,
analyzing results in multi-dimensional space,
analyzing data distribution,
determining the process security function.
In the method, two diagnostic models proposed by Hamrol were used [1]:
a one-dimensional model in the form of an explicit function,
a multi-dimensional model in the form of an artificial neural network.
In the one-dimensional model, the affiliation function is determined as
follows:
The quality of making is evaluated on the basis of a set of features, each of
which is given a grade in the form of an integer k from the set {1, ..., K};
for example 1 (fail grade), 2 (average grade), 3 (excellent). This scale is
mapped onto the interval <0, 1> by the transformation (k - 1)/(K - 1). (Step 1)
Diagnostic signals correlated with a specific feature can be measured, and
appropriate statistical measures can be determined for them, e.g. the
arithmetic mean. The determined measure correlated with a specific feature is
called a Process State Measure (MSP). (Step 2)
M realizations of a process are carried out. At specific time intervals,
selected diagnostic signals are measured, for which the MSPn (n = 1, ..., N)
are determined. The results of MSP measurements are scaled to the interval <0, 1>.
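The two scaling operations above (mapping grades and raw MSP measurements onto <0, 1>) can be sketched as follows. The function names and the choice of min-max scaling for the MSP values are illustrative assumptions, not part of the original method description:

```python
def scale_grade(k, K):
    """Map an integer grade k from {1, ..., K} onto the interval [0, 1]."""
    return (k - 1) / (K - 1)

def scale_msp(values):
    """Min-max scale a series of raw MSP measurements onto [0, 1] (assumed)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(scale_grade(3, 3))             # excellent grade maps to 1.0
print(scale_msp([2.0, 3.0, 4.0]))    # -> [0.0, 0.5, 1.0]
```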
After completing each process realization the process result is evaluated on a
point scale.
Vector of Process State (WSP) is created (Fig 3)
Figure 3. WSP vector (own preparations).
The figure couples, for each of the M process realizations, the process state
measurements MSP1, ..., MSPN with the evaluation k of the quality of making:
MSP11 ... MSP1N  k1
MSP21 ... MSP2N  k2
...
MSPM1 ... MSPMN  kM
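Assembling the WSP from measurement data can be sketched as below; the sample values and variable names are hypothetical:

```python
# Hypothetical data: M = 3 process realizations, each described by
# N = 2 scaled process state measures (MSP) and a quality grade k.
msp_rows = [
    [0.12, 0.30],   # realization 1
    [0.55, 0.48],   # realization 2
    [0.91, 0.88],   # realization 3
]
grades = [1, 2, 3]

# Each WSP row couples the MSP measurements of one realization
# with the grade awarded to its result.
WSP = [row + [k] for row, k in zip(msp_rows, grades)]

for row in WSP:
    print(row)
```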
You created this PDF from an application that is not licensed to print to novaPDF printer (http://www.novapdf.com)
24
The values of each MSPn are ordered and grouped in a stemplot. For each MSPn,
the distribution of grades k in each interval is determined. On this basis,
affiliation coefficients vk(p) are determined. They inform about the
probability of receiving the grade k if MSPn is within the interval p. These
coefficients are determined from the relation:
vk(p) = Lk(p) / L(p)
where L(p) is the number of process realizations in which MSPn is in the p-th
interval, and Lk(p) is the number of process realizations with grade k and
measurement MSPn in the p-th interval.
Then, the function F(MSP) is created:
Fp(MSP) = SUM(k = 1, ..., K) of k * vk(p)
where the grades k are taken in their scaled form from <0, 1>.
The function takes values within <0, 1>. It makes it possible to estimate the
expected process grade for a given MSPn value.
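A minimal sketch of determining the affiliation coefficients vk(p) and the expected grade Fp for equal-width intervals is given below. The sample realizations, the interval count, and the use of the scaled grades (k - 1)/(K - 1) inside the sum are illustrative assumptions:

```python
# Hypothetical realizations: (scaled MSP value, grade k from {1, 2, 3}).
realizations = [
    (0.05, 1), (0.12, 1), (0.18, 2),
    (0.45, 2), (0.52, 2), (0.58, 3),
    (0.85, 3), (0.90, 3), (0.95, 3),
]
P, K = 5, 3  # number of intervals p and number of grades k

def interval_of(msp, P):
    """0-based index of the equal-width interval containing msp (assumed)."""
    return min(int(msp * P), P - 1)

# L[p]: count of realizations whose MSP falls in interval p;
# Lk[p][k]: those that additionally received grade k.
L = [0] * P
Lk = [[0] * (K + 1) for _ in range(P)]
for msp, k in realizations:
    p = interval_of(msp, P)
    L[p] += 1
    Lk[p][k] += 1

# Affiliation coefficients: v_k(p) = Lk(p) / L(p).
v = [[Lk[p][k] / L[p] if L[p] else 0.0 for k in range(K + 1)]
     for p in range(P)]

def F(p):
    """Expected scaled grade for interval p."""
    return sum(((k - 1) / (K - 1)) * v[p][k] for k in range(1, K + 1))

print(round(F(0), 3))   # low MSP values -> low expected grade
print(round(F(4), 3))   # high MSP values -> expected grade near 1
```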
For the expected values determined on the basis of the F(MSP) function, a
continuous function is determined in the form:
Fn(MSP) = (1/2) * [1 + (2/pi) * arctg(g(MSP))]
In the easiest case the g(MSP) function is a linear function.
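The easiest case can be sketched as follows. The slope and intercept of the linear g(MSP) are arbitrary illustrative values, and the particular arctg-based normalization onto (0, 1) used here is an assumption, one common way of mapping the real line onto the unit interval:

```python
import math

def g(msp, a=8.0, b=-4.0):
    """Hypothetical linear inner function g(MSP) = a*MSP + b."""
    return a * msp + b

def F_cont(msp):
    """Map g(MSP) from the real line onto the interval (0, 1) via arctg."""
    return 0.5 * (1.0 + (2.0 / math.pi) * math.atan(g(msp)))

for m in (0.0, 0.5, 1.0):
    print(m, round(F_cont(m), 3))
```

With these coefficients the function crosses 0.5 at MSP = 0.5 and approaches 0 and 1 towards the ends of the scale, matching the interpretation of F(MSP) given below.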
Step 1: Quality evaluating [1]
(Figure: process evaluations k = 1, 2, 3 plotted over MSPn (Parameter 1)
versus MSPn (Parameter 2).)
Step 2: Process State Measure (MSP) [1]
(Figure: stemplots of the counts Lk of process evaluations k = 1, 2, 3 in
intervals p = 1, ..., 5 of MSPn for two parameters; e.g. in interval 3 of
MSPn-P1: L(3) = 20, with L1(3) = 5, L2(3) = 10, L3(3) = 5.)
Step 3: Affiliation coefficients vk(p) [1]
(Figure: vk(p) over intervals p = 1, ..., 5 of MSPn for two parameters; e.g.
for interval 3 of MSPn-P1: v1(3) = 5/20, v2(3) = 10/20, v3(3) = 5/20.)
Step 4: Function F(MSP) [1]
(Figure: calculated and experimental values of F(MSP) over MSPn for two
parameters.)
It is the shape of the function for a given MSP which changes according to a
specific pattern in relation to the feature being diagnosed; the MSP
characteristic is correlated with that feature. It is not possible to make a
precise evaluation of a process on the basis of a single measured MSPn.
However, if the F(MSP) value is close to 1, then meeting the quality
expectations is nearly certain. For an F(MSP) value close to 0, the chance of
meeting the quality requirements is minimal.
5. SUMMARY
Nowadays the effective quality control of manufacturing has to be based on
data coming from two basic sources: quality inspection of manufactured parts and
also from measurement and observation of manufacturing process, whereas one of
the key issues conditioning the correctness of this process is the ability to use data
generated on various stands to make decisions. The data describes manufacturing
process which allows to infer about the probability of obtaining, as a result of the
running process, parts that meet quality requirements.
The Hamster method is the possibility of the solutions application in the
manufacturing environment (there was the attention paid on easiness of defining
the decision makers preferences and easiness of results acceptation in the
proposed methods). The issue being elaborated is a part of more comprehensive
research project consisting in elaborating the expert system of process quality.
REFERENCES
[1] Hamrol A., (2000), Process diagnostics as a means of improving the
efficiency of quality control, Production Planning & Control, Vol. 11, No. 8.
[2] Hou T.-H., Liu W.-L., Lin L., (2003), Intelligent remote monitoring and
diagnosis of manufacturing processes using an integrated approach of neural
networks and rough sets, Journal of Intelligent Manufacturing, Vol. 14, No. 2,
239-253.
[3] Huang C.-C., Fan Y.-N., Tseng T.-L., Lee C.-H., Chuang H.-F., (2008), A
hybrid data mining approach to quality assurance of manufacturing process,
IEEE International Conference on Fuzzy Systems, art. no. 4630465, 818-825.
[4] Papakostas N., Mourtzis D., (2007), An approach for adaptability modeling
in manufacturing: analysis using chaotic dynamics, CIRP Annals - Manufacturing
Technology, Vol. 56, No. 1, 491-494.
[5] Tseng T.-L., Jothishankar M.C., Wu T., (2004), Quality control problem in
printed circuit board manufacturing: an extended rough set theory approach,
Journal of Manufacturing Systems, Vol. 23, No. 1, 56-72.
[6] Zhou C., Nelson P.C., Xiao W., Tirpak T.M., Lane S.A., (2001), An
intelligent data mining system for drop test analysis of electronic products,
IEEE Transactions on Electronics Packaging Manufacturing, Vol. 24, No. 3,
222-231.