Computers & Industrial Engineering

Published by Elsevier BV

Print ISSN: 0360-8352

Articles


RFID-enabled real-time manufacturing for automotive part and accessory suppliers
  • Conference Paper

August 2010

·

123 Reads

Automotive part and accessory manufacturers (APAMs) at the lower tiers of the automotive vertical have been following leading vehicle assemblers in adopting RFID (Radio Frequency Identification) and ubiquitous computing technologies, aiming to upgrade their manufacturing systems. RFID-enabled real-time traceability and visibility facilitate the implementation of advanced strategies such as Just-In-Time (JIT) lean/responsive manufacturing and mass customization (MC). Being typically small and medium sized, however, APAMs face business and technical challenges summarized by the so-called “three high” problems: high cost, high risk, and a high requirement for technical skills. Based on a series of industrial field studies, this paper establishes an innovative service-oriented business model, grounded in the concept of Product Service Systems (PSS) and RFID gateway technology, for overcoming the “three high” problems.

A hybrid approach of genetic algorithms and local optimizers in cell loading

February 1999

·

29 Reads

In this paper, a potential application of evolutionary programming to cell loading is discussed. The objective is to minimize the number of tardy jobs. The proposed approach is a hybrid three-phase approach: 1) evolutionary programming is used to generate a job sequence, 2) a classical scheduling rule is used to assign jobs to the cells, and 3) Moore's algorithm is applied to the jobs assigned to each cell independently. Experimental results show the impact of the number of cells and the strategy adopted on the number of tardy jobs found. The results also indicate that the hybrid GA-local optimizer approach improves solution quality drastically. Finally, it is also shown that the GA alone can match the performance of the hybrid approach given an increased population size and number of generations.
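
The third phase above applies Moore's (Hodgson's) rule, the classical single-machine algorithm for minimizing the number of tardy jobs, to the jobs assigned to each cell. A minimal Python sketch of that rule on one cell is given below; the job data are hypothetical, and the encoding of the first two phases is not reproduced.

```python
# Moore-Hodgson rule: minimize the number of tardy jobs on a single machine (one cell).
def moore_hodgson(jobs):
    """jobs: list of (job_id, processing_time, due_date) -> (on_time, tardy)."""
    scheduled = []      # jobs kept on time so far, in EDD order
    completion = 0      # completion time of the current on-time set
    tardy = []
    for job in sorted(jobs, key=lambda j: j[2]):           # earliest due date first
        scheduled.append(job)
        completion += job[1]
        if completion > job[2]:                            # last job would be tardy:
            longest = max(scheduled, key=lambda j: j[1])   # drop the longest job so far
            scheduled.remove(longest)
            completion -= longest[1]
            tardy.append(longest)
    return scheduled, tardy

# Hypothetical jobs already assigned to one cell: (id, processing time, due date)
cell_jobs = [("J1", 4, 6), ("J2", 3, 7), ("J3", 2, 8), ("J4", 5, 9)]
on_time, late = moore_hodgson(cell_jobs)
print("on time:", [j[0] for j in on_time], "| tardy:", [j[0] for j in late])
```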

A unified congestion control strategy in ATM networks

June 1994

·

19 Reads

A new congestion control strategy is presented that addresses the current conflict of using the 1-bit cell loss priority field in asynchronous transfer mode (ATM) cells for both service-oriented and congestion-oriented marking. With the new scheme, the call acceptance control, usage parameter control, and space priority functions of an ATM network will be more effective in accommodating different traffic mixtures. Consequently, the population of low-priority cells can be controlled, and more efficient interworking of ATM and traditional subnetworks is possible. Simulations show the benefits of this new priority strategy over the traditional congestion control mechanism.

Optimal expansion of competence sets with multilevel skills

August 2010

·

34 Reads

The purpose of competence set expansion is to find an optimal expansion process at minimal cost and then obtain the required competence set from the acquired competence set to solve a problem. Several models have been proposed to address the competence set expansion problem for only a single decision maker, or for multiple decision makers without considering multilevel skills. However, a practical competence set expansion model should involve multiple decision makers and multilevel skills. This study therefore discusses an optimal expansion model incorporating the competence sets of group decision makers with multilevel skills. The proposed method not only obtains the optimal competence set expansion of all decision makers with maximal total benefit but also obtains all optimal alternatives of the competence set expansion model. Numerical examples are presented to illustrate the usefulness of the proposed method.

Intelligent optimization method for large-scale steady-state systems with fuzzy parameters

November 1992

·

17 Reads

A description of large-scale steady-state systems with fuzzy parameters (LSSFP) is given, and a novel intelligent optimization method (IOMFP) for LSSFP is proposed, together with several theorems on noninferior solutions and the convergence of IOMFP. The effectiveness of IOMFP is illustrated by simulation results for some typical problems. The simulation results show that IOMFP sharply reduces the amount of computation and makes full use of human heuristic knowledge. Owing to the introduction of an intelligent inference and decision machine in the search process, the applicability of the algorithm is greatly improved.

Cell formation in group technology: A new approach

April 1987

·

15 Reads

In the U.S.A., machine-component cluster formation is considered a poor alternative to classification and coding as a planning tool for Cellular Manufacture. This paper introduces a heuristic procedure, the Occupancy Value method, for identifying clusters in a machine-component matrix created from route card data. A unique feature of this method is that it progressively develops block diagonalization starting from the northwest corner of the matrix. The flexibility of the procedure is illustrated in detail through a small example. Another large matrix is analyzed to demonstrate the inherent simplicity of the method. Further extensions of this method to implement the manufacturing cells from these initial clusters are discussed.
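
The Occupancy Value procedure itself is not reproduced here. As a concrete point of reference, the sketch below implements Rank Order Clustering (King's algorithm), a standard block-diagonalization heuristic for a binary machine-component incidence matrix, on a small hypothetical matrix; it is a stand-in for comparison, not an implementation of the paper's method.

```python
import numpy as np

def rank_order_clustering(a, max_iter=20):
    """Iteratively sort rows and columns of a binary machine-component matrix by
    their binary-weighted values (King's Rank Order Clustering)."""
    a = a.copy()
    rows, cols = np.arange(a.shape[0]), np.arange(a.shape[1])
    for _ in range(max_iter):
        row_w = a @ (2 ** np.arange(a.shape[1])[::-1])     # read each row as a binary number
        r = np.argsort(-row_w, kind="stable")
        a, rows = a[r], rows[r]
        col_w = (2 ** np.arange(a.shape[0])[::-1]) @ a     # read each column as a binary number
        c = np.argsort(-col_w, kind="stable")
        a, cols = a[:, c], cols[c]
        if (r == np.arange(r.size)).all() and (c == np.arange(c.size)).all():
            break                                          # both orderings stable: finished
    return a, rows, cols

# Hypothetical 4-machine x 5-part incidence matrix built from route card data
m = np.array([[1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0]])
blocked, machine_order, part_order = rank_order_clustering(m)
print(blocked)
print("machine order:", machine_order, "part order:", part_order)
```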

The application of simulation techniques to information systems analysis

December 1982

·

21 Reads

This paper describes the application of a computer simulation model as a tool to aid in the analysis and design of an information system. Unique to the study is the adaptation of an existing production control simulation model for the purpose of analyzing flows in a hierarchical computer network. The existing simulation model and its method of adaptation are presented. The results of the simulation analysis are utilized by a newly developed file allocation algorithm in the design of a hierarchical information system. The study was performed in close cooperation with several steel manufacturers and initiated under NSF grant ATA 73-07822 AO1.

Total quality management: an approach and a case study

December 1990

·

81 Reads

For the last decade, American companies have been playing catch-up in the area of quality and productivity. Japanese companies and other foreign competitors have moved into markets that were once dominated by American companies by producing higher quality products. The problem to date in the U.S. has obviously not been a lack of resources or documentation on quality and improvement programs, but the misdirection of these programs and the lack of total management commitment. Total Quality Management (TQM) is seen as an effective method that will accomplish the task of higher quality levels and increased productivity. The purpose of Total Quality Management is to implement a process that is long term and continuous, in which all of management participates in establishing continuous improvement initiatives throughout the organization, beginning with their own function in the organization. TQM integrates the fundamental techniques and principles of Quality Function Deployment, Taguchi Methods, Statistical Process Control, Just-In-Time, and existing management tools into a structured approach. The primary objective of this approach is to incorporate quality and integrity into all functions at all levels of the organization. This paper examines the TQM process, philosophy, concepts, and attributes, and how it can be used to develop a “quality-based” culture. The paper also examines the introduction and implementation of the TQM process at an electronics manufacturer.

Application of group technology for design data management

January 1998

·

17 Reads

Group Technology (GT) as a manufacturing philosophy plays a major role in design standardization, manufacturing cell layouts, process planning, purchasing, and manufacturing technology systems design. One of the most effective ways to use GT is to facilitate significant reductions in design time and effort. A design engineer faced with the task of developing a new part design must either start from scratch or obtain an existing drawing from the files and make the necessary changes to conform to the requirements of the new part. The problem of finding a similar design is quite difficult and time consuming for large engineering departments. The objective of this research is to develop a classification and coding system for rapid retrieval of all design data pertaining to the manufacture of machine tools. This paper proposes and details five steps in the development of group technology databases: 1) data collection, 2) data classification, 3) data analysis, 4) data coding, and 5) data querying. This paper also develops a software prototype written in the C language, called the Interactive Design Retrieval System (IDRS), which assists in an efficient design retrieval process. The proposed prototype also facilitates efficient design data management. In conclusion, an assessment and extensions of the results are provided.

STARC 2.0: An improved PERT network simulation tool

December 1991

·

33 Reads

This paper discusses the recent improvements made to the STARC (stochastic time and resource constraints) simulation shareware. STARC, first developed in 1984, is a PERT network simulation tool. It addresses the stochastic time and resource constraints typically encountered in PERT network analysis. The activity time modeling approach used by STARC is discussed. The paper also presents the expert-level heuristic used by STARC to prioritize activities for resource allocation during a scheduling process. An illustrative example of a simulation run with STARC is presented. The program is available as shareware from the author.
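
STARC itself is distributed as shareware and is not reproduced here. The sketch below is a generic Monte Carlo PERT run over a tiny hypothetical activity network: it samples triangular activity times and estimates the chance of meeting a due date, which is the kind of stochastic-time analysis the tool performs (resource constraints and the prioritization heuristic are omitted).

```python
import random

# Hypothetical PERT network: activity -> (predecessors, (optimistic, most likely, pessimistic))
activities = {
    "A": ([],         (2, 4, 8)),
    "B": ([],         (3, 5, 9)),
    "C": (["A"],      (1, 2, 4)),
    "D": (["A", "B"], (4, 6, 10)),
    "E": (["C", "D"], (2, 3, 5)),
}

def project_duration():
    """Sample one realization of the project duration (activities in topological order)."""
    finish = {}
    for act in ("A", "B", "C", "D", "E"):
        preds, (a, m, b) = activities[act]
        start = max((finish[p] for p in preds), default=0.0)
        finish[act] = start + random.triangular(a, b, m)   # sampled activity time
    return max(finish.values())

random.seed(0)
runs, due_date = 10_000, 16.0
durations = [project_duration() for _ in range(runs)]
print(f"mean duration: {sum(durations) / runs:.2f}")
print(f"P(finish by {due_date}): {sum(d <= due_date for d in durations) / runs:.2f}")
```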

A 3-opt Based Simulated Annealing Algorithm for Vehicle Routing Problems

December 1991

·

204 Reads

Simulated annealing is combined with the 3-opt heuristic to solve the vehicle routing problem. The results are encouraging: for two of the three large-size problems, the algorithm found solutions as good as the best known 3-opt solution. Preliminary results with the heuristic algorithm are presented.
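
As an illustration of the general scheme, the sketch below runs simulated annealing with a segment-reversal (2-opt style) move on a single hypothetical route; the paper's richer 3-opt neighbourhood and multi-vehicle constraints are not reproduced.

```python
import math, random

# Hypothetical customer coordinates; index 0 is the depot.
pts = [(0, 0), (2, 6), (5, 1), (6, 5), (8, 2), (3, 3)]
dist = lambda i, j: math.dist(pts[i], pts[j])

def route_length(route):
    tour = [0] + route + [0]                       # start and end at the depot
    return sum(dist(tour[k], tour[k + 1]) for k in range(len(tour) - 1))

def anneal(route, t=10.0, cooling=0.995, t_min=1e-3):
    cur, best = route[:], route[:]
    while t > t_min:
        i, j = sorted(random.sample(range(len(cur)), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]         # reverse one segment
        delta = route_length(cand) - route_length(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):   # Metropolis acceptance
            cur = cand
            if route_length(cur) < route_length(best):
                best = cur[:]
        t *= cooling                                              # geometric cooling
    return best

random.seed(1)
best_route = anneal([1, 2, 3, 4, 5])
print(best_route, round(route_length(best_route), 2))
```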

Fuzzy nonlinear goal programming using genetic algorithm

October 1997

·

20 Reads

Goal programming (GP) is a powerful method for problems involving multiple objectives and is one of the most widely used models for many real-world problems. Goal programming establishes specific goals for each priority level, formulates an objective function for each goal, and then seeks a solution that minimizes the deviations of these objective functions from their respective goals. Often, in real-world problems, the objectives are imprecise (or fuzzy). Recently, genetic algorithms have been used to solve many real-world problems and have received a great deal of attention for their ability as optimization techniques for multiobjective optimization problems. This paper attempts to apply genetic algorithms to goal programming problems that involve imprecise (or fuzzy) nonlinear information. Finally, numerical experiments involving multiple objectives and imprecise nonlinear information are carried out using goal programming and a genetic algorithm.

Performance of decomposition methods for complex workshops under multiple criteria

October 1997

·

10 Reads

We evaluate the performance of decomposition procedures for scheduling complex job shops, such as semiconductor testing facilities, with respect to several different scheduling criteria. We find that schedules developed to minimize maximum lateness also perform well with respect to makespan and total tardiness, and experience only minor degradation in the number of tardy jobs.

Optimal determination of warranty region for 2D policy: A customers' perspective

May 2006

·

27 Reads

Warranty is treated by the manufacturer as a marketing strategy that creates better customer satisfaction, which ultimately helps capture a bigger market share. The cost incurred for this service is termed the warranty cost, which is a function of the warranty policy and region, product quality and reliability, and the customers' usage pattern. Under the 2D warranty policy, there exist numerous iso-cost regions for any specified value of warranty cost. In this article, we propose a methodology to determine the optimal region when customers' utility is measured by the length of the warranty coverage time. It is believed that the results will help the manufacturer provide improved customer service.

A two-level search algorithm for 2D rectangular packing problem

August 2007

·

244 Reads

In this paper, we propose a two-level search algorithm to solve the two-dimensional rectangle packing problem. In our algorithm, the rectangles are placed into the container one by one, and each rectangle is packed at a position determined by a corner-occupying action (CCOA), so that it touches two items without overlapping other already packed rectangles. At the first level of our algorithm, a simple algorithm called A0 selects and packs one rectangle according to the highest-degree-first rule at every iteration of packing. At the second level, A0 is itself used to evaluate the benefit of a CCOA more globally. Computational results show that the resulting packing algorithm, called A1, produces high-density solutions within short running times.
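
The corner-occupying placement and the two-level evaluation are specific to the paper and are not reproduced here. As a much simpler point of reference, the sketch below packs hypothetical rectangles into a fixed-width strip with a next-fit decreasing-height shelf heuristic, which illustrates the flavour of constructive packing that A0/A1 refine.

```python
def shelf_pack(rects, strip_width):
    """Next-fit decreasing-height shelf packing into a strip of fixed width.
    rects: list of (w, h); returns [(w, h, x, y), ...] and the strip height used."""
    placements = []
    shelf_y = shelf_h = x = 0
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):   # tallest first
        if x + w > strip_width:        # current shelf is full: open a new shelf above it
            shelf_y += shelf_h
            shelf_h, x = 0, 0
        placements.append((w, h, x, shelf_y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, shelf_y + shelf_h

# Hypothetical rectangles (width, height) and strip width
rects = [(4, 3), (3, 3), (2, 2), (5, 1), (3, 2), (2, 4)]
layout, used_height = shelf_pack(rects, strip_width=8)
print(layout)
print("strip height used:", used_height)
```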

Automatic classification of block-shaped parts based on their 2D projections

July 1999

·

7 Reads

This paper presents a classification scheme for 3D block-shaped parts. A part is block-shaped if the contours of its orthographic projections are all rectangles. A block-shaped part is classified based on its partitioned view-contours, which are the result of partitioning the contours of its orthographic projections by visible or invisible projected line segments. The regions and their adjacency in a partitioned view-contour are first converted to a graph, then to a reference tree, and finally to a vector form, with which a back-propagation neural network classifier can be trained and applied. The proposed back-propagation neural network classifier has a cascaded structure, with the advantage that each network can be kept small and trained independently. Based on the classification results on their partitioned view-contours, parts are grouped into families at one of three levels of similarity. Extensive empirical tests have been performed, and the pros and cons of the approach are also investigated.

Automatic form feature recognition and 3D part reconstruction from 2D CAD data

October 1994

·

35 Reads

In this paper, a method that can automatically recognize form features and reconstruct a 3D part from 2D CAD data is proposed. First, we use the divide-and-conquer strategy to extract the vertex-edge data from each 2D engineering drawing in IGES format. Then, a set of production rules is developed to facilitate the form feature matching process. A new structure, the form feature adjacency graph (FFAG), is devised to record the related attributes of each form feature. Finally, to avoid the combinatorial subpart composition problem, a sweeping operation and volumetric intersection approach is used to rapidly reconstruct the remaining 3D objects. The last reconstructed 3D object is used as the base of the FFAG. All the recognized form features in the FFAG can be classified as depression or protrusion features on the 3D part base. The FFAG structure can be easily transformed into a CSG/DSG structure, which is readily integrated with downstream CAPP/CAM systems. A detailed example is provided to illustrate the feasibility and effectiveness of the proposed system.

Feature-based process plan generation from 3D DSG inputs

July 1994

·

16 Reads

A method that can automatically generate a process plan from 3D input data for a prismatic part is proposed. The 3D input data are the machining features represented in the DSG tree model [1]. A DSG tree is a special case of a CSG tree in which all geometric operations are of the difference type. The proposed method can (a) transfer the DSG-represented input data into refined machinable features, (b) determine the machinability of features, and (c) determine the cutting directions of all features of a prismatic part. This information is used to determine the machines, tools, and machining sequences that are required to manufacture a part. The NC paths are also generated. Computer simulations are provided to illustrate the proposed method. The major contribution of this paper is that the process plan of a CSG-represented prismatic part can be generated automatically through the proposed method. It therefore extends the application domain of the CSG tree model and fully integrates CAD/CAM systems.

Automatic sequence of 3D point data for surface fitting using neural networks

August 2009

·

170 Reads

In this paper, a neural network-based algorithm is proposed to determine the sequence of measured point data for surface fitting. In CAD/CAM, the ordered data serve as the input for fitting smooth surfaces, so that a reverse engineering system can be established for 3D sculptured surface design. The geometry feature recognition capability of back-propagation neural networks is also explored. The scan number and 3D coordinates are used as inputs of the proposed neural networks to determine the curve to which a data point belongs and the sequence number of the data point on that curve. In the segmentation process, the neural network output is the segment number, while the segment number and sequence number on the same curve are the outputs when sequencing the points on that curve. After evaluating a large number of trials, an optimal model is selected from various neural network architectures for segmentation and sequencing. The neural network is successfully trained on known data and validated on unseen data. The proposed model can easily adapt to new data measured from the same part for a more precise fitted surface. In comparison to Lin et al.'s [Lin, A. C., Lin, S.-Y., & Fang, T.-H. (1998). Automated sequence arrangement of 3D point data for surface fitting in reverse engineering. Computers in Industry, 35, 149–173] method, the presented algorithm neither needs to calculate the angle formed by each point and its two previous points nor causes any chaotic ordering of the points.

A hybrid optimization/simulation approach for a distribution network design of 3PLS

August 2006

·

126 Reads

Third party logistics service providers (3PLs) are playing an increasing role in the management of supply chains. Especially in warehousing and transportation services, many clients expect 3PLs to improve lead times, fill rates, inventory levels, etc. Hence, these 3PLs are under pressure to meet various clients' service requirements in a dynamic and uncertain business environment. As a result, 3PLs should maintain an efficient, high-performance distribution system to sustain their competitive advantage. In this paper, we propose a hybrid optimization/simulation approach to design a distribution network for 3PLs that takes the performance of the warehouses into consideration. The optimization model uses a genetic algorithm to determine dynamic distribution network structures. Subsequently, the simulation model is applied to capture the uncertainty in clients' demands, order-picking time, and travel time for the capacity plans of the warehouses based on service time. The approach is applied to an example problem to examine its validity.

A distributed shifting bottleneck heuristic for complex job shops

November 2005

·

59 Reads

In this paper, we consider distributed versions of a modified shifting bottleneck heuristic for complex job shops. The considered job shop environment contains parallel batching machines, machines with sequence-dependent setup times, and reentrant process flows. Semiconductor wafer fabrication facilities are typical examples of manufacturing systems with these characteristics. The performance measure used is total weighted tardiness (TWT). We suggest a two-layer hierarchical approach in order to decompose the overall scheduling problem. The upper (or top) layer works on an aggregated model. Based on appropriately aggregated routes, it determines start dates and planned due dates for the jobs within each single work area, where a work area is defined as a set of parallel machine groups. The lower (or base) layer uses the start dates and planned due dates to apply shifting-bottleneck-type solution approaches to the jobs in each single work area. We conduct simulation experiments in a dynamic job shop environment in order to assess the performance of the heuristic. It turns out that the suggested approach outperforms a pure First In First Out (FIFO) dispatching scheme and provides a solution quality similar to that of the original modified shifting bottleneck heuristic.

A genetic algorithm for the optimisation of assembly sequences

August 2006

·

229 Reads

This paper describes a Genetic Algorithm (GA) designed to optimise the Assembly Sequence Planning Problem (ASPP), an extremely diverse, large-scale and highly constrained combinatorial problem. The modelling of the ASPP problem, which has to be able to encode any industrial-size product with realistic constraints, and the GA have been designed to accommodate any type of assembly plan and component. A number of specific modelling issues necessary for understanding how the algorithm works and how it relates to real-life problems are succinctly presented, as they have to be taken into account, adapted or solved prior to Solving and Optimising (S/O) the problem. The GA has a classical structure but modified genetic operators, to avoid the combinatorial explosion. It works only with feasible assembly sequences and has the ability to search the entire solution space of full-scale, unabridged problems of industrial size. A case study illustrates the application of the proposed GA to a 25-component product.

Data mining techniques for improved WSR-88D rainfall estimation

September 2002

·

132 Reads

The main objective of this paper is to utilize data mining and an intelligent system, Artificial Neural Networks (ANNs), to facilitate rainfall estimation. Ground truth rainfall data are necessary to apply intelligent systems techniques. A unique source of such data is the Oklahoma Mesonet. Recently, with the advent of a national network of advanced radars (i.e. WSR-88D), massive archived data sets, amounting to terabytes of data, have been created. Data mining can draw attention to meaningful structures in the archives of such radar data, particularly if guided by knowledge of how the atmosphere operates in rain-producing systems. The WSR-88D digital database contains three native variables: velocity, reflectivity, and spectrum width. However, current rainfall detection algorithms make use of only the reflectivity variable, leaving the other two to be exploited. The primary focus of the research is to capitalize on these additional radar variables at multiple elevation angles and multiple bins in the horizontal for precipitation prediction. Linear regression models and feedforward ANNs are used for precipitation prediction. Rainfall totals from the Oklahoma Mesonet are utilized for the training and verification data. Results for the linear modeling suggest that, taken separately, reflectivity and spectrum width models are highly significant. However, when the two are combined in one linear model, they are not significantly more accurate than reflectivity alone. All linear models are prone to underprediction when heavy rainfall occurs. The ANN results for reflectivity and spectrum width inputs show that a 250-5-1 architecture is least prone to underprediction of heavy rainfall amounts. When a three-part ANN, partitioned by light, moderate, and heavy rainfall, was applied to reflectivity in addition to spectrum width, it estimated rainfall amounts most accurately of all the methods examined.

Integrating ISO 9000 with HACCP programs in seafood processing industry

October 1998

·

21 Reads

A computerized quality assurance program is developed using Microsoft Access (Office 97). The program integrates the elements of the ISO 9001 quality assurance program with the basic principles of a HACCP (Hazard Analysis and Critical Control Point) program. Since many food processing companies are now trying to implement a HACCP system as well as an ISO 9000 quality assurance system, the computer program offers a database that allows comparison between the two systems. A generic model for seafood products is developed.

Univariate modeling and forecasting of monthly energy demand time series using abductive and neural networks

May 2008

·

204 Reads

Neural networks have been widely used for short-term, and to a lesser degree medium- and long-term, demand forecasting. In the majority of cases for the latter two applications, multivariate modeling was adopted, where the demand time series is related to other weather, socio-economic and demographic time series. Disadvantages of this approach include the fact that influential exogenous factors are difficult to determine, and accurate data for them may not be readily available. This paper uses univariate modeling of the monthly demand time series based only on data for 6 years to forecast the demand for the seventh year. Both neural and abductive networks were used for modeling, and their performance was compared. A simple technique is described for removing the upward growth trend prior to modeling the demand time series to avoid problems associated with extrapolating beyond the data range used for training. Two modeling approaches were investigated and compared: iteratively using a single next-month forecaster, and employing 12 dedicated models to forecast the 12 individual months directly. Results indicate better performance by the first approach, with a mean absolute percentage error (MAPE) of the order of 3% for abductive networks. Performance is superior to naïve forecasts based on persistence and seasonality, and is better than results quoted in the literature for several similar applications using multivariate abductive modeling, multiple regression, and univariate ARIMA analysis. Automatic selection of only the most relevant model inputs by the abductive learning algorithm provides better insight into the modeled process and allows constructing simpler neural network models with reduced data dimensionality and improved forecasting performance.
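
A minimal sketch of the pre-processing and evaluation steps described above, on hypothetical monthly data: fit and remove a linear growth trend, forecast the detrended series with a seasonal-naive rule (standing in for the abductive/neural models), add the extrapolated trend back, and score the final year with MAPE.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(84)                                          # 7 years of monthly data
demand = 100 + 0.8 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, 84)

train, test = demand[:72], demand[72:]                          # 6 years train, 1 year test
t_train, t_test = np.arange(72), np.arange(72, 84)

# 1. Remove the upward growth trend with a least-squares line fitted to the training data.
slope, intercept = np.polyfit(t_train, train, 1)
detrended = train - (slope * t_train + intercept)

# 2. Seasonal-naive forecast of the detrended series: month m repeats last year's value.
det_forecast = detrended[-12:]

# 3. Add the extrapolated trend back and evaluate.
forecast = det_forecast + slope * t_test + intercept
mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"MAPE on the held-out year: {mape:.1f}%")
```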

Neural network based model for abnormal pattern recognition of control charts

January 1999

·

43 Reads

In past years, artificial neural networks have been used for pattern recognition of control charts, with an emphasis on recognizing specific abnormal patterns. This paper proposes an artificial neural network based model, which contains several back-propagation networks, to both recognize abnormal control chart patterns and estimate the parameters of those patterns, such as shift magnitude, trend slope, cycle amplitude and cycle length, so that the manufacturing process can be improved. Numerical results show that the proposed model also has good recognition performance for mixed abnormal control chart patterns (e.g. a pattern with both trend and cycle characteristics).
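
The paper's exact architecture and parameter-estimation networks are not reproduced; the sketch below only shows how such training data are commonly synthesized (normal, shift, trend and cycle patterns on a standardized chart) and fits a small back-propagation classifier to them, assuming scikit-learn is available.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N, W = 400, 30                        # samples per class, observation window length
t = np.arange(W)

def make(kind):
    """Generate N standardized control-chart windows of the requested pattern."""
    noise = rng.normal(0, 1, (N, W))
    if kind == "normal":
        return noise
    if kind == "shift":
        return noise + np.where(t >= W // 2, 2.0, 0.0)    # sudden mean shift
    if kind == "trend":
        return noise + 0.1 * t                            # linear trend
    return noise + 1.5 * np.sin(2 * np.pi * t / 8)        # cycle

labels = ["normal", "shift", "trend", "cycle"]
X = np.vstack([make(k) for k in labels])
y = np.repeat(labels, N)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```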

Using novelty detection to identify abnormalities caused by mean shifts in bivariate processes

March 2003

·

127 Reads

Non-random (abnormal) behaviour indicates that a process is under the influence of special causes of variation. Detection of abnormal patterns is well established in univariate statistical process control (SPC). Various solutions, including heuristics, traditional computer programming, expert systems and neural networks (NNs), have been successfully implemented. In multivariate SPC (MSPC), on the other hand, there is a clear need for more investigation into pattern detection. Bivariate SPC is a special case of MSPC in which the number of variates is two, and it is studied here in terms of the identification of shift patterns. In this work, an existing NN classification technique known as novelty detection (ND), whose application to MSPC has not previously been reported, is applied for pattern recognition. ND successfully detects non-random bivariate time-series patterns representing shifts of various magnitudes in the process mean vector. The investigation proposes a simple heuristic approach for applying ND as an effective and useful tool for pattern detection in bivariate SPC, with potential applicability to MSPC in general.
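
The idea of novelty detection is to train only on in-control data and flag anything unlike it. The sketch below applies that idea to synthetic correlated bivariate data with a mean shift, using scikit-learn's OneClassSVM as a generic novelty detector in place of the specific ND network studied in the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
cov = [[1.0, 0.6], [0.6, 1.0]]                                   # correlated bivariate process

in_control = rng.multivariate_normal([0, 0], cov, size=500)      # training data: no shifts
shifted = rng.multivariate_normal([1.5, 1.5], cov, size=200)     # mean-shift condition

detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(in_control)

# predict() returns +1 for "like the in-control data", -1 for a novelty (possible special cause)
fresh = rng.multivariate_normal([0, 0], cov, size=200)
print("false alarm rate on new in-control data:", (detector.predict(fresh) == -1).mean())
print("detection rate on mean-shifted data:   ", (detector.predict(shifted) == -1).mean())
```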

Consortia partnerships: Linking industry and academia

September 1995

·

12 Reads

A model is described for research consortium partnership formation, management and technology transfer. Industrial engineering strategies and computer technology utilization applications are addressed. Actual examples are presented and analyzed. Projections are made regarding the future role of consortia partnerships in advancing technology and fostering linkages between industry and academia.

Integration of new computer technologies in an industrial engineering academic department

December 1987

·

13 Reads

The increasing application of computers has affected the types of skills expected from new engineering graduates. This paper describes the steps undertaken to incorporate modern computer-based technologies into the curriculum of a typical industrial engineering program. The effort involved the establishment of a new laboratory and the development of a sequence of three new courses.

Development of a strategic research plan for an academic department through the use of quality function deployment

September 1993

·

90 Reads

Quality Function Deployment (QFD) is a popular approach for formalizing the process of listening to “the voice of the customer” and assigning responsibilities to members of an organization in an effort to respond effectively to customer needs. QFD is being used by the Department of Industrial Engineering at Mississippi State University to help identify key customers for departmental research efforts, to identify and track the research needs of those customers, to fashion a comprehensive strategic plan for departmental research activities based on customer needs, to deploy various research functions and responsibilities to specific faculty members or groups, and to track research performance relative to goals. This approach appears to be an excellent means of formalizing the process of strategic research planning.

A teaching method for accelerating the reinforcement learning and tuning of fuzzy rules

September 1995

·

9 Reads

A teaching method to accelerate the reinforcement learning and tuning of fuzzy rules is presented. The method condenses a human expert's experience into small pieces of knowledge and guides the fuzzy rule learning process with that knowledge. Compared to learning without teaching, this method increases the learning rate by about an order of magnitude.

Learning and adaptation of a policy for dynamic order acceptance in make-to-order manufacturing

February 2010

·

49 Reads

Order acceptance under uncertainty is a critical decision-making problem at the interface between customer relationship management and production planning of order-driven manufacturing systems. In this work, a novel approach for simulation-based development and on-line adaptation of a policy for dynamic order acceptance under uncertainty in make-to-order manufacturing using average-reward reinforcement learning is proposed. Locally weighted regression is used to generalize the gain value of accepting or rejecting similar orders with regard to attributes such as product mix, price, size and due date. The order acceptance policy is learned by classifying an arriving order as belonging either to the acceptance set or to the rejection set. For exploitation, only orders in the acceptance set are chosen for shop-floor scheduling. For exploration, some orders from the rejection set are also considered as candidates for acceptance. Comparisons with different order acceptance heuristics highlight the effectiveness of the proposed ARLOA algorithm in maximizing the average revenue obtained per unit cost of installed capacity while quickly responding to unknown variations in order arrival rates and attributes.

Reevaluating producer's and consumer's risks in acceptance sampling

April 1996

·

51 Reads

This paper presents improved methods for measuring a consumer's and producer's risk in acceptance sampling. We define Bayesian risks for both the consumer and producer. A Bayesian consumer's risk is defined as the probability that a lot which is accepted will contain more than a designated level of defectives, as opposed to the traditional measure of consumer's risk: the probability that a lot which contains a designated number of defectives will be accepted. A Bayesian producer's risk is defined as the probability that a lot which is rejected will contain less than a specified level of defectives, as opposed to the traditional measure: the probability that a lot which contains a designated number of defectives will be rejected. We conduct sensitivity analyses to examine the response of these risk measures to changes in the probability distribution on the number of defectives in the lot and to the variance of these distributions. We conclude that Bayesian consumer's risk gives better information to the decision maker than does a conventional consumer's risk and should be considered the preferred measure. We give practical equations for assessing these risks.
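
A minimal numerical sketch of the definitions above: given a prior on the lot fraction defective and a single-sampling plan (sample n items, accept if at most c are defective), the Bayesian consumer's risk is P(fraction defective > p0 | lot accepted), computed by Bayes' rule. The plan and prior values below are hypothetical.

```python
from scipy.stats import binom

n, c = 50, 2        # hypothetical single-sampling plan: sample 50, accept if <= 2 defectives
p0 = 0.04           # designated (unacceptable) level of fraction defective

# Hypothetical discrete prior on the lot fraction defective
prior = {0.01: 0.50, 0.03: 0.30, 0.06: 0.15, 0.10: 0.05}

accept_given_p = {p: binom.cdf(c, n, p) for p in prior}          # P(accept | p)
p_accept = sum(prior[p] * accept_given_p[p] for p in prior)      # P(accept)

# Bayesian consumer's risk: P(p > p0 | lot accepted)
bayes_cr = sum(prior[p] * accept_given_p[p] for p in prior if p > p0) / p_accept

# Traditional consumer's risk: P(accept | p at a designated level, e.g. p = 0.06)
traditional_cr = binom.cdf(c, n, 0.06)

print(f"Bayesian consumer's risk:             {bayes_cr:.3f}")
print(f"Traditional consumer's risk (p=0.06): {traditional_cr:.3f}")
```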

Economically-based acceptance sampling plans

December 1989

·

25 Reads

The most significant advancements in the field of quality control have occurred since the 1920s with the development of statistical quality control. A major area of statistical quality control is acceptance sampling, in which the decision to either accept or reject a lot is based on the result of a sample from that lot. This paper is a review of literature relevant to the economic design of attributes sampling plans. It surveys the classical works in the field, as well as later work that has been built upon them. Also included are approaches very different from the original work.

Single versus hybrid time horizons for open access scheduling

February 2011

·

48 Reads

Difficulties in scheduling short-notice appointments, caused by schedules booked up with routine check-ups, are prevalent in outpatient clinics, especially in primary care clinics, and lead to more patient no-shows, lower patient satisfaction, and higher healthcare costs. Open access scheduling was introduced to overcome these problems by reserving enough appointment slots for short-notice scheduling. The appointments scheduled in the slots reserved for short notice are called open appointments. Typically, the current open access scheduling policy has a single time horizon for open appointments. In this paper, we propose a hybrid open access policy adopting two time horizons for open appointments, and we investigate when more than one time horizon for open appointments is justified. Our analytical results show that the optimized hybrid open access policy is never worse than the optimized current single-time-horizon open access policy in terms of the expectation and the variance of the number of patients consulted. In nearly 75% of the representative scenarios motivated by primary care clinics, the hybrid open access policy slightly improves the performance of open access scheduling. Moreover, for a clinic with a strong positive correlation between demands for fixed and open appointments, the proposed hybrid open access policy can considerably reduce the variance of the number of patients consulted.

Demand forecasting of high-speed Internet access service considering unknown time-varying covariates

February 2008

·

100 Reads

In order to forecast the demand for information and communication services, it is important to consider not only intrinsic variables representing service characteristics but also unknown time-varying variables such as marketing policy. However, in many cases, information such as a company’s internal marketing policy is not available. This study proposes a negative exponential growth curve model that incorporates unobservable time-varying covariates by reversely estimating the unknown covariates. The proposed approach is then applied to technological forecasting of high-speed Internet access services provided by a telecommunication corporation in Korea.
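
The covariate-reversal step is specific to the paper; as a starting point, the sketch below fits only the basic negative exponential growth curve S(t) = M(1 - e^(-b t)) to hypothetical cumulative-subscriber data with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp_growth(t, m, b):
    """Negative exponential growth curve: saturation level m, growth rate b."""
    return m * (1.0 - np.exp(-b * t))

# Hypothetical quarterly subscriber counts (thousands)
t = np.arange(1, 13)
subscribers = np.array([120, 230, 320, 400, 465, 520, 565, 600, 630, 655, 672, 688])

(m_hat, b_hat), _ = curve_fit(neg_exp_growth, t, subscribers, p0=(800.0, 0.1))
print(f"estimated saturation: {m_hat:.0f}k subscribers, growth rate: {b_hat:.3f}")
print("forecast for t = 16:", round(float(neg_exp_growth(16, m_hat, b_hat))), "thousand")
```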

Development of a computerized system for fall accident analysis and prevention

July 1995

·

18 Reads

This study's objective was to develop a computerized system entitled “SAFECON” (SAFE CONstruction). The system's purpose is to analyze fall accidents and fall protection in the industrial construction industry. SAFECON is composed of three modules: fall accident analysis, fall protection analysis, and accident cost and scenario analysis. The fall accident analysis module consists of a fault tree analysis system for fall accident analysis and fall hazard analysis. The fall protection module consists of a rule-based expert system to aid in the selection of climbing and fall protection and the mode of training. The accident cost and scenario analysis module stores and retrieves the necessary accident data to identify areas where accidents are taking place. EXSYS and FOXBASE Plus are SAFECON's two software environments. Eleven cases from OSHA Fatal Facts have been used to evaluate the fall accident analysis module. Results show that SAFECON matched the OSHA investigation results with an accuracy of 82%. Fourteen “real world” cases have also been used to evaluate the fall protection module and indicated that SAFECON matched human experts' recommendations with similarity percentages of 93, 86, 79 and 93% for climbing protection, lifeline system fall protection, lanyard fall protection, and body support fall protection, respectively.

[Figures and tables for the article below: Fig. 1. Comparison of mean walking speeds as a function of tilt of the corridor ((a) corridor, trim; (b) corridor, heeling); Fig. 2. System configuration of IMEX; Fig. 3. Designing process of IMEX using UML; Table 2. Evacuation time for different exit widths; Table 3. Evacuation time for different numbers of exits.]

Establishing the methodologies for human evacuation simulation in marine accidents
  • Article
  • Full-text available

July 2004

·

672 Reads

In recent years, many lives have been lost around the world due to passenger ship accidents. The International Maritime Organization (IMO) developed guidelines in May 1999 for the evacuation analysis of ro–ro passenger ships to prevent loss of life in maritime accidents. However, the IMO considered these guidelines only as an interim measure and allocated 3 years for their improvement and further development, as very limited experience and data were available. In this paper, the requirements of the IMO and current research on evacuation from ships are reviewed. Applicable evacuation models are also presented, and several experimental methods for obtaining data on human behavior that account for ship list and dynamics are evaluated. In addition, the features of the evacuation model being developed by KRISO are given.

An intelligent mechanism for lot output time prediction and achievability evaluation in a wafer fab

February 2008

·

28 Reads

An intelligent mechanism is constructed in this study for lot output time prediction and achievability evaluation in a wafer fabrication plant (wafer fab), both of which are critical tasks for the fab. The intelligent mechanism is composed of two parts and has three intelligent features: example classification, artificial neural networking, and fuzzy reasoning. In the first part of the intelligent mechanism, a hybrid self-organization map (SOM) and back propagation network (BPN) is constructed to predict the output time of a wafer lot. According to experimental results, the prediction accuracy of the hybrid SOM–BPN was significantly better than those of many existing approaches. In the second part, a set of fuzzy inference rules (FIRs) is established to evaluate the achievability of an output time forecast, defined as the possibility that fabrication of the wafer lot can be finished before the forecast output time. Achievability is as important as accuracy and efficiency, but has been ignored in traditional studies. With the proposed mechanism, output time prediction and achievability evaluation can be accomplished concurrently.

Achieving better coordination through revenue sharing and bargaining in a two-stage supply chain

August 2009

·

185 Reads

Coordination is essential for improving supply-chain-wide performance. In this paper, we focus on a two-stage supply chain consisting of one supplier and one retailer, in which the retailer's profit is sensitive to the supplier's lead time, which in turn is influenced by the supplier's target inventory level. Coordination between the two parties is achieved through revenue sharing and bargaining in such a way that each party's profit is better than that resulting from decentralized optimization. The key contract parameter, the revenue-sharing fraction, along with the maximum amount of monetary bargaining space, is obtained under explicit and implicit information, respectively. Numerical illustrations of the contracts for various scenarios are also given.

A mixed integer programming model for acquiring advanced engineering technologies

March 1993

·

11 Reads

This paper presents a mixed integer programming model (MILP) that can aid industrial engineering analysts and managers in determining the “best” long-term strategies for acquiring advanced engineering technology capabilities. This comprehensive model is designed and tested using the results of actual engineering field studies.

A strategic MIGP model for acquiring advanced technologies

March 1997

·

19 Reads

This paper presents a mixed integer goal programming (MIGP) model that can aid engineering technology managers and analysts in determining the most desirable long-term strategies for acquiring advanced technology capabilities under conflicting managerial, technical, and financial objectives. Computer run times for various “mixes of objectives” are presented for both a CRAY supercomputer and a Pentium personal computer. Numerous research extensions are offered.

Acquiring advanced engineering technologies under conditions of performance improvement

March 2003

·

15 Reads

To remain competitive, firms need to develop long-term strategies for acquiring and using advanced engineering and manufacturing technologies. In addition, technology managers are under increasing pressure to produce better results, with less time and risks, and with fewer resources. A resulting trend is a greater use of external relationships and resources to achieve the needed technological accomplishments with greater efficiency. However, there are numerous alternatives for obtaining internally and/or externally the personnel and equipment components of advanced technologies. A mathematical model which identifies the optimal means for acquiring the components for advanced engineering technology requirements, allows the engineering technology resources to be internal or external to the firm, and allows the personnel and equipment to have realistic nonlinear performance improvement (learning) or decay capabilities should be of interest and use to both technology management researchers and practicing engineering/technology managers. The nonlinear integer programming model described herein provides this capability. A comprehensive and realistically based example problem is provided and an optimal solution is obtained and compared with the results of more common, but also more expensive, solutions. Numerous future research extensions are offered.

An experimental analysis of critical factors in automatic data acquisition through bar coding

December 1988

·

5 Reads

The technology of bar coding has been in existence for nearly forty (40) years but has only recently found much application in modern industry. This fact is attributable in part to the evolution of the bar coding symbology itself (of which there are at least 16 in use today), but to a larger extent to the technological advances that have greatly improved the ability to both print and read bar coded symbols. There remain, however, a number of critical factors that are thought to impact the success or failure of a bar code system.

A knowledge-based expert system for technology acquisition in small and medium scale manufacturing organizations

September 1994

·

6 Reads

Consistent and reliable decision making for technology acquisition in small and medium scale manufacturing organizations is vitally important, since these firms are the backbone of national economies in both developed and developing countries. Because of their flexibility, small and medium scale firms are successful in adopting new technologies. However, a careful analysis should be conducted for technology acquisition decisions. Since these decisions require a special type of knowledge and expertise, expert systems, as an important tool of computerized decision making, can overcome these multidimensional difficulties. This paper proposes a knowledge-based approach, making use of issues such as sales, processes, costs and general policies, for technology acquisition decision processes in small and medium scale manufacturing organizations in developing environments.

The selection of microcomputers for industrial data acquisition

December 1980

·

10 Reads

The continued decline in the cost of computer hardware has led to new applications of computer technology in data acquisition systems. This paper is concerned with the proper selection of small scale computers for these applications. The first section of the paper compares microcomputers with two likely alternatives in data acquisition systems—minicomputers on the high end of the scale and hard-wired logic on the low end. In the second section the criteria for selecting microcomputers are discussed. Two case studies illustrating the application of these criteria are analyzed in the concluding section.

A genetic algorithm methodology for data mining and intelligent knowledge acquisition

September 2001

·

58 Reads

Data mining is a process that uses available technology to bridge the gap between data and logical decision making. The terminology itself suggests a systematic manipulation of data for extracting useful information and knowledge from high volumes of data. Numerous techniques have been developed to fulfill this goal. Implementing data mining in an organization affects every aspect of the business and requires both hardware and software development. This paper outlines a series of discussions and descriptions of data mining and its methodology. First, the definition of data mining, along with the purposes and growing need for such a technology, is presented. A six-step methodology for data mining is then presented. Finally, steps from the methodology are applied in a case study to develop a GA-based system for intelligent knowledge discovery for machine diagnosis.

Applications of automatic identification technologies across the I.E. curriculum

September 1995

·

14 Reads

The prevailing focus on enterprise-wide integration, rather than a narrow focus on the effectiveness of functional areas, places more stringent requirements on data integrity as an enabler of effective coordination of the activities required to carry out the functions necessary for attaining the organizational mission. Automatic Identification (Auto ID) technologies provide an avenue for attaining totally integrative enterprise management. For this reason, it is imperative that the I.E. curriculum, under the coverage of the systems approach to design and operation, be adequately infused with the various Auto ID technologies and concepts as they apply to the vast areas of interest in the field. These technologies are critical to the success of a number of key areas of particular interest to the industrial engineer, including automated manufacturing systems, distribution, inventory control and other computer-facilitated activities related to enterprise integration. This paper focuses on bar codes and the approach that has been taken by the I.E. department at North Carolina A&T State University to achieve integration throughout the curriculum.

Data mining approaches for modeling complex electronic circuit design activities

March 2008

·

72 Reads

A printed circuit board (PCB) is an essential part of modern electronic circuits. It is made of a flat panel of insulating material with patterned copper foils that act as electric pathways for various components such as ICs, diodes, capacitors, resistors, and coils. The size of PCBs has been shrinking over the years, while the number of components mounted on these boards has increased considerably. This trend makes the design and fabrication of PCBs ever more difficult. At the beginning of design cycles, it is important to accurately estimate the time to complete the required steps, based on factors such as the required parts, approximate board size and shape, and a rough sketch of the schematics. The current approach uses the multiple linear regression (MLR) technique for time and cost estimation. However, the need for accurate predictive models continues to grow as the technology becomes more advanced. In this paper, we analyze a large volume of historical PCB design data, extract important variables, and develop predictive models based on the extracted variables using a data mining approach. The data mining approach uses an adaptive support vector regression (ASVR) technique; the benchmark model is the MLR technique currently used in industry. The strengths of SVR for these data include its ability to represent data in high-dimensional space through kernel functions. The computational results show that the data mining approach is a better prediction technique for these data. Our approach reduces computation time and enhances the practical applications of the SVR technique.
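
The adaptive SVR variant is specific to the paper; the sketch below mirrors only the benchmark setup on a synthetic stand-in for the PCB design data, comparing a multiple linear regression baseline with an RBF-kernel support vector regressor in scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 400
# Hypothetical design attributes: part count, board area, layer count, schematic complexity
X = np.column_stack([rng.integers(50, 800, n), rng.uniform(20, 400, n),
                     rng.integers(2, 12, n), rng.uniform(1, 10, n)])
# Hypothetical, mildly nonlinear design-time response (hours)
y = 5 + 0.05 * X[:, 0] + 0.1 * X[:, 1] + 3 * np.log(X[:, 2]) + X[:, 3] ** 1.5 \
    + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0)).fit(X_tr, y_tr)

print("MLR mean absolute error:", round(mean_absolute_error(y_te, mlr.predict(X_te)), 2))
print("SVR mean absolute error:", round(mean_absolute_error(y_te, svr.predict(X_te)), 2))
```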

Embedded system used for classifying motor activities of elderly and disabled people

August 2009

·

129 Reads

Modern societies are confronted with a new and growing problem: the global ageing of the population. In order to find ways to encourage elderly people to live longer in their own homes, ensuring the necessary vigilance and security at the lowest cost, some tele-assistance systems are already commercially available. This paper presents an embedded prototype able to automatically detect falls of elderly people while monitoring their motor activities. The classification algorithm, which uses an artificial neural network, and the communication and location capabilities of the system are specifically highlighted. In the last part, some experimental results and social issues stemming from the Gerontologic Institute Ingema are discussed.
