Exploiting the self-learning capability of neurons, a neuron-based
adaptive controller is proposed for the position servo control of CNC
machine tools. The outstanding characteristic of the algorithm is that
identification of the system model is no longer necessary. Moreover,
its computational simplicity makes the algorithm well suited to
practical engineering applications. A simulation study is also
outlined. The control algorithm is applied to real-time control of the
position servo system of a CNC machine tool for the first time, and
excellent experimental results illustrate the high effectiveness of the
proposed neuron-based controller.
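The single-neuron scheme described above can be sketched in code. This is a minimal illustration of one common formulation (PID-like neuron inputs, L1-normalized weights, a supervised Hebbian-style update); the paper's exact learning rule, gains, initial weights, and the plant used below are all assumptions, not reproductions of the original design.

```python
class SingleNeuronController:
    """Sketch of a single-neuron adaptive controller.

    Inputs are the error, its first difference, and its second
    difference; the weights adapt online, so no plant model
    identification is needed.  All numeric values are assumed.
    """
    def __init__(self, gain=0.5, rates=(0.4, 0.2, 0.1)):
        self.K = gain            # overall neuron gain (assumed)
        self.eta = rates         # per-weight learning rates (assumed)
        self.w = [0.3, 0.3, 0.3] # initial weights (assumed)
        self.u = 0.0             # control signal (incremental form)
        self.e1 = 0.0            # e(k-1)
        self.e2 = 0.0            # e(k-2)

    def step(self, e):
        # PID-like neuron inputs: error, first and second differences
        x = [e, e - self.e1, e - 2.0 * self.e1 + self.e2]
        s = sum(abs(wi) for wi in self.w) or 1.0   # L1 normalization
        self.u += self.K * sum(wi * xi for wi, xi in zip(self.w, x)) / s
        # supervised Hebbian-style update (assumed): w_i += eta_i*e*u*x_i
        self.w = [wi + ei * e * self.u * xi
                  for wi, ei, xi in zip(self.w, self.eta, x)]
        self.e2, self.e1 = self.e1, e
        return self.u

def demo(steps=500):
    """Track a unit setpoint on a simple first-order plant (assumed
    plant, for illustration only): y(k+1) = 0.9 y(k) + 0.1 u(k)."""
    ctrl = SingleNeuronController()
    y, errs = 0.0, []
    for _ in range(steps):
        e = 1.0 - y
        errs.append(abs(e))
        y = 0.9 * y + 0.1 * ctrl.step(e)
    return errs
```

In this sketch the tracking error shrinks as the weights adapt, without any model of the plant being supplied to the controller.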
To analyze the operation of an arbitrary AGV system under selected vehicle routing strategies, we present a simulation model that can handle multiple system layouts, a varying number of AGVs, and a varying number of pedestrians moving around the system. We introduce a dynamic vehicle routing strategy based on hierarchical simulation that operates as follows: at the time of each routing decision for an AGV in the main simulation, subsimulations are spawned for each of a varying number of alternative routes; and the performance observed in these subsimulations is then used to make the routing decision in the main simulation. A case study illustrates the advantages of this strategy.
A new genetic algorithm-based method is applied to the
optimization of cutting conditions and selection of cutting tools in
multi-pass turning operations (MPTOs). A comprehensive optimization
criterion for MPTOs is developed and used as the objective function
integrating the contributing effects of all major machining performance
measures in all passes. A new methodology for the allocation of total
depth of cut in MPTOs is also developed. The effect of progressive tool
wear in optimization processes for MPTOs is included in the current
work. Presented case studies demonstrate the application of the new
methodology for optimal allocation of the total depth of cut as well as
optimization of cutting conditions and the selection of cutting tool
inserts, and offer a comparison between optimization processes with and
without the effect of tool wear in all passes.
Assembly line job sequencing establishes the order in which jobs are processed by an assembly line. This research focuses on job sequencing methods for assembly lines with work stations that receive the same fixed job sequence, are coupled together so that there is no work-in-process storage, and are balanced so that jobs move continuously between them at a constant rate. Automobile assembly lines frequently have these characteristics. Simple analytical principles are derived that aid in evaluating trade-offs between different job sequencing objectives. Included are set-up cost objectives related to operations performed on every job but with different choices (e.g. automobile painting) and capacity utilization objectives related to operations not performed on every job (e.g. installing vinyl roofs). These principles provide the basis for a job sequencing method that can be applied either by hand or with the help of a simple sorting routine.
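The capacity-utilization objective — spreading the jobs that require an optional operation (e.g. vinyl roofs) evenly along the sequence so the option station is never locally overloaded — can be illustrated with a simple sorting-style routine. This is a hypothetical sketch under an assumed even-spacing rule, not the paper's exact method; `space_option_jobs` is an invented name.

```python
def space_option_jobs(jobs, has_option):
    """Place the k jobs requiring an optional operation at evenly
    spaced target positions among the n jobs; all other jobs fill the
    remaining slots in their original order.  (Assumed spacing rule.)"""
    with_opt = [j for j in jobs if has_option(j)]
    without = [j for j in jobs if not has_option(j)]
    n, k = len(jobs), len(with_opt)
    # target positions i*n/k for i = 0..k-1, rounded down
    targets = {int(i * n / k) for i in range(k)} if k else set()
    seq, wi, oi = [], 0, 0
    for pos in range(n):
        if pos in targets and oi < len(with_opt):
            seq.append(with_opt[oi]); oi += 1
        elif wi < len(without):
            seq.append(without[wi]); wi += 1
        else:
            seq.append(with_opt[oi]); oi += 1
    return seq
```

With 2 option jobs among 8, the routine places them at positions 0 and 4, i.e. one option job per block of n/k = 4 production cycles.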
We consider production lines consisting of a series of machines
separated by finite buffers. The processing time of each machine is
deterministic and all the machines have the same processing time. All
machines are subject to failures. As is usually the case for production
systems, we assume that the failures are operation dependent. Moreover,
we assume that the time to failure and the time to repair are
exponentially distributed. To analyze such systems, an efficient
decomposition procedure has been proposed by Gershwin et al. In
general, this method provides fairly accurate results. There are however
cases for which the accuracy of this decomposition method may not be so
good. This is the case when the reliability parameters (mean times to
failure and mean times to repair) of the different machines have
different orders of magnitude. Such a situation may be encountered in
real production lines. The purpose of this paper is to propose an
improvement of Gershwin's original decomposition method that provides
accurate results even in the above mentioned situation. The basic
difference between the decomposition method presented in this paper and
that of Gershwin is that the times to repair of the equivalent machines
are modeled as generalized exponential distributions instead of
exponential distributions. This allows us to use a two-moment
approximation instead of a one-moment approximation of the repair time
distributions of these equivalent machines. The new method is presented
in the context of the continuous flow model. However, it is readily
applicable to the synchronous model.
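The two-moment idea can be made concrete. A generalized exponential (GE) random variable is zero with probability 1 − q and Exp(μ) with probability q, giving E[X] = q/μ and SCV = (2 − q)/q, so both the mean and the squared coefficient of variation of a repair time can be matched whenever SCV ≥ 1. The following moment fit is a standard textbook construction, shown as a sketch; the decomposition equations themselves are not reproduced.

```python
def ge_fit(mean, scv):
    """Fit a generalized exponential (GE) distribution to a target
    mean and squared coefficient of variation (SCV >= 1).

    GE: X = 0 with probability 1-q, X ~ Exp(mu) with probability q.
    From E[X] = q/mu and SCV = (2-q)/q we get q and mu directly."""
    assert scv >= 1.0, "a GE distribution can only match SCV >= 1"
    q = 2.0 / (scv + 1.0)
    mu = q / mean
    return q, mu
```

For SCV = 1 the fit reduces to a plain exponential (q = 1), which is consistent with the one-moment approximation the improved method generalizes.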
Foster's brewery at Yatala has developed a G2-based real-time utilities consumption model (UCM) of its plant. The UCM is used to estimate the instantaneous load of key utilities at the brewery. The contributions of individual equipment to the load profile can be estimated, providing an unprecedented level of transparency into areas that are (in a metered sense) unmeasured. A case study is included that demonstrates the application of the UCM to load identification for the extensive refrigeration system that is critical to brewery operations.
In this paper the presence of delay in a job shop is addressed. We show that delay is an important consideration in many manufacturing systems that are modelled as continuous flow processes. A heuristic control policy for a job shop with delays is then derived using theoretical arguments and approximations. The policy has a particularly simple form which can be readily extended to complex systems. Our simulation experiments on simple systems show that the control works significantly better than simple hedging point controls, especially when delays are large.
This paper deals with the possibility of studying the behavior of a large class of Petri nets: weighted T-systems (WTS). We recall recent results that propose linearizing a WTS on a marked graph (MG), in order to obtain a model that can be analyzed using tropical algebras such as (max, +) or (min, +). In the last part, we present a transformation that extends the previous results. The idea is to introduce synchronization into the model; this kind of transition enriches the model and gives the opportunity to apply the different results.
A major hurdle in the development of intelligent robots is that we still do not possess efficient computational and representational methodologies for emulating knowledge and expectation driven behavior so basic to human cognition and problem solving. Even if we use techniques such as geometric modeling for representing objects in the robot world, we are still lacking in methods for linking such representations with sensory feedback. In this paper, we have proposed the use of intermediate representations - we call them sensor-tuned representations - for linking CSG based solid modeling with sensory information. We also discuss how sensor-tuned representations are constructed from range data and how object recognition can be done with sensor-tuned representations. Finally, we show results of manipulation experiments produced by the current implementation of the system.
One of the fundamental problems in operations management is determining the optimal investment in capacity. Capacity investment consumes resources and the decision, once made, is often irreversible. Moreover, the available capacity level affects the action space for production and inventory planning decisions directly. In this paper, we address the joint capacitated lot sizing and capacity acquisition problem. The firm can produce goods in each of the finite periods into which the production season is partitioned. Fixed as well as variable production costs are incurred for each production batch, along with inventory carrying costs. The production per period is limited by a capacity restriction. The underlying capacity must be purchased up front for the upcoming season and remains constant over the entire season. We assume that the capacity acquisition cost is smooth and convex. For this situation, we develop a model which combines the complexity of time-varying demand and cost functions and of scale economies arising from dynamic lot-sizing costs with the purchase cost of capacity. We propose a heuristic algorithm that runs in polynomial time to determine a good capacity level and corresponding lot sizing plan simultaneously. Numerical experiments show that our method is a good trade-off between solution quality and running time.
This paper addresses the problem of scheduling economic lots in a multi-product single machine environment. A mixed integer non-linear programming formulation is developed which finds the optimal sequence and economic lots. The model takes explicit account of initial inventories and setup times, allows setups to be scheduled at arbitrary epochs in continuous time, and models backorders. To solve the problem we develop a hybrid approach, combining a genetic algorithm and linear programming. The approach is tested on a set of instances taken from the literature and compared with other approaches. The experimental results validate the quality of the solutions and the effectiveness of the proposed approach.
The paper presents an ant colony optimization metaheuristic for collaborative
planning. Collaborative planning is used to coordinate individual plans of
self-interested decision makers with private information in order to increase
the overall benefit of the coalition. The method consists of a new search graph
based on encoded solutions. Distributed and private information is integrated
via voting mechanisms and via a simple but effective collaborative local search
procedure. The approach is applied to a distributed variant of the multi-level
lot-sizing problem and evaluated by means of 352 benchmark instances from the
literature. The proposed approach clearly outperforms existing approaches on
the sets of medium and large sized instances. While the best method in the
literature so far achieves an average deviation from the best known
non-distributed solutions of 46 percent for the set of the largest instances,
for example, the presented approach reduces the average deviation to only 5
percent.
In a mixed-model assembly line different models of a common base product can be manufactured in intermixed production sequences. A famous solution approach for the resulting short-term sequencing problem is the so-called level scheduling problem, which aims at evenly smoothing the material requirements over time in order to facilitate a just-in-time supply. However, if materials are delivered in discrete quantities, the resulting spreading of material usages implies that issued cargo carriers of a respective material remain at a station for a longer period of time. In practical applications with many materials required per station, this procedure might lead to bottlenecks with respect to the scarce storage space at stations. This paper investigates level scheduling under the constraint that the induced part usage patterns may not violate given storage constraints. The resulting sequencing problem is formalized and solved by suitable exact and heuristic solution approaches.
The mixed-model sequencing problem is to sequence different product models launched down an assembly line, so that work overload at the stations induced by direct succession of multiple labor-intensive models is avoided. As a concept of clearing overload situations, especially applied by Western automobile producers, a team of cross-trained utility workers stands by to support the regular workforce. Existing research assumes that the regular and a utility worker assemble side by side in an overload situation, so that processing speed is doubled and the workpiece can be finished inside a station's boundaries. However, in many real-world assembly lines the application of utility workers is organized completely differently. Whenever it is foreseeable that a work overload will occur in a production cycle, a utility worker takes over to exclusively execute the work, whereas the regular worker omits the respective cycle and starts processing at the successive workpiece as soon as possible. The paper investigates this more realistic sequencing problem and presents a binary linear program along with a complexity proof. Then, different exact and heuristic solution procedures are introduced and tested. Additional experiments show that the new model is preferable from an economic point of view whenever utility work causes considerable setup activities, e.g., walking to the respective station.
A number of design issues must be resolved when the kanban method is implemented, including the determination of the number of cards to use, the size of the kanban-lots, and the machine allocation. Objectives may include the reduction of work-in-progress, the reduction of makespan or cycle time and the overall improvement of shop floor planning and control. This paper reports on results obtained using optimization models of the kanban method in an assembly shop to determine the optimal number of kanban cards. Numerical results illustrate the relationship between makespan of an order and the number of kanban cards and indicate that there is, in practice, an upper bound to the number of cards that should be used with a given kanban-lot size.
Mixed-model assembly lines are widely used in a range of production settings, such as the final assembly of the automotive and electronics industries, where they are applied to mass-produce standardized commodities. One of the greatest challenges when installing and reconfiguring these lines is the vast product variety modern mixed-model assembly lines have to cope with. Traditionally, product variety is bypassed during mid-term assembly line balancing by applying a joint precedence graph, which represents an (artificial) average model and serves as the input data for a single model assembly line balancing procedure. However, this procedure might lead to considerable variations in the station times, so that serious sequencing problems emerge and work overload threatens. To avoid these difficulties different extensions of assembly line balancing for workload smoothing, i.e., horizontal balancing, have been introduced in the literature. The paper at hand introduces a multitude of known and hitherto unknown objectives for workload smoothing and systematically tests these measures in a comprehensive computational study. The results suggest that workload smoothing is an essential task in mixed-model assembly lines and that some (of the newly introduced) objectives are superior to others.
In a recent paper, Chaudhuri and Bhattacharyya propose a methodology combining Quality Function Deployment (QFD) and an Integer
Programming framework to determine the attribute levels for a Conjoint Analysis (CA). The product planning decisions, however,
are typically taken one to two years before the actual launch of the products. The design team needs some flexibility in improving
the Technical Characteristics (TCs) based on minimum performance improvements in Customer Requirements (CRs) and the imposed
budgetary constraints. Thus there is a need to treat the budget and the minimum performance improvements in CRs as flexible
rather than rigid. In this paper, we represent them as fuzzy numbers instead of crisp numbers. Then a fuzzy integer programming
(FIP) model is used to determine the appropriate TCs and hence the right attribute levels for a conjoint study. The proposed
method is applied to a commercial vehicle design problem with hypothetical data.
Keywords: Quality function deployment; Conjoint analysis; Fuzzy integer programming
In this paper we study a semiconductor packaging line at IBM Bromont. On the line, modules are assembled and then tested in a Burn-in oven. The Burn-in oven is a batch processing station. We outline a procedure to determine order release schedules and lot sizes for the various work stations in the line, such that total manufacturing lead time is minimized. The internal parameters of the procedure are set by simulation experiments and by heuristics. Sensitivity analysis is carried out to determine the robustness of the procedure with respect to various external parameter settings.
We present a finite capacity production scheduling algorithm for an integrated steel company located in Belgium. This multiple-objective optimization model takes various case-specific constraints into account and consists of two steps. A machine assignment step determines the routing of an individual order through the network while a scheduling step makes a detailed timetable for each operation for all orders. The procedure has been tested on randomly generated data instances that reflect the characteristics of the steel company. We report promising computational results and illustrate the flexibility of the optimization model with respect to the various input parameters.
Robust design has been widely recognized as a leading method in reducing
variability and improving quality. Most of the engineering statistics
literature mainly focuses on finding "point estimates" of the optimum operating
conditions for robust design. Various procedures for calculating point
estimates of the optimum operating conditions are considered. Although this
point estimation procedure is important for continuous quality improvement, the
immediate question is "how accurate are these optimum operating conditions?"
The answer is to consider interval estimation for a single variable or
joint confidence regions for multiple variables.
In this paper, with the help of the bootstrap technique, we develop
procedures for obtaining joint "confidence regions" for the optimum operating
conditions. Two different procedures using Bonferroni and multivariate normal
approximation are introduced. The proposed methods are illustrated and
substantiated using a numerical example.
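The Bonferroni-based procedure can be sketched as follows: resample the data, recompute the optimum operating conditions on each bootstrap sample, and build per-coordinate percentile intervals at level α/p so the joint region has coverage at least 1 − α. This is a generic illustration, not the paper's exact procedure; the estimator below and all parameter values are assumptions.

```python
import random

def bootstrap_bonferroni_region(data, estimator, B=2000, alpha=0.05, seed=1):
    """Bonferroni-type joint confidence region from the bootstrap.

    `estimator` maps a sample to a list of p coordinates (e.g. the
    optimum operating conditions).  Each coordinate gets a percentile
    interval at level alpha/p, so the joint coverage is >= 1 - alpha."""
    rng = random.Random(seed)
    n = len(data)
    boot = []
    for _ in range(B):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        boot.append(estimator(resample))
    p = len(boot[0])
    level = alpha / p                      # Bonferroni correction
    region = []
    for j in range(p):
        coord = sorted(b[j] for b in boot)
        lo = coord[int((level / 2.0) * B)]
        hi = coord[min(B - 1, int((1.0 - level / 2.0) * B))]
        region.append((lo, hi))
    return region
```

For a two-dimensional estimate (here, sample mean and median of toy data), the routine returns one interval per coordinate, whose Cartesian product is the joint region.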
This research involves the combination of spare parts management and reverse logistics. At the end of the product life cycle, products in the field (the so-called installed base) can usually be serviced by either new parts, obtained from a Last Time Buy, or by repaired failed parts. This paper, however, introduces a third source: the phase-out returns obtained from customers that replace systems. These returned parts may serve other customers that do not replace the systems yet. Phase-out return flows represent higher volumes and higher repair yields than failed parts and are cheaper to obtain than new parts. This new phenomenon has been ignored in the literature thus far, but due to increased product replacement rates its relevance will grow. We present a generic model, applied in a case study with real-life data from ConRepair, a third-party service provider in plant control systems (mainframes). Volumes of demand for spares, defect returns and phase-out returns are interrelated, because the same installed base is involved. In contrast with the existing literature, this paper explicitly models the operational control of both failed and phase-out returns, which proves far from trivial given the nonstationary nature of the problem. We have to consider subintervals within the total planning interval to optimize both Last Time Buy and control policies well. Given the novelty of the problem, we limit ourselves to a single-customer, single-item approach. Our heuristic solution methods prove efficient and close to optimal when validated. The resulting control policies in the case study are also counter-intuitive. Contrary to (management) expectations, exogenous variables prove to be more important to the repair firm (which we show by sensitivity analysis), and optimizing the endogenous control policy benefits the customers. Last Time Buy volume does not make the decisive difference; far more important is the disposal versus repair policy. PUSH control policy is outperformed by
The primary objective of closed-loop supply chains (CLSC) is to reap the maximum economic benefit from end-of-use products. Nevertheless, literature within this stream of research advocates that closing the loop helps to mitigate the undesirable footprint of supply chains. In this paper we assess the magnitude of such environmental gains for Electric and Electronic Equipment (EEE), based on a single environmental metric, Cumulative Energy Demand (CED). We detail our analysis for the different phases of the CLSC, i.e. manufacturing, usage, transportation and end-of-life activities. According to our literature review, within the same group of EEE, results vary greatly. Furthermore, based on the environmental hot-spots, we propose extensions of the existing CLSC models to incorporate the CED.
The well-known deterministic resource-constrained project scheduling problem (RCPSP) involves the determination of a predictive schedule (baseline schedule or pre-schedule) of the project activities that satisfies the finish-start precedence relations and the renewable resource constraints under the objective of minimizing the project duration. This pre-schedule serves as a baseline for the execution of the project. During execution, however, the project can be subject to several types of disruptions that may disturb the baseline schedule. Management must then rely on a reactive scheduling procedure for revising or reoptimizing the pre-schedule. The objective of our research is to develop procedures for allocating resources to the activities of a given baseline schedule in order to maximize its stability. We propose two integer programming based heuristics and report on computational results obtained on a set of benchmark problems.
Collaborative Networks (CNs) enhance the preparedness of their participants to promptly form Virtual Organisations (VOs) that
are able to successfully tender for large scale and distributed projects. However, the CN efficiency essentially depends on
the ability of its managers to match and customise available reference models, but often also to create new project activities.
Thus, given a particular VO creation project, the CN managers must promptly infer ‘what needs to be done’ (discover the project
processes) and how to best communicate their ‘justified beliefs’ to the CN members involved. This paper proposes a framework
for a decision support system that can help managers and enterprise architects discover/update the main activities and aspects
that need to be modelled for various enterprise task types, with special emphasis on the creation of VOs. The framework content
is also explained ‘by example’, in the context of a real-world scenario.
In this paper, we consider a newly-designed compact three-dimensional automated storage and retrieval system (AS/RS). The system consists of an automated crane taking care of movements in the horizontal and vertical directions, while a gravity conveying mechanism takes care of the depth movement. Our research objective is to analyze the system performance and to optimally dimension the system. We estimate the crane's expected travel time for single-command cycles. From the expected travel time, we calculate the optimal ratio between the three dimensions that minimizes the travel time for a random storage strategy. In addition, we derive an approximate closed-form travel time expression for dual-command cycles. Finally, we illustrate the findings of the study by a practical example.
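Because the crane moves simultaneously in the horizontal and vertical directions, its travel time to a location is the Chebyshev time max(x/vx, z/vz); the expected single-command time under random storage can be estimated by simple Monte Carlo simulation. This is a simplified sketch: the paper derives closed-form expressions and also covers the gravity-driven depth movement, which is omitted here, and all parameter names are assumptions.

```python
import random

def expected_single_command_time(L, H, vx, vz, trials=100000, seed=7):
    """Monte Carlo estimate of the expected crane travel time for a
    single-command cycle under random storage.

    The storage location is uniform over an L x H rack face; the crane
    travels out to it and back to the I/O point, moving horizontally
    and vertically at the same time (Chebyshev metric)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, z = rng.uniform(0.0, L), rng.uniform(0.0, H)
        total += max(x / vx, z / vz)
    return 2.0 * total / trials   # out and back
```

For a "square-in-time" rack (L/vx = H/vz = 1) the expected one-way time is E[max(U, V)] = 2/3 for independent uniforms, so the round trip is 4/3 — a handy sanity check for the simulation.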
During the last decade much research effort in the project scheduling literature has concentrated on resource-constrained project scheduling under uncertainty. Most of this research focuses on protecting the project due date against disruptions during execution. Few efforts have been made to protect the starting times of intermediate activities. In this paper, we develop a heuristic algorithm for minimizing a stability cost function (weighted sum of deviations between planned and realized activity starting times). The algorithm basically proposes a clever way to add intermediate buffers to a minimal duration resource-constrained project schedule. We provide an extensive simulation experiment to investigate the trade-off between quality robustness (measured in terms of project duration) and solution robustness (stability). We address the issue whether to concentrate safety time in so-called project and feeding buffers in order to protect the planned project completion time or to scatter safety time throughout the baseline schedule in order to enhance stability.
This article studies the problem of preparing a finite-horizon production schedule for several products with static demand, in the multi-stage production environment of a multi-routing job shop. The objective is to minimize production setup costs, work-in-process inventory costs, and finished-goods inventory costs. A common cycle is assumed for all products, and this common cycle is determined so that the length of the planning horizon is an integer multiple of the cycle. For medium- and large-sized problems, a simulated annealing algorithm and a tabu search algorithm are proposed.
To compare different forecasting methods on demand series we require an error
measure. Many error measures have been proposed, but when demand is
intermittent some become inapplicable, some give counter-intuitive results, and
there is no agreement on which is best. We argue that almost all known measures
rank forecasters incorrectly on intermittent demand series. We propose several
new error measures that have wider applicability and rank forecasters
correctly on several intermittent demand patterns. We call these
"mean-based" error measures
because they evaluate forecasts against the (possibly time-dependent) mean of
the underlying stochastic process instead of point demands.
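The point can be illustrated with the simplest mean-based analogue of the MAE. On an intermittent series with true period mean 0.2, an all-zero forecast minimizes the ordinary MAE against point demands, yet the forecast equal to the process mean is clearly better; scoring against the process mean restores the correct ranking. This toy example is an illustration of the general idea, not one of the paper's specific measures.

```python
def mae(forecasts, actuals):
    """Ordinary mean absolute error against the observed point demands."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

def mean_based_mae(forecasts, process_means):
    """Mean-based analogue: score forecasts against the (possibly
    time-varying) mean of the demand-generating process instead of the
    noisy point demands."""
    return sum(abs(f - m) for f, m in zip(forecasts, process_means)) / len(forecasts)
```

With demand [0,0,1,0,0,0,1,0,0,0] (true mean 0.2 per period), the all-zero forecast scores 0.2 under the point MAE while the mean forecast scores 0.32, so the point MAE prefers the degenerate forecast; the mean-based measure scores them 0.2 and 0.0 respectively and ranks them correctly.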
In a recent contribution, Teunter et al. [2006. Dynamic lot sizing with product returns and remanufacturing. IJPR 44 (20), 4377-4400] adapted three well-known heuristic approaches for the single-item dynamic lot sizing problem to incorporate returning products that can be remanufactured. In a large numerical study, the Silver-Meal based approach showed the best performance for the separate setup cost setting, i.e. the replenishment options remanufacturing and manufacturing are charged separately for each order. This contribution generalizes the Silver-Meal based heuristic by applying methods elaborated for the corresponding static problem and attaching two simple improvement steps. By doing this, the percentage gap to the optimal solution, which has been used as a performance measure, has been reduced to less than half of its initial value in almost all settings examined.
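For reference, the classic Silver-Meal heuristic that underlies the adapted approach works by extending the current lot as long as the average cost per period covered keeps decreasing. The sketch below implements only this base rule for the standard single-item problem; the returns/remanufacturing extension of Teunter et al. and the improvement steps of this contribution are not reproduced.

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Classic Silver-Meal heuristic for single-item dynamic lot sizing.

    demand[t] is the requirement of period t; each lot incurs one setup
    cost, and demand of period t+k produced in period t is held for k
    periods.  Returns a list of (start_period, lot_size) pairs."""
    n = len(demand)
    lots, t = [], 0
    while t < n:
        best_avg, k, cost = None, 0, setup_cost
        while t + k < n:
            # cost of also covering period t+k with the lot started at t
            cost_k = cost + holding_cost * k * demand[t + k]
            avg = cost_k / (k + 1)     # average cost per period covered
            if best_avg is not None and avg > best_avg:
                break                  # average cost rose: stop the lot
            best_avg, cost = avg, cost_k
            k += 1
        lots.append((t, sum(demand[t:t + k])))
        t += k
    return lots
```

For demand [50, 60, 90, 70, 30] with setup cost 100 and holding cost 0.5 per unit per period, the heuristic produces two lots, started in periods 0 and 2.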
In the past thirty years the full turnover-based storage policy as described by Hausman et al. (1976, Management Science 22(6)) has been widely claimed to outperform the commonly used ABC class-based storage policy, in terms of the resulting average storage and retrieval machine travel time. In practice, however, ABC storage is the dominant policy. Hausman et al. (1976) model the turnover-based policy under the unrealistic assumption of shared storage, i.e. the storage space allocated to one product can only accommodate its average inventory level; no specific space is reserved to store the maximum inventory of a product. It appears that many authors citing Hausman et al.'s results overlook this assumption and use the resulting storage and retrieval machine travel times as if they were valid for full turnover-based storage. Full turnover-based storage is a dedicated storage policy where the storage space allocated to one product must accommodate its maximum inventory level. This paper adapts the travel time model of Hausman et al. to accommodate full turnover-based dedicated storage. Surprisingly, the result of the adapted travel time model is opposite to that of Hausman et al. (1976) but, in line with practice, it supports that ABC (2- or 3-) class-based storage normally outperforms full turnover-based storage.
In this paper we give an overview of recent developments in the field of modeling single-level dynamic lot sizing problems. The focus of this paper is on modeling the various industrial extensions and not on the solution approaches. The timeliness of such a review stems from the growing industry need to solve more realistic and comprehensive production planning problems. First, several different basic lot sizing problems are defined. Many extensions of these problems have been proposed and the research basically expands in two opposite directions. The first line of research focuses on modeling the operational aspects in more detail. The discussion is organized around five aspects: the setups, the characteristics of the production process, the inventory, the demand side and the rolling horizon. The second direction is towards more tactical and strategic models in which the lot sizing problem is a core substructure, such as integrated production-distribution planning or supplier selection. Recent advances in both directions are discussed. Finally, we give some concluding remarks and point out interesting areas for future research.
In this paper a model is developed to simultaneously plan preventive maintenance and production in a process industry environment, where maintenance planning is extremely important. The model schedules production jobs and preventive maintenance jobs, while minimizing costs associated with production, backorders, corrective maintenance and preventive maintenance. The formulation of the model is flexible, so that it can be adapted to several production situations. The performance of the model is discussed and alternate solution procedures are suggested.
When scheduling an uncertain project, project management may wait for additional (future) information to serve as the basis for rescheduling the project. This flexibility enhances the project's value by improving its upside potential while limiting downside losses relative to the initial expectations. Using traditional techniques such as net present value or decision tree analysis may lead to false results. Instead, a real options analysis should be used. We discuss the potentials of a real options approach to project scheduling with an example and highlight future research directions.
This paper considers the simultaneous scheduling of material handling transporters (such as Automatic Guided Vehicles or AGVs) and manufacturing equipment (such as machines and workcenters) in the production of complex assembled products. Given the shipping schedule for the end-items, the objective of the integrated problem is to minimize the cumulative lead time of the overall production schedule (i.e., total makespan) for on-time shipment, and to reduce material handling and inventory holding costs on the shop-floor. The problem of makespan minimization is formulated as a transportation integrated scheduling problem, which is NP-hard. For industrial sized problems, an effective heuristic is developed to simultaneously schedule manufacturing and material handling operations by exploiting the critical path of an integrated operations network. The performance of the proposed heuristic is evaluated via extensive numerical studies and compared with the traditional sequential sched...
Global competition and rapidly changing customer requirements are forcing major changes in the production styles and configuration of manufacturing organizations. Traditional centralised manufacturing systems are not able to meet such requirements. This paper proposes an agent-based approach for dynamically creating and managing agent communities in such widely distributed and ever-changing manufacturing environments. After reviewing the research literature, an adaptive multi-agent manufacturing system architecture called MetaMorph is presented and its main features are described. Such architecture facilitates multi-agent coordination by minimising communication and processing overheads. Adaptation is facilitated through organizational structural change and two learning mechanisms: learning from past experiences and learning future agent interactions by simulating future dynamic, emergent behaviours. The MetaMorph architecture also addresses other specific requirements for next generat...
We study sourcing decisions of price-setting and price-taking firms with two unreliable suppliers, where a price-setting firm sets the retail price after the supply uncertainty is resolved and a price-taking firm takes the retail price as given. We investigate the impacts of market conditions, suppliers' wholesale prices, and their reliabilities on the optimal sourcing decisions of price-setting and price-taking firms, and examine how a firm's pricing power affects these impacts. We define a supplier's reliability in terms of the "size" or the "variability" of his random capacity using the concepts of stochastic dominance. We find that supplier reliability affects the optimal sourcing decisions differently for price-setting and price-taking firms. Specifically, with a price-setting firm, a supplier can win a larger order by increasing his reliability; this is not always so with a price-taking firm.
This paper introduces a new Automated Guided Vehicle (AGV) for guidewire-free industrial applications where rapid reconfiguration is required. The AGV, called OmniMate, has full omnidirectional motion capabilities, can correct odometry errors without external references, and offers a large 183 × 91 cm (72 × 36 in) loading deck. A patented, so-called compliant linkage avoids the excessive wheel slippage often found in other omnidirectional platforms. The paper describes the kinematic design and the control system of the platform and explains its unique odometry error correction method, called Internal Position Error Correction (IPEC). IPEC renders the OmniMate's odometry almost completely insensitive to even severe irregularities of the floor, such as bumps, cracks, or traversable objects. Dead-reckoning is further enhanced by the addition of a fiber-optics gyroscope. Because of its extraordinary dead-reckoning capabilities the OmniMate can travel over extended distances while following a pr...
The purpose of this paper is to evaluate the benefits of dynamic vehicle routing strategies versus conventional static strategies for controlling the operation of an AGV system. The experimental tool for performing this comparison is a flexible, hierarchical simulation model that can be used to simulate an arbitrary AGV system configuration operating under an arbitrary vehicle routing strategy. This model is hierarchical in the following sense: at the time of each routing decision for an AGV in the main simulation, subordinate simulations (subsimulations) are performed sequentially for selected alternative routes between the AGV's current location and its assigned destination; and the performance observed for the latest route in the corresponding subsimulation is used to determine whether additional subsimulations should be performed before finalizing the current routing decision and resuming the main simulation. We present a case study involving a prototype AGV system operating under the control of a global vision system; the results of this case study indicate that significant, cost-effective improvements in performance can be achieved by the use of dynamic vehicle routing strategies in conjunction with global vision-based control.
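The hierarchical mechanism described above can be sketched in a few lines: at a routing decision, candidate routes are subsimulated sequentially, and the latest estimate decides whether further subsimulations are worthwhile. The travel-time model, the congestion delays, and the acceptance threshold below are all illustrative assumptions, not the paper's actual simulator:

```python
import random

def subsimulate(route, trials=200, rng=random):
    """Estimate the mean traversal time of a route by Monte Carlo subsimulation.
    Each segment takes its nominal length plus a random congestion delay
    (assumed exponential with mean 0.2 time units per segment)."""
    total = 0.0
    for _ in range(trials):
        total += sum(seg + rng.expovariate(1.0 / 0.2) for seg in route)
    return total / trials

def choose_route(candidate_routes, good_enough=None):
    """Evaluate candidate routes sequentially; the result of each
    subsimulation decides whether to run further subsimulations."""
    best_route, best_time = None, float("inf")
    for route in candidate_routes:
        est = subsimulate(route)
        if est < best_time:
            best_route, best_time = route, est
        # Stopping rule (one possible choice): skip remaining subsimulations
        # once a route beats the acceptance threshold.
        if good_enough is not None and best_time <= good_enough:
            break
    return best_route, best_time

random.seed(42)
# Hypothetical candidate routes, each a list of segment lengths.
routes = [[3.0, 2.5, 4.0], [5.0, 1.0], [2.0, 2.0, 2.0, 2.0]]
route, t = choose_route(routes, good_enough=7.0)
print(route, round(t, 2))
```

Here the second route's estimate falls below the threshold, so the third candidate is never subsimulated and the main simulation resumes immediately.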
We present a novel approach to the specification of dynamic systems. This approach, a stochastic extension of process algebra, facilitates quantitative (performance) analysis in addition to qualitative analysis. For unreliable systems, this integrated approach encourages the investigation of the impact of functional characteristics on the performance of the system. Throughout the paper, details of the stochastic process algebra are made concrete via an example: a robot control problem. Two specifications of this problem are presented. The first, an idealisation, does not represent the possibility of failures. The second models both failures and recoveries. Each is solved to obtain performance measures for the system. Corresponding author address: University of Edinburgh, Kings Buildings, Edinburgh EH9 1NN; tel: 0131 650 5188; fax: 0131 667 7209. 1 Introduction Classical process algebras such as CCS (Milner, 1989) and CSP (Hoare, 1985) are perfectly suited for modelling systems com...
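The kind of performance measure the second (failure-and-recovery) specification yields can be illustrated numerically with the simplest possible failure/repair Markov chain. The rates below are hypothetical and the model is a stand-in, not the paper's process-algebra specification:

```python
# Two-state continuous-time Markov chain: a component alternates between
# "working" and "failed". Rates are hypothetical, in transitions per hour.
fail_rate = 0.02     # working -> failed
repair_rate = 0.5    # failed  -> working

# Steady-state balance: pi_up * fail_rate = pi_down * repair_rate,
# with pi_up + pi_down = 1, giving the long-run availability directly.
availability = repair_rate / (fail_rate + repair_rate)
print(round(availability, 4))
```

Solving a stochastic process algebra model amounts to deriving and solving such a chain (usually much larger) from the compositional specification.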
This paper demonstrates how Simulated Annealing can be used to obtain line balancing solutions when one or more objectives are important. The experimental results showed that Simulated Annealing approaches yielded significantly better solutions on cycle time performance but average solutions on cost performance. When cycle time performance and total unit cost are weighted equally, performance rankings showed that Simulated Annealing approaches still showed better mean performance than the other approaches. 1. Introduction and literature review 1.1. Background Assembly line balancing is an area of research that has received relatively little attention in recent years. With the number of simplifying assumptions that most traditional line balancing approaches make, it is unsurprising that production managers today are often reluctant to use these old approaches. Modern production environments are often fast paced and flexible, and an increasing number of co
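A minimal simulated-annealing skeleton for line balancing can make the approach concrete. The sketch below only minimises cycle time (the maximum station load) and omits precedence constraints and the cost objectives the study weighs; task times and parameters are made up:

```python
import math
import random

def cycle_time(assign, times, n_stations):
    """Cycle time = load of the most heavily loaded station."""
    loads = [0.0] * n_stations
    for task, station in enumerate(assign):
        loads[station] += times[task]
    return max(loads)

def anneal(times, n_stations, steps=5000, t0=10.0, alpha=0.999, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(n_stations) for _ in times]
    cur = best = cycle_time(assign, times, n_stations)
    best_assign = assign[:]
    temp = t0
    for _ in range(steps):
        task = rng.randrange(len(times))
        old = assign[task]
        assign[task] = rng.randrange(n_stations)        # neighbour: move one task
        new = cycle_time(assign, times, n_stations)
        # Metropolis rule: always accept improvements; accept worsening
        # moves with probability exp(-delta / temperature).
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best:
                best, best_assign = cur, assign[:]
        else:
            assign[task] = old                          # reject: undo the move
        temp *= alpha                                   # geometric cooling
    return best, best_assign

times = [4, 6, 2, 5, 3, 7, 1, 4]    # hypothetical task times
best, _ = anneal(times, n_stations=4)
print(best)
```

With total work 32 over 4 stations, the best achievable cycle time here is 8; the annealer reaches it or comes close within a few thousand moves.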
We present a robust generalised queuing network algorithm as an evaluative procedure for optimising production line configurations using simulated annealing. We compare the results obtained with our algorithm to those of other studies and find some interesting similarities but also striking differences between them in the allocation of buffers, numbers of servers, and their service rates. While context dependent, these patterns of allocation are one of the most important insights which emerge in solving very long production lines. The patterns, however, are often counter-intuitive, which underscores the difficulty of the problem we address. The most interesting feature of our optimisation procedure is its bounded execution time, which makes it viable for optimising very long production line configurations. Based on the bounded execution time property, we have optimised configurations of up to 60 stations with 120 buffers and servers in less than five hours of CPU time.
We consider the problem of scheduling multiple, large-scale, make-to-order assemblies under resource, assembly area, and part availability constraints. Such problems typically occur in the assembly of high-volume, discrete make-to-order products. Based on a list scheduling procedure which has been proposed in Kolisch, we introduce three efficient heuristic solution methods: a biased random sampling method and two tabu search-based large-step optimization methods. The two latter methods differ in the employed neighborhood. The first one uses a simple API neighborhood, while the second one uses a more elaborate so-called 'critical neighborhood' which makes use of problem insight. All three procedures are assessed on a systematically generated set of test instances. The results indicate that especially the large-step optimization method with the critical neighborhood gives very good results which are significantly better than simple single-pass list scheduling proce...
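A tabu search over an API (adjacent pairwise interchange) neighborhood is easy to sketch on a stand-in problem. The example below sequences jobs on a single machine to minimise total weighted completion time, which is not the paper's assembly problem; it only shows the neighborhood structure, the tabu list, and the aspiration criterion. All data are hypothetical:

```python
from collections import deque

def twct(seq, p, w):
    """Total weighted completion time of a job sequence."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * t
    return total

def tabu_search(p, w, iters=100, tenure=5):
    cur = list(range(len(p)))
    best_seq, best_val = cur[:], twct(cur, p, w)
    tabu = deque(maxlen=tenure)            # recently swapped positions
    for _ in range(iters):
        candidates = []
        for i in range(len(cur) - 1):      # API neighbourhood: swap i, i+1
            nb = cur[:]
            nb[i], nb[i + 1] = nb[i + 1], nb[i]
            val = twct(nb, p, w)
            # Aspiration: a tabu move is allowed if it beats the best found.
            if i not in tabu or val < best_val:
                candidates.append((val, i, nb))
        if not candidates:                 # every move tabu and non-improving
            break
        val, i, cur = min(candidates)      # best admissible neighbour
        tabu.append(i)
        if val < best_val:
            best_val, best_seq = val, cur[:]
    return best_seq, best_val

p = [3, 1, 4, 2]    # processing times (hypothetical)
w = [1, 4, 2, 3]    # weights (hypothetical)
seq, val = tabu_search(p, w)
print(seq, val)
```

On this instance the search reaches the WSPT-optimal sequence within a handful of iterations; the 'critical neighborhood' of the paper replaces the blind API swaps with moves guided by problem structure.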
This paper addresses the integrated scheduling and lot-sizing problem in a manufacturing environment which produces complex assemblies. Given the due-dates of the end items, the objective is to minimize the cumulative lead time of the production schedule (total makespan) and reduce set-up and inventory holding costs. A JIT production strategy is adopted in which production is scheduled as late as possible (to minimize WIP costs), but without backlogging end items. The integrated scheduling and lot-sizing problem within such an environment has been formulated and is NP-hard. An efficient heuristic is developed that schedules operations by exploiting the critical path of a network and iteratively groups orders to determine lot sizes that minimize the makespan as well as set-up and holding costs. The performance of the proposed heuristic is evaluated and numerical results are presented comparing the savings achieved in makespan and cost over a lot-for-lot production strategy, and sch...
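The building block such critical-path heuristics exploit is the longest path through the precedence network: absent resource conflicts, the makespan equals the longest chain of operation durations. A minimal computation on a made-up operations network:

```python
# Critical-path (longest-path) makespan of a small precedence network.
# Operation names, durations, and precedence arcs are hypothetical.
durations = {"a": 3, "b": 2, "c": 4, "d": 2, "e": 3}
preds = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"], "e": ["c"]}

def makespan(durations, preds):
    finish = {}
    def ef(op):
        # Earliest finish = max earliest finish of predecessors + own duration
        # (memoised recursion over the acyclic precedence graph).
        if op not in finish:
            start = max((ef(p) for p in preds[op]), default=0)
            finish[op] = start + durations[op]
        return finish[op]
    return max(ef(op) for op in durations)

print(makespan(durations, preds))
```

Here the chain a → c → e of length 3 + 4 + 3 = 10 is critical; the heuristic's lot-grouping decisions are then evaluated by how they lengthen or shorten this path.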
This paper evaluates a new branch-and-cut approach, establishing a computational benchmark for the single-product assembly system design (SPASD) problem. Our approach, which includes a heuristic, preprocessing, and two cut-generating methods, outperformed OSL in solving a set of 102 instances of the SPASD problem. The approach is robust; test problems show that it can be applied to variations of the generic SPASD problem that we encountered in industry.
This paper formulates Shewhart mean (X-bar) and range (R) control charts for diagnosis and interpretation by artificial neural networks. Neural networks are trained to discriminate between samples from probability distributions considered within control limits and those which have shifted in both location and variance. Neural networks are also trained to recognize samples and predict future points from processes which exhibit long-term or cyclical drift. The advantages and disadvantages of neural control charts compared to traditional statistical process control are discussed. Keywords: control charts, neural networks, statistical quality control, artificial intelligence. Revised for International Journal of Production Research, March 1993.
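The discrimination task can be illustrated with a single-neuron (logistic) classifier on subgroup features, which is far simpler than the networks the paper trains but shows the same idea: learn to separate in-control subgroups from mean-shifted ones. The subgroup size, the 2-sigma shift, and the (mean, range) features are illustrative assumptions:

```python
import math
import random

random.seed(0)

def subgroup(shift, n=5):
    """One subgroup of n observations from N(shift, 1);
    features are the subgroup mean and range (as on X-bar and R charts)."""
    xs = [random.gauss(shift, 1.0) for _ in range(n)]
    return (sum(xs) / n, max(xs) - min(xs))

# Training data: label 1 = process mean shifted by 2 sigma.
data = ([(subgroup(0.0), 0) for _ in range(300)]
        + [(subgroup(2.0), 1) for _ in range(300)])
random.shuffle(data)

# Single neuron: logistic regression trained by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(50):                       # epochs
    for (x1, x2), y in data:
        z = w[0] * x1 + w[1] * x2 + b
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - y                       # gradient of log-loss w.r.t. z
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    return 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b))) > 0.5

test_set = ([(subgroup(0.0), 0) for _ in range(200)]
            + [(subgroup(2.0), 1) for _ in range(200)])
accuracy = sum(predict(x1, x2) == bool(y)
               for (x1, x2), y in test_set) / len(test_set)
print(round(accuracy, 3))
```

Even this one-neuron model separates a 2-sigma mean shift almost perfectly; the paper's multi-layer networks additionally handle variance shifts and drift patterns.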
We introduce the batch sequencing problem (BSP) with item and batch availability for the single-machine and two-machine flow-shop cases. We propose a genetic algorithm which solves the BSP through a decomposition into a Phase I batching and a Phase II scheduling decision. The batch sequencing problem is closely related to the discrete lotsizing and scheduling problem (DLSP). Computational experience shows that the genetic algorithm for solving the BSP compares favorably with procedures for solving the DLSP.
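The two-phase decomposition can be mirrored in a compact genetic algorithm: a binary chromosome decides where batches start over a fixed job sequence (Phase I), and the schedule and objective follow deterministically from that batching (Phase II). The job data, the setup time, the batch-availability objective, and the GA parameters below are all illustrative:

```python
import random

P = [2, 3, 2, 4, 1, 3]   # processing times of a fixed job sequence (hypothetical)
SETUP = 2.0              # setup time incurred before each batch

def cost(chrom):
    """chrom[i] = 1 means a new batch starts at job i (position 0 always
    starts a batch). Batch availability: all jobs of a batch complete when
    the batch completes; objective is total completion time."""
    total, t = 0.0, 0.0
    starts = [0] + [i for i in range(1, len(P)) if chrom[i]] + [len(P)]
    for a, b in zip(starts, starts[1:]):
        t += SETUP + sum(P[a:b])
        total += t * (b - a)
    return total

def ga(pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    pop = [[1] + [rng.randint(0, 1) for _ in range(len(P) - 1)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                            # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(P))            # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(1, len(P))] ^= 1      # point mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)

chrom, c = ga()
print(chrom, c)
```

The GA only searches the batching decision; everything downstream of the chromosome is a deterministic evaluation, which is what keeps the decomposition cheap.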
A hierarchical cell loading approach is proposed to solve the production planning problem in cellular manufacturing systems. Our aim is to minimize the variable cost of production subject to production and inventory balance constraints for families and items, and capacity feasibility constraints for group technology cells and resources over the planning horizon. The computational results indicated that the proposed algorithm was very efficient in finding an optimum solution for a set of randomly generated problems.
We propose an integrated algorithm that will solve the part-family and machine-cell formation problem by simultaneously considering the within-cell layout problem. To the best of our knowledge, this is the first study that considers the efficiency of both individual cells and the overall system in monetary terms. Each cell should make at least a certain amount of profit to attain self-sufficiency, while we maximize the total profit of the system using a holonistic approach. The proposed algorithm provides two alternative solutions: one with independent cells and the other with inter-cell movement. Our computational experiments indicate that the results are very encouraging for a set of randomly generated problems.
This paper is organized in the following manner. In Section 2, we present the problem formulation. In Section 3, we discuss the issues involved in the design of control charts for grouped data. Solution methodologies for both the large and the small sample size cases are presented. When α and β are small, or if the difference between μ0 and μ1 is small, then large sample sizes are required and we can appeal to the central limit theorem. If small sample sizes are required, the solution is more difficult. Section 3 also addresses the issue of discreteness: since we are working with grouped (discrete) data and integer sample sizes, the design problem is complicated. In Section 4, we address the related but separate problem of step-gauge design. There are two decisions to be made in specifying the grouping criteria: we must decide how many groups are to be used, and how these groups are to be distinguished. In general, a k-step gauge classifies units into (k+1) groups. As more groups are used, more information becomes available about the parameters of the underlying distribution. The limiting case occurs when the variable is measured to arbitrary precision. Given that a k-step gauge is to be used, not all gauge limits will provide the same amount of information about the parameters of the underlying distribution. It is not intuitively clear how to set the k steps of the gauge to minimize the sample size required. In Section 4, we give tables of step-gauge limits that minimize the sample size required for tests with specified type I and type II risks. We consider in detail the important special case where the error risks are equal and the gauge limits are placed symmetrically about (μ0 + μ1)/2. It is well known that the optimal single-limit gauge should be placed at (μ0 + μ1)/2 ...
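The large-sample regime referred to above has a standard closed form when the variable is measured to full precision: for testing mean μ0 against μ1 with known σ, type I risk α, and type II risk β, the normal-theory sample size is n = ((z_{1-α} + z_{1-β}) σ / (μ1 − μ0))². Grouped (gauged) data carries less information, so the gauge-based designs need at least this many observations. The numbers below are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def sample_size(mu0, mu1, sigma, alpha, beta):
    """Normal-theory sample size for a one-sided test of mu0 vs mu1
    with known sigma, type I risk alpha, and type II risk beta."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - beta)
    return ceil(((z_a + z_b) * sigma / (mu1 - mu0)) ** 2)

# Equal risks and a one-sigma shift (hypothetical values); with equal risks
# the single decision limit sits midway between the means, at (mu0 + mu1)/2.
mu0, mu1, sigma = 10.0, 11.0, 1.0
n = sample_size(mu0, mu1, sigma, alpha=0.05, beta=0.05)
limit = (mu0 + mu1) / 2
print(n, limit)
```

This full-measurement n is the benchmark against which the sample-size penalty of a k-step gauge can be judged.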