IEEE Transactions on Automation Science and Engineering

Published by Institute of Electrical and Electronics Engineers
Online ISSN: 1558-3783
Print ISSN: 1545-5955
Contact mode Atomic Force Microscopy (CM-AFM) is popularly used by the biophysics community to study mechanical properties of cells cultured in Petri dishes, or tissue sections fixed on microscope slides. While cells are fairly easy to locate, sampling in spatially heterogeneous tissue specimens is laborious and time-consuming at higher magnifications. Furthermore, tissue registration across multiple magnifications for AFM-based experiments is a challenging problem, suggesting the need to automate the process of AFM indentation on tissue. In this work, we have developed an image-guided micropositioning system to align the AFM probe and human breast-tissue cores in an automated manner across multiple magnifications. Our setup considerably improves the efficiency of AFM indentation experiments. Note to Practitioners: Human breast tissue is by nature heterogeneous, and in the samples we studied, epithelial tissue is formed by groups of functional breast epithelial cells that are surrounded by stromal tissue in a complex intertwined way. Therefore, sampling a specific cell type on an unstained specimen is very difficult. To aid us, we use digital images of stained sections of the same tissue, annotated by a certified pathologist, to identify the region of interest (ROI) at a coarse magnification and an image-guided positioning system to place the unstained tissue near the AFM probe tip. Using our setup, we can considerably reduce AFM operating time, and we believe that our setup is a viable supplement to commercial AFM stages with limited X-Y range.
 
A clear association has been demonstrated between gait stability and falls in the elderly. Integration of wearable computing and human dynamic stability measures into home automation systems may help differentiate fall-prone individuals in a residential environment. The objective of the current study was to evaluate the capability of an electronic textile (e-textile) pants system to assess local dynamic stability and to differentiate motion-impaired elderly from their healthy counterparts. A pair of e-textile pants comprising numerous e-TAGs at locations corresponding to the lower extremity joints was developed to collect acceleration, angular velocity, and piezoelectric data. Four motion-impaired elderly individuals, together with nine healthy individuals (both young and old), participated in treadmill walking with a motion capture system simultaneously collecting kinematic data. Local dynamic stability, characterized by the maximum Lyapunov exponent, was computed based on vertical acceleration and angular velocity at lower extremity joints for the measurements from both e-textile and motion capture systems. Results indicated that the motion-impaired elderly had significantly higher maximum Lyapunov exponents (computed from vertical acceleration data) than healthy individuals at the right ankle and hip joints. In addition, maximum Lyapunov exponents assessed by the motion capture system were found to be significantly higher than those assessed by the e-textile system. Despite the difference between these measurement techniques, attaching accelerometers at the ankle and hip joints was shown to be an effective sensor configuration. It was concluded that the e-textile pants system, via dynamic stability assessment, has the potential to identify motion-impaired elderly.
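For readers unfamiliar with the stability measure used above, the following Python sketch illustrates a Rosenstein-style estimate of the maximum Lyapunov exponent from a single acceleration channel; the embedding dimension, delay, Theiler window, and sampling rate are illustrative assumptions, not the values used in the study.

    import numpy as np

    def max_lyapunov_exponent(x, dt, dim=5, tau=10, theiler=50, horizon=100):
        """Rosenstein-style estimate of the maximum Lyapunov exponent of a scalar
        time series x sampled every dt seconds (embedding parameters are assumed)."""
        n = len(x) - (dim - 1) * tau
        # Delay embedding: each row is a reconstructed state vector.
        emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
        usable = n - horizon
        log_div = np.zeros(horizon)
        counts = np.zeros(horizon)
        for i in range(usable):
            # Nearest neighbour of point i, excluding temporally close points.
            d = np.linalg.norm(emb[:usable] - emb[i], axis=1)
            d[max(0, i - theiler): i + theiler + 1] = np.inf
            j = int(np.argmin(d))
            # Accumulate the log-divergence of the two trajectories over the horizon.
            sep = np.linalg.norm(emb[i:i + horizon] - emb[j:j + horizon], axis=1)
            valid = sep > 0
            log_div[valid] += np.log(sep[valid])
            counts[valid] += 1
        curve = log_div / np.maximum(counts, 1)
        # The slope of the average divergence curve (per second) is the exponent estimate.
        slope, _ = np.polyfit(np.arange(horizon) * dt, curve, 1)
        return slope

    # Example with a synthetic "vertical acceleration" signal sampled at 100 Hz.
    rng = np.random.default_rng(0)
    print(max_lyapunov_exponent(rng.standard_normal(3000), dt=0.01))

A larger exponent indicates faster divergence of nearby gait states, i.e., lower local dynamic stability.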
 
This paper presents an automated robotic micromanipulation system capable of force-controlled mechanical stimulation and fluorescence imaging of Drosophila larvae, for mechanotransduction studies of Drosophila neural circuitry. An elastomeric microdevice is developed for efficient immobilization of an array of larvae for subsequent force-controlled touching. A microelectromechanical systems (MEMS) based force sensor is integrated into the system for closed-loop force control of larva touching at a resolution of 50 μN. Two microrobots are coordinately servoed using orchestrated position and force control laws for automatic operations. The system performs simultaneous force-controlled larva touching and fluorescence imaging at a speed of 4 larvae per minute, with a success rate of 92.5%. This robotic system will greatly facilitate the dissection of mechanotransduction mechanisms of Drosophila larvae at both the molecular and cellular levels.
 
Passive radio-frequency identification (RFID) systems based on the ISO/IEC 18000-6C (aka EPC Gen2) protocol have typical read rates of up to 1200 unique 96-bit tags per second. This performance is achieved in part through the use of a medium access control algorithm, christened the Q-algorithm, that is a variant of the Slotted Aloha multiuser channel access algorithm. We analyze the medium access control algorithm employed by the ISO/IEC 18000-6C RFID air interface protocol and provide a procedure to achieve optimal read rates. We also show that theoretical performance can be exceeded in many practical use cases and provide a model to incorporate real-world data in read-rate estimation.
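As background on the analyzed anti-collision scheme, here is a simplified Python sketch of a Gen2-style Q-algorithm inventory round, in which a floating-point slot count is nudged up on collision slots and down on empty slots; the adjustment constant C, the per-slot re-randomization, and the tag population are illustrative simplifications rather than the protocol's full QueryRep/QueryAdjust machinery.

    import random

    def gen2_inventory(num_tags, c=0.3, q_init=4):
        """Simplified EPC Gen2 Q-algorithm: tags pick a slot in [0, 2^Q - 1];
        the reader adapts the floating-point slot count Qfp after every slot."""
        unread = num_tags
        qfp = float(q_init)
        slots = 0
        while unread > 0:
            q = int(round(qfp))
            # Each unread tag draws a slot counter uniformly in [0, 2^Q - 1].
            counters = [random.randrange(2 ** q) for _ in range(unread)]
            responders = counters.count(0)
            slots += 1
            if responders == 1:          # singleton slot: one tag is read
                unread -= 1
            elif responders == 0:        # empty slot: shrink the frame
                qfp = max(0.0, qfp - c)
            else:                        # collision slot: grow the frame
                qfp = min(15.0, qfp + c)
        return slots

    random.seed(1)
    print("slots used for 200 tags:", gen2_inventory(200))

The read rate is then roughly the tag population divided by the number of slots times the per-slot air time.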
 
In this paper, we present a new sketch-based system, KnitSketch, to improve the efficiency of process planning for knitting garments at an early design stage. The KnitSketch system utilizes a sketching interface with the pen-and-paper metaphor, and users only need to draw outlines of different parts of the garment. Based on sketch understanding, the system automatically makes reasonable geometric inferences about the process-planning data of the garment. The system is designed for nonprofessional users and can design diverse garment styles by freehand drawings. The contributions of this work include contextual extraction of reusable data from sketches, an MDG structure for sketch beautification, and an integrated system with natural expression and effective communication that reduces users' cognitive load. User experience shows that the proposed system helps designers focus on the task instead of the designing tools, and thus improves designers' efficiency and productivity.
 
In this paper, we are concerned with the registration of two 3D data sets with large-scale stretches and noise. First, by incorporating a scale factor into the standard iterative closest point (ICP) algorithm, we formulate the registration as a constrained optimization problem over a 7D nonlinear space. Then, we apply the singular value decomposition (SVD) approach to iteratively solve this optimization problem. Finally, we establish a new ICP algorithm, named the Scale-ICP algorithm, for registration of data sets with isotropic stretches. In order to achieve global convergence for the proposed algorithm, we propose a way to select the initial registrations. To demonstrate the performance and efficiency of the proposed algorithm, we report several comparative experiments between the Scale-ICP algorithm and the standard ICP algorithm.
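A minimal Python sketch of one way to realize a scale-augmented ICP iteration of this kind, using an SVD-based (Umeyama-style) closed-form similarity estimate inside a nearest-neighbour loop; the KD-tree correspondence search and tolerances are illustrative choices rather than the authors' exact formulation.

    import numpy as np
    from scipy.spatial import cKDTree

    def similarity_from_pairs(P, Q):
        """Closed-form scale s, rotation R, translation t minimizing ||Q - (s R P + t)||^2."""
        mp, mq = P.mean(0), Q.mean(0)
        Pc, Qc = P - mp, Q - mq
        H = Qc.T @ Pc / len(P)                      # cross-covariance
        U, D, Vt = np.linalg.svd(H)
        S = np.eye(3)
        S[2, 2] = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
        R = U @ S @ Vt
        s = np.trace(np.diag(D) @ S) * len(P) / (Pc ** 2).sum()
        t = mq - s * R @ mp
        return s, R, t

    def scale_icp(P, Q, iters=50, tol=1e-8):
        """Register point set P onto Q with an isotropic scale (Scale-ICP-style loop)."""
        tree = cKDTree(Q)
        s, R, t = 1.0, np.eye(3), np.zeros(3)
        prev = np.inf
        for _ in range(iters):
            X = s * (P @ R.T) + t                   # current transform of P
            d, idx = tree.query(X)                  # closest-point correspondences
            s, R, t = similarity_from_pairs(P, Q[idx])
            err = (d ** 2).mean()
            if abs(prev - err) < tol:
                break
            prev = err
        return s, R, t

As with standard ICP, convergence is only local, which is why the abstract stresses the choice of initial registration.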
 
Airport baggage handling is a field of automation systems that is currently dependent on centralized control systems and conventional automation programming techniques. In this and other areas of manufacturing and materials handling, these legacy automation technologies increasingly fall short of the growing demand for systems that are reconfigurable, fault tolerant, and easy to maintain. IEC 61499 Function Blocks is an emerging architectural framework for the design of distributed industrial automation systems and their reusable components. A number of architectures have been suggested for multiagent and holonic control systems that incorporate function blocks. This paper presents a multiagent control approach for a baggage handling system (BHS) using IEC 61499 Function Blocks. In particular, it focuses on demonstrating a decentralized control system that is scalable, reconfigurable, and fault tolerant. The design follows the automation object approach, and produces a function block component representing a single section of conveyor. In accordance with holonic principles, this component is autonomous and collaborative, such that the structure and the behavior of a BHS can be entirely defined by the interconnection of these components within the function block design environment. Simulation is used to demonstrate the effectiveness of the agent-based control system, and a utility is presented for real-time viewing of these systems. Tests on a physical conveyor test system demonstrated deployment to embedded control hardware.
 
In this paper, fault diagnosis and accommodation control are developed for robotic systems. First, a nonlinear observer is designed based on the available model. Fault detection is carried out by comparing the observer states with their signatures. Second, state observers are constructed based on possible fault function sets. Third, the accommodation control design is developed using a normal controller plus a neural network compensator to capture the nonlinear characteristics of faults. Finally, if fault isolation is completed successfully, a second fault accommodation controller is designed based on the fault information obtained by the isolation scheme.
 
[Figures: on-line selection flow of the SS-scheme; VM conjecture results of the various algorithms.]
Selection schemes between neural-network (NN) and multiple-regression (MR) outputs of a virtual metrology system (VMS) are studied in this paper. Both NN and MR are applicable algorithms for implementing virtual-metrology (VM) conjecture models. An MR algorithm may achieve better accuracy only with a stable process, whereas an NN algorithm may have superior accuracy when equipment property drift or shift occurs. To take advantage of both MR and NN algorithms, the simple-selection scheme (SS-scheme) is first proposed to enhance the VM conjecture accuracy. This SS-scheme simply selects either the NN or the MR output according to the smaller Mahalanobis distance between the input process data set and the NN/MR-group historical process data sets. Furthermore, a weighted-selection scheme (WS-scheme), which computes the VM output with a weighted sum of the NN and MR results, is also developed. This WS-scheme generates a well-behaved system with continuity between the NN and MR outputs. Both the CVD and photo processes of a fifth-generation TFT-LCD factory are adopted in this paper to test and compare the conjecture accuracy among the solo-NN, solo-MR, SS-scheme, and WS-scheme algorithms. A back-propagation neural network with one hidden layer (BPNN-I) is applied to establish the NN conjecture model. Test results show that the conjecture accuracy of the WS-scheme is the best among the solo-NN, solo-MR, SS-scheme, and WS-scheme algorithms.
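A minimal Python sketch of the selection logic: the Mahalanobis distances of the incoming process-data vector to the NN-group and MR-group historical data either select one output (SS-scheme) or weight the two outputs (WS-scheme); the particular distance-ratio weighting shown is an illustrative assumption.

    import numpy as np

    def mahalanobis(x, data):
        """Mahalanobis distance of vector x to the sample distribution of `data`."""
        mu = data.mean(axis=0)
        cov = np.cov(data, rowvar=False)
        diff = x - mu
        return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

    def vm_output(x, y_nn, y_mr, hist_nn, hist_mr, scheme="WS"):
        """Combine NN and MR virtual-metrology outputs for process-data vector x.

        hist_nn / hist_mr: historical process-data sets associated with the
        NN group and the MR group, respectively (assumed layout)."""
        d_nn = mahalanobis(x, hist_nn)
        d_mr = mahalanobis(x, hist_mr)
        if scheme == "SS":                 # simple selection: the smaller distance wins
            return y_nn if d_nn < d_mr else y_mr
        # WS-scheme: weighted sum; the closer group receives the larger weight.
        w_nn = d_mr / (d_nn + d_mr)
        return w_nn * y_nn + (1.0 - w_nn) * y_mr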
 
Most factories depend on skilled workers to test the quality of transmission devices by listening to the sound. In this paper, an intelligent inspection system is proposed to evaluate the quality of transmission devices in place of experts. Since the causes of faults of transmission devices are complex and a defective product might simultaneously have many types of faults, the discrimination process between defective and nondefective products and the classification process of defective products are treated separately in the proposed system. From the acoustic data of operating transmission devices, we extract feature vectors based on time-frequency analysis and train a neuroclassifier using learning vector quantization (LVQ). Furthermore, a genetic algorithm (GA) with floating-point (FP) representation is utilized to select some significant frequencies from the spectra of acoustic data of defective and nondefective products and to make a quality evaluation rule automatically. The defective products are picked up from the automatic production line according to the evaluation rule and the trained neuroclassifier. Finally, the self-organizing feature map (SOM) algorithm is used to identify the kinds of defective products. The experimental results show that the proposed intelligent system is able to perform the quality evaluation of transmission devices successfully.
 
This paper presents a droplet-ejection-based technique for synthesizing deoxyribonucleic acid (DNA) sequences on different substrates, such as glass, plastic, or silicon. Any DNA sequence can be synthesized by ejecting droplets of DNA bases with a self-focusing acoustic transducer (SFAT) that does not require any nozzles. An SFAT can eject liquid droplets around 3-5 μm in diameter, which are significantly smaller than those ejected by commercial ink-jet printers and reduce the amount of reagents needed for the synthesis. An array of SFATs is integrated with microchannels and reservoirs for delivery of DNA bases to the SFATs. A poly-L-lysine-coated glass slide is patterned and used as a target substrate for in situ synthesis of multiple T bases. The significant advantage of this scheme over some of the existing commercial solutions is that it can allow geneticists to synthesize any DNA sequence within hours using a computer program at an affordable cost in their own labs. This paper describes the concept and scheme of the on-demand DNA synthesis (with an acoustic ejector integrated with microfluidic components) along with the results of an actual DNA synthesis by an SFAT. Note to Practitioners-Deoxyribonucleic acid (DNA) microarrays allow geneticists to monitor the interactions among thousands of genes simultaneously on a chip. There are commercial systems for producing DNA microarrays, but none of them give the flexibility to synthesize DNA microarrays on demand in the geneticist's own lab. Affymetrix's GeneChip technology produces DNA probe sequences premade at Affymetrix with a set of 4n photomasks for n-mers. Other techniques transfer premade DNA sequences to a substrate (glass, plastic, or silicon) through ink-jet printing or contact dispensing. Agilent and Rosetta use their ink-jet printing technology to produce DNA probe sequences at their factories. The ink-jet print heads used for printing microarrays use either piezoelectric or thermal actuation, and eject liquid droplets through nozzles. Thus, the smallest droplet size ejected from these devices depends on the size of the nozzle. The small nozzles are difficult to construct with good uniformity and tend to get clogged. The idea presented in this paper is to develop a microelectromechanical-system (MEMS)-based portable system for synthesizing DNA on different substrates, using nozzleless, heatless, lensless, acoustic droplet ejectors. Future research will aim to synthesize longer DNA sequences with a combination of different bases, using directional droplet ejectors.
 
A pair of mobile robots acting on opposite sides of a thin plate is developed for a class of tasks where robots have to work together, carrying a pair of end-effectors and traversing across a plate surface. Using powerful magnets, the paired robots attract each other, support themselves against gravity, and generate traction force to move across the panel. First, the design concept of paired mobile robots is presented, followed by dynamic modeling and magnetic analysis. Conditions for preventing the robot from falling as well as from slipping on the plate surface are examined. Time-optimal control of the paired robots subject to the no-fall, no-slip conditions is formulated and solved numerically. Precision positioning control using a laser beacon is designed and tested. A prototype of the paired robots using Halbach array permanent magnets and Lorentz force actuators is developed, and the control methods are implemented and tested on the prototype.
 
The contribution of this paper is the introduction of the event-condition-action (ECA) paradigm for the design of modular logic controllers that are reconfigurable. ECA rules have been used extensively to specify the behavior of active database and expert systems and are recognized as a highly reconfigurable tool to design reactive behavior. This paper develops a method to design modular logic controllers whose dynamics are governed by ECA rules, with the ultimate goal of producing reconfigurable control. Modularity, integrability, and diagnosability measures that have in the past been used to measure the reconfigurability of manufacturing systems are used to assess the reconfigurability of the developed controllers. For the modularity measure, criteria found in computer science to evaluate the modularity of object-oriented programs are adapted to evaluate the modularity of modular logic controllers. The results of this paper are that reconfigurability is highly dependent on the level of modularity of the logic control system, and that not all "modular" structures are reconfigurable. There are approaches, such as the one shown in this paper using ECA rules, that can greatly increase the modularity, integrability, and diagnosability of the logic control system, thus increasing its reconfigurability. Note to Practitioners-This paper has been motivated by the problem of designing reconfigurable modular logic controllers. Reconfiguration is important in manufacturing, but it has also been an issue in the software design domain. There are existing software systems, such as active databases or expert systems, with very powerful reconfiguration capabilities enabled by event-condition-action (ECA) rules. This paper applies the ECA concept to the design of modular logic controllers. This paper begins by describing what an ECA logic system is and then focuses on how ECA logic systems can be implemented with modular control approaches. To this end, two designs are considered. First, modular finite state machines are used to construct ECA logic systems, and a theoretical framework is built using this approach. Three qualitative measures for reconfigurability (modularity, integrability, and diagnosability) are presented and the controllers are evaluated using these measures. Second, an implementation using the IEC 61499 function block standard is presented as it is a widely understood and accepted standard for modular control applications. Future work entails theoretical analysis using modular verification techniques that exploit the controller structure.
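For readers unfamiliar with the paradigm, a minimal Python sketch of an event-condition-action rule evaluator follows; the conveyor-interlock rule is a made-up illustration, not one of the controllers developed in the paper.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class ECARule:
        event: str                                   # triggering event name
        condition: Callable[[Dict], bool]            # predicate over the controller state
        action: Callable[[Dict], None]               # state update / output command

    @dataclass
    class ECAController:
        state: Dict = field(default_factory=dict)
        rules: List[ECARule] = field(default_factory=list)

        def dispatch(self, event: str) -> None:
            """Fire every rule whose event matches and whose condition holds."""
            for rule in self.rules:
                if rule.event == event and rule.condition(self.state):
                    rule.action(self.state)

    # Illustrative conveyor interlock: start the motor only if the next section is clear.
    ctrl = ECAController(state={"next_clear": True, "motor": False})
    ctrl.rules.append(ECARule(
        event="bag_arrived",
        condition=lambda s: s["next_clear"],
        action=lambda s: s.update(motor=True)))
    ctrl.dispatch("bag_arrived")
    print(ctrl.state)   # {'next_clear': True, 'motor': True}

Because each rule is a self-contained (event, condition, action) triple, reconfiguration amounts to adding, removing, or editing rules rather than rewriting monolithic ladder or state-machine logic.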
 
This paper presents a robust methodology for constrained motion tracking control of piezo-actuated flexure-based four-bar micro/nano manipulation mechanisms. This unique control approach is established for the tracking of desired motion trajectories in a constrained environment exhibiting some degree of uncertain stiffness. The control methodology is also formulated to accommodate not only the parametric uncertainties and unknown force conversion function, but also nonlinearities including the hysteresis effect and external disturbances in the motion systems. In this paper, the equations for the dynamic modelling of a flexure-hinged four-bar micro/nano manipulation mechanism operating in a constrained environment are established. A lumped parameter dynamic model that combines the piezoelectric actuator and the micro/nano manipulation mechanism is developed for the formulation of the control methodology. Stability analysis of the proposed closed-loop system is conducted, and the convergence of the motion tracking errors is proven theoretically. Furthermore, precise motion tracking ability in following a desired motion trajectory is demonstrated in the experimental study. This robust constrained motion tracking control methodology is very useful for the development of high performance flexure-based micro/nano manipulation applications demanding high-precision motion tracking with force sensing and feedback.
 
This paper presents a complete design and development procedure of a new XY micromanipulator for two-dimensional (2-D) micromanipulation applications. The manipulator possesses both a nearly decoupled motion and a simple structure, which is featured with parallel-kinematic architecture, flexure hinge-based joints, and piezoelectric actuation. Based on the pseudo-rigid-body (PRB) simplification approach, the mathematical models predicting kinematics, statics, and dynamics of the XY stage have been obtained, which are verified by finite-element analysis (FEA) and then integrated into dimension optimization via the particle swarm optimization (PSO) method. Moreover, a prototype of the micromanipulator is fabricated and calibrated using a microscope vision system, and visual servo control employing a modified PD controller is implemented for accuracy improvement. Experiments show that the micromanipulator achieves a workspace of 260 μm × 260 μm with 2-D positioning accuracy and repeatability of about 0.73 μm and 1.02 μm, respectively.
 
This paper describes the design, assembly, fabrication, and evaluation of artificial molecular machines with the goal of implementing their internal nanoscale movements within nanoelectromechanical systems in an efficient manner. These machines, a unique class of switchable molecular compounds in the shape of bistable [2]rotaxanes, exhibit internal relative mechanical motions of their ring and dumbbell components as a result of optical, chemical, or electrical signals. As such, they hold promise as nanoactuation materials. Although micromechanical devices that utilize the force produced by switchable [3]rotaxane molecules have been demonstrated, the current prototypical devices require a mechanism that minimizes the degradation associated with the molecules in order for bistable rotaxanes to become practical actuators. We propose a modified design in which electricity, instead of chemicals, is employed to stimulate the relative movements of the components in bistable [3]rotaxanes. As an initial step toward the assembly of a wholly electrically powered actuator based on molecular motion, closely packed Langmuir-Blodgett films of an amphiphilic, bistable [2]rotaxane have been characterized and an in situ Fourier transform infrared spectroscopic technique has been developed to monitor molecular signatures in device settings. Note to Practitioners-Biological molecular components, such as myosin and actin in skeletal muscle, organize to perform complex mechanical tasks. These components execute nanometer-scale interactions, but produce macroscopic effects. Inspired by this concept, we are developing a new class of molecular machines called bistable rotaxanes. In this paper, a series of experiments has been conducted to study the molecular properties of bistable rotaxanes in thin films and on solid-state nanodevices. Our results have shed light on the optimization of future molecular machine-based systems, particularly with respect to their implementation and manufacture.
 
A new approach to compensate the strong hysteresis nonlinearity in piezoelectric materials is proposed. Based on the inverse multiplicative scheme, the approach avoids the model inversion employed in existing works. The compensator is therefore simple to implement and does not require additional computation once the direct model is available. The proposed compensation technique is valuable for hysteresis that is modeled with the Bouc-Wen set of equations.
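A minimal discrete-time Python sketch of the idea: the direct Bouc-Wen model is simulated alongside the plant, and its hysteresis state is used multiplicatively to compute the control input without inverting the model; the parameter values and one-step structure are illustrative assumptions.

    import numpy as np

    # Illustrative Bouc-Wen parameters for a piezoelectric actuator (assumed values).
    d_p, A_bw, B_bw, C_bw = 1.0, 0.9, 0.008, 0.008

    def bouc_wen_step(h, u, u_prev, dt):
        """One Euler step of the hysteresis internal state h of the direct model."""
        du = (u - u_prev) / dt
        dh = A_bw * du - B_bw * abs(du) * h - C_bw * du * abs(h)
        return h + dt * dh

    def compensate(y_ref, dt=1e-4):
        """Inverse-multiplicative compensation: u[k] = (y_ref[k] + h[k]) / d_p,
        where the output model is y = d_p * u - h and h comes from the direct
        Bouc-Wen model driven by the applied input (no model inversion)."""
        u = np.zeros_like(y_ref)
        h = 0.0
        for k in range(1, len(y_ref)):
            u[k] = (y_ref[k] + h) / d_p
            h = bouc_wen_step(h, u[k], u[k - 1], dt)
        return u

    # Example: feedforward voltage for a sinusoidal displacement reference.
    t = np.arange(0, 0.1, 1e-4)
    u_cmd = compensate(10.0 * np.sin(2 * np.pi * 50 * t))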
 
The first part of this paper develops a linear characterization for the space of the Petri net markings that are reachable from the initial marking M<sub>0</sub> through bounded-length fireable transition sequences. The second part discusses the practical implications of this result for the liveness and reversibility analysis of a particular class of Petri nets known as process-resource nets with acyclic, quasi-live, serializable, and reversible process subnets. Note to Practitioners-One of the main challenges in the analysis and design of the resource allocation taking place in modern technological systems is the verification of certain properties of the system behavior, such as liveness and deadlock freedom. The last decade has seen the development of a number of computational tests that can evaluate the aforementioned properties for a large class of resource allocation systems. The tests that are most promising essentially verify the target properties by establishing the absence of some undesirable structure from the states that are reachable during system operation. As a result, the effective execution of these tests necessitates the effective representation of the underlying reachability space. Yet, in the past, the development of a concise and computationally manageable representation of the system reachability space has been considered a challenging proposition and a factor that compromises the resolution power of the aforementioned tests. The work presented in this paper establishes that for a very large class of the considered resource allocation systems, the underlying reachability space admits a precise and computationally efficient characterization, which subsequently leads to more powerful verification tools for the target behavioral properties.
 
This paper presents a decision-making approach towards adaptive setup planning that considers both the availability and capability of machines on a shop floor. It loosely integrates scheduling functions at the setup planning stage, and utilizes a two-step decision-making strategy for generating machine-neutral and machine-specific setup plans at each stage. The objective of the research is to enable adaptive setup planning for dynamic job shop machining operations. Particularly, this paper covers basic concepts and algorithms for one-time generic setup planning, and run-time final setup merging for dynamic machine assignments. The decision-making algorithms are further validated through a case study. Note to Practitioners-With increased product diversification, companies must be able to profitably produce in small quantities and make frequent product changeovers. This leads to dynamic job shop operations that require a growing number of setups in a machine shop. Moreover, today's customer-driven market and just-in-time production demand rapid and adaptive decision-making capability to deal with dynamic changes in the job shop environment. Within this context, generating effective and efficient setup plans when machine availability and capability change over time is crucial for engineers. The adaptive setup planning approach presented in this paper is expected to largely enhance the dynamism of fluctuating job shop operations through adaptive yet rapid decision making.
 
Availability of only limited or sparse experimental data impedes the ability of current models of chemical mechanical planarization (CMP) to accurately capture and predict the underlying complex chemomechanical interactions. Modeling approaches that can effectively interpret such data are therefore necessary. In this paper, a new approach to predict the material removal rate (MRR) and within wafer nonuniformity (WIWNU) in CMP of silicon wafers using sparse-data sets is presented. The approach involves utilization of an adaptive neuro-fuzzy inference system (ANFIS) based on subtractive clustering (SC) of the input parameter space. Linear statistical models were used to assess the relative significance of process input parameters and their interactions. Substantial improvements in predicting CMP behaviors under sparse-data conditions can be achieved from fine-tuning membership functions of statistically less significant input parameters. The approach was also found to perform better than alternative neural network (NN) and neuro-fuzzy modeling methods for capturing the complex relationships that connect the machine and material parameters in CMP with MRR and WIWNU, as well as for predicting MRR and WIWNU in CMP.
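As background, a minimal Python sketch of the subtractive-clustering step that seeds such a neuro-fuzzy rule base (Chiu-style potential reduction); the radii, acceptance threshold, and synthetic data are illustrative assumptions.

    import numpy as np

    def subtractive_clustering(X, ra=0.5, rb=None, eps=0.15):
        """Return cluster centres found by potential-based subtractive clustering.
        X is assumed to be normalized to [0, 1] in every dimension."""
        rb = rb or 1.5 * ra
        alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
        # Potential of each point: density of neighbours within radius ra.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        P = np.exp(-alpha * d2).sum(1)
        centres = []
        p_first = P.max()
        while True:
            k = int(np.argmax(P))
            if P[k] < eps * p_first or len(centres) >= len(X):
                break
            centres.append(X[k].copy())
            # Subtract the accepted centre's influence from every remaining potential.
            P = P - P[k] * np.exp(-beta * ((X - X[k]) ** 2).sum(1))
        return np.array(centres)

    # Example: two blobs in a 2-D (normalized) input space.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.2, 0.05, (50, 2)), rng.normal(0.8, 0.05, (50, 2))])
    print(subtractive_clustering(X))

Each centre then becomes one fuzzy rule (one membership function per input), which is why the clustering radius directly controls the size of the ANFIS rule base.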
 
This paper addresses a novel method for localizing a stationary object in an indoor office environment. The proposed method utilizes the received-signal-strength index (RSSI) of radio signals radiating from fixed reference nodes and reference tags placed at known positions to generate a precise signal propagation model. Signal attenuation parameters are updated online according to environmental variation; thus, the proposed method has environmental-adaptation capabilities. Subsequent experiments were conducted to demonstrate the superiority of the proposed technique over a commercial location-based service (LBS) chipset.
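A minimal Python sketch of the two ingredients described above: fitting a log-distance path-loss model from reference tags at known positions, and least-squares localization of an unknown node from the calibrated model; the model form, parameters, and geometry are illustrative assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_path_loss(ref_pos, reader_pos, rssi):
        """Fit RSSI(d) = A - 10 n log10(d) from reference tags at known positions."""
        d = np.linalg.norm(ref_pos - reader_pos, axis=1)
        G = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
        (A, n), *_ = np.linalg.lstsq(G, rssi, rcond=None)
        return A, n

    def locate(readers, rssi, A, n):
        """Least-squares position estimate from RSSI measured at several readers."""
        d_est = 10 ** ((A - rssi) / (10.0 * n))        # invert the path-loss model
        def residual(p):
            return np.linalg.norm(readers - p, axis=1) - d_est
        return least_squares(residual, x0=readers.mean(axis=0)).x

    # Tiny example: one reader calibrates the model, three readers localize a tag.
    reader = np.array([0.0, 0.0])
    ref_pos = np.array([[1.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
    A, n = fit_path_loss(ref_pos, reader, rssi=np.array([-40.0, -46.0, -52.0]))
    readers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    print(locate(readers, np.array([-55.0, -60.0, -58.0]), A, n))

Refitting A and n on-line from the reference tags is what gives the method its environmental-adaptation capability.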
 
Biological studies, drug discovery, and medical diagnostics benefit greatly from automated microscope platforms that can outperform even the most skilled human operators in certain tasks. However, the small field-of-view of a traditional microscope operating at high resolution poses a significant challenge in practice. The common approach of using a moving stage suffers from relatively low dynamic bandwidth and agitation to the specimen. This paper describes an automated microscope station based on the novel adaptive scanning optical microscope (ASOM), which combines a high-speed post-objective scanning mirror, a custom design scanner lens, and a microelectromechanical systems (MEMS) deformable mirror to achieve a greatly expanded field-of-view. After describing the layout and operating principle of the ASOM imaging subsystem, we present a system architecture for an automated microscope system suitable for the ASOM's unique wide field and high-speed imaging capabilities. We then describe a low-cost experimental prototype of the ASOM that demonstrates all critical optical characteristics of the instrument, including the calibration of the MEMS deformable mirror. Finally, we present initial biological (living nematode worms) imaging results obtained with the experimental apparatus and discuss the impact of the ASOM on biomedical imaging activities.
 
In this paper, we consider the problem of allocating machine resources among multiple agents, each of which is responsible for solving a flowshop scheduling problem. We present an iterated combinatorial auction mechanism in which bid generation is performed within each agent, while a price adjustment procedure is performed by a centralized auctioneer. While this approach is fairly well-studied in the literature, our primary innovation is in an adaptive price adjustment procedure, utilizing a variable step size inspired by adaptive PID-control theory, coupled with utility pricing inspired by classical microeconomics. We compare it with the conventional price adjustment scheme proposed in Fisher (1985) and show better convergence properties. Our secondary contribution is in a fast bid-generation procedure executed by the agents based on local search. Putting both these innovations together, we compare our approach against a classical integer programming model as well as conventional price adjustment schemes, and show a drastic run-time improvement with insignificant loss of global optimality.
 
[Figures: homogeneous culture of neural cells (scale bar 20 μm; K. Kovanen, MST group); advanced in-vitro cell-based barrier model (after [28]); simplified illustration of a cell and its inputs and outputs.]
This paper introduces the latest technology developments in the field of adherent cell culturing. This highly multidisciplinary field is first matched with its numerous applications, such as the production of therapeutics, drug development, and toxicology. Further, the paper focuses on perfusion cell culturing systems intended for difficult-to-culture cells (such as primary and stem cells) and high-content screening (HCS) purposes. Parts of such advanced adherent cell culturing systems are presented, including the open questions related to them. The system constituents are addressed in terms of cells and tissue models, cell culturing media, culturing vessels, actuation instrumentation, measurement instrumentation, and control. Finally, the paper outlines future directions for laboratory automation with regard to cultivation of primary and stem cells. Note to Practitioners-This paper discusses the latest technology developments and open automation challenges in the field of adherent cell culturing. Adherent cells are cells that need a surface to attach to in order to grow. Most human cells are adherent. Pushed by legislation and supported by cost reductions, researchers are avoiding testing various new compounds (e.g., drug candidates, environmental toxins) on animals. Instead, human cells are grown outside of the body (i.e., in vitro) for testing purposes. Growing cells in vitro is a challenge. The cells need to be maintained at correct temperature, pH, oxygen, and humidity levels, and they need to be fed with a proper mixture of nutrients and growth factors. In other words, the cell cultivation system, which keeps the cells alive, must mimic the human body conditions as closely as possible. Instruments have been developed to perform the needed functions, and a few robot-based automation systems are available to transfer the cells from one instrument to another with high throughput. Cells traditionally cultured are immortalized cell lines of cancer origin and of one cell type. These are easier to cultivate but are not as good models of human tissues as researchers would wish. Therefore, biologists are developing new models that contain many types of cells taken directly from a patient. These various kinds of primary cell cultures are more demanding to cultivate and need sophisticated, preferably integrated, instruments and control algorithms to ensure a homogeneous, stable environment for all of the cells. This paper provides insight into the applications and current research trends of adherent cell cultivation. It also discusses the various subsystems needed and the automation engineering challenges posed by the new tissue models being developed by the biologists.
 
[Figures: configuration for closed-loop control of sedation delivery, with an override system enforcing safety thresholds on opioid administration; open-loop arterial CO2 response to stepwise changes of remifentanil plasma concentration for two subjects, compared with model simulations; simulated closed-loop induction and maintenance of sedation under painful stimulation, a surgical disturbance, sensor-signal loss, and three levels of drug sensitivity; the metabolic model and its relationship to the ventilatory and cardiovascular control systems, including the remifentanil PKPD submodel.]
Monitored anesthesia care (MAC) is increasingly used to provide patient comfort for diagnostic and minor surgical procedures. The drugs used in this setting can cause profound respiratory depression even in the therapeutic concentration range. Titration to effect suffers from the difficulty of predicting adequate analgesia prior to application of a stimulus, making titration to a continuously measurable side effect an attractive alternative. Exploiting the fact that respiratory depression and analgesia occur at similar drug concentrations, we suggest administering opioids and propofol during MAC using a feedback control system with the transcutaneously measured partial pressure of CO<sub>2</sub> (P<sub>tcCO2</sub>) as the controlled variable. To investigate this dosing paradigm, we developed a comprehensive model of human metabolism and cardiorespiratory regulation, including a compartmental pharmacokinetic and a pharmacodynamic model for the fast-acting opioid remifentanil. Model simulations are in good agreement with ventilatory experimental data, both in the presence and absence of drug. Closed-loop simulations show that the controller maintains a predefined CO<sub>2</sub> target in the face of surgical stimulation and variable patient sensitivity. It prevents dangerous hypoventilation and delivers concentrations associated with analgosedation. The proposed control system for MAC could improve clinical practice by titrating drug administration to a surrogate endpoint and actively limiting the occurrence of hypercapnia/hypoxia.
 
Problems of inventory control and customer admission control are considered for a manufacturing system that produces one product to meet random demand. Four admission policies are investigated: lost sales, complete backordering, randomized admission, and partial backordering. These policies are combined with an integral inventory control policy, which releases raw items only when an incoming order is accepted and keeps the inventory position (total inventory minus outstanding orders) constant. The objective is to determine the inventory level and the maximum number of backorders, as well as the admission probability that maximize the mean profit rate of the system. The system is modeled as a closed queueing network and its performance is computed analytically. The optimal parameters for each policy are found using exhaustive search and convex analysis. Numerical results show that managing inventory levels and sales jointly through partial backordering achieves higher profit than other control policies.
 
Group elevator scheduling has long been recognized as an important problem for building transportation efficiency, since unsatisfactory elevator service is one of the major complaints of building tenants. It now has a new significance driven by homeland security concerns. The problem, however, is difficult because of complicated elevator dynamics, uncertain traffic in various patterns, and the combinatorial nature of discrete optimization. With the advent of technologies, one important trend is to use advance information collected from devices such as destination entry, radio frequency identification, and sensor networks to reduce uncertainties and improve efficiency. How to effectively utilize such information remains an open and challenging issue. This paper presents the optimized scheduling of a group of elevators with destination entry and future traffic information for normal operations and coordinated emergency evacuation. Key problem characteristics are abstracted to establish a two-level separable formulation. A decomposition and coordination approach is then developed, where subproblems are solved by ordinal optimization-based local search, and top ranked nodes are selectively optimized by using dynamic programming. The approach is then extended to handle up-peak with little or no future traffic information, elevator parking for low intensity traffic, and coordinated emergency evacuation. Numerical testing results demonstrate near-optimal solution quality, computational efficiency, the value of future traffic information, and the potential of using elevators for emergency evacuation.
 
Nanomanipulation with atomic force microscopes (AFMs) for nanoparticles with overall sizes on the order of 10 nm has been hampered in the past by the large spatial uncertainties encountered in tip positioning. This paper addresses the compensation of nonlinear effects of creep and hysteresis on the piezo scanners which drive most AFMs. Creep and hysteresis are modeled as the superposition of fundamental operators, and their inverse model is obtained by using the inversion properties of the Prandtl-Ishlinskii operator. Identification of the parameters in the forward model is achieved by a novel method that uses the topography of the sample and does not require position sensors. The identified parameters are used to compute the inverse model, which in turn serves to drive the AFM in an open-loop, feedforward scheme. Experimental results show that this approach effectively reduces the spatial uncertainties associated with creep and hysteresis, and supports automated, computer-controlled manipulation operations that otherwise would fail.
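For readers unfamiliar with the operator model, a minimal Python sketch of the forward Prandtl-Ishlinskii model (a weighted superposition of play operators) follows; the thresholds, weights, and driving signal are illustrative, and the analytic inverse used for feedforward compensation, itself a PI operator with transformed thresholds and weights, is not reproduced here.

    import numpy as np

    def play_operator(x, r, y0=0.0):
        """Backlash (play) operator with threshold r applied to the input sequence x."""
        y = np.empty_like(x)
        y_prev = y0
        for k, xk in enumerate(x):
            y[k] = min(xk + r, max(xk - r, y_prev))
            y_prev = y[k]
        return y

    def prandtl_ishlinskii(x, thresholds, weights):
        """Forward PI model: weighted superposition of play operators."""
        return sum(w * play_operator(x, r) for w, r in zip(weights, thresholds))

    # Illustrative identification result: 5 operators with decreasing weights.
    thresholds = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    weights = np.array([0.6, 0.2, 0.1, 0.06, 0.04])
    u = 3.0 * np.sin(np.linspace(0, 4 * np.pi, 400))       # driving voltage (assumed)
    y = prandtl_ishlinskii(u, thresholds, weights)          # hysteretic displacement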
 
This paper presents a new control mechanism based on a novel distinguished node (DN) model, for network topology and analysis in small-agent networks such as urban traffic networks. The proposed DN model is represented by an extended directed graph that contains congregation and dispersing nodes, and main and connecting links. Both static and dynamic properties of the DN model are analyzed. With this new network model, a control mechanism is developed, which is governed by two heuristic rules of conflict avoidance and total delay minimization. A case study on a real urban traffic network is performed to demonstrate the feasibility of the proposed DN model on empirical networks and to verify the effectiveness of the proposed control mechanism.
 
In this paper, we analyze and compare the performance of the vertical and the horizontal automated-guided-vehicle transportation systems. We use results in queuing network theory and a transportation simulator to design a hybrid strategy for this study, and to set the appropriate number of agents in the systems. Next, these two transportation systems are evaluated based on cost-effectiveness criteria. For this purpose, the total construction costs of the systems for the various transportation demands are compared. Finally, we provide analytical results to evaluate the systems and to identify the most efficient one, based on the validity of each system, under different demand scenarios. Note to Practitioners-A good design methodology is essential for the study of the optimal layout in an automated container terminal. Port designers need to select the most efficient automated-guided-vehicle (AGV) transportation system, and to set the appropriate number of agents operating in the system. This study presents a hybrid design methodology and a cost-effectiveness comparison of the vertical and the horizontal transportation systems. Our proposed design methodology is able to derive combinatorially optimal design solutions rapidly and, at the same time, pinpoint the bottleneck in the system. This proposed methodology can be easily applied to any transportation or logistics system, provided the system can be divided into components represented as nodes in a graph. Our results demonstrate that the horizontal AGV transportation system is more effective than the vertical AGV transportation system under most demand scenarios.
 
In this paper, we propose a Petri Net (PN) decomposition approach to the optimization of route planning problems for automated guided vehicles (AGVs) in semiconductor fabrication bays. An augmented PN is developed to model the concurrent dynamics for multiple AGVs. The route planning problem to minimize the total transportation time is formulated as an optimal transition firing sequence problem for the PN. The PN is decomposed into several subnets such that the subnets are made independent by removing the original shared places and creating a separate set of resource places for each subnet with the appropriate connections. The partial solutions derived for the subnets do not usually constitute a feasible solution for the entire PN. The penalty function algorithm is used to integrate the solutions derived at the decomposed subnets. The optimal solution for each subnet is repeatedly generated by using the shortest-path algorithm in polynomial time with a penalty function embedded in the objective function. The effectiveness of the proposed method is demonstrated for a practical-sized route planning problem in a semiconductor fabrication bay through computational experiments.
 
This paper presents a dynamic routing method for supervisory control of multiple automated guided vehicles (AGVs) that are traveling within the layout of a given warehouse. In dynamic routing, the calculated path depends in particular on the number of currently active AGV missions and their priorities. In order to solve the shortest path problem dynamically, the proposed routing method uses time windows in a vector form. For each mission requested by the supervisor, predefined candidate paths are checked for feasibility. The feasibility of a particular path is evaluated by insertion of appropriate time windows and by performing window-overlap tests. The use of time windows makes the algorithm apt for other scheduling and routing problems. The presented simulation results demonstrate the efficiency of the proposed dynamic routing. The proposed method has been successfully implemented in an industrial environment in the form of a multiple-AGV control system.
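A minimal Python sketch of the time-window feasibility test: a candidate path is accepted only if, at every node it visits, its occupancy interval does not overlap a window already reserved by a higher-priority mission; the data layout and dwell time are illustrative assumptions.

    from typing import Dict, List, Tuple

    Window = Tuple[float, float]          # (entry time, exit time) on a node

    def overlaps(a: Window, b: Window) -> bool:
        return a[0] < b[1] and b[0] < a[1]

    def path_feasible(path: List[str],
                      travel_time: Dict[Tuple[str, str], float],
                      start: float,
                      reserved: Dict[str, List[Window]],
                      dwell: float = 1.0) -> bool:
        """Check a candidate path against the time windows already reserved per node."""
        t = start
        for i, node in enumerate(path):
            window = (t, t + dwell)                       # occupancy interval on this node
            if any(overlaps(window, w) for w in reserved.get(node, [])):
                return False                              # conflict with an existing mission
            if i + 1 < len(path):
                t += dwell + travel_time[(node, path[i + 1])]
        return True

    # Example: node B is already reserved between t = 4 and t = 7 by another AGV.
    reserved = {"B": [(4.0, 7.0)]}
    times = {("A", "B"): 3.0, ("B", "C"): 2.0}
    print(path_feasible(["A", "B", "C"], times, start=0.0, reserved=reserved))  # False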
 
A new cooling system for a fleet of scientific instruments in the form of miniature wireless robots designed for interactions at the nanometer-scale is assessed to determine its limitations. Unlike other approaches, the use of a cooling chamber allows us to remove an embedded cooling system and maintain the overall size of each robot to a minimum, hence increasing the density of instruments per surface area and resulting in enhanced performance of the platform. The goal of this paper is to assess the capacity of this cooling system; not only to remove heat but also to reduce temperature fluctuations and difference in temperature levels among the robots to maintain each robot within an operational temperature range of 0-70 °C. One hundred dummy robots were therefore placed in a custom-built cooling chamber which uses forced air convection. The temperature levels of the dummy robots were recorded with power dissipations from 0 to 15 W/robot and a maximum air flow rate of 0.5 m/s. It was determined that the maximum range in difference in temperature levels among the dummy robots increases by ∼ 20 °C per additional 5 W/robot of power dissipation with an initial difference of ∼ 40 °C at 5 W/robot. An estimated total power dissipation of 10 W/robot was determined to be a safe limit in order to maintain the operating temperature range of the robots between 0-70 °C. For power dissipation over 10 W/robot, additional compensation methods are required.
 
This paper describes a case study of the development and testing of a prototype system to support condition-based maintenance of the door systems of airport transportation vehicles. Every door open/close cycle produces a "signature" that can indicate the current degradation level of the door system. A combined statistical and neural network approach was used. Time, electrical current, and voltage signals from the open/close cycles are processed in real-time to estimate, using the neural network, the condition of the door set relative to maintenance needs. Data collection hardware for the vehicle was designed, developed, and tested to monitor door characteristics, quickly predict degraded performance, and anticipate failures. The prototype system was installed on vehicle door sets at the Pittsburgh International Airport and tested for several months under actual operating conditions.
 
This paper studies the statics and the instantaneous kinematics of a rigid body constrained by one to six contacts with a rigid static environment. These properties are analyzed under the frictionless assumption by modeling each contact with a kinematic chain that, instantaneously, is statically and kinematically equivalent to the contact and studying the resulting parallel chain using the Grassmann-Cayley algebra. This algebra provides a complete interpretation of screw theory, in which twist and wrench spaces are expressed by means of the concept of extensor and its inherent duality reflects the reciprocity condition between possible twists and admissible wrenches of partially constrained rigid bodies. Moreover, its join and meet operators are used to compute sum and intersections of the twist and wrench spaces resulting from serial and parallel composition of motion constraints. In particular, it has an explicit formula for the meet operator that gives closed-form expressions of twist and wrench spaces of rigid bodies in contact. The Grassmann-Cayley algebra permits us to work at the symbolic level, that is, in a coordinate-free manner and therefore provides a deeper insight into the kinestatics of rigid body interactions.
 
[Figures: projections of the arc wrench sets onto Γ2; colored intersection cases (a blue range contained in a red range, red and blue half-lines crossing, line segments crossing arcs, and a red half-line crossing a blue arc).]
We propose a technique that significantly simplifies the computation of frictionless force-closure grasps of a curved planar part P. We use a colored projection scheme from the three-dimensional wrench space to two-dimensional screens, which allows us to reduce the problem of identifying combinations of arcs and concave vertices of P that admit frictionless force-closure grasps, to colored intersection searching problems in the screens. We show how to combine this technique with existing intersection searching algorithms to obtain efficient, output-sensitive algorithms to compute all force-closure grasps of P, where at most four hard, frictionless point contacts exert exactly four wrenches on P. If the boundary of P consists of n algebraic arcs of constant complexity and m concave vertices, we show how to compute all force-closure grasps with: (1) four contacts along four arcs in O(n<sup>8/3</sup> log<sup>1/3</sup> n + K) time; (2) four contacts along three arcs in O(n<sup>5/2+ε</sup> + K) time; (3) one contact at a concave vertex and two contacts along two arcs in O(n<sup>2</sup>m<sup>1/2+ε</sup> + K) time; (4) one contact at a concave vertex and two contacts along a single arc in O(nm) or O(n<sup>3/2+ε</sup> + K) time (depending on the size of m); where ε is an arbitrarily small positive constant and K is the output size, that is, the number of combinations of arcs and vertices of each type that actually admit frictionless force-closure grasps.
 
For practical automated manufacturing systems (AMSs), the time dimension is of great significance and should be integrated in their plant models. Reasonably, many of the realistic general mutual exclusion constraints (GMECs) imposed on these discrete models should be timed rather than merely algebraic or logic. In the past, such a problem was studied on the basis of the Ramadge-Wonham supervisory control technique (SCT) and the theory of regions. It proves to be NP-hard since it necessitates the generation of reachability graphs. This paper shows that it can be solvable in polynomial time by using generalized linear constraints, which are originally proposed to increase the expressive power of the linear marking constraints. By dividing each constraint into marking, firing vector, and Parikh terms, its respective control place can be synthesized algebraically without considering the separation of dangerous states and events. Several examples are used to validate the effectiveness and efficiency of the proposed approach.
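For the purely algebraic (untimed) case, the classical place-invariant construction of a control place enforcing a linear marking constraint l·m ≤ b can be written in a few lines of Python; the small net below is a made-up example, and the firing-vector/Parikh extensions discussed in the paper are not reproduced here.

    import numpy as np

    def synthesize_monitor(N, m0, l, b):
        """Control place enforcing the GMEC  l . m <= b  on a Petri net with
        incidence matrix N (places x transitions) and initial marking m0.
        Returns the monitor's incidence row and its initial marking."""
        l = np.asarray(l)
        Nc = -l @ N                     # arcs of the control place (place-invariant method)
        m0c = b - l @ m0                # initial tokens of the control place
        if m0c < 0:
            raise ValueError("constraint violated by the initial marking")
        return Nc, m0c

    # Tiny example: 3 places, 2 transitions; enforce m(p2) + m(p3) <= 1.
    N = np.array([[-1,  0],
                  [ 1, -1],
                  [ 0,  1]])
    m0 = np.array([1, 0, 0])
    print(synthesize_monitor(N, m0, l=[0, 1, 1], b=1))

The construction runs in time linear in the net size, which is the kind of polynomial synthesis the paper extends to constraints with firing-vector and Parikh terms.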
 
Generalized algebraic deadlock avoidance policies (DAPs) for sequential resource allocation systems (RASs) have recently been proposed as an interesting extension of the class of algebraic DAPs, that maintains the analytical representation and computational simplicity of the latter, while it guarantees completeness with respect to the maximally permissive DAP. The authors' original work that introduced these policies also provided a design methodology for them, but this methodology is limited by the fact that it necessitates the deployment of the entire state space of the considered RAS. Hence, this paper seeks the development of an alternative computational tool that can support the synthesis of correct generalized algebraic DAPs, while controlling the underlying computational complexity. More specifically, the presented correctness verification test possesses the convenient form of a mixed integer programming (MIP) formulation that employs a number of variables and constraints polynomially related to the size of the underlying RAS, and it can be readily solved through canned optimization software. Furthermore, since generalized algebraic DAPs do not admit a convenient representation in the Petri net modeling framework, an additional contribution of the presented results is that they effect the migration of the relevant past insights and developments with respect to simpler DAP classes, from the representational framework of Petri nets to that of the Deterministic Finite-State Automata.
 
The rigid complex fixture is presented as a mathematical model for the analysis and design of a fixture that contains more fingers than needed for immobilizing an object alone. However, a fixture with more fingers may violate the approachability and accessibility conditions. We therefore propose the following design principles for a complex fixture: the finger set must be accessible; the distance from a workpiece to the finger set should be selected such that the workpiece can be in contact with all locators simultaneously; the clamping force should enable the workpiece to be in contact with all of the locators (approachable). Furthermore, we develop a new efficient procedure of analysis and verification in screw space based on the linear programming formulation and the geometric interpretation of the rigid fixture model. In addition, we introduce a new quantitative test for force closure, captured by a function that measures how far a fixture is from achieving force closure. The locating-table problem, as a complex fixture prototype, is investigated in detail to gain insight into the mathematical model and the analysis procedure for the rigid complex fixture. Note to Practitioners-This paper is concerned with the parameter design and verification of a complex fixture that consists of more locators, supports, and clamps than needed for force closure. There are two essential issues related to the design problems: 1) the test on whether the workpiece can be in contact with all locators simultaneously (approachable) and 2) the test on whether the workpiece can reach the desired location smoothly (accessible). A procedure for system analysis and performance verification of the rigid complex fixture is outlined in this paper. In the procedure, a linear program with its dual formulation is applied to establish the sufficient condition for the stable state and to recognize the locators and supports. A performance index is provided to verify whether a fixture is accessible, and a quantitative test is used to check whether a fixture has force closure. In the future, the linear and nonlinear compliance models of complex fixtures will be discussed and incorporated with this rigid model for the implementation of a complete analysis and design tool for the complex fixture.
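A minimal Python sketch of the kind of linear-programming test mentioned above: for frictionless rigid contacts, force closure holds iff the origin lies strictly inside the convex hull of the contact wrenches, and a single LP yields a margin quantifying how far the fixture is from losing it; the planar wrench construction and example geometry are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def planar_wrench(p, n):
        """Unit wrench (fx, fy, torque) of a frictionless point contact at p with inward normal n."""
        n = np.asarray(n, float) / np.linalg.norm(n)
        return np.array([n[0], n[1], p[0] * n[1] - p[1] * n[0]])

    def force_closure_margin(wrenches):
        """LP test: margin > 0 iff the origin is strictly inside conv(wrenches).
        Decision variables are the convex weights lambda and the margin delta."""
        W = np.asarray(wrenches).T                      # 3 x k wrench matrix
        k = W.shape[1]
        c = np.zeros(k + 1)
        c[-1] = -1.0                                    # maximize delta
        A_eq = np.vstack([np.hstack([W, np.zeros((3, 1))]),
                          np.hstack([np.ones((1, k)), np.zeros((1, 1))])])
        b_eq = np.array([0.0, 0.0, 0.0, 1.0])
        A_ub = np.hstack([-np.eye(k), np.ones((k, 1))]) # delta <= lambda_i
        b_ub = np.zeros(k)
        bounds = [(0, None)] * k + [(None, 1.0)]
        res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
        return res.x[-1] if res.success else -np.inf

    # Four frictionless locators pressing on the sides of a unit square (illustrative).
    contacts = [((0.8, 0.0), (0, 1)), ((0.7, 1.0), (0, -1)),
                ((0.0, 0.8), (1, 0)), ((1.0, 0.3), (-1, 0))]
    W = [planar_wrench(p, n) for p, n in contacts]
    print("force-closure margin:", force_closure_margin(W))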
 
Dynamic programming, branch-and-bound, and constraint programming are the standard solution principles for finding optimal solutions to machine scheduling problems. We propose a new hybrid optimization framework that integrates all three methodologies. The hybrid framework leads to powerful solution procedures. We demonstrate our approach through the optimal solution of the single-machine total weighted completion time scheduling problem subject to release dates, which is known to be strongly NP-hard. Extensive computational experiments indicate that new hybrid algorithms use orders of magnitude less storage than dynamic programming, and yet can still reap the full benefit of the dynamic programming property inherent to the problem. We are able to solve to optimality all 1900 instances with up to 200 jobs. This more than doubles the size of problems that can be solved optimally by the previous best algorithm running on the latest computing hardware.
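A minimal Python sketch of the flavor of hybridization described above, for the same problem (single machine, release dates, total weighted completion time): a depth-first branch and bound whose nodes are pruned both by a relaxation bound (release dates dropped, remaining jobs in WSPT order) and by a dynamic-programming dominance rule on the set of already-sequenced jobs; this illustrates the idea only and is not the authors' algorithm.

    def solve(jobs):
        """jobs: list of (release, processing, weight). Returns the optimal
        total weighted completion time for 1|r_j|sum w_j C_j (exact, exponential)."""
        n = len(jobs)
        best = [float("inf")]
        dominance = {}                        # frozenset of scheduled jobs -> (time, cost)

        def lower_bound(t, remaining):
            # Relaxation: drop release dates and sequence the rest by WSPT.
            lb, clock = 0.0, t
            for j in sorted(remaining, key=lambda j: jobs[j][1] / jobs[j][2]):
                clock += jobs[j][1]
                lb += jobs[j][2] * clock
            return lb

        def branch(scheduled, t, cost):
            if cost >= best[0]:
                return
            if len(scheduled) == n:
                best[0] = cost
                return
            state = dominance.get(scheduled)
            if state and state[0] <= t and state[1] <= cost:
                return                        # a dominating partial schedule exists
            dominance[scheduled] = (t, cost)
            remaining = [j for j in range(n) if j not in scheduled]
            if cost + lower_bound(t, remaining) >= best[0]:
                return                        # relaxation bound prunes this node
            for j in remaining:
                r, p, w = jobs[j]
                c = max(t, r) + p
                branch(scheduled | {j}, c, cost + w * c)

        branch(frozenset(), 0, 0)
        return best[0]

    # Example: 4 jobs given as (release, processing, weight).
    print(solve([(0, 3, 2), (1, 2, 3), (4, 1, 1), (2, 4, 2)]))

Storing states keyed by the set of sequenced jobs is the dynamic-programming ingredient; bounding and depth-first search keep the number of stored states far below the full power set.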
 
This paper addresses a real-life single-item dynamic lot sizing problem arising in a refinery for crude oil procurement. It can be considered as a lot sizing problem with bounded inventory. We consider two managerial policies: under one policy, a part of the demand of a period can be backlogged, and under the other, a part of the demand of a period can be outsourced. We define actuated inventory bounds and show that any bounded-inventory lot sizing model can be transformed into an equivalent model with actuated inventory bounds. The concept of actuated inventory bounds contributes significantly to reducing the complexity. In the studied models, the production capacity is assumed to be unlimited and the production cost functions to be linear with fixed charges; the results can be easily extended to piecewise linear concave production cost functions. The goal is to minimize the total cost of production, inventory holding, and backlogging or outsourcing. We show that the backlogging model can be solved in O(T<sup>2</sup>) time with general concave inventory holding and backlogging cost functions, where T is the number of periods in the planning horizon. The complexity is reduced to O(T) when the inventory/backlogging cost functions are linear and there is no speculative motive to hold either inventory or backlog. When the outsourcing levels are unbounded, we show that the outsourcing model can be transformed into an inventory/backlogging model. As a consequence, the problem can be solved in O(T<sup>2</sup>) time if the outsourcing cost functions are linear with fixed charges, even when the inventory holding cost functions are general concave functions. When the outsourcing level of a period is bounded from above by the demand of the period, which is the case in many application areas, we show that the outsourcing model can be solved in O(T<sup>2</sup> log T) time if the inventory holding and outsourcing cost functions are linear. Note to Practitioners-This paper considers dynamic lot-sizing models with bounded inventory and outsourcing or backlogging decisions. Based on the forecasted requirements of a given item for each period of the planning horizon, the problem consists of determining the quantity to be produced in-house or ordered from a supplier and the quantity to be outsourced in each period, so as to minimize the total cost over the planning horizon, composed of the production or purchasing cost, the inventory holding cost, and the backlogging or outsourcing cost. These problems originated in real-life crude oil procurement and arise in many companies. In this paper, we consider two models. In one model, backlogging is allowed with a backlogging penalty and there is no possibility of outsourcing. In the other model, all customer requirements are satisfied on time (i.e., without backlogging) but outsourcing is possible. For each model, we develop an algorithm that finds an optimal solution. The computation time of these algorithms is bounded by a polynomial of degree one or two in the number of periods in the planning horizon, which means that the computation time required to find an optimal solution is very short.
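
The flavor of such O(T<sup>2</sup>) recursions can be conveyed by the classic Wagner-Whitin dynamic program for the uncapacitated, no-backlogging special case. The sketch below covers only that textbook case, not the paper's bounded-inventory, backlogging, or outsourcing algorithms, and the cost data are made up.

```python
# Classic O(T^2) Wagner-Whitin dynamic program for uncapacitated lot sizing
# (no backlogging, unbounded inventory), shown only to illustrate the recursion style.
def wagner_whitin(demand, setup, unit_hold):
    """demand[t], setup[t], unit_hold[t] for t = 0..T-1. Returns the minimum total cost."""
    T = len(demand)
    F = [0.0] + [float("inf")] * T      # F[t] = optimal cost of covering periods 0..t-1
    for j in range(T):                  # j = period of the last production run
        run_cost = setup[j]             # cost of serving periods j..t-1 from period j
        carried = 0.0                   # cumulative per-unit holding cost since period j
        for t in range(j + 1, T + 1):
            k = t - 1                   # period whose demand is added to the run
            run_cost += carried * demand[k]
            F[t] = min(F[t], F[j] + run_cost)
            carried += unit_hold[k]     # one more period of holding for later demands
    return F[T]

print(wagner_whitin(demand=[20, 50, 10, 40], setup=[90, 90, 90, 90],
                    unit_hold=[1, 1, 1, 1]))   # -> 250.0 for this toy instance
```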
 
One of the challenges faced by the users of automated visual inspection (AVI) systems is how to efficiently upgrade legacy systems to inspect new components introduced into the assembly lines. If AVI systems are not flexible enough to accommodate new components, they are rendered obsolete even by small changes in the product being inspected. The overall objective of the research presented in this paper is to produce the methodological basis for the development of highly reconfigurable AVI systems. In this paper, we focus on one part of this overall development: the adaptation of preexisting inspection algorithms to inspect similar components introduced into the assembly line. While this paper bases its development and discussion on the inspection of surface mounted devices (SMDs), the proposed methodology is general enough to be applicable to a broad range of inspection problems. We present a methodology for automating the refinement of AVI algorithms. In particular, the proposed method identifies a set of components, or cluster of components, for which a particular set of inspection features or algorithms renders a certain level of inspection reliability. This is particularly useful for adapting preexisting systems to inspect new components, especially when the characteristics of the new components are similar to those of components already inspected by the system. We applied this methodology to a case study on the inspection of SMDs.
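
As a toy illustration of reusing existing inspection recipes for similar components (not the paper's methodology), the sketch below assigns a newly introduced component to the most similar known component, based on a small hypothetical feature vector, and reuses that component's inspection algorithm when the match is close enough. The feature set, recipe names, and similarity threshold are all invented for illustration.

```python
# Illustrative sketch: a new component inherits the inspection algorithm of the most
# similar known component, provided the match is close enough.
import numpy as np

known_features = np.array([
    [3.2, 1.6, 0.5],     # hypothetical SMD descriptors: length, width, lead pitch (mm)
    [6.0, 3.2, 0.8],
    [10.0, 10.0, 0.5],
])
known_recipes = ["chip_resistor_recipe", "sot_recipe", "qfp_recipe"]   # hypothetical names

def suggest_recipe(new_component, max_distance=2.0):
    """Return the recipe of the nearest known component, or None if nothing is similar."""
    d = np.linalg.norm(known_features - np.asarray(new_component), axis=1)
    i = int(np.argmin(d))
    return known_recipes[i] if d[i] <= max_distance else None

print(suggest_recipe([3.0, 1.5, 0.5]))     # close to the first package -> reuse its recipe
print(suggest_recipe([25.0, 25.0, 0.4]))   # unlike anything known -> None, needs a new recipe
```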
 
This paper deals with the optimization of a vehicle routing problem in which multiple depots, multiple customers, and multiple products are considered. Since the total traveling time is not always restricted by a time-window constraint, the objective considered in this paper comprises not only the cost due to the total traveling distance but also the cost due to the total traveling time. We propose a stochastic search technique called fuzzy logic guided genetic algorithms (FLGA) to solve the problem. The role of fuzzy logic is to dynamically adjust the crossover rate and mutation rate after every ten consecutive generations. To demonstrate the effectiveness of FLGA, a number of benchmark problems are used to examine its search performance. In addition, several search methods, namely branch and bound, a standard GA (i.e., without the guidance of fuzzy logic), simulated annealing, and tabu search, are compared with FLGA on randomly generated data sets. Simulation results show that FLGA outperforms the other search methods in all three scenarios.
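
A minimal sketch of the rate-adaptation idea is given below. The fuzzy rule base of the FLGA is not reproduced; the simple stall-or-improve rule, the toy objective, and all parameter values are stand-ins chosen for illustration.

```python
# GA skeleton whose crossover/mutation rates are re-tuned every ten generations,
# in the spirit of fuzzy-logic guidance (the adaptation rule here is a crude stand-in).
import random

def adapt_rates(pc, pm, improvement):
    """Stand-in for fuzzy rules: if the best fitness stalls, explore more."""
    if improvement < 1e-6:               # no improvement -> raise mutation, lower crossover
        pm, pc = min(0.5, pm * 1.2), max(0.5, pc * 0.9)
    else:                                # improving -> exploit: more crossover, less mutation
        pm, pc = max(0.01, pm * 0.9), min(0.95, pc * 1.05)
    return pc, pm

def ga(fitness, dim, pop_size=40, generations=200):
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    pc, pm = 0.8, 0.05
    history = []
    for g in range(generations):
        pop.sort(key=fitness)                          # minimization
        history.append(fitness(pop[0]))
        if g % 10 == 9:                                # re-tune rates every ten generations
            pc, pm = adapt_rates(pc, pm, history[-10] - history[-1])
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            if random.random() < pc:                   # uniform crossover
                child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            else:
                child = a[:]
            child = [min(1.0, max(0.0, x + random.gauss(0, 0.1)))
                     if random.random() < pm else x for x in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Toy objective standing in for a routing cost (sum of squares, minimum at the origin).
print(ga(lambda x: sum(v * v for v in x), dim=5))
```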
 
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm.
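
The basic mechanics of SPSA on a discrete parameter set can be sketched as follows. This is not the paper's algorithm: the smoothed functional variant, the convergence machinery, and the specific random-projection technique are omitted, and the rounding-plus-clipping projection and the toy simulation objective are illustrative assumptions.

```python
# Minimal SPSA sketch for a discrete parameter grid, with +/-1 simultaneous perturbations
# and a simple projection back onto the feasible set.
import random

def project(theta, low, high):
    """Round to the nearest admissible integer setting and clip to the box."""
    return [min(high, max(low, round(t))) for t in theta]

def spsa_discrete(simulate, dim, low, high, iterations=200, a=0.5, c=1.0):
    theta = [random.randint(low, high) for _ in range(dim)]
    for k in range(1, iterations + 1):
        ak, ck = a / k, max(1.0, c / k ** 0.25)
        delta = [random.choice((-1, 1)) for _ in range(dim)]
        plus  = project([t + ck * d for t, d in zip(theta, delta)], low, high)
        minus = project([t - ck * d for t, d in zip(theta, delta)], low, high)
        g = [(simulate(plus) - simulate(minus)) / (2 * ck * d) for d in delta]
        theta = project([t - ak * gi for t, gi in zip(theta, g)], low, high)
    return theta

# Noisy toy "long-run average cost": a quadratic bowl centred at (3, 7) plus simulation noise.
def simulate(x):
    return (x[0] - 3) ** 2 + (x[1] - 7) ** 2 + random.gauss(0, 0.5)

print(spsa_discrete(simulate, dim=2, low=0, high=10))
```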
 
[Figure (2D example): the original curve to be coded, including non-Delaunay patches; its Delaunay triangulation; and the evolving convection curve, temporarily blocked so as not to intersect the patches. The remaining hidden Delaunay edges of the original curve are discovered later, when the convection process is relaunched to intersect the patches.]
During a highly productive period running from 1995 to about 2002, research in lossless compression of surface meshes consisted mainly of a hard battle for the best bitrates. For the past few years, however, compression rates have seemed to stabilize around 1.5 bits per vertex for the connectivity coding of usual triangular meshes, and more and more work is dedicated to remeshing, lossy compression, or gigantic mesh compression, where memory access and CPU optimizations are the new priority. Nevertheless, the size of 3D models keeps growing, and many application fields keep requiring lossless compression. In this paper, we present a new contribution to single-rate lossless connectivity compression which, first, improves on current state-of-the-art bitrates and, second, does not constrain the coding of the vertex positions, therefore offering good complementarity with the best performing geometric compression methods. The initial observation that motivated this work is that, very often, most of the connectivity part of a mesh can be automatically deduced from its geometric part using reconstruction algorithms. This idea has already been used within the limited framework of projectable objects (essentially terrain models and GIS), but it is generalized here for the first time to arbitrary triangular meshes, without any limitation regarding topological genus, number of connected components, manifoldness, or regularity. This is achieved by constraining and guiding a Delaunay-based reconstruction algorithm so that it outputs the initial mesh to be coded. The resulting rates appear extremely competitive when the meshes are fully included in their Delaunay triangulation, and remain good compared to the state of the art in the case of scanned models.
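
The motivating observation, that much of the connectivity can be deduced from the geometry alone, can be checked in its simplest 2D form with SciPy: count how many edges of a known curve already appear in the Delaunay triangulation of its vertices. The paper's coder works on 3D surface meshes and actively constrains a Delaunay-based reconstruction; the point set below is arbitrary.

```python
# 2D illustration: how many edges of a closed curve are already Delaunay edges
# of its vertex set (and therefore need not be stored explicitly)?
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0, 0], [1, 0], [2, 0.2], [2.2, 1.0], [1.2, 1.4], [0.1, 1.1]])
curve_edges = {tuple(sorted((i, (i + 1) % len(points)))) for i in range(len(points))}

tri = Delaunay(points)
delaunay_edges = set()
for simplex in tri.simplices:
    for a, b in ((0, 1), (1, 2), (0, 2)):
        delaunay_edges.add(tuple(sorted((int(simplex[a]), int(simplex[b])))))

recovered = curve_edges & delaunay_edges
print(f"{len(recovered)} of {len(curve_edges)} curve edges are Delaunay edges")
```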
 
There are two challenges for frame-slotted ALOHA algorithms in radio-frequency identification (RFID). The first is to estimate the unknown tag-set size accurately; the second is to improve the efficiency of the arbitration process so that it uses fewer time slots to read all tags. This study proposes an estimation algorithm based on Poisson distribution theory and identifies the overestimation phenomenon that occurs under full collision. Our novel anticollision algorithm alternates between two distinct reading cycles to divide tags into collision groups and resolve them, which makes it more efficient for a reader to identify all tags within a small number of time slots.
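
A generic Poisson-based estimator (not necessarily the paper's estimator or its overestimation correction) can be sketched as follows: match the observed numbers of empty, singleton, and collision slots in a frame against their Poisson-model expectations, and pick the tag count that fits best.

```python
# With n tags and L slots, the number of tags per slot is roughly Poisson with mean n/L,
# so the expected empty, singleton and collision slot counts are L*e^{-n/L}, n*e^{-n/L}
# and the remainder; choose the n that best matches the reader's observations.
import math

def estimate_tags(L, observed_empty, observed_single, observed_collision, n_max=4096):
    best_n, best_err = 0, float("inf")
    for n in range(n_max + 1):
        lam = n / L
        e0 = L * math.exp(-lam)          # expected empty slots
        e1 = n * math.exp(-lam)          # expected singleton slots
        ec = L - e0 - e1                 # expected collision slots
        err = ((e0 - observed_empty) ** 2 + (e1 - observed_single) ** 2
               + (ec - observed_collision) ** 2)
        if err < best_err:
            best_n, best_err = n, err
    return best_n

# A frame of 64 slots: 20 empty, 28 carrying exactly one tag, 16 collided.
print(estimate_tags(L=64, observed_empty=20, observed_single=28, observed_collision=16))
```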
 
New multi-axis satellites allow camera imaging parameters to be set during each time slot based on competing demand for images, specified as rectangular requested viewing zones over the camera's reachable field of view. The single frame selection (SFS) problem is to find the camera frame parameters that maximize reward during each time window. We formalize the SFS problem based on a new reward metric that takes into account area coverage and image resolution. For a set of n client requests and a satellite with m discrete resolution levels, we give an algorithm that solves the SFS problem in time O(n<sup>2</sup>m). For satellites with continuously variable resolution (m=∞), we give an algorithm that runs in time O(n<sup>3</sup>). We have implemented all algorithms and verified their performance using random inputs. Note to Practitioners-This paper is motivated by recent innovations in earth imaging by commercial satellites. In contrast to previous methods that required waits of up to 21 days for the desired earth-satellite alignment, new satellites have onboard pan-tilt-zoom cameras that can be remotely directed to provide near real-time response to requests for images of specific areas on the earth's surface. We consider the problem of resolving competing requests for images: given client demand as a set of rectangles on the earth's surface, compute camera settings that optimize the tradeoff among pan, tilt, and zoom parameters to maximize camera revenue during each time slot. We define a new quality metric and give algorithms for solving the problem for the cases of discrete and continuous zoom values. These results are a step toward multiple frame selection, which will be addressed in future research. The metric and algorithms presented in this paper may also be applied to collaborative teleoperation of ground-based robot cameras for inspection and videoconferencing, and to the scheduling of astronomical telescopes.
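
A brute-force evaluation over a coarse grid of candidate frames conveys the structure of the problem, though not the paper's O(n<sup>2</sup>m) or O(n<sup>3</sup>) algorithms or its exact reward metric. The rectangle requests, the square frame model, and the resolution discount used below are illustrative assumptions.

```python
# Brute-force single frame selection with a simplified reward: candidate frames are
# axis-aligned squares whose size is set by the zoom level; the reward of a frame is the
# requested area it covers, discounted when the frame's resolution is coarser than asked for.
from itertools import product

requests = [              # (x_min, y_min, x_max, y_max, desired_resolution) -- illustrative units
    (0, 0, 4, 3, 1.0),
    (6, 5, 9, 9, 0.5),
    (2, 2, 7, 6, 2.0),
]
frame_sizes = [4.0, 8.0, 12.0]     # m discrete zoom levels; bigger footprint -> coarser image

def overlap(ax0, ay0, ax1, ay1, bx0, by0, bx1, by1):
    return max(0.0, min(ax1, bx1) - max(ax0, bx0)) * max(0.0, min(ay1, by1) - max(ay0, by0))

def reward(cx, cy, size):
    res = size / 4.0               # hypothetical resolution model
    total = 0.0
    for (x0, y0, x1, y1, want) in requests:
        covered = overlap(cx - size / 2, cy - size / 2, cx + size / 2, cy + size / 2,
                          x0, y0, x1, y1)
        total += covered * min(1.0, want / res)   # penalize frames coarser than requested
    return total

# Evaluate frames centred on a coarse grid over the reachable field of view.
candidates = [(x, y, s) for x, y, s in product(range(0, 11), range(0, 11), frame_sizes)]
best = max(candidates, key=lambda c: reward(*c))
print("best frame (centre x, centre y, size):", best, "reward:", round(reward(*best), 2))
```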
 
[Figures and algorithm excerpts: the LS and RS paths; proof of Lemma II.4; use the sequence obtained from the tour and construct paths with the S algorithm between the corresponding targets, as in the single-vehicle case; remove one of the edges incident on each vehicle in the optimal tour.]
This paper is about the allocation of tours of m targets to n vehicles. The motion of the vehicles satisfies a nonholonomic constraint (i.e., the yaw rate of the vehicle is bounded). Each target is to be visited by one and only one vehicle. Given a set of targets and the yaw rate constraints on the vehicles, the problem addressed in this paper is 1) to assign each vehicle a sequence of targets to visit, and 2) to find a feasible path for each vehicle that passes through the assigned targets, with the requirement that the vehicle return to its initial position. The heading angle at each target location need not be specified. The objective is to minimize the sum of the distances traveled by all vehicles. A constant factor approximation algorithm is presented for this resource allocation problem for both the single and the multiple vehicle cases. Note to Practitioners-The motivation for this paper stems from the need to develop resource allocation algorithms for unmanned aerial vehicles (UAVs). Small autonomous UAVs are seen as ideal platforms for many applications, such as searching for targets, mapping a given area, traffic surveillance, and fire monitoring. The main advantage of using these small autonomous vehicles is that they can be used in situations where a manned mission is dangerous or impossible. Resource allocation problems naturally arise in these applications, where one would want to assign a given set of vehicles optimally to the tasks at hand. The feature that differentiates these resource allocation problems from similar problems previously studied in the literature is that there are constraints on the motion of the vehicles. This paper addresses the constraint that captures the inability of a fixed-wing aircraft to turn at an arbitrary yaw rate. The basic problem addressed in this paper is as follows: given n vehicles and m targets, find a path for each vehicle satisfying the yaw rate constraints such that each target is visited exactly once by a vehicle and the total distance traveled by all vehicles is minimized. We assume that the targets are at least 2r apart, where r is the minimum turning radius of the vehicle. This is a reasonable assumption because the sensors on these vehicles can map or see an area whose width is at least 2r. We give an algorithm to solve this problem by combining ideas from the traveling salesman problem and the path planning literature. We also show how these algorithms perform in the worst-case scenario.
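
Only the sequencing stage is easy to sketch compactly: the code below builds a single-vehicle visiting order with a nearest-neighbour tour improved by 2-opt on Euclidean distances. The paper's approximation algorithm additionally turns such a sequence into a flyable path that respects the minimum turning radius (Dubins-type curves) and handles the split among multiple vehicles; those steps are not shown here, and the random target set is illustrative.

```python
# Sequencing-stage sketch: nearest-neighbour tour + 2-opt improvement on Euclidean distances.
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order, pts):
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]]) for i in range(len(order)))

def nearest_neighbour(pts):
    order, rest = [0], set(range(1, len(pts)))
    while rest:
        nxt = min(rest, key=lambda j: dist(pts[order[-1]], pts[j]))
        order.append(nxt)
        rest.remove(nxt)
    return order

def two_opt(order, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                new = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(new, pts) < tour_length(order, pts) - 1e-9:
                    order, improved = new, True
    return order

random.seed(1)
targets = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]
order = two_opt(nearest_neighbour(targets), targets)
print("visiting order:", order, " tour length:", round(tour_length(order, targets), 1))
```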
 
Top-cited authors
Ashwin Carvalho
  • University of California, Berkeley
Stephanie Lefevre
Ming Liu
  • The Hong Kong University of Science and Technology
Jionghua Jin
  • University of Michigan
Pansoo Kim
  • Electronics and Telecommunications Research Institute