IEEE Systems Journal

Published by Institute of Electrical and Electronics Engineers
Print ISSN: 1932-8184
In order to provide high-speed broadband access to users in next-generation networks, the Gigabit Ethernet Passive Optical Network (GE-PON) has emerged as one of the most promising technologies. The evolution of GE-PON technology permits users to connect to the Internet via gigabit access networks, which contributes to the increase of network traffic in not only the downstream but also the upstream direction. However, the increase in upstream traffic may lead to network congestion and complicate the bandwidth allocation issue, thereby affecting the Quality-of-Service (QoS) requirements of the users. In this paper, we point out the performance degradation of Transmission Control Protocol (TCP) communications due to the unanticipated effect of the Dynamic Bandwidth Allocation (DBA) mechanism employed in GE-PONs. When network congestion occurs, DBA fails to achieve efficient and fair sharing of the bottleneck bandwidth amongst a number of competing TCP connections. To overcome this shortcoming of the conventional DBA scheme, we envision a solution that controls the TCP flows' rates based upon packet marking. The envisioned solution, dubbed PPM-TRC, aims at controlling TCP throughput to achieve both high efficiency and fair utilization of the passive optical line. The effectiveness of the PPM-TRC approach, verified through extensive computer simulations, demonstrates its applicability in dealing with a large number of competing traffic flows.
 
GEONETCast, a near real-time global environmental information-delivery system by which in situ, airborne, and space-based observations, products, and services are transmitted to users through communication satellites, was accepted as a GEO initiative by the second GEO Plenary. GEONETCast is an interconnected global network of regional dissemination systems, each focused on a specific geographic region under the respective satellites' footprints. Data from each region can be disseminated outside the originating region through data-exchange links between regions, such as dedicated lines, overlapping satellite footprints, or use of the Internet or other existing networks. The regional components include one or more data collection, management, and dissemination hubs that receive, process, prioritize, and schedule the incoming data streams or products originating within the particular region. These GEONETCast Network Centres (GNCs) forward the prioritized data stream to the uplink ground station, which receives it, wraps it in a DVB-S dissemination protocol, and uplinks it to a communication satellite for dissemination at Ku- or C-band frequency. The data GEONETCast delivers are specifically targeted to address nine societal benefit areas: natural and human-induced hazards, environment and health, environment-related energy issues, climate change, water management, weather, ecosystem management, sustainable agriculture, and desertification and biodiversity, with the aim of reaching global coverage and allowing the reception of these data at very low cost (a basic reception station costs below US$2000) by nearly anyone on the planet. GEONETCast is a prominent case in which typical obstacles, such as interoperability of existing systems and components, reuse of existing infrastructure, and interfacing with newly developed components, have been resolved successfully.
 
This paper introduces a MAC-layer active dropping scheme to achieve effective resource utilization that satisfies the application-layer delay bound for real-time video streaming in time-division-multiple-access-based 4G broadband wireless access networks. When a video frame is unlikely to be reconstructed within the application-layer delay bound at the receiver under the minimum decoding requirement, the MAC-layer protocol data units of that video frame are proactively dropped before transmission. An analytical model is developed to evaluate how confidently a video frame can be delivered within its application-layer delay bound by jointly considering the effects of the time-varying wireless channel, the minimum decoding requirement of each video frame, data retransmission, and the playback buffer. Extensive simulations with video traces are conducted to demonstrate the effectiveness of the proposed scheme. Compared to conventional cross-layer schemes using prioritized transmission/retransmission, the proposed scheme is practically implementable, uses resources more effectively, avoids delay propagation, and achieves better video quality under certain conditions.
 
IEEE 802.16 and Passive Optical Network (PON) are two promising broadband access technologies for high-capacity wireless access networks and wired access networks, respectively. The convergence of 802.16 and PON networks can jointly exploit the mobility of wireless communications and the bandwidth advantage of optical networks. Dynamic bandwidth allocation (DBA) plays an important role in each of these two networks for QoS assurance. In converged 802.16 and PON networks, the integration of the DBA schemes of both networks plays an even more critical role, since the bandwidth request/grant mechanisms used in 802.16 and PON are different and the performance of the integrated DBA directly determines the overall system performance. In this paper, we investigate integrated dynamic bandwidth allocation schemes and their signaling overhead. First, this paper discusses the converged network architecture, and especially the issues in integrating the optical network unit (ONU) and the 802.16 base station (BS). Second, it proposes a slotted DBA (S-DBA) scheme and an analytical model of its performance. The S-DBA scheme takes into account the specific features of the converged network, aiming to reduce the signaling overhead caused by cascaded bandwidth requests and grants. The simulation results show that the proposed S-DBA scheme can effectively reduce signaling overhead and increase channel utilization.
 
Most of the IEEE 802.16e Mobile WiMAX scheduling proposals for real-time traffic using unsolicited grant service (UGS) focus on the throughput and the guaranteed latency. The delay variation or delay jitter and the effect of burst overhead have not yet been investigated. This paper introduces a new technique called swapping min-max (SWIM) for UGS scheduling that not only meets the delay constraint with optimal throughput, but also minimizes the delay jitter and burst overhead.
 
This paper presents a multiobjective control scheme based on the dynamic model of a three-level, neutral-point-clamped voltage source inverter for the integration of distributed generation (DG) resources based on renewable energy into the distribution grid. The proposed model is derived from the abc/αβ and αβ/dq transformations of the AC system variables. The proposed control technique generates the compensation current references; by setting appropriate references for the DG control loop, the DG link not only provides active and reactive currents at the fundamental frequency, but can also supply the harmonic currents of nonlinear loads with a fast dynamic response. Mathematical analysis and simulation results show reduced total harmonic distortion, an increased power factor, and injection of the maximum power of the renewable energy resources via a multilevel converter acting as an interface to the AC grid. The scheme also compensates the active and reactive powers of linear and nonlinear loads. The analyses and simulation results demonstrate the high performance of the proposed control scheme in integrating renewable energy resources into the AC grid.
 
The proliferation of radio frequency identification (RFID) systems in application domains such as supply chain management requires an IT infrastructure that provides RFID device and data management and supports application development. In this paper, we discuss these application requirements in detail. We also contend that the characteristics of passive RFID technology introduce constraints that are unique to the development of middleware for the RFID domain. These constraints include the occurrence of false negative reads, tag memory variations, the heterogeneous reader landscape, and the limited communication bandwidth available to RFID readers. To address these constraints and the application requirements for filtered and aggregated RFID data, we developed Accada, an open source RFID platform. This paper shows that the Accada implementation, which is based on a set of specifications developed by the EPCglobal community and a number of extensions, such as the surrogate concept and the virtual tag memory service, addresses the majority of the application requirements and limitations of passive RFID technology.
 
[Figure captions: snapshot of two time-series measurements of the same movement taken by two sensor nodes with different calibrations and placements (x-axis: time; y-axis: magnitude of acceleration); autocorrelation of the absolute acceleration values measured from the first and second persons; membership function for establishing the fuzzy set of the MCR feature.]
This paper investigates the expressive power of several time- and frequency-domain features extracted from 3-D accelerometer sensors. The raw data represent movements of humans and cars. The aim is to obtain a quantitative as well as a qualitative expression of the uncertainty associated with random placement of sensors in wireless sensor networks. Random placement causes calibration, location, and orientation errors. Different types of movements are considered: slow and fast; horizontal, vertical, and lateral; smooth and jerky; etc. Particular attention is given to analyzing the correlation between sets of raw data that should represent similar or correlated movements. The investigation demonstrates that while frequency-domain features are generally robust, there are also computationally less intensive time-domain features with low to moderate uncertainty. Moreover, features extracted from slow movements are generally error prone, regardless of their specific domain.
 
We propose an Ethernet architecture for local access and switching of Internet traffic. To achieve Terabit or Petabit switching, both time (high transmission speed) and space (multistage interconnection network) technologies are required. The Ethernet is both time and space carrier sensed, extending CSMA/CD to a time-space protocol called CSMA/TS. We call the space-time transmission medium Terabit Ethernet (TbE). We focus on the 3-stage Clos network, treating the first stage as an access switch and the second stage as the core switch. Space-time carrier sensing allows routing in the TbE by the Network Interface Card (NIC). The advantages are scalability, lower cost, and, most importantly, reduced end-to-end delay. A simple analysis is given for evaluating the throughput of the access switch, as well as of 2-stage and 3-stage TbE networks.
 
Prior work in modeling the satellite-based detection and tracking components of the ballistic missile defense system as a large-scale, wireless sensor network relies on a medium access scheme that can accommodate the large propagation delays encountered in these networked satellite systems. While existing satellite-based systems typically employ a form of time division multiple access, recent efforts have begun to explore contention-based approaches. In this work, we quantify the effect of the large propagation delays on both contention-based and contention-free solutions and propose a flow-specific medium access solution that provides improved delay performance by dynamically adapting the networked satellite medium access scheme to changes in both individual flow and link characteristics. A comparison with CSMA and TDMA is provided through simulation results using a version of the traffic-adaptive cooperative wireless sensor medium access control protocol that has been modified to accommodate large propagation delays.
 
As an evolution of Synchronous Code Division Multiple Access (SCDMA), the McWiLL (Multi-Carrier Wireless Information Local Loop) mobile broadband access technology was proposed to work with Next Generation Networks (NGNs) to offer various content-rich services. Several core state-of-the-art technologies used in the McWiLL system are introduced in this paper, such as smart antennas, CS-OFDMA (Code Spreading Orthogonal Frequency Division Multiple Access), adaptive modulation, dynamic channel allocation, make-before-break handoff, and fraud protection. These key techniques allow McWiLL to support a large coverage area, high spectrum efficiency (up to 3 bit/s/Hz), homogeneous service quality between high- and low-rate traffic, low-cost terminals, and high-mobility applications. McWiLL was designed in particular to suit applications in hostile environments, owing to its superb interference cancellation capability, special frame structure design, and dynamic channel assignment scheme.
 
In this paper, a novel sensor management algorithm is presented for a biometric sensor network. A distributed detection framework is managed for different energy, accuracy, and time requirements. The design variables include sensor thresholds, the fusion rule, sensor selection, and sensor mode selection. Different sensors are associated with different transaction times; hence, varying sensor modes can affect the accuracy and energy consumption. Once the sensors and their modes are selected, the accuracy achieved by this subset of sensors is maximized by managing the thresholds and the fusion rule. Risk, time, and energy are the three objectives that the system attempts to minimize; they are tied into a single objective function by weighting them. A hybrid particle swarm optimization (PSO) algorithm is used to design the system. The algorithm is a hybrid of continuous, discrete, and binary particle swarms: the continuous particle swarm manages the thresholds, the binary particle swarm manages the fusion rule, and the discrete particle swarm selects the sensors and the sensor modes. The system is adapted to different threat levels that depend on the a priori probability of an imposter in the network. Results show the effectiveness of the proposed method in adapting the system to different requirements under different threat situations.
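As a rough illustration of how such a hybrid swarm can combine update rules over mixed variable types, the sketch below shows one minimal form of each of the three moves: a standard continuous update (for thresholds), a sigmoid-based binary update (for fusion-rule bits), and a simple discrete move (for sensor/mode indices). All function names, parameters, and the discrete-move heuristic are our own illustrative choices, not the authors' formulation:

```python
import math
import random

random.seed(1)

def step_continuous(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO velocity/position update (sketch for sensor thresholds)."""
    v = [w*vi + c1*random.random()*(pb - xi) + c2*random.random()*(gb - xi)
         for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v

def step_binary(b, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO: velocity maps through a sigmoid to the probability of
    each bit being 1 (sketch for the fusion rule)."""
    v = [w*vi + c1*random.random()*(pb - bi) + c2*random.random()*(gb - bi)
         for bi, vi, pb, gb in zip(b, v, pbest, gbest)]
    b = [1 if random.random() < 1.0 / (1.0 + math.exp(-vi)) else 0 for vi in v]
    return b, v

def step_discrete(m, pbest, gbest, n_modes, p_follow=0.5):
    """A simple discrete move (sketch for sensor/mode selection): each index
    copies its personal or global best, or jumps to a random mode."""
    out = []
    for mi, pb, gb in zip(m, pbest, gbest):
        r = random.random()
        if r < p_follow / 2:
            out.append(pb)       # follow personal best
        elif r < p_follow:
            out.append(gb)       # follow global best
        else:
            out.append(random.randrange(n_modes))  # explore
    return out
```

In a full optimizer these three moves would be applied to the corresponding parts of each particle every iteration, with fitness given by the weighted risk/time/energy objective.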
 
In this paper, a simple model for the UHF low power rectifier circuit is proposed. Using a novel approach to model the rectifier current waveform, simple analytical equations are derived. The output DC voltage and the efficiency of the rectifier are derived analytically. Simulation results of the rectifier using actual models are very close to those predicted by the proposed model. The derived formulas for the output DC voltage and the efficiency are simple and physically meaningful and can be used to optimize the performance of the rectifier.
 
Equity principles are embedded within a multiple objective framework for obtaining optimal solutions to a generic allocation problem, such as the fair distribution of water, energy, or other types of key resources among users in society. Because of the great importance of fresh water as a scarce and diminishing resource within and among many nations of the world, the equity concepts are used in the allocation of water rights and subsequent water transfers or trades among users to improve the economic efficiency of water utilization. More specifically, water allocation is formulated as a generalized multiple objective problem, and the measurable fairness principles are then used in the development of two approaches for equitable water rights allocation. In particular, the proposed priority-based maximal multiperiod network flow programming method has a social aggregation function satisfying the monotonicity and priority principles, and thus is a priority-equitable allocation approach. The lexicographic minimax water shortage ratios method, which satisfies the principles of monotonicity, impartiality, and equitability, generates perfectly equitable solutions. The priority ranks and weights used in the two proposed approaches, respectively, are designed to achieve fair treatment of all competitors for access to limited water resources and constitute important parameters. The capabilities of the approaches are tested and demonstrated in applications to the Aral Sea Basin in Central Asia and the South Saskatchewan River Basin in Western Canada.
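In the simplest setting (a single period, no binding side constraints), the minimax shortage-ratio idea reduces to giving every user the same minimal shortage ratio. The sketch below illustrates only that base case, with function name and scenario entirely ours; the paper's actual method is a lexicographic, multiperiod formulation that goes well beyond this:

```python
def minimax_shortage_allocation(supply, demands):
    """Allocate a limited supply so every user's shortage ratio
    (demand - allocation) / demand is equal and minimal.
    Returns (allocations, common shortage ratio)."""
    total = sum(demands)
    if total <= supply:
        return list(demands), 0.0          # everyone fully served
    r = 1.0 - supply / total               # minimal common shortage ratio
    return [d * (1.0 - r) for d in demands], r
```

For a supply of 80 against demands of 50, 30, and 20, every user bears the same 20% shortage, receiving 40, 24, and 16 units; this impartiality is what the lexicographic refinement then extends when bounds or priorities prevent a single common ratio.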
 
In this paper, the problem of code acquisition for global navigation satellite systems (GNSS) is investigated in the presence of interference. A low-complexity interference mitigation technique is proposed to combat the effect of interference, which offers the potential for improved performance, especially when high-power jammers affect the reception. For practical implementation, an integrated navigation-communication network architecture is proposed, composed of an assistance network to aid code synchronization and an interference management system, augmenting the GNSS local component to enhance the system quality of service. Accordingly, an overall assistance network is proposed to estimate the interference characteristics and broadcast them to conventional terminals, along with a rough time and frequency reference, to improve code acquisition performance. Analytical and simulated results are provided in the presence of binary offset carrier (BOC) modulation, showing the clear potential of assisted GNSS to significantly improve performance and reduce terminal complexity at the same time.
 
The field of intelligent actuators has been emerging for mechanical systems (aircraft, robots, manufacturing systems, more electric vehicles, human rehabilitation systems, etc.) as the dual of the computer chip for electronic systems. The level of intelligence necessary will be driven by the increasing performance demands for these ever more complex and nonlinear mechanical systems. These demands are best illustrated by actual duty cycles such as that presented here for aircraft control surface actuators. A representative duty cycle is analyzed using four basic methods/objectives that can be applied both in the development of new design requirements and the support of subsequent operational decision making software to maximize performance, provide for fault tolerance, and for condition-based maintenance. Both thermal and durability analyses are conducted yielding results which a prospective designer can use to support design decisions for a wide range of applications including power-by-wire (PBW) aircraft actuators.
 
In this paper, we consider the problem of location updating in mobile ad-hoc networks (MANETs). We propose a node stability-based location updating approach. In order to optimize routing, most of the existing routing algorithms use some mechanism for determining a node's neighbors. This information is stored in a table called the neighbor table, and the updating of the neighbor table is referred to as location updating. To evaluate our proposed algorithm, we simulated it and compared its performance with that of the conventional location updating algorithm, considering different performance measures such as the number of collisions on the carrier, the number of acknowledgments received, and the energy consumed. In our work, we obtained different types of results by varying parameters such as the number of nodes and the terrain dimensions. The simulation results show that the proposed algorithm outperforms the typical location updating algorithm used in existing routing protocols.
 
Several protocols have been proposed to improve data accessibility and reduce query delay in MANETs. Some of these proposals have adopted the cooperative caching scheme, allowing multiple mobile hosts within a neighborhood to cache and share data items in their local caches. Cross-layer optimization has not been fully exploited to further improve the performance of cooperative caching in these proposals. In this paper we propose a cluster-based cooperative caching scheme. A cross-layer design approach is employed to further improve the performance of cooperative caching and prefetching schemes. The cross-layer information is maintained in a separate data structure and is shared among network protocol layers. The experimental results in the NS-2 simulation environment demonstrate that the proposed approach improves caching performance in terms of data accessibility, query delay and query distance compared to the caching scheme that does not adopt the cooperative caching strategy.
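The cooperative lookup order that such schemes rely on (local cache first, then the caches of cluster neighbours, finally the remote data source) can be sketched in a few lines. The function name, return convention, and caching-on-hit policy below are our own illustrative choices, not the paper's protocol:

```python
def lookup(item, local_cache, cluster_caches, fetch_remote):
    """Cluster-based cooperative cache lookup sketch.
    Returns (value, tier) where tier is where the item was found."""
    if item in local_cache:
        return local_cache[item], "local"
    for cache in cluster_caches:           # ask cluster neighbours
        if item in cache:
            local_cache[item] = cache[item]   # cache the neighbour's copy
            return cache[item], "cluster"
    value = fetch_remote(item)             # fall back to the data source
    local_cache[item] = value
    return value, "remote"
```

In a real protocol the "cluster" step is a message exchange rather than a dictionary probe, and cross-layer information (e.g. hop counts from the routing layer) can be used to pick which neighbour to ask first.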
 
Friend-based Ad hoc routing using Challenges to Establish Security (FACES) is an algorithm that provides secure routing in mobile ad hoc networks. The scheme is inspired by networks of friends in real-life scenarios. The algorithm works by sending challenges and sharing friend lists to provide the source node with a list of trusted nodes through which data transmission finally takes place. The nodes in the friend list are rated on the basis of the amount of data transmission they accomplish and their friendship with other nodes in the network. The account of a node's friendship with other nodes is obtained through the Share Your Friends process, a periodic event in the network. As a result, the network is able to effectively isolate malicious nodes, which are left with no role to play in the ad hoc network. One major benefit of this scheme is that nodes do not need to promiscuously listen to the traffic passing through their neighbors; the information about malicious nodes is gathered effectively by using challenges. This reduces the overhead on the network significantly. Extensive simulation analysis indicates that this scheme provides an efficient approach to security and easier detection of malicious nodes in the mobile ad hoc network.
 
In the aeronautics industry, various stages of a product development process, such as simulations of physical phenomena (stress, strain, thermal behavior) or virtual reality simulations, are regarded as product views and are based on different digital shapes of the same physical component. The shape adaptation processes needed to progress from one stage of the product development process to another, as well as the multiple shapes needed for a simulation at a given stage, are analyzed, and shape adaptation operators are presented. It is then shown why the adaptation of geometric models from the design stage to downstream applications is important in an industrial design context, and why it is a concurrent engineering enabler. Subsequently, we show how shape adaptation processes can be achieved through their appropriate formalization and representation using directed acyclic graphs connected to the scene structure of a simulation and to the product structure used as input. A process data structure is also proposed to describe shape adaptation processes and ease their reuse. Finally, based on two EADS case studies, we illustrate how methods and criteria can be set up to produce adapted shapes from computer-aided design data in a controlled and integrated manner.
 
This paper presents an adaptive neural network, designed to improve the performance of conventional automatic landing systems (ALS). Real-time learning was applied to train the neural network using the gradient-descent of an error function to adaptively update weights. Adaptive learning rates were obtained through the analysis of Lyapunov stability to guarantee the convergence of learning. In addition, we applied a DSP controller using the VisSim/TI C2000 Rapid Prototyper to develop an embedded control system and establish on-line real-time control. Simulations show that the proposed control scheme has superior performance to conventional ALS under conditions of wind disturbance of up to 75 ft/s.
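The idea of bounding the learning rate by a Lyapunov-style stability condition can be illustrated on a scalar linear model, where choosing the rate safely inside eta < 2/|de/dw|^2 guarantees the tracking error shrinks at every update. The sketch below is our own toy illustration of that principle, not the paper's network or adaptation law:

```python
def adaptive_step(w, x, y, scale=1.0):
    """One gradient-descent update for the scalar model yhat = w*x,
    with learning rate eta = scale/|de/dw|^2, i.e. safely inside the
    Lyapunov bound eta < 2/|de/dw|^2 whenever 0 < scale < 2."""
    e = w * x - y          # tracking error
    g = x                  # de/dw for the linear model
    if g == 0:
        return w           # no information in this sample
    eta = scale / (g * g)  # adaptive, stability-bounded learning rate
    return w - eta * e * g
```

With scale = 1 the error is cancelled in a single step on each sample; smaller scales trade speed for robustness, which is the trade-off an adaptive-rate scheme navigates online.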
 
The operation of Flexible AC Transmission System (FACTS) controllers in the power transmission system poses a challenge to the distance relaying scheme. This paper suggests an adaptive scheme for estimating the trip boundaries of a distance relay in the presence of a Unified Power Flow Controller (UPFC), utilizing a Generalized Regression Neural Network (GRNN). Initially, the impact of the UPFC on the relay's trip boundary is studied for its automatic power flow control mode as well as its bypass mode of operation. The GRNN has been trained off-line with data generated from a detailed performance analysis of the power system for various faults, considering the effects of the UPFC, fault resistance, and system loading conditions on the trip boundaries. This work also proposes a strategy that computes the control parameters of the UPFC on-line, namely the series voltage and reactive current injections, utilizing synchronized phasor measurements from Phasor Measurement Units (PMUs). Pre-fault system states, including the control parameters of the UPFC and the apparent impedance measured by the relay unit, are utilized by the GRNN for predicting the trip boundaries of the relay. The proposed scheme considers Single Line-to-Ground (SLG), Double Line-to-Ground (LLG), and Three Phase-to-Ground (LLLG) faults, and its effectiveness has been tested on the 39-bus New England system and also on a 17-bus system, a reduced equivalent of the practical Northern Regional Power Grid (NRPG) system in India.
 
An emerging research topic in pervasive computing is the inference of social context to facilitate and mediate communications among proximate people. Understanding users' needs through information reasoning and leveraging the principles of social networks play an important role in the emergence of innovative computer-mediated social networks. This paper introduces a generic social networking framework for the analysis and visualization of mobile and spontaneous social networks. The proposed framework is capable of analyzing social scores in order to provide decision support to users in the form of egocentric social graphs. As part of the framework, we introduce a matching algorithm whose efficiency is compared to that of commonly used “Stable Marriage Matching” algorithms in opportunistic social networks. We show the performance of the algorithm as social profile attributes increase in a network.
 
This paper presents an adaptive policy design approach based on a system-of-systems (SoS) perspective. Using a case of carbon emissions reduction in the residential sector, the SoS perspective is used as a way to structure the policy issue into interdependent relevant systems. This representation of the system provides a framework to test a large number of hypotheses about the evolution of the system's performance using computational experiments. In particular, in a situation where the realized emission level misses the intermediate target, policies can be adapted to meet the policy target. Our approach shows the different policy designs that decision-makers can envision to influence the overall system performance.
 
This paper presents a hybrid resource management environment, operating on both application and system levels developed for minimizing the execution time of parallel applications with divisible workload on heterogeneous grid resources. The system is based on the adaptive workload balancing algorithm (AWLB) incorporated into the distributed analysis environment (DIANE) user-level scheduling (ULS) environment. The AWLB ensures optimal workload distribution based on the discovered application requirements and measured resource parameters. The ULS maintains the user-level resource pool, enables resource selection and controls the execution. We present the results of performance comparison of default self-scheduling used in DIANE with AWLB-based scheduling, evaluate dynamic resource pool and resource selection mechanisms, and examine dependencies of application performance on aggregate characteristics of selected resources and application profile.
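The core of adaptive workload balancing for a divisible workload is to split the work in proportion to each resource's measured performance, so that all workers finish at roughly the same time. The sketch below illustrates that proportional split in its simplest form; the function name, the integer-chunk rounding, and the remainder policy are our own choices, not the AWLB algorithm's details:

```python
def split_workload(total_units, perf):
    """Divide a divisible workload among workers in proportion to their
    measured performance (units of work per unit time). Returns integer
    chunk sizes that sum to total_units."""
    s = sum(perf)
    shares = [total_units * p / s for p in perf]
    chunks = [int(x) for x in shares]            # round down
    # hand the rounding remainder to the fastest worker
    chunks[perf.index(max(perf))] += total_units - sum(chunks)
    return chunks
```

For 100 work units and measured rates of 1, 1, and 2, the split is 25, 25, and 50 units, so each worker's finish time (chunk/rate) is equal. A user-level scheduler would re-measure rates and re-split as resource conditions change.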
 
The rapidly increasing importance of wireless communications (including satellite), together with the rapid growth of high-speed networks, poses new challenges to the transmission control protocol (TCP). Among them, the most prominent are long round trip times (RTTs), non-negligible packet error rates (PERs), and very large bandwidths. To overcome them, a wide variety of TCP enhancements has been presented in the literature, with different purposes and capabilities. However, as most proposals aim to address different impairments, they are optimized for specific network environments. Therefore, given the increasing level of heterogeneity of present and future networks, the choice of “the best” TCP enhancement seems an almost irresolvable problem, depending on the characteristics of the specific connections. The TCP adaptive-selection concept, presented and discussed in this paper, aims to circumvent this problem by providing an alternative approach that challenges at the root the idea that only one TCP enhancement must be adopted, not only on the whole network in general, but even on the same server machine. In fact, by extending the concept that underlies adaptive coding and modulation (ACM) to the transport layer, TCP adaptive-selection envisages the concurrent adoption of different TCP versions on the same server, the better to match the different impairments present on different connections. The implications of this novel approach, as well as the possible criteria to be adopted for the TCP selection, are discussed in depth in this paper, with particular emphasis on the “dynamic” TCP adaptive-selection variant. Preliminary results, referring to a simple network topology chosen to illustrate the mechanism of the TCP adaptive-selection technique, are also provided. They are quite encouraging and justify the concluding remarks on feasibility and the discussion of some implementation proposals.
 
A four-level hierarchical wireless body sensor network (WBSN) system is designed for biometrics and healthcare applications. It also separates the pathways for communication and control. In order to improve performance, a communication cycle is constructed to synchronize the WBSN system with the pipeline. A low-power adaptive process is a necessity for long-term healthcare monitoring; it includes a data encoder and an adaptive power-conserving algorithm within each sensor node, along with an accurate control switch system for adaptive power control. The thermal sensor node consists of a micro control unit (MCU), a thermal bipolar junction transistor sensor, an analog-to-digital converter (ADC), a calibrator, a data encoder, a 2.4-GHz radio frequency transceiver, and an antenna. When detecting ten body-temperature samples or 240 electrocardiogram (ECG) samples per second, the power consumption is 106.3 μW or 220.4 μW, respectively. By switching circuits, using a multi-sharing wireless protocol, and reducing the transmission data through the data encoder, the system achieves a reduction of 99.573% or 99.164% in power consumption compared to operation without the adaptive and encoding modules. Compared with published research reports and industrial works, the power consumption of the proposed method is 69.6% lower than that of thermal sensor nodes consisting only of a sensor and an ADC (without MCU, 2.4-GHz transceiver, modulator, demodulator, and data encoder), and 98% lower than that of wireless ECG sensor nodes using Bluetooth, 2.4-GHz, or ZigBee wireless protocols.
 
The development and deployment of radio frequency identification (RFID) systems create a novel distributed sensor network that enhances visibility into manufacturing processes. In RFID systems, the detection range and read rates suffer from interference among high-power reading devices. This problem grows more severe and degrades system performance in dense RFID networks. Consequently, medium access control (MAC) protocols are needed for such networks to assess and provide access to the channel so that tags can be read accurately. In this paper, we investigate a suite of feasible power control schemes that ensure the overall coverage area of the system while maintaining a desired read rate. The power control scheme and MAC protocol dynamically adjust the RFID reader power output in response to the interference level seen during tag reading and the acceptable signal-to-noise ratio (SNR). We present novel distributed adaptive power control (DAPC) as a possible solution. A suitable backoff scheme is also added to DAPC to improve coverage. A generic UHF wireless testbed is built using UMR/SLU GEN4-SSN to implement the protocol. Both the methodology and the hardware implementation of the schemes are presented, compared, and discussed. The results of the hardware implementation illustrate that the protocol performs satisfactorily, as expected.
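The feedback loop at the heart of adaptive power control (raise transmit power when the measured SNR is below target, lower it when comfortably above, within hardware limits) can be sketched as a single update step. The step size, limits, and hysteresis margin below are illustrative assumptions of ours, not the DAPC parameters from the paper:

```python
def power_update(p_dbm, snr_db, target_db, step_db=1.0,
                 p_min=10.0, p_max=30.0):
    """One adaptive power control step for an RFID reader (sketch).
    Raises power when SNR is below target, lowers it when more than
    one step above target (hysteresis), and clamps to [p_min, p_max]."""
    if snr_db < target_db:
        return min(p_max, p_dbm + step_db)     # boost to restore read rate
    if snr_db > target_db + step_db:
        return max(p_min, p_dbm - step_db)     # back off to cut interference
    return p_dbm                               # within the dead band: hold
```

Run distributedly, each reader applies this step after every read cycle, so the population of readers settles toward the lowest power levels that still sustain the desired read rate.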
 
This paper develops an adaptive proportional-integral-derivative (APID) control system to control the position of the metallic sphere in a magnetic levitation system (MLS), an intricate and highly nonlinear system. The proposed control system consists of an adaptive PID controller and a fuzzy compensation controller. The adaptive PID controller is the main tracking controller, and its parameters are tuned online by the derived adaptation laws. In this design, particle swarm optimization (PSO) is adopted to search for the optimal learning rates of the adaptive PID controller to increase the learning speed. The design of the fuzzy compensation controller guarantees the stability of the control system. Since a system-on-programmable-chip (SoPC) offers several benefits, including low cost, high speed, and small volume, the developed control scheme is implemented in SoPC-based hardware. Finally, simulations and experiments on the magnetic levitation system verify the effectiveness of the proposed control scheme.
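The PSO-based learning-rate search lends itself to a compact illustration. The sketch below is a hypothetical toy, not the paper's MLS controller: a plain particle swarm searches PID gains (standing in for the adaptive controller's learning rates) that minimize the squared tracking error of a simple first-order plant.

```python
import random

# Illustrative search ranges for (Kp, Ki, Kd); the small Kd range keeps
# the Euler-discretized toy plant stable.
BOUNDS = [(0.0, 20.0), (0.0, 10.0), (0.0, 0.5)]

def simulate(kp, ki, kd, steps=200, dt=0.01):
    """Integral of squared error for a unit-step response of a toy
    first-order plant x' = -x + u (a stand-in for the MLS dynamics)."""
    x, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - x                      # unit-step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        x += (-x + u) * dt                 # explicit Euler step
        prev_err = err
        cost += err * err * dt
    return cost

def pso_tune(n_particles=20, iters=40):
    """Plain PSO over the gain triple; returns the best gains found."""
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [simulate(*p) for p in pos]
    gcost = min(pcost)
    gbest = pbest[pcost.index(gcost)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(BOUNDS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = simulate(*pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest
```

The gain bounds and swarm hyperparameters (inertia 0.7, cognitive/social factors 1.5) are arbitrary choices for the sketch; the tuned gains reliably beat the uncontrolled response.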
 
The Coordinated Enhanced Observing Period (CEOP) was proposed in 1997 as an initial step toward establishing an integrated observation system for the global water cycle. The Enhanced Observing Period was conducted from October 2002 to December 2004, with satellite data, in-situ data, and model output data collected and made available for integrated analysis. Under the framework of CEOP, the CEOP Asia-Australia Monsoon Project (CAMP) was organized and provided the in-situ dataset for the Asian region. CAMP included 13 different reference sites in the Asian monsoon region during Phase 1 (October 2002 to December 2004). These reference sites were operated by individual researchers for their own research objectives; consequently, the various sites' data differed substantially in observational elements, data formats, recording intervals, etc. Using these data for scientific research therefore usually requires substantial manual processing, which consumes a great deal of researchers' time and energy. To reduce the time and effort spent on data quality checking and format conversion, the CAMP Data Center (CDC) established a Web-based quality control (QC) system. This paper introduces this in-situ data management and quality control system for Asian region data under the framework of CEOP.
 
A set of software metrics for the evaluation of power management systems (PMSs) is presented. Such systems need to be autonomous, scalable, low in complexity, and composed of portable algorithms in order to be applicable across the varying implementations that utilize power systems. Although similar metrics exist for software in general, their definitions do not readily lend themselves to the unique characteristics of power management systems or systems of similar architecture.
 
This special issue is devoted to global navigation and communication satellite systems that can provide the necessary QoS to the terminal users and fully use the advantages offered by navigation systems. The included papers are summarized here.
 
The supply chain for distributed manufacturing is gaining the attention of researchers worldwide. Recognizing its significance, this article proposes a self-correcting multi-agent architecture for the supply chain in a distributed manufacturing environment. The main aim of the proposed architecture is to generate an effective manufacturing plan while exploring the algorithm portfolio concept to minimize manufacturing and supply chain costs. The architecture focuses on the automatic selection of the best techniques and suppliers while making a trade-off among the cost, availability, reliability, distance, and quality of the products supplied. When a new order arrives, the proposed architecture applies the skill exploitation algorithm to incorporate the new and old orders simultaneously. This helps manufacturing firms execute their manufacturing processes efficiently.
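The supplier trade-off described above can be sketched as a weighted score. Everything below is an illustrative assumption (attribute names, normalization to [0, 1], and weights), not the article's actual algorithm portfolio:

```python
def supplier_score(attrs, weights):
    """Weighted trade-off score for one supplier.

    attrs: dict with cost, availability, reliability, distance, quality,
    each normalized to [0, 1].  Cost and distance are "lower is better",
    so they enter as (1 - value).
    """
    return (weights["cost"] * (1.0 - attrs["cost"])
            + weights["availability"] * attrs["availability"]
            + weights["reliability"] * attrs["reliability"]
            + weights["distance"] * (1.0 - attrs["distance"])
            + weights["quality"] * attrs["quality"])

def pick_supplier(suppliers, weights):
    """Return the name of the highest-scoring supplier."""
    return max(suppliers,
               key=lambda name: supplier_score(suppliers[name], weights))
```

A multi-agent realization would let each supplier agent report its own attributes and a coordinator agent apply the scoring; the scalar score is the simplest possible stand-in for that negotiation.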
 
Radio frequency identification (RFID) technology adoption in business environments has seen strong growth in recent years. Adopting an appropriate RFID-based information system has become increasingly important for enterprises making complex and highly customized products. However, most firms still use conventional barcode and run-card systems to manage their manufacturing processes. These systems often require human intervention during the production process. As a result, traditional systems are not able to fulfill the growing demand for managing dynamic process flows and are not able to obtain real-time work-in-process (WIP) views in mass customization manufacturing. This paper proposes an agent-based distributed production control framework with UHF RFID technology to help firms adapt to such a dynamic and agile manufacturing environment. This paper reports the design and development of the framework and the application of UHF RFID technology in manufacturing and logistic control applications. The framework's RFID event processing agent model is implemented in a smart end-point (SEP) device. A SEP can manage RFID readers, wirelessly communicate with shop-floor machines, make local decisions, and coordinate with other SEPs. A case study of a bicycle manufacturing company demonstrates how the proposed framework could improve a firm's mass customization operations. Results of experiments show the decentralized multiagent coordination scheme among SEPs outperformed the current practice of the firm in terms of reducing work-in-process and parts inventory.
 
The primary idea behind deploying sensor networks is to utilize the distributed sensing capability provided by tiny, low powered, and low cost devices. Multiple sensing devices can be used cooperatively and collaboratively to capture events or monitor space more effectively than a single sensing device. The realm of applications envisioned for sensor networks is diverse including military, aerospace, industrial, commercial, environmental, and health monitoring. Typical examples include: traffic monitoring of vehicles, cross-border infiltration detection and assessment, military reconnaissance and surveillance, target tracking, habitat monitoring, and structure monitoring, to name a few. Most of the applications envisioned with sensor networks demand highly reliable, accurate, and fault-tolerant data acquisition process. In this paper, we focus on innovative approaches to deal with multivariable, multispace problem domains (data integrity, energy-efficiency, and fault-tolerant framework) in wireless sensor networks and present novel ideas that have practical implementation in developing power-aware software components for designing robust networks of sensing devices.
 
To analyze the resilience of logistic networks, a quantitative resilience evaluation approach is proposed. First, the resilience of each node in a network is evaluated from its redundant resources, distributed suppliers, and reachable deliveries. Then, an index of the total resilience of the logistic network is calculated as the weighted sum of the node resiliences. Based on this evaluation approach, the reasonable structure of logistic networks is analyzed. A model is then studied to optimize the allocation of resources among connections, distribution centers, and warehouses. Our approach has been used to study the resilience of logistic networks for aircraft maintenance and service and to guarantee the security and service quality of aeronautical systems. To monitor the operation of the logistic networks and enhance resilience, the architecture of a synthesized aircraft maintenance information management system and service logistic network, called the resilience information management system for aircraft service (RIMAS), has been designed and is being developed. The research results have been provided to decision makers in the aviation management sector of the Chu Chiang Delta of China, whose positive feedback indicates that the approach has potential for practical application.
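The two-level evaluation reads naturally as code. A minimal sketch, assuming each factor is pre-normalized to [0, 1] and weighting the three factors equally within a node (the paper's actual factor scoring may differ):

```python
def node_resilience(redundancy, suppliers, reachability):
    """Node resilience from three factors, each pre-normalized to [0, 1]:
    redundant resources, distributed suppliers, reachable deliveries.
    Equal factor weighting is an assumption for illustration."""
    return (redundancy + suppliers + reachability) / 3.0

def network_resilience(nodes, weights):
    """Total network resilience as the weighted sum of node resiliences."""
    assert abs(sum(weights) - 1.0) < 1e-9, "node weights should sum to 1"
    return sum(w * node_resilience(*factors)
               for factors, w in zip(nodes, weights))
```

The node weights would encode each node's importance in the network (e.g., a hub distribution center weighing more than a leaf warehouse).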
 
This paper investigates an application-oriented bandwidth allocation scheme to ensure fairness among queues with diversified quality-of-service (QoS) requirements in EPONs. Formerly, differentiated services (DiffServ) were suggested for use in EPONs to provision some queues with higher QoS than others. However, owing to its coarse granularity, DiffServ can hardly accommodate the particular QoS profile of an application in EPONs. In this paper, we define application utilities to quantify users' quality-of-experience (QoE) as a function of network-layer QoS metrics. Then, we formulate the fair resource allocation issue as a utility max-min optimization problem, which is quasiconcave over the queues' delayed and dropped traffic. Exploiting this quasiconcavity, we propose to employ the bisection method to solve the optimization problem. The optimal value can be achieved by proper bandwidth allocation and queue management in EPONs. Detailed implementation of the proposed algorithm is discussed, and simulation results show that our proposed scheme can ensure fairness and guarantee QoS with fine granularity.
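The bisection idea can be sketched for a single link: search for the largest common utility level t that the total bandwidth can support. The utility function below, u(b) = b/(b + d) for a queue with demand d, is an illustrative concave choice, not the paper's utility model:

```python
def max_min_utility(demands, capacity, tol=1e-6):
    """Bisection on the common utility level t in [0, 1): find the largest
    t such that every queue reaches utility >= t within total capacity.
    Illustrative utility u(b) = b / (b + d) (increasing, concave), whose
    inverse gives the bandwidth needed for level t: b = t * d / (1 - t)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        t = (lo + hi) / 2.0
        needed = sum(t * d / (1.0 - t) for d in demands)
        if needed <= capacity:
            lo = t          # feasible: try a higher utility level
        else:
            hi = t          # infeasible: lower the target
    return lo
```

Bisection works here because feasibility is monotone in t, which is exactly the structure quasiconcavity provides: for two identical queues with demand 1 sharing a capacity of 2, the max-min level converges to 0.5 (one bandwidth unit each).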
 
In Grid environments, where virtual organization resources are allocated to users via mechanisms analogous to market economies, strong price fluctuations can have an impact on the nontrivial quality of service expected by end users. In this paper, we investigate the effects of option contracts on the quality of service offered by a broker-based Grid resource allocation model. Option contracts offer users the possibility to buy or sell Grid resources in the future for a strike price specified in a contract. By buying, borrowing, and selling option contracts using a hedge strategy, users can benefit from expected price changes. We consider three hedge strategies: the butterfly spread, which profits from small price changes; the straddle, which benefits from large price changes; and the call strategy, which benefits from soaring prices. Using our model based on an abstract Grid architecture, we find that the use of hedge strategies increases the ratio of successfully finished jobs to failed jobs. We show that the degree of success of a hedge strategy changes when the number of contributed resources changes. By means of a model, we also show that the effects of the butterfly spread are mainly explained by the amount of contributed resources, while the dynamics of the two other hedge strategies are best explained by observing the price behavior. We also find that by using hedge strategies users can increase the probability that a job will finish before its deadline. We conclude that hedging using options is a promising approach to improving resource allocation in environments where resources are allocated through a commodity market mechanism.
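The payoff shapes of the two spread strategies at expiry (ignoring premiums paid) can be written down directly; the strikes and spread width below are illustrative:

```python
def call_payoff(price, strike):
    """Value of a call option at expiry."""
    return max(price - strike, 0.0)

def put_payoff(price, strike):
    """Value of a put option at expiry."""
    return max(strike - price, 0.0)

def butterfly_spread(price, k, width=1.0):
    """Long calls at k-width and k+width, short two calls at k:
    pays off only when the price stays near k (small changes)."""
    return (call_payoff(price, k - width)
            - 2.0 * call_payoff(price, k)
            + call_payoff(price, k + width))

def straddle(price, k):
    """Long a call and a put at the same strike k:
    pays off when the price moves far in either direction."""
    return call_payoff(price, k) + put_payoff(price, k)
```

In the Grid setting, "price" would be the resource price at the job's start time, so a user expecting stable prices holds the butterfly while one expecting volatility holds the straddle.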
 
This paper deals with the management of an electronically steered antenna (ESA) in multitarget environments. Radars are used to detect, locate, and identify targets; here we focus on the detection of several aerial targets in a fixed given time. The difficulty of such detection lies in the fact that targets may be located anywhere in space, but a radar can only observe a limited part of it at a time; as a result, its axis position must be changed over time. This paper describes the main steps to derive an optimal radar management strategy in this context: the modeling of the radar, the determination of a criterion based on the target detection probability, and the temporal optimization process leading to the sensor management strategy. An optimization solution is presented for several contexts and several hypotheses about prior knowledge of the targets' locations. First, we propose a method for optimizing the radar detection probability in a single-target environment, which consists of decomposing the detection step into an optimal number of independent elementary detections. Then, in a multitarget context with deterministic prior knowledge, we present an optimal time allocation method based on the results of nonlinear programming. Finally, in a multitarget context with probabilistic prior knowledge, results from search theory are used to determine an optimal temporal allocation.
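The single-target decomposition step can be illustrated with a toy model. Everything below is assumed for the sketch: a saturating per-dwell detection probability and a fixed per-dwell setup overhead, which together create an interior optimum for the number of elementary detections (the paper's actual radar model will differ):

```python
def elementary_pd(dwell, snr_rate=10.0):
    """Hypothetical saturating per-dwell detection probability:
    longer dwells integrate more SNR, with diminishing returns."""
    return snr_rate * dwell / (1.0 + snr_rate * dwell)

def cumulative_pd(total_time, k, overhead=0.05):
    """Probability of at least one detection over k independent
    elementary detections sharing the time budget equally, each
    paying a fixed setup overhead."""
    dwell = (total_time - k * overhead) / k
    if dwell <= 0.0:
        return 0.0
    return 1.0 - (1.0 - elementary_pd(dwell)) ** k

def best_split(total_time, max_k=19):
    """Number of elementary detections maximizing cumulative Pd."""
    return max(range(1, max_k + 1),
               key=lambda k: cumulative_pd(total_time, k))
```

With these toy parameters, a single long dwell wastes the saturation of the per-dwell curve, while too many short dwells waste time on overhead; the optimum lies strictly in between.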
 
In this paper, a novel cross-layer framework is proposed for optimizing the dynamic bandwidth allocation (DBA) of a digital video broadcast (DVB)-return channel satellite (RCS) system using adaptive coding. The design of medium access control (MAC) methods that takes into account the adaptive physical layer and the higher layers' quality-of-service (QoS) requirements is cast as an optimization problem using the network utility maximization (NUM) framework applied within the satellite subnetwork. Hierarchical and global solving procedures fully compliant with the DVB-RCS standard are proposed. They not only provide minimum bandwidth guarantees but also maximize fairness. Further, they allow a joint optimization of the time slot size and overall system efficiency while minimizing signalling overhead. A reduced-complexity algorithm to solve the DBA problem is also presented. In practical terms, it increases the number of connections with absolute and relative QoS requirements that the system can manage and facilitates the interoperability of the satellite network within an Internet protocol (IP) environment.
 
Future generations of communication systems will benefit from cognitive radio technology, which significantly improves the efficient usage of the finite radio spectrum resource. In this paper, we present a wireless unlicensed system that successfully coexists with licensed systems in the same spectrum range. The proposed unlicensed system determines the level of signals and noise in each frequency band and properly adjusts the spectrum and power allocations subject to rate constraints. It employs orthogonal frequency-division multiplexing (OFDM) modulation and distributes each transmitted bit's energy over all the bands using a novel concept of bit spectrum patterns. A distributed optimization problem is formulated as a dynamic selection of spectrum patterns and power allocations that are better suited to the available spectrum range without degrading the licensed systems' performance. Bit spectrum patterns are designed based on a normalized gradient approach, and the transmission powers are minimized for a predefined quality of service (QoS). At the optimal equilibrium point, a receiver that employs a conventional correlation operation with a replica of the transmitted signal has the same efficiency as the minimum mean-squared error (MMSE) receiver in the presence of noise and licensed systems. Additionally, the proposed approach maximizes the unlicensed system capacity for the optimal spectrum and power allocations. The performance of the proposed algorithm is verified through simulations.
 
Two concepts for large-scale, complex, robotic missions to search for frozen water at the lunar south pole are systematically analyzed to determine their relative productivity and investment requirements. A concurrent design team, a technology-assessment tool, and a sensitivity model are integrated to search a large, complex trade space. Performance goals for a broad portfolio of missions comprising NASA's lunar exploration program are optimized subject to budget, workforce, and other nontechnical constraints. Explicit distinction is made between enabling and enhancing technologies. Uncertainties and dependencies are included within the optimization framework. Given the constraints used in this analysis, the study determines that the longer mission [using a radioisotope thermoelectric generator (RTG)] would return 14 times the value of the shorter mission (using a methanol-oxygen fuel cell) for roughly a 17% increase in cost, and would be enabled with the recommended temporal technology portfolio. To assess the robustness of the investment recommendations, other potential fuel-cell chemistries are evaluated along with potential improvements in rover speed and autonomy, and a reduced activity profile. Results indicate that a lithium-oxygen fuel cell would enable the highest level of productivity among the three fuel cells studied, though not as high as that permitted by an RTG. For the shorter duration mission concepts, it was found that productivity could be enhanced by reducing the number of activities from the baseline 15 to 4, thereby permitting time for each activity to be more fully accomplished.
 
Home Networking Systems (HNS) play a crucial role in delivering broadband services to end users and are quickly becoming the next arena for telecom operators and companies. This emphasizes the need for a technology roadmap addressing several key issues associated with the deployment of these systems. This paper presents the results of the European project ICT-OMEGA's road-mapping effort for future HNS, focusing on the most indicative and critical function, namely network extension. Taking into account the various social, economic, and technological factors, three alternative technologies, namely 802.11n, 60 GHz, and Power Line Communications (PLC), have been ranked using the Analytic Hierarchy Process (AHP). Based on a number of expert surveys, the technology value of each solution is calculated. The results indicate that PLC possesses the largest potential for delivering broadband services in the home environment but also underline the need for hybrid solutions. The results also reveal various crucial aspects of HNS deployment related to current research and standardization activities. A sensitivity analysis is also performed to ascertain the reliability of the results.
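AHP reduces pairwise expert judgments to a priority vector. The row geometric-mean approximation below is a standard shortcut to the principal eigenvector, and the sample judgments comparing the three technologies are purely illustrative, not the paper's survey data:

```python
from math import prod

def ahp_priorities(pairwise):
    """Priority weights from a pairwise-comparison matrix via the row
    geometric-mean approximation of the principal eigenvector."""
    n = len(pairwise)
    geo_means = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical judgments on one criterion for (PLC, 802.11n, 60 GHz):
# PLC moderately preferred over 802.11n (3) and strongly over 60 GHz (5).
judgments = [[1.0,     3.0,     5.0],
             [1.0 / 3, 1.0,     2.0],
             [1.0 / 5, 1.0 / 2, 1.0]]
weights = ahp_priorities(judgments)
```

In a full AHP study, such a matrix is built per criterion, the per-criterion weights are combined through the criteria hierarchy, and a consistency ratio is checked before the judgments are accepted.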
 
Many approaches to the implementation of biometrics-based identification systems are possible, and different configurations are likely to generate significantly different operational characteristics. The choice of implementational structure is therefore very dependent on the performance criteria which are most important in any particular task scenario. In this paper we evaluate the merits of using multimodal structures, and we investigate how fundamentally different strategies for implementation can increase the degree of choice available in achieving particular performance criteria. In particular, we illustrate the merits of an implementation based on a multiagent computational architecture as a means of achieving high performance levels when recognition accuracy is a principal criterion. We also set out the relative merits of this strategy in comparison with other commonly adopted approaches to practical system realization. In particular we propose and evaluate a novel approach to implementation of a multimodal system based on negotiating agents.
 
One of the challenges in Risk Analysis and Management (RAM) is identifying the relationships between risk factors and risks. The complexity of the method used to analyze these relationships, the time to complete the analysis, and the robustness and trustworthiness of the method are important features to consider. In this paper, we propose using Extended Fuzzy Cognitive Maps (E-FCMs) to analyze the relationships between risk factors and risks, adopting a pessimistic approach to assess the overall risk of a system or project. E-FCMs were suggested by Hagiwara to represent causal relationships more naturally. The main differences between E-FCMs and conventional Fuzzy Cognitive Maps (FCMs) are that E-FCMs have nonlinear membership functions, conditional weights, and time delay weights. E-FCMs are therefore suitable for risk analysis, as these features are more informative and can fit its needs. We suggest a framework for analyzing risks using E-FCMs and extend E-FCMs themselves by introducing a special graphical representation for risk analysis. We also suggest a framework for group decision making using E-FCMs. In particular, we explore Software Project Management (SPM) and discuss risk analysis of SPM applying E-FCMs.
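For contrast with E-FCMs, the conventional FCM inference that the paper extends can be sketched in a few lines: concept activations are repeatedly multiplied through a constant weight matrix and squashed by a sigmoid. The weights and the no-self-feedback convention below are illustrative (E-FCMs would replace the constant weights with nonlinear, conditional, and time-delayed ones):

```python
import math

def sigmoid(x):
    """Squashing function keeping activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(activations, weights):
    """One inference step: weights[i][j] is the causal influence of
    concept i on concept j (one common convention, no self-feedback)."""
    n = len(activations)
    return [sigmoid(sum(activations[i] * weights[i][j] for i in range(n)))
            for j in range(n)]

def fcm_run(activations, weights, iters=100, tol=1e-6):
    """Iterate until the activation vector stops changing."""
    for _ in range(iters):
        nxt = fcm_step(activations, weights)
        if max(abs(a - b) for a, b in zip(activations, nxt)) < tol:
            return nxt
        activations = nxt
    return activations
```

In a risk map, concepts would be risk factors and risks; a positive weight from factor i to risk j raises j's steady-state activation, which is what the pessimistic overall-risk assessment reads off.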
 
The application of user terminals with multiple antenna inputs to global navigation satellite systems such as the Global Positioning System (GPS) and Galileo has attracted increasing attention in recent years. Multiple antennas may be spread over the user platform to provide the signals required for platform attitude estimation, or may be arranged in an antenna array used together with array processing algorithms to improve signal reception, e.g., for multipath and interference mitigation. To generate signals for laboratory testing of receivers with multiple antenna inputs and the corresponding receiver algorithms, a unique hardware signal simulation tool for wavefront simulation has been developed. The signals for a number of antenna elements are first generated in a flexible user-defined geometry as digital baseband signals and then upconverted to individual RF outputs. This paper describes the principal functionality of the system and addresses some calibration issues. Measurement setups and results of data processing with simulated signals for different applications are shown and discussed.
 
This paper presents an anticipative system that operates during pedestrian evacuation processes and prevents congestion at escape points. The processing framework of the system includes four discrete stages: a) the detection and tracking of pedestrians; b) the estimation of possible routes for the very near future, indicating possible congestion at exits; c) the proposal of free and nearby escape alternatives; and d) the activation of sound and optical guiding signals. Detection and tracking of pedestrians is based on an enhanced implementation of a system proposed by Viola, Jones, and Snow that incorporates both appearance and motion information in near real time. At any moment, the detected pedestrians can instantly be defined as the initial condition of the second stage of the system, i.e., the route estimation model. Route estimation is enabled by a dynamic model inspired by electrostatic-induced potential fields, which it combines to incorporate flexibility in the movement of pedestrians. The model is based on Cellular Automata (CA), thus taking advantage of their inherent ability to represent phenomena of arbitrary complexity effectively. Presumed congestion during crowd egress leads to the prompt activation of sound and optical signals that guide pedestrians towards alternative escape points. Anticipative crowd management has not been thoroughly explored before, and this system aims to constitute an effective proposal.
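The potential-field route estimation can be miniaturized on a grid. In the sketch below, a BFS distance-to-exit map stands in for the electrostatic-induced potential (an illustrative discretization, not the paper's model), and each CA update moves a pedestrian to the lowest-potential neighbor:

```python
from collections import deque

def exit_potential(grid, exits):
    """BFS distance-to-nearest-exit over free cells (grid value 0), a
    discrete stand-in for the electrostatic-induced potential field."""
    dist = {e: 0 for e in exits}
    queue = deque(exits)
    rows, cols = len(grid), len(grid[0])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def step(pos, dist):
    """One CA update: a pedestrian moves to the neighboring cell with the
    lowest potential (staying put if no neighbor improves on it)."""
    r, c = pos
    options = [(r + dr, c + dc)
               for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    options = [p for p in options if p in dist]
    return min(options + [pos], key=lambda p: dist.get(p, float("inf")))
```

Iterating `step` over all pedestrians yields near-future occupancy per exit cell, which is the quantity the anticipative stage inspects to predict congestion and trigger rerouting signals.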
 
Today, many systems within government and industry are typically engineered not as stand-alone systems, but as part of an integrated system of systems, a federation of systems, or a systems family. A significant level of effort has been devoted over the past several years to the development, refinement, and, ultimately, acceptance of processes for engineering systems, or systems engineering processes. Today, we have four "standard" processes within present and past standards: EIA-632, IEEE 1220, ISO 15288, and MIL-STD-499C. We continue to use the systems engineering processes espoused in our current set of standards and are left to our own devices to tailor them to a systems family context. This paper examines the systems engineering processes in existence today and concludes with the development of a process designed specifically for systems family applications.
 
This paper addresses the need to efficiently fuse data from multiple sources and effectively control and monitor the distribution of the data in a dynamic service-oriented architecture based command and control system of systems. We present an architecture framework consisting of two software architectural patterns and an auto-fusion process to guide the development of distributed and scalable systems to support improved data fusion and distribution. We demonstrate the technical feasibility of applying the patterns and process by prototyping an auto-fusion system.
 
Top-cited authors
Neeraj Kumar
  • Thapar Institute of Engineering and Technology (Deemed University), Patiala (Punjab), India
Debiao He
  • Wuhan University
Joel Rodrigues
  • Senac Faculty of Ceará
Chun-Wei Tsai
  • National Sun Yat-sen University
Laurence Tianruo Yang
  • St. Francis Xavier University