An inter-university cooperative research project has been launched with the aim of creating a new paradigm of computing hardware based on silicon technology. The guiding principle for the development of an intelligent system is binary/multivalued/analog merged hardware computation based on four-terminal devices with an innovative architecture. Most importantly, research on the high-accuracy processing and materials technology that realizes the advanced system on a silicon chip is inseparably merged into the entire project.
The problem of finding elliptical shapes in an image on a pyramid architecture using moment properties is considered. Based on the moment principle, the proposed method can be employed to determine the five parameters of an elliptical shape in a given image: the coordinates of the center of the ellipse, the lengths of the minor and major axes, and the rotation angle of the major axis. A simulation program is also described.
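The five-parameter recovery can be sketched from first- and second-order image moments (a minimal Python illustration of the moment principle only; the function name and the binary-mask input are our assumptions, and the pyramid architecture is not modeled):

```python
import numpy as np

def ellipse_from_moments(mask):
    """Estimate the five ellipse parameters (cx, cy, a, b, theta) from the
    central moments of a binary image: the centroid gives the center, and the
    eigenvalues/eigenvector of the second-moment matrix give the semi-axes
    and the rotation angle of the major axis."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()            # first-order moments: center
    mu20 = ((xs - cx) ** 2).mean()           # second-order central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02 + common) / 2        # eigenvalues of the covariance
    lam2 = (mu20 + mu02 - common) / 2
    a = 2 * np.sqrt(lam1)                    # semi-major axis (uniform ellipse)
    b = 2 * np.sqrt(lam2)                    # semi-minor axis
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # major-axis angle
    return cx, cy, a, b, theta
```

For a uniformly filled ellipse the variance along a semi-axis of length a is a²/4, which is why the eigenvalues are scaled by 2√λ.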
We propose a novel concept of integrating compression and sensing in order to enhance the performance of the image sensor. By integrating the compression function on the sensor plane, the image signal that has to be read out from the sensor is significantly reduced; the integration can consequently increase the pixel rate of the sensor. The compression scheme we use is conditional replenishment, which detects and encodes moving areas. In this paper, we discuss the design and implementation of two architectures for on-sensor compression: a pixel-parallel approach and a column-parallel approach. We describe and compare both approaches.
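The conditional replenishment idea can be sketched as block-based frame differencing (a software analogy only; the block size, threshold, and function names are our assumptions, and the on-sensor parallel hardware is not modeled):

```python
import numpy as np

def conditional_replenishment(prev, curr, block=8, thresh=10.0):
    """Encode only the areas that moved: each block whose mean absolute
    difference from the previous frame exceeds a threshold is replenished
    (read out); unchanged blocks are skipped, reducing the readout volume."""
    h, w = curr.shape
    encoded = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            p = prev[y:y + block, x:x + block].astype(float)
            c = curr[y:y + block, x:x + block].astype(float)
            if np.abs(c - p).mean() > thresh:
                encoded.append((y, x, curr[y:y + block, x:x + block].copy()))
    return encoded  # only the moving areas leave the "sensor"

def reconstruct(prev, encoded):
    """Receiver side: patch the replenished blocks over the previous frame."""
    out = prev.copy()
    for y, x, blk in encoded:
        out[y:y + blk.shape[0], x:x + blk.shape[1]] = blk
    return out
```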
A network environment is considered that can exist in commercial and military products, such as automatic guided vehicles, that use sensors and processors for movement, attack, defense and communication. A bus protocol called IMAP (improved token bus multiaccess protocol) is proposed for these embedded networks. Two modes of operation are defined for the protocol. The normal mode occurs when token passing is done in a random order and the token remains within a cluster of active stations. Under nonuniform traffic conditions, this mode of operation can be interrupted by a mode in which token bus operation is carried out and the token is passed through every station. The performance (channel utilization and delay characteristics) of IMAP is compared with that of the token bus and CSMA/CD (carrier sense multiple access with collision detection) and shown to be superior.
With the increasing density of CMOS VLSI circuits, it is necessary to test for combinations of different multiple faults. This paper studies the possibility of using a single stuck-at fault test set (SSFTS) to detect multiple faults and their combinations. The paper shows that a single stuck-at fault test set can detect single and multiple self-feedback bridging faults, as well as combinations of feedback bridging, input bridging and stuck-on faults, when current monitoring is done. We also prove that a single stuck-at fault test set can detect the combination of a single stuck-open fault and other faults such as bridging and stuck-on faults when both logic and current monitoring are done.
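The notion of a test set detecting a fault can be illustrated on a toy circuit (a deliberately tiny sketch with hypothetical names; the paper's bridging, stuck-on, and current-monitoring analysis is far richer than this logic-only example):

```python
def circuit(a, b, c, fault=None):
    """Tiny two-gate circuit y = (a AND b) OR c, with optional stuck-at
    fault injection on the internal line n1 or the output line y."""
    n1 = a & b
    if fault == ('n1', 0): n1 = 0    # n1 stuck-at-0
    if fault == ('n1', 1): n1 = 1    # n1 stuck-at-1
    y = n1 | c
    if fault == ('y', 0): y = 0      # y stuck-at-0
    if fault == ('y', 1): y = 1      # y stuck-at-1
    return y

def detects(test_set, fault):
    """A test set detects a fault if some vector makes the faulty circuit's
    output differ from the fault-free output."""
    return any(circuit(*v) != circuit(*v, fault=fault) for v in test_set)
```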
Redundancy in a combinational circuit involving a single line is
fairly well understood. However, little is known about multiple-line
redundancies for which any proper subset is irredundant. In this paper
we provide tools for studying such redundancies. We present examples of
multiple redundancies, and we prove that circuits with redundancies of
any multiplicity exist.
A transformation technique is proposed for deriving digital-filter system functions from analog transfer functions. The dual bilinear transformation is used to conformally map a continuous-domain transfer function to its discrete-domain counterpart. The resulting discrete-domain system function, when realized, represents a so-called wave digital structure. The versatility of the proposed simplified technique is illustrated by a number of practical examples that reproduce hitherto known results.
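For orientation, the ordinary bilinear transform (not the paper's dual bilinear/wave-digital derivation) can be worked through by hand for a first-order RC low-pass, a sketch under those assumptions:

```python
import numpy as np

def bilinear_lowpass(fc, fs):
    """Map the analog prototype H(s) = 1/(1 + s/wc) to H(z) via the bilinear
    transform s = (2/T)(1 - z^-1)/(1 + z^-1), with frequency prewarping so
    the -3 dB point lands exactly at fc. Working the algebra through gives
    H(z) = (1 + z^-1) / ((1+k) + (1-k) z^-1) with k = 2/(T*wc)."""
    T = 1.0 / fs
    wc = (2.0 / T) * np.tan(np.pi * fc * T)      # prewarped analog cutoff
    k = 2.0 / (T * wc)
    b = np.array([1.0, 1.0]) / (1.0 + k)          # numerator coefficients
    a = np.array([1.0, (1.0 - k) / (1.0 + k)])    # denominator coefficients
    return b, a

def freq_response(b, a, f, fs):
    """Evaluate H(z) on the unit circle at frequency f."""
    zinv = np.exp(-2j * np.pi * f / fs)
    return (b[0] + b[1] * zinv) / (a[0] + a[1] * zinv)
```

With prewarping, the digital response at fc equals the analog response at the cutoff, i.e. magnitude 1/√2.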
This paper considers two related problems of state estimation and
model validation for a class of uncertain linear systems. The main
contribution of the paper is that it considers a general information
structure which allows for discrete and continuous measurements as well
as missing data. The results are given in terms of a recursive state
estimator involving a jump Riccati differential equation and jump state
equations. These equations can be solved online.
The problem of embedding link-disjoint Hamiltonian cycles into 2-D and 3-D torus networks is addressed. The maximum number of link-disjoint cycles is limited to half the degree of a node in a regular network. Simple methods are presented to embed the maximum number of such cycles in a 2-D and 3-D torus. An analysis of network fault tolerance in the presence of a set of faulty links is also presented.
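One such cycle in a 2-D torus is easy to construct explicitly (a sketch of a single boustrophedon cycle for an even row count, with hypothetical function names; the paper's methods embed the full set of link-disjoint cycles):

```python
def torus_neighbors(u, m, n):
    """The four neighbours of node u = (i, j) in an m x n torus
    (a 2-D mesh with wrap-around links)."""
    i, j = u
    return {((i + 1) % m, j), ((i - 1) % m, j),
            (i, (j + 1) % n), (i, (j - 1) % n)}

def snake_hamiltonian_cycle(m, n):
    """A Hamiltonian cycle in an m x n torus for even m: sweep the rows
    boustrophedon-style; the last node is (m-1, 0), whose column wrap edge
    returns to the start (0, 0) and closes the cycle."""
    cycle = []
    for i in range(m):
        cols = range(n) if i % 2 == 0 else range(n - 1, -1, -1)
        cycle.extend((i, j) for j in cols)
    return cycle

def is_hamiltonian_cycle(cycle, m, n):
    """Check that the cycle visits every node once and every hop (including
    the closing one) uses a torus link."""
    if len(set(cycle)) != m * n or len(cycle) != m * n:
        return False
    return all(cycle[(k + 1) % len(cycle)] in torus_neighbors(cycle[k], m, n)
               for k in range(len(cycle)))
```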
A series DC motor must be represented by a nonlinear model when nonlinearities such as magnetic saturation are considered. To provide effective control, nonlinearities and uncertainties in the model must be taken into account in the control design. In the paper, the recursive design method is applied to generate nonlinear control, nonlinear PI control, and robust control, and these controls are shown in a simulation study to be efficient and robust compared to existing controls.
This paper addresses a constrained two-terminal reliability
measure referred to as Distance Reliability (DR) between two nodes s and
t of Hamming distance H(s,t) in hypercube networks (B<sub>n</sub>). The
shortest distance restriction guarantees optimal communication delay
between processors and high link/node utilization across the network.
Moreover, it provides a measure for the robustness of the network. In
particular, when H(s,t)=n in B<sub>n</sub>, DR will yield the
probability of degradation in the diameter, a concept which directly
relates to fault-diameter. The paper proposes two schemes to evaluate DR
in B<sub>n</sub>. The first scheme uses a combinatorial approach by
limiting the number of faulty components to (2H(s,t)-2), while the
second outlines paths of length H(s,t) and, then, generates a recursive
closed-form solution to compute DR. The theoretical results have been
verified by simulation. The discrepancy between the theoretical and
simulation results is in most cases below 1% and in the worst case 4.6%.
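The quantity being estimated can be illustrated by brute force (a Monte Carlo sketch with hypothetical names, enumerating all shortest paths of a small hypercube; this is neither the paper's combinatorial scheme nor its recursive closed form):

```python
import itertools
import random

def shortest_paths(s, t, n):
    """All shortest (Hamming-distance) paths from s to t in the hypercube
    B_n: one path per ordering of the bit positions where s and t differ.
    Each path is returned as a list of links (frozensets of node pairs)."""
    diff = [b for b in range(n) if (s >> b) & 1 != (t >> b) & 1]
    paths = []
    for order in itertools.permutations(diff):
        node, links = s, []
        for b in order:
            nxt = node ^ (1 << b)
            links.append(frozenset((node, nxt)))
            node = nxt
        paths.append(links)
    return paths

def distance_reliability(s, t, n, p, trials=2000, seed=1):
    """Monte Carlo estimate of DR: the probability that at least one
    shortest s-t path survives when each link fails independently
    with probability p."""
    rng = random.Random(seed)
    paths = shortest_paths(s, t, n)
    all_links = {l for path in paths for l in path}
    ok = 0
    for _ in range(trials):
        failed = {l for l in all_links if rng.random() < p}
        if any(not (set(path) & failed) for path in paths):
            ok += 1
    return ok / trials
```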
The performance of the IMPS multiprocessor computer system is investigated. An analytic queuing model is derived for one cluster of the architecture, under some reasonable assumptions. These results are validated by developing a discrete-event simulation model which uses real values for its input parameters, collected using one processor module and a logic analyzer. The analytical performance results closely match the simulation-model performance results. The primary performance index used in the modeling is the speedup. Other performance measures can easily be derived from the speedup.
Nuclear magnetic resonance spectroscopy (NMRS) signals are modeled as a sum of decaying complex exponentials. The spectral analysis of these signals in order to detect their components and estimate their parameters is crucial to the biochemical analysis of the samples under examination. This paper presents a novel time-frequency representation based on a Gabor filterbank/notch filtering instantaneous frequency estimator, in order to enable the detection of weaker and shorter-lived exponentials. Building on prior work involving filterbank-based instantaneous frequency (IF) estimation, this new approach is an iterative procedure where a Gabor filterbank is first employed in order to obtain a reliable estimate of the IF of the strongest component present. This component is then notch filtered in order to un-mask weaker components and the procedure repeated. The performance of this method was evaluated using an artificial signal and compared to the short-time Fourier transform and the original Gabor filterbank approach. The results clearly demonstrate the superiority of the new method in uncovering weaker signals and resolving components that are very close to one another in frequency.
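The estimate-then-notch iteration can be sketched with an FFT peak standing in for the Gabor-filterbank IF estimate (a crude stand-in with hypothetical names and parameters; the paper's filterbank estimator is considerably more refined):

```python
import numpy as np

def iterative_peak_notch(x, fs, n_components=2, notch_width=5.0):
    """Iteratively (1) estimate the strongest component's frequency from the
    spectral peak, then (2) notch out a narrow band around it so that weaker
    components previously masked by the strong line can be detected."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec = np.fft.rfft(x)
    found = []
    for _ in range(n_components):
        k = int(np.argmax(np.abs(spec)))
        found.append(freqs[k])
        spec[np.abs(freqs - freqs[k]) < notch_width] = 0  # notch the peak
    return found
```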
Hash functions are common and important cryptographic primitives, which are very critical for data integrity assurance and data origin authentication security services. Field programmable gate arrays (FPGAs), being reconfigurable, flexible and physically secure, are a natural choice for implementation of hash functions in a broad range of applications with different area-performance requirements. In this paper, we explore alternative architectures for the implementation of hash algorithms of the secure hash standards SHA-256 and SHA-512 on FPGAs and study their area-performance trade-offs. As several 64-bit adders are needed in SHA-512 hash value computation, new architectures proposed in this paper implement modulo-64 addition as modulo-32, modulo-16 and modulo-8 additions with a view to reducing the chip area. Hash function SHA-512 is implemented in different FPGA families of ALTERA to compare their performance metrics such as area, memory, latency, clocking frequency and throughput to guide a designer to select the most suitable FPGA for an application. In addition, a common architecture is designed for implementing SHA-256 and SHA-512 algorithms.
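The decomposition of a wide addition into narrower additions with carry propagation can be mimicked in software (a behavioural sketch only, with a hypothetical function name; the actual area/latency trade-off is a property of the FPGA implementation, not of this code):

```python
def add_mod64_in_chunks(a, b, chunk_bits=16):
    """Modulo-2^64 addition realized as a chain of narrower additions with
    explicit carry propagation, mirroring how a wide adder can be decomposed
    into smaller ones to save area at the cost of extra latency."""
    mask = (1 << chunk_bits) - 1
    carry, result = 0, 0
    for i in range(64 // chunk_bits):
        s = ((a >> (i * chunk_bits)) & mask) \
            + ((b >> (i * chunk_bits)) & mask) + carry
        result |= (s & mask) << (i * chunk_bits)
        carry = s >> chunk_bits
    return result  # final carry out is discarded: addition is mod 2^64
```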
Elliptic curve cryptography is a very promising cryptographic method offering the same security level as traditional public key cryptosystems (RSA, El Gamal) but with considerably smaller key lengths. However, the computational complexity and hardware resources of an elliptic curve cryptosystem are very high and depend on the efficient design of EC point operations and especially point multiplication. Those operations, using the elliptic curve group law, can be decomposed into operations of the underlying GF(2<sup>k</sup>) field. Three basic GF(2<sup>k</sup>) field operations exist: addition–subtraction, multiplication and inversion–division. In this paper, we propose an optimized inversion algorithm that can be applied very well in hardware, avoiding well-known inversion problems. Additionally, we propose a modified version of this algorithm that, apart from inversion, can perform multiplication using the architectural structure of inversion. We design two architectures that use those algorithms: a two-dimensional multiplication/inversion systolic architecture and a one-dimensional multiplication/inversion systolic architecture. Based on either one of those proposed architectures, a GF(2<sup>k</sup>) arithmetic unit is also designed and used in an EC arithmetic unit that can perform all EC point operations required for EC cryptography. The EC arithmetic unit's design methodology is proposed and analyzed, and the effects of utilizing the one- or two-dimensional multiplication/inversion systolic architecture are considered. The performance of the system in all its design steps is analyzed and comparisons are made with other known designs. We manage to design a GF(2<sup>k</sup>) arithmetic unit that has the space and time complexity of an inverter but can perform all GF(2<sup>k</sup>) operations, and we show that this architecture applies very well to an EC arithmetic unit required in elliptic curve cryptography.
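The underlying GF(2^k) multiplication and inversion can be sketched in software (a reference-style sketch using Fermat-based inversion a^(2^k−2), not the paper's optimized systolic algorithm; the AES field GF(2^8) with polynomial 0x11B is used only as a familiar test case):

```python
def gf2_mul(a, b, poly, k):
    """Multiply in GF(2^k): shift-and-add (XOR) with reduction modulo the
    irreducible polynomial `poly` whenever the degree reaches k."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> k:          # degree-k term appeared: reduce
            a ^= poly
    return r

def gf2_pow(a, e, poly, k):
    """Square-and-multiply exponentiation in GF(2^k)."""
    r = 1
    while e:
        if e & 1:
            r = gf2_mul(r, a, poly, k)
        a = gf2_mul(a, a, poly, k)
        e >>= 1
    return r

def gf2_inv(a, poly, k):
    """Inversion via Fermat's little theorem: a^(2^k - 2) = a^-1."""
    return gf2_pow(a, 2 ** k - 2, poly, k)
```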
A hardware architecture for GF(2<sup>m</sup>) multiplication and its evaluation in a hardware architecture for elliptic curve scalar multiplication is presented. The architecture is a parameterizable digit-serial implementation for any field order m. Area/performance trade-off results of the hardware implementation of the multiplier in an FPGA are presented and discussed.
In this paper two different approaches to the design of a reconfigurable Tate pairing hardware accelerator are presented. The first uses macro components based on a large, fixed number of underlying Galois Field arithmetic units in parallel to minimise the computation time. The second is an area efficient approach based on a small, variable number of underlying components. Both architectures are prototyped on an FPGA. Timing results for each architecture with various different design parameters are presented.
Minimizing energy dissipation and maximizing network lifetime are among the central concerns when designing applications and protocols for sensor networks. Clustering has been proven to be energy-efficient in sensor networks since data routing and relaying are only operated by cluster heads. Moreover, cluster heads can process, filter and aggregate data sent by cluster members, thus reducing network load and relieving bandwidth demand. In this paper, we propose a novel distributed clustering algorithm where cluster heads are elected following a three-way message exchange between each sensor and its neighbors. A sensor's eligibility to be elected cluster head is based on its residual energy and its degree. Our protocol has a message exchange complexity of O(1) and a worst-case convergence time complexity of O(N). Simulations show that our algorithm outperforms EESH, one of the most recently published distributed clustering algorithms, in terms of network lifetime and ratio of elected cluster heads.
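An energy-and-degree based election can be sketched as a greedy centralised procedure (an illustrative stand-in with an assumed score function and function names; the paper's protocol is distributed and uses a three-way message exchange, which is not modeled here):

```python
def elect_cluster_heads(nodes, edges):
    """Greedy sketch of energy/degree-based cluster-head election.
    `nodes` maps node id -> residual energy; eligibility is scored as
    energy x (1 + degree); a node becomes cluster head unless an
    already-elected neighbour covers it."""
    neigh = {u: set() for u in nodes}
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
    score = {u: nodes[u] * (1 + len(neigh[u])) for u in nodes}
    heads = []
    for u in sorted(nodes, key=lambda u: -score[u]):
        if not any(h in neigh[u] for h in heads):
            heads.append(u)
    return heads
```

Every node then ends up either a head or a one-hop neighbour of one, which is the coverage property cluster formation needs.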
A cost-effective optical tilt corrector system has been developed to meet an experimental requirement. The entire system can be assembled from scratch in only 1–2 weeks and delivers the following performance specifications: 500 Hz useable closed-loop bandwidth; 1.7 mrad/V sensitivity; >±10 mrad range; 25.4 mm dia. mirror with quarter-wave optical quality.
The performance of multimedia applications built on wireless systems depends on bandwidth availability, which can heavily affect the quality of service. The IEEE 802.11 standards do not provide an efficient mechanism for bandwidth management through data load distribution among the different APs of the network. An AP can therefore become heavily overloaded, causing throughput degradation. Load Balancing Algorithms (LBAs) have been considered an attractive solution for sharing traffic across the bandwidth of the available access points. However, applying a load balancing algorithm and shifting a mobile connection from one access point to another without considering the received signal strength indicator (RSSI) of the concerned APs might cause worse communication performance. This paper is a contribution to checking the performance limits of the LBA algorithm through experimental evaluation of communication metrics for MPEG-4 video transmission over an IEEE 802.11 network. The paper then proposes a new LBA algorithm structure that takes the received signal strength indicator into consideration.
In this paper, a distributed, dynamic, frequency selection and multicarrier scheduling scheme, called Distributed, QoS-based, Dynamic Carrier Reservation (D-QDCR) is proposed. D-QDCR allows coexisting IEEE 802.11 access points of different providers to contend and reserve a carrier, based on QoS demands, and to distribute the allocated carrier, as well as the reserved time, to the associated wireless terminals, enabling the spectrum agility paradigm. D-QDCR, using distributed estimations of the required QoS, seeks to schedule for transmission an access point when its transfer requirements are at their peak in order to accomplish the QoS contracts and to achieve fairness. Additionally, through self-organized and etiquette policies, it mitigates interference situations, avoiding the waste of the scarce electromagnetic spectrum. Results show that the proposed dynamic frequency selection and scheduling scheme outperforms conventional scheduling in terms of data losses, transfer delays and efficiency.
Dynamic behavior of HVd.c. systems is mainly determined by the rectifier controller parameters and the VAr compensator size at the rectifier a.c. side. The effects of critical rectifier controller parameters on system operating stability are determined through an eigenvalue scanning technique applied to the linearized system model. Time simulations with numerical integration of the nonlinear system model subjected to large disturbances are used for dynamic system behavior studies. The paper investigates the dynamic behavior of a two-terminal HVd.c. link operating with different rectifier controller parameters or with partial loss of VAr compensation at the rectifier side subsequent to step voltage disturbances in the a.c. voltage at the inverter side, which was previously reported as a frequent and important disturbance. Power, reactive power, d.c. current, delay angle and extinction angle oscillations are presented for both stable and unstable situations. Effects of partial loss of VAr compensation on the reactive power response, which can lead to voltage instability in these systems, are discussed. System modeling is presented in adequate detail.
Current adaptive optical telescope designs use a single deformable mirror (DM), usually conjugated to the aperture plane, to compensate for the cumulative effects of optical turbulence. The corrected field of view (FOV) of an adaptive optics system could theoretically be increased through the use of multiple DMs conjugated to a like number of corresponding planes which sample the turbulence region in altitude. Often, the atmospheric turbulence responsible for the degradation of long-exposure telescope images is concentrated in several relatively strong layers. The logical location for the planes of correction in a multiconjugate adaptive optics (MCAO) system would be the same as these “seeing layers.” Each DM would correct for the component of the total wavefront contributed by its associated turbulent layer. However, there is no known method of isolating a particular layer so that its component may be measured. Somehow, the individual components must be estimated using available measurements of the cumulative wavefront at the aperture of the telescope. This paper presents a theoretical analysis of a signal processing technique for determining these phase contributions. The method takes advantage of the spatial diversity of wavefront sensor (WFS) measurements from two or more reference sources. These separate wavefront sensor measurements are processed via minimum mean square error filtering to yield an estimate of the phase perturbation caused by a particular turbulent layer of the atmosphere. Our results indicate that multiple wavefront corrector adaptive optics systems will require much brighter reference sources than single wavefront corrector systems.
As a result of recent supersonic transport (SST) studies on the effect they may have on the atmosphere, several experiments have been proposed to capture and evaluate samples of the stratosphere where SSTs travel. One means to achieve this is to utilize the quartz crystal microbalance (QCM) installed aboard the ER-2, formerly the U-2 reconnaissance aircraft. The QCM is a cascade impactor designed to perform in-situ, real-time measurements of aerosols and chemical vapors at an altitude of 60,000 - 70,000 feet. The ER-2 is primarily used by NASA for Earth resources to test new sensor systems before they are placed aboard satellites. One of the main reasons the ER-2 is used for this flight experiment is its capability to fly approximately twelve miles above sea level (it can reach an altitude of 78,000 feet). Because the ER-2 operates at such a high altitude, it is of special interest to scientists interested in space exploration or supersonic aircraft. Some of the experiments are designed to extract data from the atmosphere around the ER-2. For the current flight experiment, the QCM is housed in a frame that is connected to an outer pod attached to the fuselage of the ER-2. Due to the location of the QCM within the housing frame and the location of the pod on the ER-2, the pod and its contents are subject to structural loads. In addition to structural loads, structural vibrations are also of importance because the QCM is a frequency-based instrument. Therefore, a structural analysis of the instrument within the frame is imperative to determine whether resonance and/or undesirable deformations occur.
A personal-computer (PC)-based boundary element method to calculate the quasistatic magnetic field above the earth's surface induced by an underground, three-phase, high-voltage power line is developed. Use of a PC to handle the relevant mainframe-sized matrix formulations is indicated. Computed results are compared with measured data.
A new implementation for the Random Early Detection method algorithm for ABR (REDM–ABR) service is proposed in this paper. It keeps a running exponential average of the queue length (Q). When a cell arrives, the average queue size (Qavg) is compared with two threshold levels, the lower queue threshold (QL) and the higher queue threshold (QH). If it is smaller than QL, the cell is passed, but if it is larger than QH, the cell marking probability is set to one. If it is between the two thresholds, the cell marking probability is calculated depending on the value of Qavg. The values of the Resource Management (RM) cell fields Congestion Indication (CI), No Additive Increase (NI), and Explicit Rate (ER) are filled by the Relative Rate Marking (RRM) switch and sent back to the sources. The sources then change their rates depending on the CI, NI, and ER values. To investigate the effect of decreasing the impending congestion area, a dynamic Q threshold (QD) is used: the QD is shifted from QL toward QH to decrease the congestion area, and the effect of this shift on the performance of the switch is investigated through a simulation study.
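The averaging and marking rule described above can be sketched directly (the class name, weight w, and linear interpolation between the thresholds are our assumptions; the RM-field handling and switch behaviour are not modeled):

```python
class REDMarker:
    """Sketch of a RED-style marking rule: an exponentially weighted moving
    average of the queue length is compared against low/high thresholds."""

    def __init__(self, q_low, q_high, w=0.02, p_max=1.0):
        self.q_low, self.q_high = q_low, q_high
        self.w, self.p_max = w, p_max
        self.q_avg = 0.0

    def marking_probability(self, q):
        # update the exponential average of the queue length
        self.q_avg = (1 - self.w) * self.q_avg + self.w * q
        if self.q_avg < self.q_low:
            return 0.0            # below QL: pass the cell unmarked
        if self.q_avg >= self.q_high:
            return 1.0            # above QH: mark with probability one
        # between QL and QH: probability grows linearly with the average
        return self.p_max * (self.q_avg - self.q_low) / (self.q_high - self.q_low)
```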
Property specification languages and ABV (assertion-based verification) driven by simulation are being recognized by many as essential for verification of today's increasingly complex designs. However, few mature approaches concentrate on improving assertion integration with high-level designs modeled in SystemC. This paper discusses the issues faced within SystemC environments when incorporating PSL (property specification language) assertions. It also proposes an automatic solution that enhances the SOC (system on chip) SLD (system level design) flow with PSL assertions embedded into SystemC designs.
In this paper, a parallel AC/DC power system is investigated, and a nonlinear robust controller is proposed to improve transient stability of the power system and to damp out any prolonged oscillation after a fault is cleared. Lyapunov's direct method is used to synthesize the control, and asymptotic stability of the closed loop system and improved dynamic performance are shown by both theoretical proof and simulation results.
In this paper a robust adaptive control algorithm for an AC machine is presented. The main feature of this algorithm is that minimal synthesis is required to implement the strategy—hence the appellation minimum controller synthesis (MCS). Specifically, no plant model is required (apart from knowledge of the state dimension) and no controller gains have to be calculated. The MCS algorithm appeared to be robust in the face of totally unknown plant dynamics, external disturbances and parameter variations within the plant. Finally, the new approach has been successfully implemented on a field-oriented controlled drive. Discussion of theoretical aspects, such as selection of a reference model, stability analysis, gain adaptation and steady-state error, is included. Results are also presented.
In today’s consumer electronics market, Java has become one of the most important programming languages for the rapid development of mobile applications – spanning from home appliances/controllers, mobile and communication devices, to network-centric applets. However, the demand for high-performance low-power Java-based consumer mobile applications puts forward new challenges to the system design and implementation. This paper analyzes the energy consumption, execution efficiency, and speed issues of Java applications in a typical consumer mobile device environment. By adopting a hardware-assisted approach, we introduce a Java accelerator with a companion Java virtual machine. The accelerator is designed in an asynchronous style, and can be integrated with most existing processors and operating systems. The core architecture, design philosophy, and implementation considerations are presented in detail in this paper.
We present a compact FPGA implementation of a modular exponentiation accelerator suited for cryptographic applications. The implementation efficiently exploits the properties of modern FPGAs. The accelerator consumes 434 logic elements, four 9-bit DSP elements, and 13604 memory bits in Altera Stratix EP1S40. It performs modular exponentiations with up to 2250-bit integers and scales easily to larger exponentiations. Excluding pre- and post-processing time, 1024-bit and 2048-bit exponentiations are performed in 26.39 ms and 199.11 ms, respectively. Due to its compactness, standard interface, and support for different clock domains, the accelerator can effortlessly be integrated into a larger system in the same FPGA. The accelerator and its performance are demonstrated in practice with a fully functional prototype implementation consisting of software and hardware components.
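The operation the accelerator computes is standard square-and-multiply modular exponentiation, which can be sketched as follows (a functional reference in Python only; the FPGA implementation's datapath, DSP usage, and timing are not represented):

```python
def mod_exp(base, exp, mod):
    """Left-to-right square-and-multiply modular exponentiation: scan the
    exponent bits from the most significant end, squaring at every step and
    multiplying by the base whenever the bit is set."""
    result = 1
    base %= mod
    for bit in bin(exp)[2:]:
        result = (result * result) % mod      # square every step
        if bit == '1':
            result = (result * base) % mod    # multiply on set bits
    return result
```

For an e-bit exponent this costs at most 2e modular multiplications, which is why multiplier throughput dominates the accelerator's exponentiation time.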
During the last decade, the complexity and size of circuits have been rapidly increasing, placing a stressing demand on industry for faster and more efficient CAD tools for VLSI circuit layout. One major problem is the computational requirement for optimizing the place and route operations of a VLSI circuit. Thus, this paper investigates the feasibility of using reconfigurable computing platforms to improve the performance of CAD optimization algorithms for the VLSI circuit partitioning problem. The proposed genetic algorithm architecture achieves up to 5× speedup over a conventional software implementation while maintaining on average 88% solution quality. Furthermore, a reconfigurable-computing-based hybrid memetic algorithm improves upon this solution while using a fraction of the execution time required by the conventional software-based approach.
Video applications are characterized by their increased requirements for huge storage spaces and timing synchronization. Video data storage is a critical issue due to the so-called I/O bottleneck problem in relation to the quality of service while accessing video applications. The main contribution of the paper is that it considers video data dependencies, access frequencies and timing constraints in order to introduce a video data representation model which guides the storage policies. Two video data representation levels are considered to capture the frequencies of accesses at external (video objects) and internal (video clips) levels. A simulation model has been developed in order to evaluate the placement strategies. Video data placement is performed on a tertiary storage subsystem by both constructive and iterative improvement policies. Iterative improvement placement has been proven to outperform the other video data placement approaches.
This paper presents a method of code rate adaptation using punctured convolutional codes for direct sequence spread spectrum communication systems over slowly fading channels. A blind channel estimation technique is used to estimate the nature of the multi-user channel at the detector (before the decoder). The path gains obtained from the channel estimation technique are used to adapt the code rates. Punctured codes derived from a specific rate 1/2 (M = 4) mother code are used to provide error protection corresponding to the actual channel state. The upper and lower bounds on the bit error probability and the upper bound on the error event probability are derived for hard-decision and soft-decision decoding over Rayleigh and Rician fading channels. The throughput gains obtained using the adaptive scheme and the performances of the punctured codes are studied.
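Puncturing itself is a simple deletion pattern, sketched below (hypothetical function names; the convolutional encoder, channel, and Viterbi decoder are not modeled). With a rate-1/2 mother code, the pattern [1,1,1,0] keeps 3 of every 4 coded bits, i.e. 3 channel bits per 2 information bits, giving rate 2/3:

```python
import itertools

def puncture(coded_bits, pattern):
    """Delete coded bits according to a cyclic puncturing pattern
    (1 = transmit, 0 = delete), raising the effective code rate."""
    return [b for b, keep in zip(coded_bits, itertools.cycle(pattern)) if keep]

def depuncture(received, pattern, n_coded, erasure=None):
    """Re-insert erasures at the punctured positions so that a standard
    rate-1/2 decoder for the mother code can be used unchanged."""
    it = iter(received)
    return [next(it) if keep else erasure
            for keep, _ in zip(itertools.cycle(pattern), range(n_coded))]
```

Rate adaptation then amounts to switching the pattern as the estimated channel state changes, without changing the encoder or decoder.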
Recently, we have proposed a new multi-rate/multi-media system called wavelet based scale-code division multiple access (W/S-CDMA) that depends on the code, time and scale orthogonality introduced by pseudo-noise (PN) sequences and wavelets. Wavelets are used as an orthogonal set of symbols for signaling, and their orthogonality is exploited over scale and time. In this system, the channel is divided into different scales, and each scale into time slots. In addition, the PN sequences are used in each scale to accommodate multiple users. In other words, each user encodes its successive information symbols with time-shifted replicas of the same basic wavelet in a specific scale and spreads its scaled and translated wavelets with its PN sequence. The finer the scale used, the more symbols are transmitted. In this paper, we analyze the performance of Haar wavelet based S-CDMA (HW/S-CDMA) over a synchronous additive white Gaussian noise (AWGN) channel using a decorrelating multi-user detector. Results reveal that HW/S-CDMA holds promise since the users of HW/S-CDMA can achieve variable and higher data rates than those of direct sequence (DS)-CDMA for a similar bit error rate performance when real-valued PN sequences are used. Moreover, HW/S-CDMA achieves better performance than DS-CDMA when complex-valued PN sequences are used. In addition, for all rates the same performance is achieved in HW/S-CDMA, and the multi-user detector has no processing delay for different rates and has the same features as the standard decorrelating detector. Because of the reuse capability introduced by the scales, HW/S-CDMA is also capable of employing many available PN sequences from the optimal PN sequence families with a limited number of sequences, such as Kasami, Bent, etc.
This article reviews state-of-the-art energy-efficient contention-based and schedule-based medium access control (MAC) protocols for mobile sensor networks (MSNs) by first examining access schemes for wireless sensor networks (WSNs). Efficient and proper mobility handling in sensor networks provides a window of opportunity for new applications. Protocols such as S-MAC reduce energy consumption by putting nodes to sleep after losing channel contention or to prevent idling. Sleeping is a common method in energy-efficient MAC protocols, but delay depends on sleep duration or frame time, and longer delays lead to a higher packet loss rate when nodes are unsynchronized due to network mobility. MS-MAC extends S-MAC to include mobility awareness by decreasing this sleep duration when mobility is detected. S-MAC with an extended Kalman filter (EKF) reduces mobility-incurred losses by predicting the optimal data frame size for each transmission. MMAC utilizes a dynamic mobility-adaptive frame time to enhance TRAMA, a schedule-based protocol, with mobility prediction. Likewise, G-MAC utilizes TDMA for cluster-based WSNs by combining the advantages of contention and contention-free MACs. Z-MAC also combines both methods but without clustering, and allows time slot re-assignments during significant topology changes. All of the above MAC protocols are reviewed in detail.
We analyze the bit error rate performance of a recently proposed multiple access system called scale time code division multiple access (STCDMA) for quasisynchronous communication over an AWGN channel. STCDMA depends on code, time and scale orthogonality introduced by spreading sequences and wavelets. Wavelets are employed as an orthogonal set of symbols for signaling, and their orthogonality over scale and time is exploited. The channel is partitioned into different scales, and each scale into different time slots. Each user is assigned a specific scale, time slot, and spreading code. Information symbols of each user are encoded by the Haar wavelet in its scale and time slot, and then spread by its spreading code. Complex-valued Hadamard sequences are used as spreading sequences and a conventional detector (i.e. matched filter) is used at the receiver. Results show that the performance of STCDMA becomes significantly better than that of CDMA over the quasisynchronous AWGN channel as the number of scales increases.
This paper presents a transaction-level SystemC model of an avionics mission system data bus that provides cycle-accurate simulation of the bus. The mission system is a complex distributed computer network consisting of a mission control computer, radars, and an array of subsystems and sensors. The data bus plays a critical role in the system, as it carries all information between the system components; modelling the bus at an appropriate level of abstraction, using appropriate technology, is therefore important for evaluating the performance of both the bus and the entire system. While models based on traditional hardware description languages (HDLs) provide cycle-accurate performance estimates, they are very slow and have high code complexity. To improve model performance, this paper presents a transaction-level model (TLM) utilizing SystemC's enhanced features and levels of abstraction. The TLM incorporates a clock-based synchronisation strategy, thereby providing cycle-accurate performance estimates like the HDL models. The developed model has been validated for various payloads and system sizes. Simulation results show that the proposed SystemC transaction-level model is much more efficient than models developed using conventional hardware description languages.
The representation of input data strongly influences the convergence of neural learning. In this paper, within an analytical and statistical framework, the effect of the distribution characteristics of the input pattern vectors on the performance of the back-propagation (BP) algorithm is established for a function approximation problem, in which the parameters of an articulatory speech synthesizer are estimated from acoustic input data. The aim is to determine the optimum statistical characteristics of the acoustic input patterns in order to improve neural learning. Improvement is obtained through a modification of the statistical characteristics of the input data, which effectively reduces the occurrence of node saturation in the hidden layer.
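Why input statistics matter for hidden-node saturation can be shown with synthetic data. The sketch below (hypothetical data and weights, not the paper's synthesizer task) compares the average sigmoid derivative, which drives the BP weight updates, for raw versus standardized inputs:

```python
import numpy as np

# Poorly scaled inputs push a sigmoid hidden node's pre-activation far from
# zero, into the flat region where sigma'(a) = sigma(a)(1 - sigma(a))
# vanishes and BP learning stalls. Standardizing the inputs keeps the node
# in its sensitive range.

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
x_raw = rng.normal(loc=50.0, scale=20.0, size=(1000, 4))   # poorly scaled
x_std = (x_raw - x_raw.mean(axis=0)) / x_raw.std(axis=0)   # zero mean, unit var

w = rng.normal(scale=0.5, size=4)          # one hidden node's weights

def mean_gradient(x):
    act = sigmoid(x @ w)
    return np.mean(act * (1.0 - act))      # average sigma'(a) over the batch

print(mean_gradient(x_raw))   # typically tiny: node mostly saturated
print(mean_gradient(x_std))   # much larger: node in its active region
```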
In this paper a novel subband-based acoustic echo canceller (AEC) is proposed for teleconferencing applications. The proposed AEC consists of K non-overlapping main subbands and K auxiliary subbands; the auxiliary subbands are introduced to provide frequency coverage of the non-overlapping regions between neighboring main subbands. The main and auxiliary bands are constructed using the discrete Fourier transform (DFT) filter bank approach. Owing to the complex modulation inherent in the DFT filter bank, the proposed structure comprises (K/2 + 1) distinct main subbands and (K/2 + 1) distinct auxiliary subbands. Central to the computational efficiency of the proposed AEC is the down-sampling of the auxiliary bands by a factor of two relative to the main bands. The normalized least mean squares (NLMS) algorithm is employed for updating the subband adaptive filter coefficients. The proposed subband AEC is implemented both in a real-time teleconferencing system employing a floating-point DSP chip and in simulation for the case K = 16, i.e., nine distinct main bands and nine distinct auxiliary bands.
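A minimal full-band NLMS sketch is shown below; the paper applies NLMS within each subband, and the filter length, step size, and toy echo path here are illustrative assumptions:

```python
import numpy as np

def nlms(x, d, num_taps=8, mu=0.5, eps=1e-8):
    """Identify an echo path from far-end signal x and microphone signal d."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        y = w @ u                             # echo estimate
        e[n] = d[n] - y                       # residual echo
        w += mu * e[n] * u / (u @ u + eps)    # normalized gradient step
    return w, e

rng = np.random.default_rng(1)
x = rng.normal(size=4000)                     # far-end (loudspeaker) signal
h = np.array([0.8, 0.0, -0.4, 0.2])           # toy 4-tap echo path
d = np.convolve(x, h)[:len(x)]                # microphone picks up the echo
w, e = nlms(x, d)
print(np.round(w[:4], 2))                     # close to h after convergence
```

The normalization by the input energy `u @ u` is what makes the step size robust to signal level, which matters for speech.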
In this paper, we propose a new error estimate algorithm (NEEA) for stereophonic acoustic echo cancellation (SAEC) that is based on the error estimation algorithm (EEA) in [Nguyen-Ky T, Leis J, Xiang W. An improved error estimate algorithm for stereophonic acoustic echo cancellation system. In: International conference on signal processing and communication systems, ICSPCS’2007, Australia; December 2007]. In the EEA and NEEA, with the minimum error signal fixed, we compute the filter lengths so that the error signal approximates the minimum error signal. When the echo paths change, the adaptive filter automatically adjusts the filter lengths to the optimum values. We also investigate the difference between the adaptive filter lengths. In contrast with the conclusions in [Khong AWH, Naylor PA. Stereophonic acoustic echo cancellation employing selective-tap adaptive algorithms. IEEE Trans Audio, Speech, Lang Process 2006;14(3):785–96, Gansler T, Benesty J. Stereophonic acoustic echo cancellation and two channel adaptive filtering: an overview. Int J Adapt Control Signal Process 2000;4:565–86, Benesty J, Gansler T. A multichannel acoustic echo canceler double-talk detector based on a normalized cross-correlation matrix. Acoust Echo Noise Control 2002;13(2):95–101, Gansler T, Benesty J. A frequency-domain double-talk detector based on a normalized cross-correlation vector. Signal Process 2001;81:1783–7, Eneroth P, Gay SL, Gansler T, Benesty J. A real-time implementation of a stereophonic acoustic echo canceler. IEEE Trans Speech Audio Process 2001;9(5):513–23, Gansler T, Benesty J. New insights into the stereophonic acoustic echo cancellation problem and an adaptive nonlinearity solution. IEEE Trans Speech Audio Process 2002;10(5):257–67, Benesty J, Gansler T, Morgan DR, Sondhi MM, Gay SL. Advances in network and acoustic echo cancellation. Berlin: Springer-Verlag; 2001], our simulation results have shown that the filter lengths can be different.
Our simulation results also confirm that the NEEA outperforms the EEA and the SM-NLMS algorithm in terms of echo return loss enhancement.
In this paper, a practical pole-zero lattice adaptive acoustic echo canceller (AEC) for hands-free telephony is proposed. The proposed algorithm consists of two parts, a forward lattice and an inverse lattice, collectively referred to as the LATIN (LAttice and INverse lattice) configuration. While the forward lattice is responsible for acoustic echo cancellation, the inverse lattice is employed only in the double-talk (DT) mode, to undo the distortion of the near-end speech introduced by the forward lattice filter when suppressing the acoustic echo. Assuming M poles and M zeros for the proposed AEC, the complexity of the proposed algorithm is approximately twice that of an M-tap FIR gradient lattice algorithm. Real-time experiments on a floating-point DSP chip and simulation results attest to the stability, fast convergence, and high acoustic echo suppression of the proposed pole-zero algorithm.
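The forward/inverse pairing can be illustrated with fixed reflection coefficients: a forward FIR lattice transforms a signal into its prediction-error sequence, and an inverse (all-pole) lattice with the same coefficients restores it exactly. This is a generic textbook sketch, not the paper's adaptive pole-zero algorithm, and the coefficients are hypothetical:

```python
def forward_lattice(x, k):
    """FIR lattice: maps the input to its prediction-error sequence."""
    M = len(k)
    d = [0.0] * M                     # d[m] holds b_m[n-1] (delayed backward error)
    out = []
    for sample in x:
        f = b_cur = sample            # f_0[n] = b_0[n] = x[n]
        for m in range(M):
            f_next = f + k[m] * d[m]          # f_{m+1}[n]
            b_next = k[m] * f + d[m]          # b_{m+1}[n]
            d[m] = b_cur                      # store b_m[n] for the next sample
            f, b_cur = f_next, b_next
        out.append(f)                 # f_M[n]
    return out

def inverse_lattice(e, k):
    """All-pole lattice: undoes forward_lattice (same reflection coefficients)."""
    M = len(k)
    d = [0.0] * M
    out = []
    for sample in e:
        f = sample                    # f_M[n]
        for m in range(M - 1, -1, -1):
            f = f - k[m] * d[m]               # recover f_m[n]
            if m + 1 < M:
                d[m + 1] = k[m] * f + d[m]    # b_{m+1}[n] for the next sample
        d[0] = f                              # b_0[n] = x[n]
        out.append(f)
    return out

k = [0.3, -0.5, 0.2]                          # hypothetical reflection coefficients
x = [1.0, 2.0, -1.0, 0.5, 0.0, 3.0]
restored = inverse_lattice(forward_lattice(x, k), k)
print([round(v, 6) for v in restored])        # reproduces x
```

This exact-inversion property is what lets the inverse lattice recover the near-end speech distorted by the forward stage during double-talk.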
Mobile ad hoc networks (MANETs) are becoming more popular because they require no fixed infrastructure and communication among processors can be established quickly. Potential MANET applications therefore include military operations, search and rescue, and meetings or conferences, which makes the fault tolerance and reliability of a MANET an important issue. The problem of reaching agreement in a distributed system is one of the most important research areas in designing fault-tolerant systems: with an agreement, each correct processor can withstand the influence of faulty components in the network and provide a reliable solution. In this research, a MANET with a dual failure mode is considered. The proposed protocol uses the minimum number of rounds of message exchange, and tolerates the maximum number of allowable faulty components, to induce all correct processors to reach a common agreement within the MANET.
In a MANET, each mobile host can move freely and the network topology changes dynamically. To send a datagram, a source host broadcasts a route discovery packet to the network, and every neighboring node receiving this packet rebroadcasts it until it reaches the destination. This blind flooding incurs large overhead, degrades network performance, and wastes battery power. To improve network performance, we design a novel routing protocol called RAPLF (Routing with Adaptive Path and Limited Flooding) for mobile ad hoc networks. Simulation results show that our protocol outperforms similar protocols, especially in packet delivery rate and flooding overhead.
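The flooding overhead at issue can be seen in a toy sketch: bounding a route request's time-to-live, one generic ingredient of limited flooding (RAPLF's actual path-adaptation rules are not reproduced here), cuts the number of rebroadcasts:

```python
from collections import deque

def flood(adj, source, ttl=None):
    """Return the number of nodes that (re)broadcast a route request."""
    seen = {source}
    queue = deque([(source, 0)])
    broadcasts = 0
    while queue:
        node, hops = queue.popleft()
        if ttl is not None and hops >= ttl:
            continue                  # TTL expired: do not rebroadcast
        broadcasts += 1
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, hops + 1))
    return broadcasts

# A small chain topology (hypothetical):
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(flood(adj, 0))          # blind flooding: every reachable node rebroadcasts
print(flood(adj, 0, ttl=2))   # limited flooding: rebroadcasts stop after 2 hops
```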
Path compression techniques are efficient on-demand route optimization techniques for mobile ad hoc networks, yet no efficient model of them has been available. This paper analyzes the principles and characteristics of path compression algorithms and proposes a dynamic model that provides a theoretical basis for improving or devising such algorithms. The model takes the mobility and expansibility of ad hoc networks into account and is effective for analyzing and evaluating path compression algorithms. Quantitative relationships and probability expressions for the pivotal compression events are derived from the model. Simulation results for SHORT (self-healing and optimizing routing techniques) and PCA (path compression algorithm) show that the proposed model is a correct and efficient dynamic model of path compression. Finally, some suggestions and application scenarios for the model are given.
This paper presents a scalable routing protocol for very large and dense ad hoc networks. The scalability of ad hoc networks is becoming an important issue due to the increasing number of applications in distributed environments, the great number of mobile nodes involved in communication, and the wide range of speeds at which the mobile nodes can move; it is important to offer scalability in terms of network size, traffic load, and mobility speed. The proposed protocol, called Geo-LANMAR, inherits the advantages of the LANMAR protocol in terms of group motion support and traffic load scalability, and it also reflects the behaviour of geo-routing protocols such as GPSR. Geo-LANMAR adopts the Terminodes forwarding scheme: long-distance geo-forwarding combined with short-distance table-driven routing. Its updating scheme, instead, resembles Hazy Sighted Link State Routing (HSLS) through spatial and temporal differentiation of update rates: frequent updates over short distances and less frequent updates over long distances. A performance evaluation of Geo-LANMAR has been carried out, comparing throughput, average end-to-end delay, and control overhead against other well-known protocols such as LANMAR, GPSR, and AODV. Geo-LANMAR proves scalable in terms of traffic load, mobility speed, number of nodes, and number of groups.
Topological changes in mobile ad hoc networks frequently render routing paths unusable, and such recurrent path failures have detrimental effects on quality of service. A suitable technique for eliminating this problem is to maintain multiple backup paths between the source and the destination. Most of the proposed on-demand routing protocols, however, build and rely on a single route for each data session; whenever there is a link disconnection on the active route, the routing protocol must perform a path recovery process. This paper proposes an effective and efficient protocol for building a set of backup and disjoint paths in an ad hoc wireless network. The protocol converges to a highly reliable path set very quickly, with no message exchange overhead, and the paths it selects are beneficial for mobile ad hoc networks because the resulting backup path set has much higher reliability. Simulations are conducted to evaluate the performance of our algorithm in terms of the number of routes in the path set and its reliability. To acquire link reliability estimates, we use the link expiration time (LET) between each pair of nodes. In another experiment, we record the LETs of all links in the ad hoc network during a specific time period and then use them as a database for predicting the probability of proper link operation; link reliability is obtained from the LET. Prediction is performed by a multi-layer perceptron (MLP) network trained with the error back-propagation algorithm. Experimental results show that the MLP network is a good choice for predicting the reliability of links between mobile nodes with greater accuracy.
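Under a constant-velocity mobility model, the LET admits a standard closed form, used widely in mobility-prediction routing. The sketch below implements that formula; the positions, velocities, and radio range are illustrative assumptions:

```python
import math

def link_expiration_time(p1, v1, p2, v2, r):
    """Time until two nodes with constant velocities v1, v2 drift out of
    radio range r, given current positions p1, p2 (standard LET formula)."""
    a = v1[0] - v2[0]              # relative velocity, x component
    c = v1[1] - v2[1]              # relative velocity, y component
    b = p1[0] - p2[0]              # relative position, x component
    d = p1[1] - p2[1]              # relative position, y component
    if a == 0 and c == 0:
        return math.inf            # same velocity: link never expires
    rel_speed_sq = a * a + c * c
    disc = rel_speed_sq * r * r - (a * d - b * c) ** 2
    if disc < 0:
        return 0.0                 # paths never bring the nodes into range
    return (-(a * b + c * d) + math.sqrt(disc)) / rel_speed_sq

# Two nodes 50 m apart closing head-on at 5 m/s each, range 100 m: they meet,
# pass, and leave range after 15 s.
t = link_expiration_time((0, 0), (5, 0), (50, 0), (-5, 0), r=100)
print(t)  # 15.0
```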
The development of network architectures that connect wireless sensors for monitoring physical environments has emerged as an important new area of applications. The major issues in these new types of systems are: (a) the continuity of environment monitoring, (b) the mobility management of sensors, (c) the ad-hoc communication scheme, and (d) the security of sensitive messages. This paper develops an ad-hoc architecture, together with communication and mobility schemes, to provide efficient monitoring of an environment in which targets move unpredictably. It also discusses a security solution. A numerical simulation is provided to validate our schemes.
In this paper, a detailed theoretical analysis of variable energy adaptation in an asynchronous code division multiple access (A-CDMA) system is presented. Rayleigh and Rician frequency-selective, slowly fading channels are considered. The receiver, capable of measuring the received signal energy-to-noise ratio, provides the transmitter with the necessary signal-to-noise ratio measurement to control the transmitter energy through a noise-free feedback channel. System parameters such as the fading margin and the mean transmitter energy gain are calculated for both Rayleigh and Rician fading channels as functions of the specified probability of error and the probability of unsatisfactory operation.
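The fading-margin idea can be sketched with a standard result for the Rayleigh case (a back-of-envelope illustration, not the paper's full frequency-selective derivation): the instantaneous SNR is exponentially distributed, so the probability of unsatisfactory operation is P_out = 1 - exp(-1/M), where M is the fading margin (mean SNR over the threshold). Solving for M:

```python
import math

def rayleigh_fading_margin_db(p_outage):
    """Fading margin (dB) so that P(SNR < threshold) = p_outage on a
    flat Rayleigh channel, where SNR is exponentially distributed."""
    m = -1.0 / math.log(1.0 - p_outage)   # invert P_out = 1 - exp(-1/M)
    return 10.0 * math.log10(m)

# Margin needed so the link is unsatisfactory only 1% of the time
# (about 20 dB, since M is approximately 1/P_out for small P_out):
print(round(rayleigh_fading_margin_db(0.01), 1))
```

Adaptive transmitter energy control, as analyzed in the paper, aims to achieve the target error probability with a far smaller mean energy than this fixed worst-case margin.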