This paper proposes an efficient prefetching strategy for interactive remote browsing of sequences of high-resolution JPEG 2000 images. Because of the inherent latency of client-server communication, the experiments of this study show that a significant benefit can be achieved, in terms of both quality and responsiveness, by anticipating certain data from the rest of the sequence while an image is being explored. This paper proposes a model based on the quality progression of the image to estimate what percentage of the bandwidth should be dedicated to prefetching data. This solution can be easily implemented on top of any existing remote browsing architecture.
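The bandwidth-split idea can be sketched as follows; the linear ramp, the saturation threshold `q_sat`, and the cap `max_share` are illustrative assumptions, not the paper's actual model:

```python
def prefetch_fraction(quality, q_sat=0.9, max_share=0.8):
    """Hypothetical policy: while the displayed image is still of low
    quality, spend bandwidth refining it; as its quality approaches the
    saturation level q_sat, shift up to max_share of the bandwidth to
    prefetching data for the rest of the sequence.

    All names and the linear ramp are illustrative assumptions.
    quality is the normalized quality of the current image in [0, 1]."""
    if quality >= q_sat:
        return max_share
    return max_share * (quality / q_sat)
```

For example, at half the saturation quality this policy dedicates half of `max_share` to prefetching, and the full share once the current image has effectively converged.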
This paper studies the integration of a just noticeable distortion model in an H.264/AVC standard codec to improve the final rate-distortion performance. Three masking aspects related to lossy transform coding and natural video contents are considered: frequency band decomposition, luminance component variations and pattern masking. For the latter aspect, three alternative models are considered, namely the Foley-Boynton, Foley-Boynton adaptive and Wei-Ngan models. Their performance, measured for high definition video content, and reported in terms of bitrate improvement and objective quality loss, reveals that the Foley-Boynton model and its adaptive version provide the best performance, with up to 35.6 % bitrate reduction at the cost of at most 1.4 % objective quality loss.
This paper discusses the design of a 60 GHz low noise amplifier (LNA) using a standard low power SOI CMOS process from ST Microelectronics. First, we outline the technology as well as the mm-wave design challenges. Using recent work on coplanar waveguide (CPW) modeling, we describe how it is possible to use parametric, 3D electromagnetic simulation to complete or replace analytical models of on-chip passive devices. A short description of the transistor model is also provided. Finally, we discuss the details of the LNA design and show how the simulation results compare to the measurements.
In recent times, range based modeling and simulation techniques have emerged for systems with parameter tolerances and deviations. They are used to perform a semi-symbolic simulation and to analyze the examined systems for their time domain behavior. The system quantities in such simulations are represented as range based signals using the concept of Affine Arithmetic. Transforming the range based signals from a time domain to a frequency domain representation significantly increases the analysis capabilities and provides a broader insight into the system's behavior. Such a transformation enriches the expressiveness of semi-symbolic system quantities and simultaneously allows frequency based analysis techniques to be applicable. We use the Discrete Fourier Transform to compute a range based frequency representation and finally discuss the method and interpretation of the frequency spectrum on two examples.
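Since the DFT is linear, the centers of range-based signals transform exactly, and the time-domain radii yield a conservative per-bin magnitude bound. The following sketch is a deliberate simplification of full affine arithmetic, which would track each noise symbol separately rather than collapsing them into one radius:

```python
import cmath

def range_dft(centers, radii):
    """Sketch: DFT of a range-based (affine-arithmetic-style) signal.

    The DFT is linear, so the interval centers transform exactly; and
    since every twiddle factor e^{-2*pi*i*k*n/N} has magnitude 1, the
    sum of the time-domain radii is a conservative magnitude bound on
    the deviation of every frequency bin."""
    n = len(centers)
    spectrum = [
        sum(centers[t] * cmath.exp(-2j * cmath.pi * k * t / n)
            for t in range(n))
        for k in range(n)
    ]
    radius_bound = sum(radii)  # same worst-case bound for every bin
    return spectrum, radius_bound
```

For a unit impulse with radius 0.1 in its first sample, the center spectrum is flat at 1 and every bin carries the same 0.1 deviation bound.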
Gate assignment is an important decision-making problem at airports, involving multiple conflicting objectives. In this paper, a fuzzy model is proposed to handle the two main objectives: minimizing the total walking distance for passengers and maximizing the robustness of the assignment. The flight-to-gate idle times are regarded as fuzzy variables, whose membership degrees are used to express their influence on the robustness of the assignment. An adjustment function on the membership degree is introduced to combine the two objectives into one. A modified genetic algorithm is adopted to optimize this NP-hard problem. Finally, an illustrative example is given to evaluate the performance of the fuzzy model. Three distribution functions are tested, and a comparison with the fixed-buffer-time method is given. Simulation results demonstrate the feasibility and effectiveness of the proposed fuzzy method.
A bottleneck during hardware design is the localization and correction of faults, so-called debugging. Several approaches for automating debugging have been proposed. This paper describes a methodology for the evaluation and comparison of automated debugging algorithms. A fault model for faults occurring in SystemC descriptions at design time or during implementation is an essential part of this methodology. Each type of fault is characterized by mutations on the program dependence graph. The presented methodology is applied to evaluate the capability of a simulation-based debugging procedure.
The objective of this work is to investigate a new approach to object segmentation in videos. While some amount of user interaction is still necessary for most algorithms in this field, it can be reduced by making use of certain properties of graph-based image segmentation algorithms. Based on one of these algorithms, a framework is proposed that tracks individual foreground objects through arbitrary video sequences and partly automates the corrections required from the user. Experimental results suggest that the proposed algorithm performs well on both low- and high-resolution video sequences and can even cope with motion blur.
Cost-effective operation of complex automation systems requires continuous diagnosis of asset functionality. The early detection of potential failures and malfunctions, the identification and localization of present or impending component failures and, in particular, the monitoring of the underlying physical process are of crucial importance for the efficient operation of complex process industry assets. In view of these requirements, a software-agent-based diagnosis and monitoring concept has been developed, which allows an integrated and continuous diagnosis of the communication network and the underlying physical process behavior. The present paper outlines the architecture of the developed distributed diagnostic concept based on software agents and presents the functionality for diagnosing the unknown process behavior of the underlying automation system using machine learning methods.
The plasma-exposed Si surface related to Si recess in the source/drain region was investigated in detail for various superposed bias configurations with frequencies of 13.56 MHz and 400 kHz. Two different bias powers were applied in an inductively coupled plasma (ICP) reactor. The surface layer (SL) and the interfacial layer (IL) between the SL and the Si substrate were analyzed by spectroscopic ellipsometry (SE), photoreflectance spectroscopy (PR) and capacitance-voltage (C-V) measurement. The SE identified the interfacial layer growth using an optimized optical model, and the PR identified the structural strain change and carrier trap site generation in the IL, in accordance with the bias power and the superposed bias configuration. The areal trap site density was estimated on the basis of a PR-based model. C-V measurement also confirmed the surface and interfacial layer growth and carrier trap site generation in the vicinity of the plasma-exposed surface. The obtained findings imply that superposed bias configurations, widely believed to be inevitable for future plasma processing, should be optimized in terms of Si substrate damage, quantitatively estimated by the methods presented in this article.
A typical CMOS photonic circuit may comprise analog, digital and optical devices. To simulate it, a common simulation environment for electrical/optical systems is necessary. In this article, a simulation methodology for CMOS photonic heterogeneous systems is proposed. Using a hardware description language, we create behavioral models for optical devices with the S-matrix formalism. The challenges in model implementation, such as large-size vector representation at model ports and complex matrix calculation, are addressed, and a Verilog-AMS + VPI simulation strategy is proposed to solve the simulation issues. Finally, the proposed method is applied to bottom-up verification of a micro-ring array; the simulation results match brute-force simulation well, while the simulation time is greatly reduced.
Sensor technologies have advanced dramatically in recent years, so that people can use these sensors in many areas. The camera, which captures video data, is one of the most useful sensors among them, and cameras are often combined with other sensors, or with several other cameras, to obtain more information. This paper deals with a multi-camera system, which uses several cameras as sensors. Previous multi-camera systems have been used to track a moving object over a wide area. In this paper, we have set cameras to focus on the same place in an office so that the system can provide diverse views of a single event. We have modeled office events, and the modeled events can be recognized from annotated features. Finally, we have conducted event recognition, view selection and event retrieval experiments based on an office scenario to show the usefulness of the proposed system.
In this paper, we present a new criterion to evaluate point correspondences within a stereo setup. Many applications such as stereo matching, triangulation, lens distortion correction, and camera calibration require an evaluation criterion, indicating how well point correspondences fit to the epipolar geometry. The common criterion here is the epipolar distance. Since the epipolar geometry is often derived from noisy and partially corrupted data, an uncertainty regarding the estimation of the epipolar distance arises. However, the uncertainty of the epipolar geometry, in the shape of the covariance matrix of an epipolar line, provides additional information, and our approach utilizes this information for a new distance measure. The basic idea behind our criterion is to determine the most probable epipolar geometry that explains the point correspondence in the two views. Furthermore, we show that using Lagrange multipliers, this constrained minimization problem can be reduced to solving a set of three linear equations.
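The common criterion mentioned above, the epipolar distance, is the Euclidean distance of the point x' in the second view from the epipolar line l' = F x induced by its correspondence x; a minimal sketch:

```python
import math

def epipolar_distance(F, x, x_prime):
    """Classical epipolar distance for a correspondence (x, x').

    x and x_prime are homogeneous 3-vectors; F is the 3x3 fundamental
    matrix given as nested lists.  The epipolar line in the second view
    is l' = F x, and the criterion is the point-to-line distance of x'
    from l'."""
    # epipolar line l' = F x in the second image
    l = [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]
    # point-to-line distance |x'^T l'| / sqrt(a^2 + b^2)
    return abs(sum(xp * li for xp, li in zip(x_prime, l))) / math.hypot(l[0], l[1])
```

For a camera translating purely along the x-axis, F = [t]_x with t = (1, 0, 0) yields horizontal epipolar lines, so the distance reduces to the vertical offset between corresponding points.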
In recent years, with the increasing application of wireless sensor networks in military, space, medical, safety and other fields, many methods and algorithms have been proposed to improve the quality of routing. However, owing to the limitations inherent in these networks, the existing methods do not respond adequately and their goals should be improved. Our goal is to provide soft QoS for different packets, since data along the route is not easily available in wireless networks. Much work has been done on reliability, delay and energy in wireless networks, but in sensor networks energy is the main concern and must also be addressed. In this paper, unlike many other papers, route determination is based on each node's local information and links.
Designing and implementing an operating system for wireless embedded systems powered by energy harvesters is a challenging task. Conventional operating systems require a lot of system resources and are thus not applicable to these highly energy-constrained devices. This paper discusses the requirements, design challenges and architecture of an operating system for energy-autarkic wireless embedded systems. The paper is divided into two parts. The first part gives an introduction to wireless embedded systems powered by energy harvesters, and two application scenarios are presented. The second part discusses the problems of operating system design for these devices.
The number of firefighter deaths and casualties (274 in 2007, 352 in 2006) published in the latest UK fire statistics demonstrates the demand for better support for first responders. More complete information can help the fire Incident Commanders (ICs) analyze the risks and deal with the hazards more efficiently. Wireless Sensor Networks (WSNs) have shown great potential in providing information from inside the building. However, there is a lack of a comprehensive understanding of how ICs retrieve the required information on-site and what WSNs can provide to facilitate fire Emergency Response. An analysis of fire ICs' requirements was undertaken, to understand their goals, tasks/decisions, and information needs. The result was a list of technology opportunities for WSNs. The findings established a link between ICs' requirements and WSNs' technology capabilities, which can contribute to sustainable fire disaster management using WSN technology.
In this paper we propose a new method for trajectory analysis in surveillance scenarios using Context-Free Grammars. Starting from a predefined set of activities, we provide a tool to compare incoming paths with stored templates, analyzing the sequence of samples at a syntactic level. Using this approach it is possible to perform trajectory matching at different abstraction layers, retrieving for example recurrent motion patterns or anomalous activities. The implemented system has been validated indoors, with activity monitoring for assisted-living applications as the main objective. The results demonstrate the capability of the framework in recognizing known motion patterns, as well as in detecting the presence of unknown actions, which are classified as anomalous.
This paper builds upon previous work on local interest point detection and description to propose the extraction and representation of novel Local Invariant Feature Tracks (LIFT). These features compactly capture not only the spatial attributes of 2D local regions, as in SIFT and related techniques, but also their long-term trajectories in time. This and other desirable properties of LIFT allow the generation of Bags-of-Spatiotemporal-Words models that facilitate capturing the dynamics of video content, which is necessary for detecting high-level video features that by definition have a strong temporal dimension. Preliminary experimental evaluation and comparison of the proposed approach reveal promising results.
The majority of recent work on forensic analysis of visual surveillance content has focused on automatic information extraction. However, little attention has been paid to the intelligent reuse of extracted (meta)data. For reasoning upon such pre-acquired metadata, in our previous paper we proposed the use of logic programming to represent human knowledge and the use of subjective logic to handle the uncertainty implied in the extracted data and the logical rules. In this paper, we further explore the proposed approach for analyzing the relationship between two persons and, more specifically, for estimating whether one person could serve as a witness of another person in a public area scene. We first develop a rule-based model for the likelihood of being a good witness that uses metadata extracted by a person tracker and evaluates the relationship between the tracked persons. To cope with the uncertainty in the relationship model, we develop a reputational subjective opinion function for the spatial-temporal relations. In addition, we accumulate the acquired opinions over time using subjective logic's fusion operator. To verify our approach, we finally present a preliminary experimental case study.
This paper presents a method for fusing two maps of an environment: one estimated with an application of the simultaneous localization and mapping (SLAM) concept and the other known a priori by the vehicle. The goal of such an application is twofold: first, to estimate the vehicle pose in this known map and, second, to constrain the map estimate with the known map using an implementation of the local maps fusion approach and a heterogeneous mapping of the environment. This article shows how a priori knowledge available in the form of a map can be fused within an EKF-SLAM framework to obtain more accurate vehicle pose and map estimates. Simulation and experimental results are given to show these improvements.
A new version of the Absolute Polar Duty Cycle Division Multiplexing (AP-DCDM) transmission scheme over a Wavelength Division Multiplexing (WDM) system is proposed. We modeled and analyzed a method to improve the performance of AP-DCDM over WDM by using a Dual-Drive Mach–Zehnder Modulator (DD-MZM). Almost 4.1 dB improvement in the receiver sensitivity of 1.28 Tbit/s (32 × 40 Gbit/s) AP-DCDM-WDM over 320 km of fiber is achieved by optimizing the bias voltage in the DD-MZM.
Keywords: Modified AP-DCDM, Wavelength Division Multiplexing, Mach–Zehnder Modulator
This paper presents the implementation of an intelligent sensor interface embedded system, compliant with the new IEEE 1451 family of standards for smart networked transducers, integrating on-chip the mixed-signal processing chain plus data fusion and communication digital resources. As an application case study, a gas leak detection system for H2-based vehicles is presented.
Keywords: Mixed-signal embedded systems, Intelligent sensor interface (ISIF), Gas leak measures, Automotive safety, Sensor networks
Disease mapping is a method used to display the geographical distribution of disease occurrence. Some traditional classification methods for the detection of high- or low-risk areas, such as the traditional percentile method and the significance method, have been used in disease mapping for map construction. However, as described by several authors, classification based on these traditional methods has some disadvantages for describing the spatial distribution of the risk of the disease concerned. To overcome these limitations, an approach using a space–time mixture model within an empirical Bayes framework is described in this chapter. The aim of this chapter is to investigate the geographical distribution of infant mortality in peninsular Malaysia from 1991 to 2000. The analysis showed that in the early 1990s the spatial heterogeneity effect was more prominent; however, toward the end of the 1990s this pattern tends to disappear. Indirectly, this may indicate that the provision of health services throughout peninsular Malaysia is uniformly distributed over the period of the study, particularly toward the year 2000.
Keywords: Disease mapping, Space–time mixture model, Infant mortality, Geographical distribution, Spatial heterogeneity
A new generation of media consumers has arisen this century that is becoming more and more accustomed to content that is delivered
via a wireless network. News Corporation, a global media industry giant, is taking on this challenge of providing sports,
news, and other content to these consumers in a variety of ways. This chapter looks at the investments and strategies employed
by News Corporation in delivering content via wireless means, both today and in the future.
Engineering objects are represented in the form of manual and computerized line drawings in the production and manufacturing industry. Digitization of manual drawings into some computerized vector format is one of the most crucial areas in the domain of geometric modeling and imaging. There are conventionally three views of an object: front, top, and side. Registration of these 2D views to form a 3D view is also an important and critical research area. In this paper we have developed a technique for the reconstruction of 3D models of engineering objects from their 2D views. Our technique is versatile in the sense that the 3D representation is available both in the drawing exchange file (DXF) format recognized by various CAD tools and in the scalable vector graphics (SVG) format recognized by various web-based tools. We have also described different comparison metrics and have compared our technique with existing techniques.
In this paper we have implemented a novel approach called perception-based vision (PBV) to retrieve depth information of a
hole from a single camera perspective. Three dimensional modeling of real world objects is always of great concern for scientists
and engineers. Different approaches are used for this purpose, e.g., 3D scanners, CAD modeling, and contour tracing by coordinate
measuring machines (CMMs). This paper does not deal with 3D modeling as a whole but specifically addresses the issue of depth
information retrieval of a hole. This is a cost-effective, efficient, and accurate solution that requires just a single 2D camera perspective under ambient conditions.
The 2PARMA project focuses on the development of parallel programming models and run-time resource management techniques to
exploit the features of many-core processor architectures. The main goals of the 2PARMA project are: definition of a parallel
programming model combining component-based and single-instruction multiple-thread approaches, instruction set virtualisation
based on portable byte-code, run-time resource management policies and mechanisms as well as design space exploration methodologies
for many-core computing architectures.
In this paper we present a spectrometer realized by bump-bonding a 300-μm-pitch, 32 × 32-pixel silicon X-ray detector chip to a 0.35-μm CMOS, 3-cm² read-out chip. The self-triggered, mixed analog-digital read-out chip, with 1,024 channels, digitizes the X-ray photon energy with 10 bits of resolution, provides the coordinates of the triggered pixels, and achieves 34 e− rms of input-referred noise, ±3.3 LSB of INL and ±0.2 LSB of DNL, while consuming 555 mW from a 3.3-V supply. Preliminary experimental results on the complete spectrometer are reported.
CMOS technology scaling has allowed for unprecedented integration of analog and digital circuits onto a single chip. The integration of RF and analog circuits with digital logic has provided the consumer with wireless devices with high performance and increasing functionality but at lower cost. System-on-Chip (SOC) is considered the ultimate goal for a low cost, high performance semiconductor chip for mobile products. At the 32nm node, high-K and metal gate will be the mainstream gate stack in high volume manufacturing. In this paper, we will review SOC requirements with a focus on high-K/metal gate effects on analog passive devices. We will present a new metal gate resistor which can be programmed to exhibit either a positive, negative, or zero temperature coefficient (TC) by adjusting its physical dimensions. We will also discuss the trade-offs that RF/analog designers will have to take into consideration.
A surface recognition algorithm capable of determining contact surface types by means of tactile sensor fusion is proposed. The authors present a recognition process for 3-dimensional deformations in a 2-dimensional parametric domain. Tactile information is extracted by physical contact with a grasped object through a sensing medium. Information is obtained directly at the interface between the object and the sensing device and relates to the three-dimensional position and orientation of the object in the presence of noise. The technique, called “eigenvalue trajectory analysis”, is introduced and adopted for specifying the margin of classification and the classification thresholds. The authors demonstrate mathematically that this approach, which complements existing work, offers significant computational advantages when applied to challenging contact scenarios such as dynamic recognition of contact.
Exploiting geometric features, such as points, straight or curved lines and corners, plays an important role in object recognition.
In this paper, we present a model-based recognition of 3D objects using intersecting lines. We concentrate on using perpendicular
line pairs to test recognition of a parallelepiped model and to represent the visible face of the object. From 2D images and point clouds, 3D line segments are first extracted, and intersecting lines are then selected from them. By estimating the
coverage ratio, we find the most accurate matching between detected perpendicular line pairs and the model database. Finally,
the position and the pose of the object are determined. The experimental results show the performance of the proposed algorithm.
In seismic, radar, and sonar imaging the exact determination of the reflectivity distribution is usually intractable so that
approximations have to be applied. A method called synthetic aperture focusing technique (SAFT) is typically used for such
applications as it provides a fast and simple method to reconstruct (3D) images. Nevertheless, this approach has several drawbacks
such as causing image artifacts as well as offering no possibility to model system-specific uncertainties. In this paper,
a statistical approach is derived, which models the region of interest as a probability density function (PDF) representing
spatial reflectivity occurrences. To process the nonlinear measurements, the exact PDF is approximated by well-placed Extended
Kalman Filters allowing for efficient and robust data processing. The performance of the proposed method is demonstrated for
a 3D ultrasound computer tomograph and comparisons are carried out with the SAFT image reconstruction.
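As a reference point, the conventional SAFT baseline that the statistical approach is compared against can be sketched as plain delay-and-sum imaging; the loop structure, the pulse-echo geometry, and the nearest-sample interpolation here are illustrative simplifications, not the paper's implementation:

```python
import math

def saft_reconstruct(scans, sensor_x, z_grid, x_grid, c, fs):
    """Delay-and-sum SAFT sketch (2D, pulse-echo, co-located Tx/Rx).

    Each image point (z, x) accumulates, over all sensor positions, the
    echo sample recorded at the round-trip time of flight to that point.
    scans[s][n] is the n-th sample of sensor s, fs the sampling rate,
    c the speed of sound."""
    img = [[0.0 for _ in x_grid] for _ in z_grid]
    for s, xs in enumerate(sensor_x):
        for i, z in enumerate(z_grid):
            for j, x in enumerate(x_grid):
                t = 2.0 * math.hypot(x - xs, z) / c   # round-trip delay
                n = int(round(t * fs))                # nearest sample
                if 0 <= n < len(scans[s]):
                    img[i][j] += scans[s][n]
    return img
```

With a single synthetic point reflector, the echoes of all sensor positions add up coherently only at the reflector's pixel, which is exactly the focusing effect SAFT exploits (and the source of the artifacts the statistical method addresses).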
Trees are important objects in our living environment. Modeling a living tree is a hard problem in computer vision and pattern recognition, since trees exhibit large shape diversity and geometric complexity. In this paper, we present a range-image-analysis-based approach to model a 3D tree from a single range image. Range image pixels are treated as 3D discrete points. Points from leaves and points from branches are segmented based on a new metric on the convergence of local directions. A region growing method is then adopted to classify points from different branches. Skeletons of main branches are then computed by clustering each branch segment into small bins. The shape patterns of visible branches are used to predict those of obscured branches. Experiments show that this approach is applicable to modeling living trees.
Keywords: Principal direction, Segmentation, Skeleton, Tree branch modeling, Generation
Nowadays many applications require knowledge about the shapes of objects for their processing. In many cases, the shapes to be acquired contain segments that are difficult to measure because of occlusion. Shapes with such properties can be observed, for example, on crashed car bodies, where the deformation can take complex shapes. In this paper a new measurement system is introduced, the aim of which is to accurately acquire the deformed parts of a car body and obtain useful information supporting car deformation analysis.
Because 3D models are increasingly created and designed using computer graphics, computer vision, CAD, medical imaging, and a variety of other applications, a large number of 3D models are being shared and offered on the Web. Large databases of 3D models, such as the Princeton Shape Benchmark Database, the 3D Cafe repository, and the Aim@Shape network, are now publicly available. These datasets are made up of contributions from the CAD community, computer graphics artists, and the scientific visualization community. The problem of searching for a specific shape in a large database of 3D models is an important area of research. Text descriptors associated with 3D shapes can be used to drive the search process, as is the case for 2D images. However, text descriptions may not be available, and furthermore may not apply for part-matching or similarity-based matching. Several content-based 3D shape retrieval algorithms have been proposed [6–8].
Locating cells for 3G services is a complex process due to the heterogeneity of service requirements. These requirements considerably
affect the service coverage and resource utilisation of the network. This is a complex issue to model in WCDMA networks due
to the dynamic nature of power control and the dependency between coverage and load. In this work we consider the issue of
evaluating user service coverage and the resource utilisation, or load, that a trial candidate network design of cells can sustain.
We investigate the sensitivity of two fundamental variables associated with snapshot evaluation – priorities for admission
and the mix of requested services. This is conducted for the downlink scenario using a highly efficient offline evaluation
technique that avoids recourse to lengthy simulation of online system power control. The results expose the significant influence
that the variables have on the resultant evaluation of a candidate network design using a single traffic snapshot. This is
important knowledge for network planning.
With the rapid development of cryptography and network communication, random numbers are becoming more and more important in secure data communication. The nonlinearity of a back-propagation neural network (BPNN) is used to improve the traditional random number generator (RNG), and the SHA-2 (512) hash function ensures the unpredictability of the produced random numbers. A novel and secure RNG architecture is therefore proposed in this paper: a BPNN based on the SHA-2 (512) hash function. The quality of the random numbers generated by this architecture satisfies the security requirements of cryptographic systems according to the results of test suites standardized by the U.S. The proposed architecture can be used to improve characteristics such as power consumption, flexibility, cost and area in network security and cryptographic systems.
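A minimal sketch of the general idea, a nonlinear network stage whitened by SHA-2 (512); the toy tanh layer, its weights, and the state-update scheme are illustrative assumptions, not the paper's exact architecture:

```python
import hashlib
import math
import struct

def bpnn_sha512_rng(seed, n_bytes, weights=(0.7, -1.3, 2.1)):
    """Illustrative RNG sketch: a tiny feed-forward nonlinearity
    ("BPNN" stage, here a fixed tanh layer standing in for a trained
    network) whose output is whitened by SHA-512.  Each iteration
    hashes the internal state together with the nonlinear output and
    feeds part of the digest back as the next state."""
    out = bytearray()
    state = seed
    while len(out) < n_bytes:
        # nonlinear stage over the current state, mapped into [0, 1)
        x = (state % 10**6) / 10**6
        h = sum(math.tanh(w * x) for w in weights)
        # SHA-2 (512) stage: hash state and nonlinear output together
        block = hashlib.sha512(
            struct.pack(">Qd", state & (2**64 - 1), h)).digest()
        out.extend(block)
        state = int.from_bytes(block[:8], "big")
    return bytes(out[:n_bytes])
```

The output is deterministic for a given seed (as any seeded generator must be) while the hash stage makes the byte stream statistically uniform regardless of the network's raw output distribution.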
The traditional means of power telecontrol communication must change because of the use of the IEC 61850 standards in substation communication. Considering the integration of the IEC 61850 and IEC 61970 standards, this paper presents an IEC 61850/OPC-based telecontrol communication model. Telecontrol information can be exchanged through ACSI/MMS services between IEDs in the substation and an OPC server in the control center. The OPC server performs the mapping between the IEC 61850 information models and the OPC information model, and publishes information to SCADA applications that run as OPC clients. A prototype system is built to verify the feasibility of the model. It is shown that this system can meet real-time requirements; unlike a gateway, it needs no complex protocol conversion.
Keywords: Power telecontrol system, IEC 61850, OPC, Manufacturing message specification, Component Interface Specification
Uncontrolled diabetes results in many additional health risks, including skin ulcers that are extremely hard to cure. Laser therapy has been seen to be effective. This study shows that two wavelengths, 660 and 880 nm, used together, seem to be more effective than other forms of light therapy. This hypothesis will be tested with larger populations.
Keywords: Diabetic ulcers, Clinical, 660 and 880 nm wavelengths of laser light used together
Low level laser irradiation (LLLI) has been shown to reduce tissue inflammation, and we show that 780 nm radiation modifies certain processes fundamental to aneurysm progression. This study has been designed to determine the effect of LLLI on cytokine gene expression and secretion of inducible nitric oxide synthase (iNOS) and NO production in lipopolysaccharide (LPS)-stimulated
Keywords: Cytokines, Nitric Oxide, 780 nm radiation, Aneurysm progression
This paper discusses novel algorithms for synchronization and time-resolution improvement of One Way Propagation Time (OWPT) measurements in 802.11 Wireless Local Area Networks (WLANs). In OWPT measurements, the Mobile Station (MS) records the arrival time of each 802.11 Beacon frame. The Beacon frame's arrival time minus its Timestamp, which is recorded when the Beacon frame is transmitted by an Access Point (AP), is the Beacon frame's propagation time, and this propagation time represents the distance between the MS and the AP. Rather than relying on the microsecond (μs) resolution of the Timestamp, the MS can use its own high-precision clock to record the Beacon frame's arrival time in nanoseconds (ns). The first part of this paper proposes algorithms that use the ns-resolution arrival time to improve the time resolution of OWPT measurements from μs to ns and to tightly synchronize the MS with all APs. These algorithms open the way to applying OWPT in 802.11 WLANs for highly accurate indoor localization. The second part discusses the possibility of using existing software and hardware platforms to realize the proposed algorithms. Finally, the shortcomings of existing MS timing capabilities are identified, and several options are offered for future research to improve MS timing for OWPT applications.
Keywords: 802.11 WLAN, synchronization, TOA, one way propagation time, indoor localization, time resolution improvement
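The distance computation behind OWPT can be sketched as follows (a minimal illustration of the arithmetic in the abstract, not the authors' algorithm; the function name and sample values are hypothetical, and a real measurement would rely on the proposed synchronization to cancel clock offset between MS and AP):

```python
# Speed of light in vacuum, m/s (over-the-air propagation is close to this).
C = 299_792_458

def distance_m(arrival_ns: float, timestamp_ns: float) -> float:
    """Distance from one-way propagation time: the Beacon frame's arrival
    time minus its transmit Timestamp, converted from ns to seconds."""
    propagation_s = (arrival_ns - timestamp_ns) * 1e-9
    return C * propagation_s

# A 50 ns one-way propagation time corresponds to roughly 15 m,
# which shows why μs Timestamp resolution (≈ 300 m) is insufficient.
print(round(distance_m(1_000_050.0, 1_000_000.0), 2))  # prints 14.99
```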
Toward the complete integration of color sensors in CMOS technologies, a novel color sensitive device, the Transverse Field Detector (TFD), is proposed. The TFD color detection principle is based on the creation of a transverse, V-shaped electric field configuration in a silicon active layer. The electric field is generated only by means of surface biasing/collecting electrodes. Taking advantage of the dependence of the silicon absorption length on the incoming wavelength, each of the surface contacts collects photo-carriers down to a different depth. In this way, three spectral functions are obtained at the three electrodes without the use of any color filter. Newly developed pixel structures and a preliminary Active Pixel readout circuit design are presented.
Real-time embedded applications tend to combine periodic and aperiodic computations. Modeling standards must therefore support both discrete-time and discrete-event models of computation and communication, whereas these historically pertain to two different communities: asynchronous and synchronous designers. In this article, two emerging standards in the domain (MARTE and AADL) are compared and their ability to tackle this issue is assessed. We argue for combining both standards and show how MARTE can be extended to integrate the AADL features required for end-to-end flow latency analysis.
Students spend much of their life in an attempt to assess their aptitude for numerous tasks. For example, they expend a great deal of effort to determine their academic standing given a distribution of grades. This research finds that students use their absolute performance, or percentage correct, as a yardstick for their self-assessment, even when relative standing is much more informative. An experiment shows that this reliance on absolute performance for self-evaluation causes a misallocation of time and financial resources. Reasons for this inappropriate responsiveness to absolute performance are explored.
Keywords: engineering education, self-evaluation, academic performance, study plans
An innovative fiber optic setup for the scattering-free absorption spectroscopy of liquids is presented. It makes use of an integrating sphere that contains the sample under test, coupled to a fiber optic supercontinuum source and to a fiber optic spectrometer. A collection of turbid lubricant oils was considered as a test case for verifying and validating the innovative scheme of diffuse-light absorption spectroscopy. Scattering-free spectra were successfully measured and processed as product fingerprints for the prediction of turbine types.
The number of end-users using the Internet both inside and outside the office has been increasing, and with it the number of Web applications that end-users use. Most of these applications are developed by IT professionals. Thus, the work that gets automated is limited to particular tasks, such as B-to-B and B-to-C electronic commerce, where the expected profit justifies the cost of development. Furthermore, it is difficult to develop and maintain such applications quickly. Ideally, Web applications should be supported by the domain experts themselves, because the applications must be modified frequently based on the domain experts' ideas. Therefore, end-user-initiative development of applications has become important for the automation of end-users' own tasks. In the near future, the information society will require such new technologies, empowering domain experts and enabling them to automate their own work independently, without extra training or the help of IT professionals.
Safety verification for hybrid systems is a very important but also very complex issue. Abstraction is a means of simplifying it by shifting from global considerations to local ones. The aim of this paper is to present an overview of abstraction techniques for hybrid systems that are commonly used in safety analysis. Reachability approaches related to abstraction for verification are also briefly presented.
Triaxial accelerometers are used as a low cost solution in wide areas of patient care. This paper describes the use of a triaxial accelerometer together with a ZigBee transceiver to detect falls of patients. The system, including the calibration of the accelerometers and the measurement procedure, is explained in detail.
Keywords: accelerometers, ZigBee standard, fall detection
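A simple accelerometer-based fall rule can be sketched as follows (my own illustration, not the system described in the abstract; the thresholds are hypothetical): the acceleration magnitude stays near 1 g during normal activity, drops toward 0 g during free fall, and spikes on impact.

```python
import math

G = 9.81  # 1 g in m/s^2

def looks_like_fall(ax: float, ay: float, az: float,
                    free_fall_g: float = 0.4, impact_g: float = 2.5) -> bool:
    """Flag a sample whose acceleration magnitude deviates strongly from
    1 g: near zero during free fall, or a large spike on impact."""
    mag_g = math.sqrt(ax * ax + ay * ay + az * az) / G
    return mag_g < free_fall_g or mag_g > impact_g

print(looks_like_fall(0.0, 0.1, 9.8))    # standing still -> False
print(looks_like_fall(0.1, 0.2, 0.3))    # free fall -> True
print(looks_like_fall(5.0, 20.0, 18.0))  # impact spike -> True
```

A deployed detector would also require the calibration step the paper mentions (estimating per-axis offset and gain), plus some temporal logic, since a single-sample threshold misfires on everyday jolts.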
External memory access in MPSoCs becomes more challenging with the growing requirements for high bandwidth and low latency. We propose a novel method for optimizing external memory access latency in NoC-based MPSoCs. Our approach considers off-chip memory access at the system level: from the initiators to the memory modules through the NoC-based interconnect. We couple the QoS mechanisms of the NoC and of the memory scheduler in order to guarantee continuous service along both the request and the response paths, between the masters and the SDRAM modules. We study the influence of low-priority requests on high-priority requests, and analyze how the number of conflict points inside the NoC affects the latency of high-priority requests. We compare the use of virtual channels with a direct physical connection for routing the requests of latency-sensitive IPs toward the memory subsystem, and demonstrate that both solutions are equivalent in terms of memory access latency.
The convergence speed of the algebraic reconstruction technique (ART) depends heavily on the order in which the projections are considered. In this study, a projection access scheme based on a prime number increment is proposed, which is applicable to uniform projection sampling over any angular range. For cone-beam X-ray computed tomography reconstruction with circular acquisition, we compared the results obtained with the proposed method against those obtained with the conventional sequential method, the prime number decomposition method, and the random ordering method. The results indicate that the proposed method accelerates the convergence of ART and produces more accurate images with fewer artifacts.
Keywords: CT reconstruction, ART, projection order, prime number increment
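The idea of a prime-increment access order can be sketched as follows (a minimal illustration under my own assumptions, not necessarily the authors' exact scheme): stepping through N uniformly spaced projection indices with a prime stride coprime to N visits every projection exactly once while keeping consecutive views far apart, which is what speeds up ART convergence.

```python
import math

def prime_increment_order(n_proj: int, prime: int = 7) -> list[int]:
    """Visit all n_proj projection indices exactly once, stepping by a
    prime increment; coprimality with n_proj guarantees a full permutation."""
    assert math.gcd(prime, n_proj) == 1, "increment must be coprime to n_proj"
    return [(k * prime) % n_proj for k in range(n_proj)]

# Consecutive entries are well separated, unlike sequential order.
print(prime_increment_order(10))  # [0, 7, 4, 1, 8, 5, 2, 9, 6, 3]
```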
This paper addresses sensor network applications that need to obtain an accurate image of physical phenomena with a high sampling rate in both time and space. We present a fast and scalable approach for obtaining an approximate representation of all sensor readings at a high sampling rate, so that critical events in a physical environment can be reacted to quickly. This approach improves on previous work in that, after a startup phase, it can operate with a very small sampling period.