IEICE Transactions on Information and Systems

Published by Institute of Electronics, Information and Communication Engineers
Online ISSN: 1745-1361
Publications
Architecture of an MTANN (a class of PML) consisting of an ML regression model (e.g., linear-output ANN regression and support-vector regression) with sub-region (local window or patch) input and single-pixel output. 
Classification components in CADe schemes for detection of polyps in CT colonography.
Article
Computer-aided detection (CADe) and diagnosis (CAD) has been a rapidly growing, active area of research in medical imaging. Machine learning (ML) plays an essential role in CAD, because objects such as lesions and organs may not be represented accurately by a simple equation; thus, medical pattern recognition essentially requires "learning from examples." One of the most popular uses of ML is the classification of objects such as lesion candidates into certain classes (e.g., abnormal or normal, and lesions or non-lesions) based on input features (e.g., contrast and area) obtained from segmented lesion candidates. The task of ML is to determine "optimal" boundaries for separating classes in the multidimensional feature space formed by the input features. ML algorithms for classification include linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), multilayer perceptrons, and support vector machines (SVMs). Recently, pixel/voxel-based ML (PML) has emerged in medical image processing/analysis; it uses pixel/voxel values in images directly as input information, instead of features calculated from segmented lesions, so feature calculation and segmentation are not required. In this paper, ML techniques used in CAD schemes for detection and diagnosis of lung nodules in thoracic CT and for detection of polyps in CT colonography (CTC) are surveyed and reviewed.
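A minimal sketch of the classification step described above, using scikit-learn: an SVM separates lesion candidates from non-lesions in a feature space formed by features such as contrast and area. The data, feature values, and kernel choice here are hypothetical, for illustration only.

```python
# Illustrative sketch: classify segmented lesion candidates by an SVM
# in a feature space formed by (contrast, area). Data are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row is one lesion candidate: [contrast, area]
X = np.array([[0.82, 140.0], [0.75, 95.0], [0.30, 30.0], [0.25, 410.0]])
y = np.array([1, 1, 0, 0])  # 1 = lesion, 0 = non-lesion

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

print(clf.predict([[0.7, 120.0]]))  # classify a new candidate
```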
 
Article
This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the classical Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density points (isopoints) first. After building a coarse initial mesh that approximates the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since many additional isopoints can be utilized during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be better in quality than that of the MC algorithm. Experiments show the method to be very robust and efficient for isosurface reconstruction from cross-sectional images.
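A minimal sketch of how an iso-density point (isopoint) might be computed, by linear interpolation along a voxel edge whose endpoint densities straddle the iso-value; the paper's exact isopoint definition may differ.

```python
# Sketch: linearly interpolate an iso-density point (isopoint) on a voxel
# edge whose endpoint densities straddle the iso-value. Illustrative only.
def isopoint(p0, p1, d0, d1, iso):
    """Return the point on segment p0-p1 where the density equals iso."""
    t = (iso - d0) / (d1 - d0)  # assumes d0 != d1 and iso lies between them
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

print(isopoint((0, 0, 0), (1, 0, 0), d0=80.0, d1=120.0, iso=100.0))  # (0.5, 0.0, 0.0)
```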
 
Hardware configuration.
Conference Paper
Displaying a 3D geometric model of a user in real time is advantageous for a telecommunication system because depth information is useful for non-verbal communication such as finger-pointing and gestures that contain 3D information. However, a range image acquired by a rangefinder suffers from errors due to image noise and distortions in depth measurement. On the other hand, a 2D image is free from such errors. In this paper, we propose a new method for a shared-space communication system that combines the advantages of both 2D and 3D representations. A user is represented as a 3D geometric model to exchange non-verbal communication cues. The background is displayed as a 2D image to give the user a good sense of the atmosphere of the remote site. We have constructed a prototype of a high-presence shared-space communication system to evaluate our method. In experiments, we found that the proposed method is effective for telecommunication.
 
Conference Paper
For cross-sectional imaging of an animal body using backscattered light, the measurement of the depth distribution of optical absorption is required. A technique has been developed to reconstruct the depth distribution using the pulse shape of backscattered light obtained in a time-resolved measurement. In this technique, the temporal path-length distribution (TPD) in each arbitrary layer of the scattering object is essential. We have developed a technique to obtain this TPD by measurement. The feasibility of the proposed technique was verified in a phantom experiment, and cross-sectional imaging with this technique was demonstrated.
 
Conference Paper
Communication latency is central to multiprocessor design. This report presents the design principles of the EM-X multiprocessor for tolerating communication latency. The multithreading principle is built into the EM-X to overlap communication with computation for latency tolerance. In particular, we present two types of hardware support for remote memory access: (1) priority-based packet scheduling for thread invocation, and (2) a direct remote memory access mechanism. The priority-based scheduling policy extends a FIFO-ordered thread invocation policy to adapt to different computational needs. Direct remote memory access, based on non-preemptive thread execution, is designed to overlap remote memory operations with thread execution. We give two examples to explain our approach. The 80-processor prototype of EM-X is currently being fabricated and is expected to be operational in the near future. Preliminary evaluation indicates that the EM-X can effectively overlap computation and communication, thereby tolerating communication latency for high-performance parallel computing.
 
Conference Paper
This paper describes an adaptive feature extraction method that exploits category-specific information to overcome both image degradation and deformation. When recognizing multiple fonts, geometric features such as directional information of strokes are often used, but they are weak against the deformation and degradation that appear in videos and natural scenes. To tackle these problems, the proposed method estimates the degree of deformation and degradation of an input pattern by comparing the input pattern with the template of each category as category-specific information. This estimation enables us to compensate for the aspect ratio associated with shape and for the degradation in feature values. Recognition experiments using characters extracted from videos show that the proposed method is superior to conventional alternatives in resisting deformation and degradation.
 
Conference Paper
This paper introduces an adaptive distributed routing algorithm for the faulty star graph. By giving two routing rules based on the properties of nodes, an optimal routing function for the fault-free star graph is presented. For a given destination in the n-star graph, n-1 node-disjoint and edge-disjoint subgraphs, derived from the n-1 adjacent edges of the destination, can be constructed by applying this routing function and the concept of breadth-first search. When faults are encountered, the algorithm can route messages to the destination by finding a fault-free subgraph based on local failure information. As long as the number f of faults (node faults and/or edge faults) is less than the degree n-1 of the n-star graph, the algorithm can adaptively find a path of length at most d+4f to route messages successfully from a source to a destination, where d is the distance between the two nodes.
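As an illustration of the kind of routing rules the abstract mentions, the sketch below implements a standard greedy shortest-path routing function for the fault-free n-star graph, where nodes are permutations and each edge swaps the first symbol with another position. The paper's two rules may be formulated differently; this is an assumption for illustration.

```python
# Sketch: greedy shortest-path routing in the fault-free n-star graph.
# Nodes are permutations of 1..n; each hop swaps position 1 with position i.
def star_route(src):
    """Yield successive nodes on a shortest path from permutation `src`
    to the identity (routing to any destination reduces to this by
    relabeling symbols)."""
    p = list(src)
    n = len(p)
    ident = list(range(1, n + 1))
    while p != ident:
        if p[0] != 1:
            j = p[0] - 1  # Rule 1: send the first symbol to its home position
        else:
            # Rule 2: fetch any misplaced symbol into the first position
            j = next(k for k in range(1, n) if p[k] != k + 1)
        p[0], p[j] = p[j], p[0]
        yield tuple(p)

print(list(star_route((3, 1, 2, 4))))  # two hops in the 4-star graph
```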
 
Conference Paper
Service adaptation is a promising solution for mismatches in service composition; it introduces an intermediate service, called an adaptor, to coordinate the interactions of services. In a previous work, an approach to non-regular service adaptation using model checking was proposed for solving behavior mismatches. The approach uses pushdown automata as the behavior model of adaptors so that non-regular interactions of services can be captured. Furthermore, adaptation and verification are integrated using model checking, and the adaptor can be generated automatically without adaptation contracts being specified. However, although freedom from behavior mismatches is guaranteed in the approach, we found that there are usually several or more candidates which satisfy this criterion and may need to be further selected against other requirements. This paper follows the approach and focuses on requirements helpful to automated adaptor generation. Because of the use of the pushdown system model, we are especially interested in properties related to unbounded messages, i.e., messages being sent and received arbitrarily many times, which characterize non-regular behavior in service composition. This paper also shows experimental results from a prototype tool, as well as directions for building a BPEL adaptor once the behavior of an adaptor is generated by our approach.
 
Conference Paper
In the training of neural networks using the error-backpropagation (BP) algorithm, the over-learning phenomenon has been observed. In previous works we showed how over-learning can be viewed as the result of using the BP criterion as a substitute for some true criterion. There, the concept of admissibility was introduced, and conditions under which a true criterion admits a substitute criterion were discussed. In this paper we provide necessary and sufficient conditions for projection learning to admit memorization learning in the presence of noise. Based on these conditions, we devise methods for choosing training sets to prevent over-learning.
 
Conference Paper
This paper proposes a new packet re-marking scheme that can improve the per-flow quality of service (QoS) of assured forwarding (AF) traffic traversing multiple domains of differentiated services (Diffserv) networks. The basic concept of the scheme is to distinguish packets re-marked as out-of-profile at domain boundaries from those already marked as out-of-profile at the time of entering the network, and to give the re-marked packets a chance to recover to in-profile status so that they can enjoy their rightful QoS within the networks. The basic performance of the proposed scheme is evaluated through a simulation study, and the results show its effectiveness in preserving the QoS of inter-domain flows.
 
Conference Paper
The mobile agent paradigm is an important and promising technology for structuring distributed applications. Since a mobile agent physically moves to a remote host that is under the control of a different principal, it needs to be protected from this environment, which is responsible for its execution. This problem constitutes the major difficulty in using the mobile agent paradigm for privacy protection, and it is explored here in detail. In this paper, we provide a methodology for protecting mobile agents from unauthorized modification of their program code or data by malicious hosts. One important technique is integrity-based encryption, by which a mobile agent, while running on the remote host, checks itself to verify that it has not been modified, and conceals some privacy-sensitive parts of itself.
 
A simple network and its pheromone table of node 3.
Conference Paper
In this paper, we study the dynamic RWA problem in WDM networks with sparse wavelength conversion and propose a novel hybrid algorithm for it based on the combination of the mobile agents technique and a genetic algorithm. By keeping a suitable number of mobile agents in the network to cooperatively explore the network state and continuously update the routing tables, the new hybrid algorithm can promptly determine the first population of routes for a new request from the routing table of its source node, without the time-consuming process required by the available GA-based dynamic RWA algorithms. To achieve a good load balance in WDM networks with sparse wavelength conversion, we adopt in our hybrid algorithm a new reproduction scheme and a new fitness function that simultaneously take into account the path length, the number of free wavelengths, and the wavelength conversion capability in route selection. Our new algorithm achieves a better load balance and a significantly lower blocking probability than the promising fixed-alternate routing algorithm, both for optical networks with sparse and full-range wavelength converters and for optical networks with sparse and limited-range wavelength converters, as verified by an extensive simulation study using the ns-2 network simulator. The ability to guarantee both a low blocking probability and a small setup delay makes the new hybrid dynamic RWA algorithm very attractive for both optical circuit switching networks and future optical burst switching networks.
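The sketch below illustrates, with hypothetical weights and form, a fitness function in the spirit described: it rewards shorter paths, more free wavelengths, and more wavelength-conversion-capable nodes along a route. It is not the paper's actual function.

```python
# Illustrative fitness function for GA-based route selection: higher is
# better. Weights and the exact functional form are hypothetical.
def fitness(path_len, free_wavelengths, conv_nodes, w1=1.0, w2=1.0, w3=0.5):
    """Score a candidate route; a GA would select/reproduce routes by this."""
    return (w2 * free_wavelengths + w3 * conv_nodes) / (w1 * path_len)

# route -> (path length, free wavelengths, conversion-capable nodes)
candidates = {"route A": (4, 6, 1), "route B": (6, 9, 2)}
best = max(candidates, key=lambda r: fitness(*candidates[r]))
print(best)
```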
 
Conference Paper
In order to generate tests for path delay faults, we propose an alternative method that does not generate a test for each path delay fault directly. The proposed method generates an n-propagation test-pair set by using an N<sub>i</sub>-detection test set for single stuck-at faults. The n-propagation test-pair set is a set of vector pairs which contains n distinct vector pairs for every transition fault at a checkpoint (primary inputs and fanout branches in a circuit are called checkpoints). We do not target the path delay faults for test generation; instead, the n-propagation test-pair set is generated for the transition (both rising and falling) faults of checkpoints in the circuit, and simulated to determine its effectiveness for singly testable path delay faults and robust path delay faults. Results of experiments on the ISCAS'85 benchmark circuits show that the n-propagation test-pair sets obtained by our method are very effective in testing path delay faults.
 
Conference Paper
In this paper, the dependence of the memory capacity of an analogue associative memory model using non-monotonic neurons on static synaptic noise and static threshold noise is shown. This dependence was calculated analytically by means of the self-consistent signal-to-noise analysis (SCSNA). If the noise is extremely large, higher monotonicity produces a larger memory capacity. At moderate noise levels, by contrast, the memory capacity decreases as the monotonicity increases. The memory capacity is more sensitive to an increase in static threshold noise than to an increase in static synaptic noise.
 
Conference Paper
This paper proposes a packet loss recovery method for CELP decoding that uses packets arriving after their playout time. The proposed method recovers the synchronization of the filter states between encoding and decoding in the period following a packet loss. The recovery is performed by replacing the degraded filter states with ones adaptively calculated from the late-arriving packet during decoding. When the proposed method is applied to the AMR speech decoder, it improves the segmental SNR by 0.2 to 1.8 dB at packet loss rates of 1 to 10%, where all packet losses are due to late arrival. Subjective test results show that five-grade mean opinion scores are improved by 0.3 and 0.2 at a packet loss rate of 5%, at speech coding bit rates of 7.95 and 12.2 kbps, respectively.
 
Conference Paper
An attempt was made to evaluate mental workload using chaotic analysis of EEG. EEG signals recorded from Fz and Cz during a mental task (a mental addition task) were analyzed using chaotic measures such as the attractor plot, fractal dimension, and Lyapunov exponent, which are used to characterize chaotic dynamics, to investigate whether mental workload could be assessed with these measures. The largest Lyapunov exponent took positive values under all experimental conditions, which indicated chaotic dynamics in the EEG signals. However, the authors could not evaluate mental workload using the largest Lyapunov exponent or the attractor plot. The fractal dimension, on the other hand, tended to increase with the work level. The authors concluded that the fractal dimension can be used to evaluate the mental state, especially a mental workload induced by mental task loading.
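The abstract does not specify which fractal-dimension estimator was used; as an illustration, the sketch below implements one common choice for EEG time series, Higuchi's method.

```python
# Sketch of Higuchi's fractal-dimension estimator for a 1-D time series.
# The paper's actual estimator is unspecified; this is an assumption.
import numpy as np

def higuchi_fd(x, kmax=8):
    x = np.asarray(x, dtype=float)
    N = len(x)
    logL, logk = [], []
    for k in range(1, kmax + 1):
        Lm = []
        for m in range(k):
            idx = np.arange(m, N, k)          # subsampled series with step k
            if len(idx) < 2:
                continue
            # normalized curve length of the subsampled series
            Lm.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / ((len(idx) - 1) * k * k))
        logL.append(np.log(np.mean(Lm)))
        logk.append(np.log(1.0 / k))
    return np.polyfit(logk, logL, 1)[0]       # slope = fractal dimension

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(1000)))  # white noise -> FD near 2
```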
 
Conference Paper
This paper attempts to establish a theory for a general auto-associative memory model. We start by defining a new concept called the supporting function to replace the concept of the energy function. The latter relies on an assumption of symmetric connection weights, which is used in the conventional Hopfield auto-associative memory but is not evidenced in any biological memory. We then formulate the information retrieval or recall process as a dynamic system by making use of the supporting function, explore its stability and attraction conditions, and develop an algorithm for learning the attraction condition based upon Rosenblatt's perceptron rule. The effectiveness of the learning algorithm is evidenced by some outstanding experimental results.
 
Conference Paper
A high-assurance online recovery technology for a space on-board computer that can be realized using commercial devices is proposed, whereby a faulty processor node confirms its own normality and then recovers without affecting the other processor nodes in operation. The results of an evaluation test using a breadboard model (BBM) implementing this technology are also reported. Because this technology enables simple and assured recovery of a faulty processor node regardless of its degree of redundancy, it can be applied to various applications, such as a launch vehicle, a satellite, or a reusable space vehicle. As a result, the cost of an on-board computer can be decreased while maintaining its high reliability.
 
Conference Paper
Differentiated services (DiffServ) is a technology proposed to provide quality of service (QoS) in the Internet, and it is superior to the integrated services (IntServ) technology with respect to the simplicity of its architecture and the scalability of networks. Although various simulation studies and estimations over testbeds have investigated the QoS offered via the DiffServ framework, almost all of them focused on the characteristics within a single DiffServ domain. However, the Internet is actually composed of a large number of autonomous system (AS) domains, and thus packets are very likely to arrive at their destinations through many different domains. From this viewpoint, we focus on the QoS performance in a model consisting of multiple DiffServ domains; in particular, we investigate the quality of assured service achieving statistical bandwidth allocation with AF-PHB (assured forwarding per-hop behavior). Our simulation results show some throughput characteristics of flows over multiple DiffServ domains, and we also clarify whether network configurations and traffic properties have an impact on QoS over multiple DiffServ domains.
 
Conference Paper
As pervasive computing technologies develop rapidly, privacy protection has become a crucial issue and needs to be dealt with very carefully. Typically, it is difficult to efficiently identify and manage the plethora of low-cost pervasive devices, such as radio frequency identification (RFID) devices, without leaking any private information. In particular, the adversary may not only eavesdrop on the communication in a passive way, but also mount an active attack that asks queries adaptively, which is obviously more dangerous. Towards settling this problem, in this paper we propose lightweight authentication protocols which are privacy-preserving against active attacks. The protocols are based on a fast asymmetric encryption with a novel simplification, which consequently assigns only easy work to the pervasive devices. Besides, unlike the usual management of identities, our approach requires neither synchronization nor exhaustive search in the database, which is a great convenience in the case of a large-scale system.
 
Conference Paper
This paper presents a diagnostic test generation method for transition faults. As the mechanism for applying two consecutive vectors, launch-on-capture testing is considered. The proposed algorithm generates test vectors for given fault pairs using a stuck-at ATPG tool so that the faults in each pair are distinguished. If a given fault pair is indistinguishable, it is identified as such. Therefore, the proposed algorithm provides complete test generation with respect to distinguishability. The conditions for distinguishing a fault pair are carefully considered and transformed into conditions for the detection of a stuck-at fault, and some additional logic is inserted into the circuit under test (CUT) for the test generation. Experimental results show that the proposed method can generate test vectors that distinguish fault pairs not distinguished by commercial tools, and can also identify all the indistinguishable fault pairs.
 
Conference Paper
This paper proposes an analysis of pedestrian attributes, such as gender and whether a pedestrian is carrying a bag, based on multi-layer classification. One technically challenging issue is that we use only top-view camera images, in order to protect the privacy of the pedestrians. Shape features over the frames are extracted by bag-of-features (BoF) using histogram of oriented gradients (HoG) vectors with optimized parameters. Then, multiple classifiers using support vector machines (SVMs) are generated by changing the parameters of the feature generation. The set of classification results from the multiple classifiers is fed to a second-stage classifier to obtain the final results. Experimental results using a 60-minute video captured at Haneda Airport, Japan, show that the accuracies of gender classification and with/without-baggage classification were 95.8% and 97.2%, respectively, with low false positive/negative rates; this is a significant improvement over our previous work, which yielded accuracies of 68.5% and 78.8%, respectively.
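A hedged sketch of the two-stage scheme: several first-stage SVMs trained on differently parameterized feature sets produce scores, and a second-stage SVM classifies the vector of those scores. The random arrays below stand in for the HoG bag-of-features vectors computed from the top-view video.

```python
# Sketch of two-stage (stacked) classification. Features and labels are
# random stand-ins for the paper's HoG bag-of-features and attributes.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_views = [rng.standard_normal((100, 20)) for _ in range(3)]  # 3 feature settings
y = rng.integers(0, 2, 100)                                   # e.g. gender label

# Stage 1: one SVM per feature-generation parameter setting
stage1 = [SVC(probability=True).fit(Xv, y) for Xv in X_views]

# Stage 2: classify the vector of stage-1 scores
meta_X = np.column_stack([c.predict_proba(Xv)[:, 1] for c, Xv in zip(stage1, X_views)])
stage2 = SVC().fit(meta_X, y)
print(stage2.predict(meta_X[:5]))
```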
 
Conference Paper
In this paper, we propose a new mask estimation method for the computational auditory scene analysis (CASA) of speech using two microphones. The proposed method is based on a hidden Markov model (HMM), reflecting the observation that mask information should be correlated over contiguous analysis frames. In other words, the HMM is used to estimate the mask information represented by the interaural time difference (ITD) and the interaural level difference (ILD) of the two channel signals, and the estimated mask information is finally employed in separating the desired speech from noisy speech. To show the effectiveness of the proposed mask estimation, we compare the performance of the proposed method with that of a Gaussian kernel-based estimation method in terms of speech recognition performance. As a result, the proposed HMM-based mask estimation method provided an average word error rate reduction of 69.14% compared with the Gaussian kernel-based mask estimation method.
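For illustration, the sketch below computes the two binaural cues the HMM operates on, per analysis frame: the ITD as the lag maximizing the cross-correlation of the two channels, and the ILD as their energy ratio in dB. The HMM smoothing across frames is omitted, and the frame length and signals are hypothetical.

```python
# Sketch: per-frame ITD/ILD cues from a two-channel frame. Illustrative;
# the HMM that smooths these cues across frames is omitted.
import numpy as np

def itd_ild(left, right, fs, max_lag=16):
    lags = np.arange(-max_lag, max_lag + 1)
    # cross-correlation of left[n] with right[n + lag]
    xc = [np.dot(left[max(0, -l):len(left) - max(0, l)],
                 right[max(0, l):len(right) - max(0, -l)]) for l in lags]
    itd = lags[int(np.argmax(xc))] / fs                          # seconds
    ild = 10 * np.log10(np.sum(left**2) / (np.sum(right**2) + 1e-12))  # dB
    return itd, ild

fs = 16000
t = np.arange(512) / fs
sig = np.sin(2 * np.pi * 440 * t)
print(itd_ild(sig, np.roll(sig, 3), fs))  # right lags by 3 samples -> ITD = +3/fs
```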
 
Conference Paper
Since current information systems are characterized by dynamic changes in their configurations and scales, together with non-stop provision of their services, system management must inevitably rely on autonomic computing. Since fault tolerance is one of the important system management issues, it should also be incorporated into an autonomic computing environment. This paper discusses what should be taken into consideration and what approaches are available to realize fault tolerance in such environments.
 
Conference Paper
In a peer-to-peer (P2P) network, replicas of original data are created and distributed over the Internet in order to improve search performance and achieve load balancing. However, the replication methods proposed so far focus only on the improvement of search performance. We examine the load on storage systems due to writing and reading, and propose two replication methods for balancing the load on the storage systems distributed over P2P networks while limiting the degradation of search performance to an acceptable level. Furthermore, we investigate the performance of our proposed replication methods through computer simulations and show their effectiveness in balancing the load.
 
Example of fractal extrapolation.  
Conference Paper
The effect of block loss due to cell loss in ATM transmission is more serious in fractal-coded images than in DCT-coded images. This is because in fractal-coded images the effect of block loss is not confined to each lost block itself but propagates to range blocks other than the lost blocks. A new algorithm is presented for recovering blocks lost in the transmission of images coded by Jacquin's fractal coding. The key technique of the proposed BLRA (block loss recovery algorithm) is a fractal extrapolation that estimates the lost pixels by using the contractive mapping parameters of the neighboring range blocks which satisfy connectivity to a lost block. The proposed BLRA is applied to the lost blocks in the iterations of decoding. Experimental results show that the proposed BLRA yields excellent performance in terms of PSNR as well as subjective quality.
 
Conference Paper
Recently, many application systems have been developed using a large number of cameras. If 3D points are observed from synchronized cameras, the multiple view geometry of these cameras can be computed and the 3D reconstruction of the scene becomes possible. Thus, the synchronization of multiple cameras is essential. In this paper, we propose a method for finding the synchronization of multiple cameras and for computing the epipolar geometry from uncalibrated and unsynchronized cameras. In particular, we use the affine invariance of frame numbers of camera images for finding the synchronization. The proposed method is tested using real image sequences taken from uncalibrated and unsynchronized cameras.
 
Conference Paper
The most obvious architectural solution for high-speed fuzzy inference is to exploit the temporal and spatial parallelism inherent in fuzzy inference execution. However, the active rules in a fuzzy inference execution are often only a small part of the total rules. In this paper, we present a new architecture which uses fewer hardware resources by discarding non-active rules in an early pipeline stage. Implementation data demonstrate that the proposed architecture achieves very good results in terms of inference speed and chip area.
 
Conference Paper
Today, high accuracy of character recognition is attainable using a neural network for problems with a relatively small number of categories. But for a large number of categories, as with Chinese characters, it is difficult to reach neural network convergence because of the "local minima problem" and the large number of calculations. Studies are being done to solve this problem by splitting the neural network into small modules. The effectiveness of combining learning vector quantization (LVQ) and back propagation (BP) has been reported: LVQ is used for rough classification and BP is used for fine recognition. However, it is difficult to obtain high accuracy for rough classification by LVQ itself. To deal with this problem, we propose hierarchical learning vector quantization (HLVQ). HLVQ divides categories in feature space hierarchically during the learning procedure, and the adjacent feature spaces overlap each other near the borders. HLVQ achieves both classification speed and accuracy owing to the hierarchical architecture and the overlapping technique. In an experiment using ETL9B, the largest database of handwritten characters in Japan (3036 categories, 607,200 samples), the effectiveness of HLVQ was verified.
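As background for HLVQ, the sketch below shows a basic LVQ1 update of the kind it builds on: the nearest prototype moves toward a correctly classified sample and away from a misclassified one. The hierarchical splitting and overlapping regions of HLVQ are omitted; values are illustrative.

```python
# Sketch of a single LVQ1 update step (the building block HLVQ extends).
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    i = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))  # nearest prototype
    sign = 1.0 if proto_labels[i] == y else -1.0                # attract or repel
    prototypes[i] += sign * lr * (x - prototypes[i])
    return prototypes

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = ["a", "b"]
print(lvq1_step(protos, labels, x=np.array([0.2, 0.1]), y="a"))
```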
 
Conference Paper
This paper proposes a very small on-chip multimedia real-time operating system (OS) for embedded system LSIs and demonstrates its usefulness on MPEG-2 multimedia applications. The real-time OS, which has a new cyclic task with 'suspend' and 'resume' operations for the interacting hardware/software of embedded system LSIs, implements a minimum set of task, interrupt, and semaphore management functions on the basis of an analysis of embedded software requirements. It requires only about 2.5 Kbytes of memory at run time, reduces redundant conventional cyclic task execution steps to about half for hardware/software interactions, and provides sufficient real-time performance, as demonstrated by implementing two typical embedded software packages for practical multimedia system LSIs. This on-chip multimedia real-time OS can be easily integrated on many embedded-system LSIs and provides an efficient embedded software design environment.
 
WIMNET topology for N = 24. 
An example of thin WIMNET topology (P = 5, K = 5)
Conference Paper
The wireless mesh network has been studied as an expandable wireless access network to the Internet. This paper focuses on a network composed only of access points (APs) that have multihop wireless connections with each other through a wireless distribution system (WDS). Because the number of APs in a single WDS cluster is limited by the transmission load of broadcast control packets, a proper partition of the APs into multiple WDS clusters is essential for scaling up the network. In this paper, we formulate this WDS clustering problem and prove the NP-completeness of its decision version through a reduction from the bin packing problem. Then, we present a two-stage heuristic algorithm for the problem and verify the effectiveness of our approach through extensive simulations.
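Since the NP-completeness proof is by reduction from bin packing, a natural first-cut heuristic is first-fit decreasing over AP loads with a per-cluster capacity, as sketched below. This is a generic illustration, not the paper's two-stage algorithm; the loads and capacity are hypothetical.

```python
# Sketch: first-fit decreasing, a classic bin-packing heuristic, applied
# to grouping APs into capacity-limited WDS clusters. Illustrative only.
def first_fit_decreasing(ap_loads, capacity):
    clusters = []
    for ap, load in sorted(ap_loads.items(), key=lambda kv: -kv[1]):
        for c in clusters:                     # place AP in the first cluster that fits
            if c["load"] + load <= capacity:
                c["aps"].append(ap)
                c["load"] += load
                break
        else:                                  # no cluster fits: open a new one
            clusters.append({"aps": [ap], "load": load})
    return clusters

print(first_fit_decreasing({"AP1": 5, "AP2": 4, "AP3": 3, "AP4": 2}, capacity=7))
```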
 
Conference Paper
We propose a software architecture for one-stop services of electronic commerce (EC). Users currently have trouble using multiple EC services because the services are provided independently. Therefore, a mediator that combines EC services and provides one-stop services to users would be useful. Service matching and service collaboration are important issues for the mediator because they are the main difficulties for users. The proposed architecture provides solutions to these issues. Multiple service assignment provides suitable combinations of EC services, flow division enables efficient execution of the combined EC services, and dynamic alternative service assignment enables flexible failure avoidance during the execution of combined services. These features make the proposed architecture a suitable mediator for EC services.
 
Conference Paper
The paper presents a formalism for the analysis of e-commerce protocols. The approach integrates logics and process calculi, providing an expressive message-passing semantics and sophisticated constructs for modeling principals. A common set of inference rules for communication, reduction, and information analysis supports proofs about message passing, the knowledge and behavior of principals, and protocol properties. The power of the formalism is illustrated with an analysis of the NetBill protocol.
 
Conference Paper
This paper presents a fast-computation deformable elastic model for real-time processing applications. A 'gradational element resolution' model is introduced, which uses fewer elements for fast computation: small elements are laid around the object surface, and large elements are laid in the center of the object. The layout of elastic elements is changed dynamically according to the deformation of objects being cut or torn. The element reconstruction procedure is applied incrementally at each step of the recursive motion generation process to avoid an increase in motion computation time.
 
CPU load time series collected from four machines
Conference Paper
The ability to accurately predict future resource capabilities is of great importance for applications and scheduling algorithms which need to determine how to use time-shared resources in a dynamic grid environment. In this paper we present and evaluate a new and innovative method to predict the one-step-ahead CPU load in a grid. Our prediction strategy forecasts the future CPU load based on the tendency over several past steps and on previous similar patterns, and uses a polynomial fitting method. Our experimental results demonstrate that this new prediction strategy achieves average prediction errors that are between 37% and 86% lower than those incurred by the previously best tendency-based method.
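A minimal sketch of tendency-based prediction with polynomial fitting: fit a low-order polynomial to the last few load samples and evaluate it one step ahead. The window size, polynomial order, and data below are hypothetical, not the paper's settings.

```python
# Sketch: one-step-ahead CPU load prediction by polynomial fitting over
# a sliding window of recent samples. Window/order are illustrative.
import numpy as np

def predict_next(loads, window=5, order=2):
    recent = np.asarray(loads[-window:], dtype=float)
    t = np.arange(len(recent))
    coeffs = np.polyfit(t, recent, order)          # least-squares polynomial fit
    return float(np.polyval(coeffs, len(recent)))  # evaluate at the next step

history = [0.31, 0.35, 0.42, 0.40, 0.46, 0.52]
print(predict_next(history))
```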
 
Article
This paper proposes a new, simple method for network measurement. It extracts the 6-bit control flags of TCP (Transmission Control Protocol) packets. The idea is based on a unique feature of the flag ratios, which was discovered through our exhaustive search for new indexes of network traffic. By using flag ratios, one can tell whether the network is really congested. This is much simpler than conventional network monitoring with a network analyzer. The well-known monitoring method is based on the utilization of a communication circuit, which ranges from 0% to 100%. However, one cannot tell whether the line is congested even when utilization is 100%: 100% means full utilization and gives no further information. To calculate the real performance of the network, one should estimate the throughput or effective speed of each user, and this estimation requires considerable computation. Our new method instead correlates the ratios of TCP control flags with network congestion. The results show the usefulness of this new method. This paper also analyzes why the flag ratios show this unique feature.
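A minimal sketch of the flag-ratio computation: count the six TCP control flags over a capture interval and normalize by the number of packets. The packet source here is a plain Python list; in practice the flags would come from a packet capture.

```python
# Sketch: per-interval TCP flag ratios. The paper correlates such ratios
# with congestion; the trace below is hypothetical.
from collections import Counter

FLAGS = ("URG", "ACK", "PSH", "RST", "SYN", "FIN")

def flag_ratios(packets):
    """`packets` is a list of sets of flag names, one set per packet."""
    counts = Counter(f for pkt in packets for f in pkt if f in FLAGS)
    n = max(len(packets), 1)
    return {f: counts[f] / n for f in FLAGS}

trace = [{"SYN"}, {"SYN", "ACK"}, {"ACK"}, {"ACK", "PSH"}, {"ACK", "FIN"}]
print(flag_ratios(trace))
```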
 
Conference Paper
This paper presents a new hierarchical scheduling method for a large-scale manufacturing system based on the hybrid Petri-net model, which consists of CPN (Continuous Petri Net) and TPN (Timed Petri Net). The study focuses on an automobile production system, a typical large-scale manufacturing system. At a high level, CPN is used to represent continuous flow in the production process of an entire system, and LP (Linear Programming) is applied to find the optimal flow. At a low level, TPN is used to represent the manufacturing environment of each sub-production line in a decentralized manner, and the MCT algorithm is applied to find feasible semi-optimal process sequences for each sub-production line. Our proposed scheduling method can schedule macroscopically the flow of an entire system while considering microscopically any physical constraints that arise on an actual shop floor.
 
Conference Paper
We present a method coupling multiple switching linear models. The coupled switching linear model is an interactive process of two switching linear models. Coupling is given through causal influence between their hidden discrete states. The parameters of this model are learned via the EM algorithm. Tracking is performed through the coupled-forward algorithm based on Kalman filtering and a collapsing method. A model with maximum likelihood is selected out of a few learned models during tracking. We demonstrate the application of the proposed model to tracking and recognizing two-hand gestures.
 
Conference Paper
This paper describes subband cross-correlation (SBXCOR) analysis using two-channel signals. SBXCOR analysis extends subband autocorrelation (SBCOR) analysis, a signal processing technique that extracts periodicities present in speech signals. In this paper, the performance of SBXCOR is investigated using a DTW word recognizer under acoustic conditions simulated on a computer and under a real environmental condition. Under the simulated condition, it is assumed that the speech signals in the two channels are perfectly synchronized while the noises are uncorrelated; consequently, the effective signal-to-noise ratio of the signal generated by simply summing the two signals is raised by about 3 dB. In such a case, it is shown that SBXCOR is less robust than SBCOR extracted from the two-channel-summed signal, but more robust than conventional one-channel SBCOR. The resultant performance was much better than that of the smoothed group delay spectrum and mel-frequency cepstral coefficients. In a real computer room, it is shown that SBXCOR is more robust than the two-channel-summed SBCOR.
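A simplified sketch of the SBXCOR idea: band-pass both channels into one subband and take their normalized cross-correlation at lag zero. The actual analysis uses a bank of subbands and a periodicity feature per subband; the filter design and signals here are assumptions.

```python
# Sketch: normalized cross-correlation of two channels within one subband.
# A full SBXCOR front end would repeat this over a filter bank.
import numpy as np
from scipy.signal import butter, filtfilt

def subband_xcorr(ch1, ch2, fs, band):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    s1, s2 = filtfilt(b, a, ch1), filtfilt(b, a, ch2)
    return np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2) + 1e-12)

fs = 16000
t = np.arange(1024) / fs
clean = np.sin(2 * np.pi * 300 * t)
rng = np.random.default_rng(0)
# same speech component, independent noise in each channel
print(subband_xcorr(clean + 0.5 * rng.standard_normal(1024),
                    clean + 0.5 * rng.standard_normal(1024), fs, (200, 400)))
```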
 
Conference Paper
We designed an FPGA-based parallel machine called "RASH" for high-speed, flexible signal/data processing. Cryptanalysis is one of the most computation-intensive applications, because huge numbers of logical and/or arithmetic operations are required, and FPGAs are very suitable for this task. One well-known exercise in cryptanalysis is the "DES challenge" conducted by RSA Data Security: the objective is to find the secret key (56-bit) from a pair of plaintext and ciphertext. Time-Memory Trade-Off (TMTO) cryptanalysis is a practical method to shorten the time for key search when the plaintext is given in advance. We demonstrate how well TMTO cryptanalysis is suited to RASH. Using TMTO cryptanalysis, the key will be found with 80% probability within 1 hour after the ciphertext is given to 58 units with the appropriate amount of content-addressable memory. The precomputation before starting the key search takes 27 days on the same RASH configuration.
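A toy sketch of the TMTO precomputation phase: build chains of key → ciphertext → reduced key and store only (endpoint, start) pairs, trading memory for online search time. The cipher is a stand-in hash rather than DES, the sizes are tiny, and the online lookup phase is omitted.

```python
# Toy Hellman-style TMTO precomputation. The "cipher" is a stand-in hash
# keyed by a 20-bit integer, not DES; all sizes are illustrative.
import hashlib

PLAINTEXT = b"known-plaintext"

def encrypt(key_int):
    """Stand-in for encrypting the known plaintext under a small key."""
    h = hashlib.sha256(key_int.to_bytes(4, "big") + PLAINTEXT).digest()
    return int.from_bytes(h[:4], "big")

def reduce_(ct, i):
    """Reduction function: map a ciphertext back into the 20-bit key space."""
    return (ct ^ i) & 0xFFFFF

def build_table(n_chains=1000, chain_len=100):
    table = {}
    for start in range(n_chains):
        k = start
        for i in range(chain_len):
            k = reduce_(encrypt(k), i)
        table[k] = start          # store only (endpoint -> start) per chain
    return table

print(len(build_table()))
```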
 
Conference Paper
Validation methods for hard real-time jobs are usually based on the maximum execution time. The actual execution times of jobs are assumed to be known either only when the jobs arrive or not until they finish. A predictable algorithm must guarantee that, for any set of jobs, the finish time under the actual execution times is no later than the finish time under the maximum execution times. It is known that any job-level fixed priority algorithm (such as earliest deadline first) is predictable. However, job-level dynamic priority algorithms (such as least laxity first) may or may not be predictable. In this paper, we investigate the predictability of the job-level dynamic priority algorithm EDZL (earliest deadline zero laxity). We show that EDZL is predictable on the domain of integers regardless of knowledge of the actual execution times. Based on this result, we furthermore show that EDZL can successfully schedule any periodic task set if the total utilization is not greater than (m + 1)/2, where m is the number of processors.
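A minimal sketch of the EDZL priority rule itself: a job whose laxity (deadline minus current time minus remaining work) has reached zero gets top priority; otherwise the earliest deadline wins. The job data and m-processor selection below are illustrative.

```python
# Sketch of the EDZL priority rule: zero-laxity jobs first, then EDF.
def edzl_pick(jobs, t, m):
    """Pick up to m jobs to run at time t. `jobs` maps name -> (deadline, remaining)."""
    def key(name):
        deadline, remaining = jobs[name]
        laxity = deadline - t - remaining
        return (0 if laxity <= 0 else 1, deadline)  # zero laxity outranks EDF order
    return sorted(jobs, key=key)[:m]

jobs = {"J1": (10, 4), "J2": (8, 8), "J3": (12, 2)}
print(edzl_pick(jobs, t=0, m=2))  # J2 has zero laxity, so it is scheduled first
```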
 
Conference Paper
A routing strategy for suspensive deadlock recovery, called escape-restoration routing, is proposed and its performance is evaluated. In the proposed technique, a small exclusive buffer (escape buffer) is prepared at each router for holding one of the deadlocked packets. The transmission of the packet is suspended by temporarily escaping it to the escape buffer. After the other deadlocked packets have been sent, the suspended transmission resumes by restoring the escaped packet. Evaluation results show that the proposed technique improves routing performance over previous recovery-based techniques for handling deadlocks.
 
Conference Paper
In general pattern recognition, a pattern to be recognized is first represented by a set of features, and the measured values of the features are then classified. Finding features relevant to recognition is thus an important issue in recognizer design. As a fundamental design framework that systematically enables one to realize such useful features, the Subspace Method (SM) has been extensively used in various recognition tasks. However, this promising methodological framework is still inadequate. The discriminative power of early versions was not very high, and the training behavior of a recent discriminative version, called the Learning Subspace Method, has not been fully clarified due to its empirical definition, though its discriminative power has improved. To alleviate this insufficiency, we propose in this paper a new discriminative SM algorithm based on the Minimum Classification Error/Generalized Probabilistic Descent method and show that the proposed algorithm achieves an optimal recognition result, i.e., the (at least locally) minimum recognition error situation, in the probabilistic descent sense.
 
Conference Paper
In this paper, we propose a driver identification method based on the driving behavior signals observed while the driver is following another vehicle. Driving behavior signals, such as the use of the accelerator pedal, use of the brake pedal, vehicle velocity, and distance from the vehicle in front, are measured using a driving simulator. We compared the identification rates obtained using different identification models and different features. As a result, we found that nonparametric models are better than parametric models. Also, the driver's operation signals were found to be better than road environment signals and car behavior signals. The identification rate for thirty drivers in actual vehicle driving in a city area was 73%.
 
Conference Paper
Analyzing and modeling traffic play a vital role in designing and controlling networks effectively. To construct a practical traffic model that can be used for various networks, it is necessary to characterize both aggregated traffic and user traffic. This paper investigates these characteristics and their relationship. Our analyses are based on a huge number of packet traces from five different networks on the Internet. We found that: (1) the marginal distributions of aggregated traffic fluctuations follow positively skewed (non-Gaussian) distributions, which leads to the existence of "spikes", where a spike corresponds to an extremely large value of momentary throughput; (2) the amount of user traffic in a unit of time has a wide range of variability; and (3) flows within spikes are more likely to be "elephant flows", where an elephant flow is an IP flow with a high volume of traffic. These findings are useful in constructing a practical and realistic Internet traffic model.
 
Conference Paper
This paper presents the Emergent Behavior-Based Architecture (EBBA), an architecture for heterogeneous sensor information fusion at the level of behavior modules. We have designed and implemented a mobile robot navigation system based on EBBA. Through experiments, we have confirmed that the system based on EBBA satisfies the functions required for human-coexistent robots. We are currently developing a dual-arm robot system and a stereo vision system using EBBA, and we plan to integrate them into one service robot in the near future.
 
Conference Paper
This paper presents a new road extraction method for an autonomous vehicle which can acquire a road area by using the height information of objects. Since a road area can be assumed to be a sequence of flat planes in front of a vehicle, the road's height information is very effective for extracting the road area. For this purpose, the authors propose a new approach named the planar projection stereopsis (PPS) method, which can easily decide whether each point in stereo images lies on the road plane or not. First, PPS calculates a planar equation representing the road area by using the height and pose of a camera on the vehicle. Next, the stereo images are projected onto the plane: corresponding points are projected to the same position if they really lie on the road plane, while corresponding points with heights different from the road plane are projected to different positions in each stereo image. A planar projection description is obtained by subtraction between the images projected from a set of stereo images, and the road area can be represented by the set of points with small values. Experimental results on real road scenes have shown the effectiveness of the proposed method.
 
Conference Paper
This paper presents an analysis of the applicability of Sparse Kernel Principal Component Analysis (SKPCA) for feature extraction in speech recognition, as well as a proposed approach to make the SKPCA technique realizable for a large amount of training data, which is the usual context in speech recognition systems. Although KPCA (Kernel Principal Component Analysis) has proved to be an efficient technique for speech recognition, it has the disadvantage of requiring training data reduction when the amount of data is excessively large. This data reduction is important to avoid computational infeasibility and/or an extremely high computational burden in the feature representation step of the training and test data evaluations. The standard approach to this data reduction is to randomly choose frames from the original data set, which does not necessarily provide a good statistical representation of the original data set. In order to solve this problem, a likelihood-related re-estimation procedure was applied to the KPCA framework, creating SKPCA, which nevertheless is not realizable for large training databases. The proposed approach consists in clustering the training data and applying to these clusters an SKPCA-like data reduction technique, generating reduced data clusters. These reduced data clusters are merged and reduced in a recursive procedure until just one cluster is obtained, making the SKPCA approach realizable for a large amount of training data. The experimental results show the efficiency of the SKPCA technique with the proposed approach over KPCA with the standard sparse solution using randomly chosen frames, and over standard feature extraction techniques.
 
Conference Paper
Motivated by the design of fault-tolerant multiprocessor interconnection networks, the paper considers the following problem: given a positive integer t and a graph H, construct a graph G from H by adding a minimum number Δ(t,H) of edges such that even after deleting any t edges from G the remaining graph contains H as a subgraph. We estimate Δ(t,H) for the hypercube and torus, which are well known as important interconnection networks for multiprocessor systems. Denoting the hypercube and the square torus on N vertices by Q<sub>N</sub> and D<sub>N</sub>, respectively, we show, among other results, that Δ(t,Q<sub>N</sub>) = O(tN log(log N/t + log 2e)) for any t and N (t ≥ 2), and Δ(1,D<sub>N</sub>) = N/2 if N is even.
 
Top-cited authors
Kenji Mase
  • Nagoya University
Hiroki Arimura
  • Hokkaido University
Setsuo Arikawa
  • Kyushu University
Tatsuya Asai
  • Fujitsu Ltd.
Yasushi Yagi
  • Osaka University