IEEE Access

Online ISSN: 2169-3536
Article
In organized healthcare quality improvement collaboratives (QICs), teams of practitioners from different hospitals exchange information on clinical practices with the aim of improving health outcomes at their own institutions. However, what works in one hospital may not work in others with different local contexts because of nonlinear interactions among various demographics, treatments, and practices. In previous studies of collaborations where the goal is collective problem solving, teams of diverse individuals have been shown to outperform teams of similar individuals. However, when the purpose of collaboration is knowledge diffusion in complex environments, it is not clear whether team diversity will help or hinder effective learning. In this paper, we first use an agent-based model of QICs to show that teams comprising similar individuals outperform those with more diverse individuals under nearly all conditions, and that this advantage increases with the complexity of the landscape and the level of noise in assessing performance. Examination of data from a network of real hospitals provides encouraging evidence of a high degree of similarity in clinical practices, especially within teams of hospitals engaged in QICs. However, our model also suggests that groups of similar hospitals could benefit from larger teams and more open sharing of details on clinical outcomes than is currently the norm. To facilitate this, we propose a secure virtual collaboration system that would allow hospitals to efficiently identify potentially better practices in use at other institutions similar to theirs without any institution having to sacrifice the privacy of its own data. Our results may also have implications for other types of data-driven diffusive learning, such as in personalized medicine and evolutionary search in noisy, complex combinatorial optimization problems.
 
MSE performance of the relay and compress scheme under "perturbations" for 1) Bob having ideal measurements and 2) Eve having perturbed measurements with a) phase errors and, worse, b) rank-one distortion
Article
As it becomes increasingly apparent that 4G will not be able to meet the emerging demands of future mobile communication systems, the questions of what could make up a 5G system, what the crucial challenges are, and what the key drivers are remain the subject of intensive, ongoing discussion. Partly due to the advent of compressive sensing, methods that can optimally exploit sparsity in signals have received tremendous attention in recent years. In this paper we describe a variety of scenarios in which signal sparsity arises naturally in 5G wireless systems. Signal sparsity and the associated rich collection of tools and algorithms will thus be a viable source of innovation in 5G wireless system design. We describe applications of this sparse signal processing paradigm in MIMO random access, cloud radio access networks, compressive channel-source network coding, and embedded security. We also emphasize important open problems that may arise in 5G system design, for which sparsity will potentially play a key role in the solution.
 
Article
This paper considers a downlink cloud radio access network (C-RAN) in which all the base-stations (BSs) are connected to a central computing cloud via digital backhaul links with finite capacities. Each user is associated with a user-centric cluster of BSs; the central processor shares the user's data with the BSs in the cluster, which then cooperatively serve the user through joint beamforming. Under this setup, this paper investigates the user scheduling, BS clustering and beamforming design problem from a network utility maximization perspective. Differing from previous works, this paper explicitly considers the per-BS backhaul capacity constraints. We formulate the network utility maximization problem for the downlink C-RAN under two different models depending on whether the BS clustering for each user is dynamic or static over different user scheduling time slots. In the former case, the user-centric BS cluster is dynamically optimized for each scheduled user along with the beamforming vector in each time-frequency slot, while in the latter case the user-centric BS cluster is fixed for each user and we jointly optimize the user scheduling and the beamforming vector to account for the backhaul constraints. In both cases, the nonconvex per-BS backhaul constraints are approximated using the reweighted l1-norm technique. This approximation allows us to reformulate the per-BS backhaul constraints into weighted per-BS power constraints and solve the weighted sum rate maximization problem through a generalized weighted minimum mean square error approach. This paper shows that the proposed dynamic clustering algorithm can achieve significant performance gain over existing naive clustering schemes. This paper also proposes two heuristic static clustering schemes that can already achieve a substantial portion of the gain.
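To make the backhaul approximation concrete, the sketch below writes the per-BS backhaul constraint and a reweighted $\ell_1$-style surrogate in generic notation (ours, not necessarily the paper's exact formulation): $\mathbf{w}_{b,k}$ is BS $b$'s beamforming vector for user $k$, $R_k$ the user rate, and $C_b$ the backhaul capacity.

$$
\sum_{k \in \mathcal{K}} \mathbb{1}\!\left\{\|\mathbf{w}_{b,k}\|_2^2 > 0\right\} R_k \le C_b
\;\;\approx\;\;
\sum_{k \in \mathcal{K}} \beta_{b,k}\, \|\mathbf{w}_{b,k}\|_2^2\, R_k \le C_b,
\qquad
\beta_{b,k} = \frac{1}{\|\mathbf{w}_{b,k}^{(\mathrm{prev})}\|_2^2 + \tau}.
$$

With the weights $\beta_{b,k}$ fixed from the previous iterate, each backhaul constraint becomes a weighted per-BS power constraint, which is what allows the weighted sum rate problem to be handled through a generalized WMMSE step; the weights are then refreshed and the procedure repeats.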
 
The throughput performance comparison for general networks (the active probabilities of all the users are λ = 0.8).
Article
This letter investigates the problem of database-assisted spectrum access in dynamic TV white spectrum networks, in which the active user set is varying. Since there is no central controller and no information exchange, the problem involves dynamic and incomplete information constraints. To address this challenge, we formulate a state-based spectrum access game and a robust spectrum access game. It is proved that the two games are ordinal potential games with the (expected) aggregate weighted interference serving as the potential functions. A distributed learning algorithm is proposed to achieve the pure strategy Nash equilibrium (NE) of the games. It is shown that the best NE is almost the same as the optimal solution and the achievable throughput of the proposed learning algorithm is very close to the optimal one, which validates the effectiveness of the proposed game-theoretic solution.
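For readers unfamiliar with the terminology, an ordinal potential game is one in which the sign of any unilateral utility change is mirrored by a single global potential (here, an interference measure to be driven down). One illustrative form of an aggregate weighted interference potential, written in our own notation with $a_n$ the channel selected by user $n$, $\lambda_n$ its active probability, and $w_{nm}$ an interference weight (all symbols are ours, not the letter's), is:

$$
u_n(a_n', a_{-n}) > u_n(a_n, a_{-n})
\;\Longleftrightarrow\;
\Phi(a_n', a_{-n}) < \Phi(a_n, a_{-n}),
\qquad
\Phi(a) = \sum_{n}\sum_{m \ne n} \lambda_n \lambda_m\, w_{nm}\, \mathbb{1}\{a_m = a_n\}.
$$

Under such a structure, learning dynamics that reduce a user's experienced interference also descend the potential, which is why the distributed algorithm converges to a pure strategy NE.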
 
Article
One of the main features of adaptive systems is an oscillatory convergence that worsens with the speed of adaptation. Recently it has been shown that Closed-loop Reference Models (CRMs) can result in improved transient performance over their open-loop counterparts in model reference adaptive control. In this paper, we quantify both the transient performance of classical adaptive systems and its improvement with CRMs. In addition to deriving bounds on the L2 norms of the derivatives of the adaptive parameters, which are shown to be smaller with CRMs, an optimal design of CRMs is proposed which minimizes an underlying peaking phenomenon. The analytical tools proposed are shown to be applicable to a range of adaptive control problems including direct control and composite control with observer feedback. The use of CRMs in adaptive backstepping and adaptive robot control is also discussed. Simulation results are presented throughout the paper to support the theoretical derivations.
 
Article
This article is about how the "SP theory of intelligence" and its realisation in the "SP machine" (both outlined in the article) may help to solve computer-related problems in the design of autonomous robots, meaning robots that do not depend on external intelligence or power supplies, are mobile, and are designed to exhibit as much human-like intelligence as possible. The article is about: how to increase the computational and energy efficiency of computers and reduce their bulk; how to achieve human-like versatility in intelligence; and likewise for human-like adaptability in intelligence. The SP system has potential for substantial gains in computational and energy efficiency and reductions in the bulkiness of computers: by reducing the size of data to be processed; by exploiting statistical information that the system gathers; and via an updated version of Donald Hebb's concept of a "cell assembly". Towards human-like versatility in intelligence, the SP system has strengths in unsupervised learning, natural language processing, pattern recognition, information retrieval, several kinds of reasoning, planning, problem solving, and more, with seamless integration amongst structures and functions. The SP system's strengths in unsupervised learning and other aspects of intelligence may help to achieve human-like adaptability in intelligence via: the learning of natural language; learning to see; building 3D models of objects and of a robot's surroundings; learning regularities in the workings of a robot and in the robot's environment; exploration and play; learning major skills; and secondary forms of learning. Also discussed are: how the SP system may process parallel streams of information; generalisation of knowledge, correction of over-generalisations, and learning from dirty data; how to cut the cost of learning; and reinforcements, motivations, goals, and demonstration.
 
Article
The performance of a cellular system depends significantly on its network topology, where the spatial deployment of base stations (BSs) plays a key role in the downlink scenario. Moreover, cellular networks are undergoing a heterogeneous evolution, which introduces unplanned deployment of smaller BSs, thus complicating the performance evaluation even further. In this paper, based on a large amount of real BS location data, we present a comprehensive analysis of the spatial modeling of cellular network structure. Unlike related works, we divide the BSs into different subsets according to geographical factors (e.g. urban or rural) and functional type (e.g. macrocells or microcells), and perform a detailed spatial analysis of each subset. After examining the accuracy of the Poisson point process (PPP) in modeling BS locations, we consider Gibbs point processes as well as Neyman-Scott point processes and compare their accuracy in a large-scale modeling test. Finally, we demonstrate the inaccuracy of the PPP model, and reveal the general clustering nature of BS deployment, which distinctly violates the traditional assumption. This paper carries out the first large-scale identification of this kind in the available literature, and provides more realistic and more general results to contribute to the performance analysis of forthcoming heterogeneous cellular networks.
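As a point of reference for the kind of point-process models being compared, the following minimal numpy sketch (illustrative only, not the paper's fitting code) simulates a homogeneous PPP and a Neyman-Scott (Thomas-type) clustered process over a rectangular region; the clustered process is the sort of model that can capture the clustering behavior reported for real BS locations.

```python
import numpy as np

rng = np.random.default_rng(0)

def homogeneous_ppp(intensity, width, height):
    """Homogeneous Poisson point process on [0, width] x [0, height]:
    Poisson number of points, placed uniformly at random."""
    n = rng.poisson(intensity * width * height)
    return rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

def neyman_scott(parent_intensity, mean_children, sigma, width, height):
    """Neyman-Scott (Thomas-type) cluster process: Poisson parents, each with a
    Poisson number of children scattered with Gaussian spread around it."""
    parents = homogeneous_ppp(parent_intensity, width, height)
    clusters = [p + sigma * rng.standard_normal((rng.poisson(mean_children), 2))
                for p in parents]
    return np.vstack(clusters) if clusters else np.empty((0, 2))

bs_random = homogeneous_ppp(intensity=2.0, width=10.0, height=10.0)   # "uniform" deployment
bs_clustered = neyman_scott(0.5, 4.0, 0.3, 10.0, 10.0)                # clustered deployment
print(len(bs_random), len(bs_clustered))
```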
 
Bayesian Residual Transform framework. (a) forward BRT. (b) inverse BRT. 
Example of multi-scale signal decomposition using the BRT. (a) Baseline periodic test signal. (b) Noisy input signal with zero-mean Gaussian noise. (c)-(h) Signal decompositions using the BRT at different scales. It can be observed that the noise process contaminating the test signal is well characterized in the decompositions at the lower (finer) scales (scales 1 to 3), while the structural characteristics of the test signal are well characterized in the decompositions at the higher (coarser) scales (scales 4 to 6).
Example of multi-scale signal decomposition using the BRT. (a) Baseline piece-wise regular test signal. (b) Noisy input signal with zero-mean Gaussian noise. (c)-(h) Signal decompositions using the BRT at different scales. It can be observed that, as with the periodic signal example, the noise process contaminating the signal is well characterized in the decompositions at the lower (finer) scales (scales 1 to 2), while the structural characteristics of the test signal are well characterized in the decompositions at the higher (coarser) scales (scales 3 to 6). Furthermore, more noticeably here than in the periodic example, it can be seen that the decomposition at each scale exhibits good signal structural localization.
Application of the BRT on ECG signals. (a) A plot of the mean SNR improvement vs. the different input SNRs ranging from 12 dB to 2.5 dB for the MIT-BIH Normal Sinus Rhythm Database for the tested methods. The noise-suppression method using the BRT provided strong SNR improvements across all SNRs, with performance comparable to SURE and higher than the other three tested methods. (b) Noisy input signal with SNR = 12 dB, and (c) noise-suppressed result using the BRT for (b). (d) Another noisy input signal with SNR = 12 dB, and (e) noise-suppressed result using the BRT for (d). The results produced using the BRT have significantly reduced noise artifacts while the signal characteristics are preserved.
Article
Multi-scale decomposition has been an invaluable tool for the processing of physiological signals. Much of the focus in multi-scale decomposition for processing such signals has been based on scale-space theory and wavelet transforms. By contrast, Bayesian-based multi-scale decomposition for processing physiological signals is less explored and ripe for investigation. In this study, we investigate the feasibility of utilizing a Bayesian-based method for multi-scale signal decomposition called the Bayesian Residual Transform (BRT) for the purpose of physiological signal processing. In the BRT, a signal is modeled as the summation of stochastic residual processes, each characterizing information from the signal at a different scale. A deep cascading framework is provided as a realization of the BRT. Signal-to-noise ratio (SNR) analysis using electrocardiography (ECG) signals was used to illustrate the feasibility of using the BRT for suppressing noise in physiological signals. Results in this study show that it is feasible to utilize the BRT for processing physiological signals for tasks such as noise suppression.
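In the notation we adopt here (which may differ from the paper's), the core BRT signal model and its use for noise suppression can be summarized as:

$$
x[n] = \sum_{k=1}^{K} r_k[n],
\qquad
\hat{x}[n] = \sum_{k = k_0 + 1}^{K} r_k[n],
$$

where each residual process $r_k$ captures signal information at scale $k$. Since the noise is concentrated in the finer scales $k \le k_0$ (as in the decompositions shown above), discarding them and summing only the coarser residuals yields the noise-suppressed estimate $\hat{x}$.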
 
BER versus CR for SIMO-OFDM communication systems (P = 77, E_b/N_0 = 27 dB).
BER versus E_b/N_0 for the multi-user clipping recovery scheme (CR = 1.61, P_u = 75).
MSE (dB) versus E_b/N_0 for data-aided CIR estimation (CR = 1.73, Q = R = 16).
BER performance of the proposed scheme as a function of the channel estimation error (CR = 1.62, E_b/N_0 = 20 dB).
Geometrical representation of the adopted reliability criteria.
Article
Clipping is one of the simplest peak-to-average power ratio (PAPR) reduction schemes for orthogonal frequency division multiplexing (OFDM). Deliberately clipping the transmission signal degrades system performance, and clipping mitigation is required at the receiver for information restoration. In this work, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate and noise variance for enhanced recovery. At the same time, the proposed scheme is robust against inaccurate estimates of the clipping signal statistics. The undistorted phase property of the clipped signal, as well as the clipping likelihood, is utilized for enhanced reconstruction. Further, motivated by the nature of modern OFDM-based communication systems, we extend our clipping reconstruction approach to multiple antenna receivers and multi-user OFDM. We also address the problem of channel estimation from pilots contaminated by the clipping distortion. Numerical findings are presented that depict favourable results for the proposed scheme compared with established sparse reconstruction schemes.
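The two properties the reconstruction relies on, sparsity of the clipping distortion and its phase being fixed by the unclipped signal, can be seen in a few lines of numpy (a toy sketch of amplitude clipping, not the proposed estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy OFDM-like time-domain signal: IFFT of random QPSK symbols.
N = 256
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)          # unit average power

# Amplitude clipping at a threshold set by the clipping ratio CR.
CR = 1.6
thr = CR * np.sqrt(np.mean(np.abs(x) ** 2))
x_clip = np.where(np.abs(x) > thr, thr * x / np.abs(x), x)

# The clipping distortion c is sparse (non-zero only where |x| exceeded thr)
# and anti-phase with x, i.e., its phase introduces no new unknowns.
c = x_clip - x
support = np.abs(c) > 0
phase_gap = np.angle(-c[support] * np.conj(x[support]))
print("sparsity rate:", support.mean())
print("max phase mismatch (rad):", np.abs(phase_gap).max())
```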
 
Article
The paper has two parts. The first deals with how to use large random matrices as building blocks to model the massive data arising from a massive (or large-scale) MIMO system; we apply this model to distributed spectrum sensing and network monitoring. This part boils down to streaming, distributed massive data, for which a new algorithm is obtained and its performance is derived using a central limit theorem recently obtained in the literature. The second part deals with a large-scale testbed using software-defined radios (particularly USRP); it took us more than four years to develop this 70-node network testbed. To demonstrate the power of the software-defined radio, we quickly reconfigure our testbed into a testbed for massive MIMO. The massive data from this testbed is of central interest in this paper. This is the first time we model the experimental data arising from this testbed; to the best of our knowledge, there is no other similar work.
 
Article
This paper presents a novel method for directly incorporating user-defined control input saturations into the calculation of a control Lyapunov function (CLF)-based walking controller for a biped robot. Previous work by the authors has demonstrated the effectiveness of CLF controllers for stabilizing periodic gaits for biped walkers, and the current work expands on those results by providing a more effective means for handling control saturations. The new approach, based on a convex optimization routine running at a 1 kHz control update rate, is useful not only for handling torque saturations but also for incorporating a whole family of user-defined constraints into the online computation of a CLF controller. The paper concludes with an experimental implementation of the main results on the bipedal robot MABEL.
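For orientation, one standard way such a saturation-aware CLF controller is posed as a convex program is sketched below (our generic notation; the paper's exact formulation, e.g., whether the CLF decrease condition is relaxed with a slack variable, may differ):

$$
u^{*}(x) \;=\; \underset{u \in \mathbb{R}^{m}}{\arg\min}\; u^{\top} u
\quad \text{s.t.} \quad
L_{f}V(x) + L_{g}V(x)\,u \;\le\; -\gamma\,V(x),
\qquad
u_{\min} \le u \le u_{\max}.
$$

Because the CLF decrease condition and the saturation limits are both affine in $u$, the problem is a small quadratic program, which is what makes a 1 kHz update rate and additional user-defined affine constraints feasible.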
 
The schematic model of an anti-degradable but conjugate degradable quantum channel. The channel output B can be used to express E′, and from the environment state E the conjugated environment state E′ can be expressed by a complex conjugation on E. There exists no degradation map
The isometries of a PD channel.
Figure S.1. The code structure of the polar code for quantum communication over a degradable PD channel. The set S_{degr,PD} will be used for quantum communication. For the frozen bits, only the set
Figure S.4. The information of the logical channel A→B for a degradable PD channel.
Article
The quantum capacity of degradable quantum channels has been proven to be additive. On the other hand, there is no general rule for the behavior of quantum capacity for non-degradable quantum channels. We introduce the set of partially degradable (PD) quantum channels to answer the question of additivity of quantum capacity for a well-separable subset of non-degradable channels. A quantum channel is partially degradable if the channel output can be used to simulate the degraded environment state. PD channels can exist in the degradable, non-degradable, and conjugate degradable families. We define the term partial simulation, a clear benefit that arises from the structure of the complementary channel of a PD channel. We prove that the quantum capacity of an arbitrary-dimensional PD channel is additive. We also demonstrate that better quantum data rates can be achieved over a PD channel in comparison to standard (non-PD) channels. Our results indicate that the partial degradability property can be exploited and holds many benefits for quantum communications.
 
Article
Harvesting energy from the ambient environment is a promising new solution to free electronic devices from electric wires or limited-lifetime batteries, which may find significant applications in sensor networks and body-area networks. This paper mainly investigates the fundamental limits of information transmission in a wireless communication system with RF-based energy harvesting, in which a master node acts not only as an information source but also as an energy source for a child node, while only information is transmitted back from the child to the master node. Three typical structures are considered: the optimum receiver, the orthogonal receiver, and the power-splitting receiver, where two-way information transmission between the two nodes under a unique external power supply constraint at the master node is investigated jointly from a system-level viewpoint. We explicitly characterize the achievable capacity-rate region and also discuss the effect of signal processing power consumption at the child node. The optimal transmission strategy corresponding to the most energy-efficient operating point, namely the point on the boundary of the achievable capacity-rate region, is derived with the help of a conditional capacity function. Simulation confirms the substantial gains of employing the optimal transmission strategy and the optimum receiver structure. In addition, a typical application of minimizing the required transmit power to green the system is presented.
 
Article
Due to the rapid growth of broadband Internet access and the proliferation of modern mobile devices, various types of multimedia (e.g. text, images, audio, and video) have become ubiquitously available at any time. Mobile device users usually store and use multimedia content based on their personal interests and preferences. However, mobile device constraints such as limited storage have introduced the problem of mobile multimedia overload for users. In order to tackle this problem, researchers have developed various techniques that recommend multimedia for mobile users. In this survey paper, we examine the importance of mobile multimedia recommendation systems from the perspective of three smart communities, namely, mobile social learning, mobile event guide, and context-aware services. A careful analysis of existing research reveals that the implementation of proactive, sensor-based, and hybrid recommender systems can improve mobile multimedia recommendations. Nevertheless, there are still challenges and open issues, such as the incorporation of context and social properties, which need to be tackled in order to generate accurate and trustworthy mobile multimedia recommendations.
 
Article
There has been significant interest in the use of fully-connected graphical models and deep-structured graphical models for the purpose of structured inference. However, fully-connected and deep-structured graphical models have been largely explored independently, leaving the unification of these two concepts ripe for exploration. A fundamental challenge with unifying these two types of models is in dealing with computational complexity. In this study, we investigate the feasibility of unifying fully-connected and deep-structured models in a computationally tractable manner for the purpose of structured inference. To accomplish this, we introduce a deep-structured fully-connected random field (DFRF) model that integrates a series of intermediate sparse auto-encoding layers placed between state layers to significantly reduce computational complexity. The problem of image segmentation was used to illustrate the feasibility of using the DFRF for structured inference in a computationally tractable manner. Results in this study show that it is feasible to unify fully-connected and deep-structured models in a computationally tractable manner for solving structured inference problems such as image segmentation.
 
A pictorial description of an iteration of the graph partitioning. 
Article
This paper presents a novel meta-algorithm, Partition-Merge (PM), which takes existing centralized algorithms for graph computation and makes them distributed and faster. In a nutshell, PM divides the graph into small subgraphs using our novel randomized partitioning scheme, runs the centralized algorithm on each partition separately, and then stitches the resulting solutions together to produce a global solution. We demonstrate the efficiency of the PM algorithm on two popular problems: computation of the Maximum A Posteriori (MAP) assignment in an arbitrary pairwise Markov Random Field (MRF), and modularity optimization for community detection. We show that the resulting distributed algorithms for these problems essentially run in time linear in the number of nodes in the graph, and perform as well as -- or even better than -- the original centralized algorithm as long as the graph has geometric structure. Here we say a graph has geometric structure, or the polynomial growth property, when the number of nodes within distance r of any given node grows no faster than a polynomial function of r. More precisely, if the centralized algorithm is a C-factor approximation with constant C ≥ 1, the resulting distributed algorithm is a (C+δ)-factor approximation for any small δ > 0; but if the centralized algorithm is a non-constant (e.g. logarithmic) factor approximation, then the resulting distributed algorithm becomes a constant factor approximation. For general graphs, we compute explicit bounds on the loss of performance of the resulting distributed algorithm with respect to the centralized algorithm.
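The meta-algorithm structure is easy to convey in a few lines. The skeleton below is our own illustrative sketch (assuming a networkx-style graph object and a user-supplied `centralized_solver` that returns a node-to-label dict), not the authors' implementation, and it uses simple random balls as a stand-in for the paper's randomized partitioning scheme.

```python
import random

def ball(graph, center, radius):
    """Nodes within graph distance `radius` of `center` (BFS expansion)."""
    frontier, seen = {center}, {center}
    for _ in range(radius):
        frontier = {v for u in frontier for v in graph.neighbors(u)} - seen
        seen |= frontier
    return seen

def partition_merge(graph, centralized_solver, radius=2, seed=0):
    """Partition-Merge skeleton: carve the graph into small local subgraphs,
    run the centralized algorithm on each piece, and stitch the answers."""
    rng = random.Random(seed)
    unassigned = set(graph.nodes)
    solution = {}
    while unassigned:
        # Random center; sorted() only to make the choice reproducible.
        center = rng.choice(sorted(unassigned))
        block = ball(graph, center, radius) & unassigned   # local partition
        local = centralized_solver(graph.subgraph(block))  # solve locally
        solution.update(local)                             # merge the labels
        unassigned -= block
    return solution
```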
 
Comparison of the average user throughput of an LTE system with unit power allocation for pilot and data symbols, and with power-efficient power allocation. The SNR is set to 20 dB. The lower part indicates the power savings of the power-efficient power allocation versus unit power allocation.
(a) The three performance evaluation positions in the receiver signal processing chain. (b) MSE performance of the carrier frequency synchronization scheme in [63]. (c) The predicted and simulated coded throughput loss resulting from the residual estimation error in (b).
Comparison of the area spectral efficiency obtained in a cellular network without (left-hand side) and with (right-hand side) distributed antennas versus the number of users per cell. The performance with different transceivers and CSI feedback algorithms is compared. The total number of transmit antennas per cell equals eight. The users are equipped with four receive antennas.
Article
Cellular networks are an essential part of today's communication infrastructure. The ever-increasing demand for higher data rates calls for a close cooperation between researchers and industry/standardization experts, which hardly exists in practice. In this article we give an overview of our efforts to bridge this gap. Our research group provides a standard-compliant open-source simulation platform for 3GPP LTE that enables reproducible research in a well-defined environment. We demonstrate that much innovative research under the confined framework of a real-world standard is still possible, sometimes even encouraged. With exemplary samples of our research work, we investigate the potential of several important research areas under typical practical conditions.
 
Geometric interpretation and comparison. ['centroid': the template vector represented by the centroid of a group; 'maximin': the template vector obtained from the original MCA; 'r-maximin': the template vector returned by the proposed R-MCA] (a) MCA finds a vector whose direction minimizes the worst (i.e., maximum) angle between the vector and the class members. No outlier is assumed. (b) Adding outliers to (a) causes an abrupt swing in the traditional maximin that MCA returns. The resulting angle does not represent the class appropriately. In contrast, the r-maximin that R-MCA finds is more robust to outliers. (c) The character 'A' represented in 10 different fonts (the two boxed fonts can be considered outliers). Also shown are the r-maximin, maximin, and centroid aggregate templates of the ten images, respectively.
Data Used in Our Experiments
Derived flow of the proposed methodology.
Effect of kernelization (data: 3D-NUT). (a) The true membership (best viewed in color). (b) The membership retrieved by the proposed kernelized R-MCA. (c) The membership assigned by the original MCA.  
Comparison of execution times. Each time point represents the average of ten independent runs. (a) Varying n with fixed m = 784 (data: MNIST). (b) Varying m with fixed n = 606 (data: GEO).  
Article
Robust classification becomes challenging when classes contain multiple subclasses. Examples include multi-font optical character recognition and automated protein function prediction. In correlation-based nearest-neighbor classification, the maximin correlation approach (MCA) provides the worst-case optimal solution by minimizing the maximum misclassification risk through an iterative procedure. Despite the optimality, the original MCA has drawbacks that have limited its wide applicability in practice. That is, the MCA tends to be sensitive to outliers, cannot effectively handle nonlinearities in datasets, and suffers from having high computational complexity. To address these limitations, we propose an improved solution, named regularized maximin correlation approach (R-MCA). We first reformulate MCA as a quadratically constrained linear programming (QCLP) problem, incorporate regularization by introducing slack variables into the primal problem of the QCLP, and derive the corresponding Lagrangian dual. The dual formulation enables us to apply the kernel trick to R-MCA so that it can better handle nonlinearities. Our experimental results demonstrate that the regularization and kernelization make the proposed R-MCA more robust and accurate for various classification tasks than the original MCA. Furthermore, when the data size or dimensionality grows, R-MCA runs substantially faster by solving either the primal or dual (whichever has a smaller variable dimension) of the QCLP.
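In the notation suggested by the figure description above, the maximin template that MCA (and, with regularization, R-MCA) seeks for a class with members $\mathbf{x}_1,\dots,\mathbf{x}_n$ is the direction maximizing the worst-case correlation:

$$
\mathbf{w}^{\star}
\;=\;
\underset{\mathbf{w} \neq \mathbf{0}}{\arg\max}\;
\min_{1 \le i \le n}\;
\frac{\mathbf{w}^{\top}\mathbf{x}_{i}}{\|\mathbf{w}\|_{2}\,\|\mathbf{x}_{i}\|_{2}},
$$

i.e., it minimizes the largest angle between the template and any class member; the QCLP reformulation, slack-variable regularization, and kernelization described in the abstract all build on this objective.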
 
Article
In new product development, time to market (TTM) is critical for the success and profitability of next generation products. When these products include sophisticated electronics encased in 3D packaging with complex geometries and intricate detail, TTM can be compromised—resulting in lost opportunity. The use of advanced 3D printing technology enhanced with component placement and electrical interconnect deposition can provide electronic prototypes that now can be rapidly fabricated in comparable time frames as traditional 2D bread-boarded prototypes; however, these 3D prototypes include the advantage of being embedded within more appropriate shapes in order to authentically prototype products earlier in the development cycle. The fabrication freedom offered by 3D printing techniques, such as stereolithography and fused deposition modeling have recently been explored in the context of 3D electronics integration—referred to as 3D structural electronics or 3D printed electronics. Enhanced 3D printing may eventually be employed to manufacture end-use parts and thus offer unit-level customization with local manufacturing; however, until the materials and dimensional accuracies improve (an eventuality), 3D printing technologies can be employed to reduce development times by providing advanced geometrically appropriate electronic prototypes. This paper describes the development process used to design a novelty six-sided gaming die. The die includes a microprocessor and accelerometer, which together detect motion and upon halting, identify the top surface through gravity and illuminate light-emitting diodes for a striking effect. By applying 3D printing of structural electronics to expedite prototyping, the development cycle was reduced from weeks to hours.
 
Article
Cellular networks are a central part of today’s communication infrastructure. The global roll-out of 4G long-term evolution is underway, ideally enabling ubiquitous broadband Internet access. Mobile network operators, however, are currently facing an exponentially increasing demand for network capacity, necessitating densification of cellular base stations (keywords: small cells and heterogeneous networks) and causing a strongly deteriorated interference environment. Coordination among transmitters and receivers to mitigate and/or exploit interference is hence seen as a main path toward 5G mobile networks. We provide an overview of existing coordinated beamforming strategies for interference mitigation in broadcast and interference channels. To gain insight into their ergodic behavior in terms of signal to interference and noise ratio as well as achievable transmission rate, we focus on a simplified but representative scenario with two transmitters that serve two users. This analysis provides guidelines for selecting the best performing method depending on the particular transmission situation.
 
Article
Data center networks (DCNs) for 5G are expected to support a large number of different bandwidth-hungry applications with exploding data, such as real-time search and data analysis. As a result, significant challenges arise in identifying the cause of link congestion between any pair of switch ports, which may severely damage the overall network performance. Generally, the granularity of the flow monitoring used to diagnose network congestion in 5G DCNs needs to be down to the flow level on a physical port of a switch, in real time, with high estimation accuracy, low computational complexity, and good scalability. In this paper, motivated by a comprehensive study of a real DCN trace, we propose two sketch-based algorithms, called α-conservative update (CU) and P(d)-CU, based on the existing CU approach. α-CU adds no extra implementation cost to the traditional CU, but successfully trades off the achieved error against time complexity. P(d)-CU fully considers the amount of skew for different types of network services to aggregate the traffic statistics of each type of network traffic in an individual, horizontally partitioned sketch. We also introduce a way to produce the real-time moving average of the reported results. Through theoretical analysis and extensive experimental results on a real DCN trace, we evaluate the proposed and existing algorithms on their error performance, recall, space cost, and time complexity.
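For context, the conservative update rule that both proposed algorithms build on can be sketched as follows (a generic count-min-with-CU toy in Python, ours, not the paper's α-CU or P(d)-CU code):

```python
import numpy as np

class CUSketch:
    """Count-min sketch with conservative update (CU): on each insertion, only
    counters that are below the item's new estimate are raised, which reduces
    the overestimation error of a plain count-min sketch."""

    def __init__(self, width=2048, depth=4, seed=0):
        rng = np.random.default_rng(seed)
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.salts = rng.integers(1, 2**31 - 1, size=depth)  # one hash salt per row

    def _cols(self, key):
        h = hash(key)
        return [(h ^ int(s)) % self.width for s in self.salts]

    def update(self, key, count=1):
        cols = self._cols(key)
        new_est = min(self.table[i, c] for i, c in enumerate(cols)) + count
        for i, c in enumerate(cols):
            # Conservative update: never push a counter beyond the new estimate.
            self.table[i, c] = max(self.table[i, c], new_est)

    def query(self, key):
        return int(min(self.table[i, c] for i, c in enumerate(self._cols(key))))

sk = CUSketch()
for _ in range(1000):
    sk.update("flow-A")
sk.update("flow-B", 5)
print(sk.query("flow-A"), sk.query("flow-B"))   # close to 1000 and 5
```

The paper's variants then adapt this basic update: α-CU to trade error against time complexity, and P(d)-CU to partition the sketch horizontally by traffic type, as described above.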
 
Article
Wireless systems have become more and more advanced in terms of handling the statistical properties of wireless channels. For example, the 4G long term evolution (LTE) system takes advantage of multiport antennas [multiple-input multiple-output (MIMO) technology] and orthogonal frequency division multiplexing (OFDM) to improve the detection probability of a single bitstream by diversity in the spatial and frequency domains, respectively. The 4G system also supports transmission of two bitstreams by appropriate signal processing of the MIMO subchannels. According to previous works, the reverberation chamber emulates rich isotropic multipath (RIMP) and has proven to be very useful for characterizing smart phones for LTE systems. The measured throughput can be accurately modeled by the simple digital threshold receiver, accounting accurately for both the MIMO and OFDM functions. The throughput is equivalent to the probability of detection (PoD) of the transmitted bitstream. The purpose of this paper is to introduce a systematic approach to include the statistical properties of the user and his or her terminal when characterizing the performance. The user statistics will have a larger effect in environments with stronger line-of-sight (LOS), because the angle of arrival and the polarization of the LOS contribution vary due to the user's orientation and practices. These variations are stochastic, and therefore, we introduce the term random-LOS to describe this. This paper elaborates on the characterization of an example antenna in both RIMP and random-LOS. The chosen antenna is a wideband micro base transceiver station (BTS) antenna. We show how to characterize the micro-BTS by the PoD of one and two bitstreams in both RIMP and random-LOS, by considering the user randomly located and oriented within the angular coverage sector. We limit the treatment to a wall-mounted BTS antenna, and assume a desired hemispherical coverage. The angular coverages of both one and two bitstreams for the random-LOS case are plotted as MIMO-coverage radiation patterns of the whole four-port digital antenna system. Such characterizations in terms of PoD have never been done before on any practical antenna system. The final results are easy to interpret, and they open up a new world of opportunities for designing and optimizing 5G antennas on the system level.
 
Experimental results showing the current and focal spot for our new MBFEX tube. (a) Stable mean anode current (~20 mA) with respect to time at 160 kV as measured by the anode power supply. (b) Pin-hole image of one of the focal spots of the high power tube (taken with a 0.1 mm pin-hole).
Stability of the MBFEX tube over time at 3.2 kW output. (a) Long-term (mean of detector counts in a dataset). (b) Short-term (standard deviation of detector counts in a dataset).
Fig. 6. Reconstructed slice of a phantom, which includes an acetal cylinder with four tungsten pins. All images are shown on a -750 to 1500 Hounsfield Units scale. (a) Medical CT reconstruction of the phantom using filtered back projection (FBP). (b)-(f) are reconstructions using different algorithms based on data from the laboratory prototype. (b) FBP reconstruction. (c) ADS-POCS reconstruction. (d) OSC reconstruction. (e) ADS-OSC reconstruction. (f) ADS-OSC with bilateral filtering reconstruction.
Article
Carbon Nanotube (CNT) based multibeam X-ray tubes provide an array of individually controllable X-ray focal spots. The CNT tube allows for flexible placement and distribution of X-ray focal spots in a system. Using a CNT tube a computed tomography (CT) system with a non-circular geometry and a non-rotating gantry can be created. The non-circular CT geometry can be optimized around a specific imaging problem, utilizing the flexibility of CNT multibeam X-ray tubes to achieve the optimal focal spot distribution for the design constraints of the problem. Iterative reconstruction algorithms provide flexible CT reconstruction to accommodate the non-circular geometry. Compressed sensing-based iterative reconstruction algorithms apply a sparsity constraint to the reconstructed images that can partially account for missing angular coverage due to the non-circular geometry. In this paper, we present a laboratory prototype CT system that uses CNT multibeam X-ray tubes; a rectangular, non-rotating imaging geometry; and an accelerated compressed sensing-based iterative reconstruction algorithm. We apply a total variation minimization as our sparsity constraint. We present the advanced CNT multibeam tubes and show the stability and flexibility of these new tubes. We also present the unique imaging geometry and discuss the design constraints that influenced the specific system design. The reconstruction method is presented along with an overview of the acceleration of the algorithm to near real-time reconstruction. We demonstrate that the prototype reconstructed images have image quality comparable to a conventional CT system. The prototype is optimized for airport checkpoint baggage screening, but the concepts developed may apply to other application-specific CT imaging systems.
 
Article
The compressive sensing (CS) theory shows that real signals can be exactly recovered from very few samplings. Inspired by the CS theory, the interior problem in computed tomography is proved uniquely solvable by minimizing the region-of-interest’s total variation if the imaging object is piecewise constant or polynomial. This is called CS-based interior tomography. However, the CS-based algorithms require high computational cost due to their iterative nature. In this paper, a graphics processing unit (GPU)-based parallel computing technique is applied to accelerate the CS-based interior reconstruction for practical application in both fan-beam and cone-beam geometries. Our results show that the CS-based interior tomography is able to reconstruct excellent volumetric images with GPU acceleration in a few minutes.
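Schematically, the CS-based interior reconstruction that is being accelerated solves a constrained total-variation minimization over the region of interest (written here for a 2-D image $f$ in our own notation):

$$
\min_{f}\; \|f\|_{\mathrm{TV}}
\;=\;
\min_{f}\; \sum_{i,j} \sqrt{(f_{i+1,j}-f_{i,j})^{2} + (f_{i,j+1}-f_{i,j})^{2}}
\quad \text{s.t.} \quad A f = p,
$$

where $A$ is the (fan-beam or cone-beam) projection operator restricted to rays through the region of interest and $p$ the measured local projections. Typically, the iterations alternate projection-consistency updates with TV-reduction steps, and it is these per-ray and per-voxel operations that map naturally onto the GPU.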
 
Article
Wireless sensor networks (WSNs) have been proliferating due to their wide applications in both military and commercial use. However, one critical challenge to WSN implementation is source location privacy. In this paper, we propose a novel tree-based diversionary routing scheme for preserving source location privacy using a hide and seek strategy to create diversionary or decoy routes along the path to the sink from the real source, where the end of each diversionary route is a decoy (fake source node) which periodically emits fake events. Meanwhile, the proposed scheme is able to maximize the network lifetime of WSNs. The main idea is that the lifetime of a WSN depends on the nodes with high energy consumption, or hotspots; the proposed scheme therefore minimizes energy consumption in hotspots and creates redundant diversionary routes in non-hotspot regions with abundant energy. Hence, it achieves not only privacy preservation, but also network lifetime maximization. Furthermore, we systematically analyze the energy consumption in WSNs, and provide guidance on the number of diversionary routes that can be created in different regions away from the sink. In addition, we identify a novel attack against phantom routing, which is widely used for source location privacy preservation, namely, the direction-oriented attack. We also perform a comprehensive analysis of how the direction-oriented attack can be defeated by the proposed scheme. Theoretical and experimental results show that our scheme is very effective in improving privacy protection while maximizing the network lifetime.
 
Article
One of the principal issues with alternative combustion modes for diesel engines (such as HCCI, PCCI, and LTC) is caused by imbalances in the distribution of air and EGR across the cylinders, which affect the combustion process and ultimately cause significant differences in the pressure trace and indicated torque for each cylinder. In principle, a cylinder-by-cylinder control approach could compensate for air, residuals, and temperature imbalance. However, in order to fully benefit from closed-loop combustion control, it is necessary to obtain feedback signals from each engine cylinder to reconstruct the pressure trace. At present, cylinder imbalance is an issue that can be detected only in a laboratory environment, wherein each engine cylinder is instrumented with a dedicated pressure transducer. This paper describes the framework and preliminary results of a model-based estimation approach to predict the individual pressure traces in a multicylinder engine relying on a very restricted sensor set, namely, a crankshaft speed sensor and a single production-grade pressure sensor. The objective of the estimator is to reconstruct the complete pressure trace during an engine cycle with sufficient accuracy to allow for detection of cylinder-to-cylinder imbalances. Starting from a model of the engine crankshaft dynamics, an adaptive sliding mode observer is designed to estimate the cylinder pressure from the crankshaft speed fluctuation measurement. The results obtained by the estimator are compared with experimental data obtained on a four-cylinder diesel engine.
 
Article
We present a UWB and spread spectrum communications method based on the idea of time compression where a sampled message signal is transmitted at a higher sampling rate. Robustness is achieved by dividing the signal into overlapping segments, transmitting each segment fast enough so that the segments no longer overlap, receiving these segments and reconstructing the message by overlap-adding the segments. A key feature of this scheme is that an exact sample rate match is not required to recover the signal. This method is implemented in a custom wideband software defined radio, with good results in the presence of interference and multipath. This method, referred to as time compression overlap-add (TC-OLA), represents a new concept and design approach and an advance in fundamental technology of the air interface physical layer that may be relevant to 5G wireless technologies.
 
Article
Effectively confronting device and circuit parameter variations to maintain or improve the design of high-performance and energy-efficient systems, while satisfying historical standards for reliability and lower costs, is increasingly challenging with the scaling of technology. In this paper, we develop methods for robust and resilient six-transistor-cell static random access memory (6T-SRAM) designs that mitigate the effects of device and circuit parameter variations. Our interdisciplinary effort involves: 1) using our own recently developed VAR-TX model [1] to illustrate the impact of interdie (also known as die-to-die, D2D) and intradie (also known as within-die, WID) process and operation variations, namely threshold voltage (Vth), gate length (L), and supply voltage (Vdd), on different future 16-nm architectures and 2) using modified versions of other well-received models to illustrate the impact of variability due to temperature, negative bias temperature instability, aging, and so forth, on existing and next-generation technology nodes. Our goal in combining modeling techniques is to help minimize all major types of variability and consequently to predict and optimize speed and yield for the next generation of 6T-SRAMs.
 
Agent based model of an emergency department.
Agent based model of an emergency department using AnyLogic.
"Gamified" simulation visualization using Flexsim.
Article
Agent-based modeling has become a viable alternative and complement to traditional analysis methods for studying complex social environments. In this paper, we survey the role of agent-based modeling within hospital settings, where agent-based models investigate patient flow and other operational issues as well as the dynamics of infection spread within hospitals or hospital units. While there is a rich history of simulation and modeling of hospitals and hospital units, relatively little work exists which applies agent-based models to this context.
 
Article
Silane crosslinked polyethylene cable insulation occasionally fails to meet the aging requirements given in technical standards. The purpose of this paper is to investigate this phenomenon and establish whether the safety margin of aging tests can be increased by changes in manufacture or test procedures. Using a number of cable types with different compositions and dimensions, the evolution of the absolute values of tensile strength and elongation at break upon aging was obtained. The results show that the major changes in mechanical properties happen within the first 24–48 h. This finding is valid both for ethylene vinylsilane copolymers and for grafted silane systems. In general, the effect is more pronounced for the 100 °C compatibility test. Statistical analysis shows that insulation crosslinked in a hot water bath will exhibit this behavior to a lesser extent, thus increasing the safety margin in aging tests, compared with ambient curing. This paper demonstrates that preconditioning at 70 °C has no significant impact on aging properties. In addition, only small variations in mechanical properties were seen when changing the process parameters. It is concluded that further crosslinking is the principal cause of the phenomenon under investigation.
 
Scenario of applying the DHA for finding the M^{K_{q,g}}-ary symbol that maximizes the CF in (16) in a rank-deficient DSS/USSCH SDMA-OFDM system where K_{q,g} = K_{1,1} = 3 users coexist on the first subcarrier, having been allocated the first DSS code, and they transmit QPSK symbols associated with M = 4 to P = 1 receive AE. Top: The database of symbol indices and the corresponding CF values. The rectangular box indicates the optimal symbol index associated with the maximum CF value, while the ellipse encircles the MMSE detector's output which is used as the initial guess in the DHA. Bottom: The DHA process. The circles indicate the best symbols found so far. Starting from the leftmost circle and moving counter-clockwise, the BBHT QSA is invoked for finding a symbol that has a higher CF value than the symbol in that particular circle. Once a better symbol is found, the BBHT QSA is restarted for the new symbol. Once the global best symbol has been
Flow Chart of the SO-DHA-MAA MUD with and without the NE modification.
Scenario of the SO-DHA QMUD with MUA. The set X^{q,g}_{1,1,0} was created based on the first DHA call with the MMSE detector's output 31 as the initial input, where only symbols with b^{(1)}_1 = 0 were searched.
Flow Chart of the SO-DHA-MUA MUD with the NE modification, as well as the FKT and FBKT methodologies.
Scenario of the SO-DHA QMUD with MUA and (a) FKT or (b) FBKT in the rank-deficient DSS/USSCH SDMA-OFDM system of Fig. 4. Focusing on the DHA application for the i = 2nd bit, the underlined indices represent the new symbols found during the current DHA search. The circled index 46 is selected to be the initial input of the DHA, because it has a higher CF value than that of the neighbour of the optimal symbol at the i = 2nd bit, corresponding to the decimal index 6, according to Fig. 4. When FKT is applied in (a), the sets of the subsequent bits i = 3, 4, 5, 6 are also updated with the new symbols found. Similarly, when FBKT is applied in (b), the sets of the preceding and the subsequent bits are updated along with the sets of the i = 2nd bit for which the search was performed.
Article
Low-complexity suboptimal multiuser detectors (MUDs) are widely used in multiple access communication systems for separating users, since the computational complexity of the maximum likelihood (ML) detector is potentially excessive for practical implementation. Quantum computing may be invoked in the detection procedure, by exploiting its inherent parallelism for approaching the ML MUD's performance at a substantially reduced number of cost function evaluations. In this contribution, we propose a soft-output (SO) quantum-assisted MUD achieving near-ML performance and compare it to the corresponding SO ant colony optimization MUD. We investigate rank-deficient direct-sequence spreading (DSS) and slow subcarrier-hopping aided (SSCH) spatial division multiple access orthogonal frequency division multiplexing systems, where the number of users to be detected is higher than the number of receive antenna elements used. We show that for a given complexity budget, the proposed SO Dürr-Høyer algorithm (DHA) QMUD achieves a better performance. We also propose an adaptive hybrid SO-ML/SO-DHA MUD, which adapts itself to the number of users equipped with the same spreading sequence and transmitting on the same subcarrier. Finally, we propose a DSS-based uniform SSCH scheme, which improves the system's performance by 0.5 dB at a BER of 10^{-5}, despite reducing the complexity required by the MUDs employed.
 
Article
The use of computer-based and online education systems has made new data available that can describe the temporal and process-level progression of learning. To date, machine learning research has not considered the impacts of these properties on the machine learning prediction task in educational settings. Machine learning algorithms may have applications in supporting targeted intervention approaches. The goals of this paper are to: 1) determine the impact of process-level information on machine learning prediction results and 2) establish the effect of type of machine learning algorithm used on prediction results. Data were collected from a university level course in human factors engineering $(n=35)$, which included both traditional classroom assessment and computer-based assessment methods. A set of common regression and classification algorithms were applied to the data to predict final course score. The overall prediction accuracy as well as the chronological progression of prediction accuracy was analyzed for each algorithm. Simple machine learning algorithms (linear regression, logistic regression) had comparable performance with more complex methods (support vector machines, artificial neural networks). Process-level information was not useful in post-hoc predictions, but contributed significantly to allowing for accurate predictions to be made earlier in the course. Process level information provides useful prediction features for development of targeted intervention techniques, as it allows more accurate predictions to be made earlier in the course. For small course data sets, the prediction accuracy and simplicity of linear regression and logistic regression make these methods preferable to more complex algorithms.
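A minimal sketch of the kind of comparison described, fit on small, hypothetical stand-in data rather than the course data from the study, might look as follows in scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in data (n = 35 students): assessment scores plus
# process-level features (e.g., attempts, time-on-task) observed so far,
# with the final course score as the prediction target.
n_students = 35
X = rng.uniform(0.0, 1.0, size=(n_students, 6))
y = 40 + 50 * X[:, :3].mean(axis=1) + rng.normal(0, 5, n_students)

models = {
    "linear regression": LinearRegression(),
    "SVR (RBF kernel)": SVR(),
    "small neural network": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                         random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```

Repeating such a fit on the features available up to each week of the course is one straightforward way to trace the chronological progression of prediction accuracy that the paper analyzes.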
 
Article
Smartphones and tablets are finding their way into healthcare delivery to the extent that mobile health (mHealth) has become an identifiable field within eHealth. In prior work, a mobile app to document chronic wounds and wound care, specifically pressure ulcers (bedsores), was developed for Android smartphones and tablets. One feature of the mobile app allowed users to take images of the wound using the smartphone or tablet's integrated camera. In a user trial with nurses at a personal care home, this feature emerged as a key benefit of the mobile app. In this paper, we develop image analysis algorithms that facilitate noncontact measurements of irregularly shaped objects (e.g., wounds), where the image is taken with a sole smartphone or tablet camera. The image analysis relies on the sensors integrated in the smartphone or tablet, with no auxiliary or add-on instrumentation on the device. Three approaches to image analysis were developed and evaluated: 1) computing depth using autofocus data; 2) a custom sensor fusion of inertial sensors and feature tracking in a video stream; and 3) a custom pinch/zoom approach. The pinch/zoom approach demonstrated the strongest potential and was thus developed into a fully functional prototype complete with a measurement mechanism. While image analysis is a very well developed field, this paper contributes to image analysis applications and implementation in mHealth, specifically for wound care.
 
Article
Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and requires more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and to instill a do-it-yourself spirit in advanced audiences to customize their own big data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.
 
Processing scheme of the internal control tasks.
Bus enhanced MinRoot NoC with root injection.
Silicon area comparison of the PORC, the Cortex-M0, and the vFSMC.
Framework of the novel architectural approach reduced to the interface unit, the RX unit, and the TX unit.
Hierarchical modular based architectural approach for RF transceivers.
Article
The introduction of new mobile communication standards, enabling the ever-growing amount of data transmitted in mobile communication networks, continuously increases the complexity of control processing within radio frequency (RF) transceivers. Since this complexity cannot be handled by traditional approaches, this paper focuses on the partitioning of RF transceiver systems and on the implementation of application-specific components to introduce an advanced multiprocessor system-on-chip interface and control architecture which is able to fulfill the requirements of future RF transceiver integrations. The proposed framework demonstrates a high degree of scalability, flexibility, and reusability. Consequently, the time to market for products can be reduced and fast adaptations to the requirements of the market are feasible. In addition, the developed application-specific components achieve improved or at least equivalent performance results compared with common architectures while the silicon area can be reduced. This characteristic has positive effects on the costs as well as on the power consumption of the RF transceiver.
 
Article
Pervasive computing and Internet of Things (IoT) paradigms have created a huge potential for new business. To fully realize this potential, there is a need for a common way to abstract the heterogeneity of devices so that their functionality can be represented as a virtual computing platform. To this end, we present a novel semantic-level interoperability architecture for pervasive computing and the IoT. There are two main principles in the proposed architecture. First, information and capabilities of devices are represented with semantic web knowledge representation technologies, and interaction with devices and the physical world is achieved by accessing and modifying their virtual representations. Second, the global IoT is divided into numerous local smart spaces managed by a semantic information broker (SIB) that provides a means to monitor and update the virtual representation of the physical world. An integral part of the architecture is a resolution infrastructure that provides a means to resolve the network address of a SIB either by using a physical object identifier as a pointer to information or by searching for SIBs matching a specification represented with SPARQL. We present several reference implementations and applications that we have developed to evaluate the architecture in practice. The evaluation also includes performance studies that, together with the applications, demonstrate the suitability of the architecture to real-life IoT scenarios. In addition, to validate that the proposed architecture conforms to the common IoT-A architecture reference model (ARM), we map the central components of the architecture to the IoT-ARM.
 
Article
Generating synthetic data traffic that statistically resembles its recorded counterpart is one of the main goals of network traffic modeling. Equivalently, one or several random processes shall be created that exhibit multiple prescribed statistical measures. In this paper, we present a framework enabling the joint representation of distributions, autocorrelations, and cross-correlations of multiple processes. This is achieved by so-called transformed Gaussian autoregressive moving-average models. They constitute an analytically tractable framework, which allows the fitting problem to be separated into subproblems for the individual measures. Accordingly, known fitting techniques and algorithms can be deployed for the respective solutions. The proposed framework exhibits promising properties: 1) relevant statistical properties such as heavy tails and long-range dependence are manageable; 2) the resulting models are parsimonious; 3) the fitting procedure is fully automatic; and 4) the complexity of generating synthetic traffic is very low. We evaluate the framework with recorded traffic traces, namely aggregated traffic, online gaming, and video streaming. The queueing responses of synthetic and recorded traffic exhibit identical statistics. This paper provides guidance for high-quality modeling of network traffic. It proposes a unifying framework, validates several fitting algorithms, and suggests the combinations of algorithms best suited to specific traffic types.
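As a rough illustration of the generation side of such models, the following Python sketch draws a single synthetic process from a Gaussian AR(1) model and maps it to a heavy-tailed marginal via the probability integral transform; the AR coefficient and Pareto parameters are illustrative and not fitted to any trace.

# Minimal sketch of synthetic traffic generation with a transformed Gaussian
# AR(1) model (one process, one marginal); parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1) Generate a stationary Gaussian AR(1) process z_t = phi * z_{t-1} + e_t.
phi, n = 0.8, 100_000
e = rng.normal(scale=np.sqrt(1 - phi**2), size=n)   # unit marginal variance
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = phi * z[t - 1] + e[t]

# 2) Map to the target (heavy-tailed) marginal via the probability integral
#    transform: u = Phi(z), x = F_target^{-1}(u).
u = stats.norm.cdf(z)
x = stats.pareto.ppf(u, b=2.5, scale=1.0)   # e.g., synthetic packet sizes

print(x.mean(), np.corrcoef(x[:-1], x[1:])[0, 1])  # marginal mean, lag-1 corr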
 
Article
In this paper, we propose an application-specific instrument (ASIN)-based ultrawideband (UWB) radar system for sludge monitoring using scattering signatures from the bottom of industrial oil tanks. The method is validated by successful estimation of the sludge volume in oil tanks using simulated and real data. First, as a demonstration of the conventional system, image reconstruction algorithms are used for tank-bottom sludge profile imaging for symmetrical and asymmetrical sludge profiles, where the setup is modeled with the finite-difference time-domain method using reduced dimensions of the tank. A 3-D imaging algorithm is used for the 3-D simulation of real-life targets. To obtain the volume of the sludge, the ASIN-based UWB radar system is then applied and its effectiveness is demonstrated. In this framework, to obtain information about the sludge at the bottom of an industrial tank, a scheme is first proposed to differentiate between two sets of data that correspond to two different sets of volumes. This method is validated using a commercial UWB kit with which practical experiments were performed. The data obtained are visualized using a multidimensional scaling procedure and analyzed. Then, regression analysis using a radial basis function artificial neural network is performed so that, given particular data, the volume to which it best corresponds can be predicted.
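The final regression step can be pictured with the following Python sketch of a radial basis function network fitted by least squares; the feature vectors and volumes are synthetic placeholders, not measured UWB data.

# Minimal sketch of RBF-network regression mapping a radar-signature feature
# vector to a sludge volume; training data here is synthetic, for illustration.
import numpy as np

rng = np.random.default_rng(2)

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF activations for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Synthetic "scattering signature" features and corresponding sludge volumes.
X_train = rng.normal(size=(200, 5))
vol_train = X_train @ np.array([1.0, 0.5, -0.2, 0.0, 0.3]) + 10.0

centers = X_train[rng.choice(len(X_train), size=20, replace=False)]
Phi = rbf_features(X_train, centers)
w, *_ = np.linalg.lstsq(Phi, vol_train, rcond=None)   # output-layer weights

X_new = rng.normal(size=(3, 5))
vol_pred = rbf_features(X_new, centers) @ w           # predicted volumes
print(vol_pred)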
 
Article
Spinning plasma toroids, or spinning spheromaks, are reported as forming in partial atmosphere during high-power electric arc experiments. They are a new class of spheromaks because they are observed to be stable in partial atmosphere with no confining external toroidal magnetic fields, and are observed to endure for more than 600 ms. Included in this paper is a model that explains these stable plasma toroids (spheromaks); they are hollow plasma toroids with a thin outer shell of electrons and ions that all travel in parallel paths orthogonal to the toroid circumference—in effect, spiraling around the toroid. These toroids include sufficient ions to neutralize the space charge of the electrons. This model leads to the name Electron Spiral Toroid Spheromak (ESTS). The discovery of this new class of spheromaks resulted from work to explain ball lightning. A comparison is made between the experimental observations of spheromaks in partial atmosphere and reported ball lightning observations; strong similarities are reported. The ESTS is also found to have a high ion density of $>10^{19}~\mathrm{ions/cm^{3}}$ without needing any external toroidal magnetic field for containment, compared, for example, to tokamaks, with ion density limits of $\sim 10^{15}~\mathrm{ions/cm^{3}}$. This high ion density is a defining characteristic and opens the potential to be useful in applications. The ESTS is a field reversed configuration plasma toroid.
 
Article
The large variety of network traffic poses many challenges for modeling the essential aspects of network traffic flows. Analyzing and collecting features for the model creation process from network traffic traces is a time-consuming and error-prone task, and automating these procedures is a challenge. The research problem discussed in this paper concentrates on automating the analysis of network traffic traces and the collection of features from them for the model development process. The proposed system, called MGtoolV2, supports the model development process by automating feature collection and analysis within the actual model creation procedures, with the aim of reducing development cost and time. The proposed tool automatically creates large sets of models from the network traffic traces and minimizes the errors of manual modeling. The experiments conducted with MGtoolV2 indicate that the tool is able to create models from traffic traces cost-effectively. MGtoolV2 is able to unify similarities between packets, to create very detailed models describing specific information, and to raise the abstraction level of the created models. The research is based on a constructive method drawing on related publications and technologies, and the results are established through the testing, validation, and analysis of the implemented MGtoolV2.
 
Article
For three decades, sudden acceleration (SA) incidents have been reported, where automobiles accelerate without warning. These incidents are often diagnosed as no fault found. Investigators who follow the line of diagnostic reasoning from the 1989 National Highway Traffic Safety Administration (NHTSA) SA report tend to conclude that SAs are caused by driver pedal error. This paper reviews the diagnostic process in the NHTSA report and finds that: 1) it assumes that an intermittent electronic malfunction should be reproducible through either in-vehicle or laboratory bench tests, without saying why; and 2) the consequence of this assumption, for which there appears to be no forensic precedent, is to recategorize possible intermittent electronic failures as proven to be nonelectronic. Showing that the supposedly inescapable conclusions of the NHTSA report concerning electronic malfunctions are without foundation opens the way for this paper to discuss electronic intermittency as a potential factor in SA incidents. It then reports a simple practical experiment that shows how mechanically induced electrical contact intermittencies can generate false speed signals that an automobile speed control system may accept as true and that do not trigger any diagnostic fault codes. Since the generation of accurate speed signals is essential for the proper functioning of a number of other automobile safety-critical control systems, the apparent ease with which false speed signals can be generated by vibration of a poor electrical contact is obviously a matter of general concern. Various ways of reducing the likelihood of SAs are discussed, including electrical contact improvements to reduce the likelihood of generating false speed signals, improved battery maintenance, and the incorporation of an independent fail-safe that reduces engine power in an emergency, such as a kill switch.
 
Article
This paper presents a systematic methodology to develop compact MOSFET models for process variability-aware VLSI circuit design. Process variability in scaled CMOS technologies severely impacts the functionality, yield, and reliability of advanced integrated circuit devices, circuits, and systems. Therefore, variability-aware circuit design techniques are required for realistic assessment of the impact of random and systematic process variability on advanced VLSI circuit performance. However, variability-aware circuit design requires compact MOSFET variability models for computer analysis of the impact of process variability on VLSI circuit designs. This paper describes a generalized methodology to determine the major set of device parameters sensitive to random and systematic process variability in nanoscale MOSFET devices, to map each variability-sensitive device parameter to the corresponding compact model parameter of the target compact model, and to generate statistical compact MOSFET models for variability-aware VLSI circuit design.
 
An overview of the MapReduce framework.
An illustration of an RBM network.
An illustration of a DBN with stacked RBMs.
Filters obtained by (a) the RBM and (b) the distributed RBM at epoch 50.
Article
Deep belief nets (DBNs) with restricted Boltzmann machines (RBMs) as the building block have recently attracted wide attention due to their great performance in various applications. The learning of a DBN starts with pretraining a series of RBMs followed by fine-tuning the whole net using backpropagation. Generally, the sequential implementation of both the RBMs and the backpropagation algorithm takes a significant amount of computational time to process massive data sets. The emerging big data learning requires distributed computing for the DBNs. In this paper, we present a distributed learning paradigm for the RBMs and the backpropagation algorithm using MapReduce, a popular parallel programming model. Thus, the DBNs can be trained in a distributed way by stacking a series of distributed RBMs for pretraining and a distributed backpropagation for fine-tuning. Through validation on benchmark data sets from various practical problems, the experimental results demonstrate that the distributed RBMs and DBNs are amenable to large-scale data, with good performance in terms of accuracy and efficiency.
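The distributed pretraining can be pictured with the following Python sketch of a single contrastive divergence (CD-1) update for a binary RBM, written as a map step over data shards and a reduce step that sums the sufficient statistics; it only mirrors the MapReduce structure in memory, and the shard sizes, layer sizes, and learning rate are illustrative rather than the paper's configuration.

# Minimal sketch: one CD-1 weight update for a binary RBM expressed as a map
# step (per data shard) and a reduce step (summing sufficient statistics).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def map_cd1(shard, W, b_vis, b_hid):
    """Positive- and negative-phase statistics for one data shard."""
    h_prob = sigmoid(shard @ W + b_hid)
    h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_recon = sigmoid(h_samp @ W.T + b_vis)
    h_recon = sigmoid(v_recon @ W + b_hid)
    pos = shard.T @ h_prob
    neg = v_recon.T @ h_recon
    return pos - neg, shard.shape[0]

def reduce_sum(partials):
    """Sum the per-shard gradients and example counts."""
    grads, counts = zip(*partials)
    return sum(grads), sum(counts)

# Toy data: 4 shards of 256 binary vectors, 64 visible and 32 hidden units.
shards = [rng.integers(0, 2, size=(256, 64)).astype(float) for _ in range(4)]
W = 0.01 * rng.normal(size=(64, 32))
b_vis, b_hid = np.zeros(64), np.zeros(32)

grad, n = reduce_sum([map_cd1(s, W, b_vis, b_hid) for s in shards])
W += 0.1 * grad / n   # gradient ascent on the CD-1 approximation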
 
Article
This paper is about how the SP theory of intelligence and its realization in the SP machine may, with advantage, be applied to the management and analysis of big data. The SP system—introduced in this paper and fully described elsewhere—may help to overcome the problem of variety in big data; it has potential as a universal framework for the representation and processing of diverse kinds of knowledge, helping to reduce the diversity of formalisms and formats for knowledge, and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central in the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualization of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it.
 
Article
Deep learning is currently an extremely active research area in the machine learning and pattern recognition communities. It has achieved great success in a broad range of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, highlight current research efforts and the challenges posed by big data, and discuss future trends.
 
Article
One of the goals of neuromorphic engineering is to imitate the brain's ability to recognize and count the number of individual objects as entities, based on the global consistency of the information from the population of activated tactile (or visual) sensory neurons, whatever the objects' shapes are. To achieve this flexibility, it may be worth examining unconventional algorithms such as topological methods. Here we propose a fully parallelized algorithm for a shape-invariant touch counter for 2D pixels. The number of touches is counted by the Euler integral, a generalized integral, in which a connected-component counter (Betti number) for the binary image is used as an elemental module. Through examples of touches, we demonstrate transparently how the proposed circuit architecture embodies the Euler integral in the form of recurrent neural networks for iterative vector operations. Our parallelization can lead the way to FPGA or DSP implementations of topological algorithms with scalability to high pixel resolutions.
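A minimal Python sketch of the underlying counting principle, assuming 4-connectivity and solid (hole-free) contacts, computes the Euler characteristic from purely local pixel counts, which is what makes full parallelization possible; the example image is made up.

# Euler characteristic of a 0/1 "touch" image from local counts only:
# chi = pixels - adjacent pixel pairs + fully occupied 2x2 blocks.
# For solid, hole-free contacts, chi equals the number of touches.
import numpy as np

def euler_characteristic(img):
    img = np.asarray(img, dtype=int)
    v = img.sum()                                             # pixels (vertices)
    e = (img[:, :-1] & img[:, 1:]).sum() + (img[:-1, :] & img[1:, :]).sum()
    f = (img[:-1, :-1] & img[:-1, 1:] & img[1:, :-1] & img[1:, 1:]).sum()
    return v - e + f

touches = np.zeros((8, 8), dtype=int)
touches[1:3, 1:4] = 1        # first contact (rectangular)
touches[5:7, 5:7] = 1        # second contact (square)
print(euler_characteristic(touches))   # -> 2 touches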
 
Article
In this paper, we introduce a novel but intuitive scheme to recover multiple signals of interest (SoI) from multiple emitters in signal collection applications such as signal intelligence, electronic intelligence, and communications intelligence. We consider a case where the SoIs form a heavy interference environment. The scheme, which is referred to as reference-based successive interference cancellation (RSIC), involves a combination of strategic receiver placement and signal processing techniques. The scheme works by placing a network of cooperative receivers in which each receiver captures its own SoI despite multiple interferers. The first receiver demodulates the initial SoI (called the reference signal) and forwards it to the second receiver. The second receiver collects a received signal containing the second SoI but interfered with by the initial SoI, a problem known as co-channel interference in cellular communications. Unfortunately, the amplitude scaling of the interference is unknown at the second receiver and therefore has to be estimated via a least-squares fit. It turns out that the estimation requires a priori knowledge of the second SoI, which is the very signal it tries to demodulate, thereby yielding a Catch-22 problem. We propose using an initial guess of the second SoI to form an amplitude estimate such that the interference is subtracted (cancelled) from the collected measurement at the second receiver. The procedure is applied to a third receiver (and further receivers) until the last of the desired SoIs is separated from all of the co-channel interference. The RSIC scheme performs well. Using quaternary phase shift keying as the example modulation, we present major symbol error rate (SER) performance improvements with RSIC over the highly degraded SER of receivers that are heavily interfered with and do not employ any cancellation technique.
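The core cancellation step at the second receiver can be illustrated with the following Python sketch; the QPSK symbols, noise level, and interference amplitude are invented for illustration, and the amplitude estimate here is obtained by projecting the measurement onto the known reference signal, a simplification of the paper's initial-guess procedure.

# Minimal sketch of the RSIC-style step at the second receiver: estimate the
# unknown amplitude of the known reference signal by least squares, subtract
# it, then detect the second SoI. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, size=(2, n))))
s_ref, s2 = qpsk[0], qpsk[1]            # reference SoI (known) and second SoI

a_true = 0.6                            # unknown interference amplitude
r = s2 + a_true * s_ref + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Least-squares amplitude estimate: project the measurement onto s_ref
# (valid here because s2 and s_ref are uncorrelated).
a_hat = np.vdot(s_ref, r).real / np.vdot(s_ref, s_ref).real

r_clean = r - a_hat * s_ref             # cancel the co-channel interference
detected = np.sign(r_clean.real) + 1j * np.sign(r_clean.imag)
ser = np.mean(detected != np.sign(s2.real) + 1j * np.sign(s2.imag))
print(a_hat, ser)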
 
Article
Standardized turbo codes (TCs) use recursive systematic convolutional transducers of rate b/(b + d) having a single feedback polynomial (b+dRSCT). In this paper, we investigate the realizability of the b+dRSCT set through two single shift register canonical forms (SSRCFs), called, in the theory of linear systems, constructibility and controllability. The two investigated SSRCFs are adaptations, for the implementation of b+dRSCTs, of the better-known canonical forms controller (constructibility) and observer (controllability). Constructibility is the implementation form currently used for convolutional transducers in TCs. This paper shows that any b+1RSCT can be implemented in a unique SSRCF observer. As a result, we build a function, ξ:H → G, whose domain is the set of encoders in SSRCF constructibility, denoted by H, and whose codomain is a subset of encoders in SSRCF observer, denoted by G. By proving the noninjectivity and nonsurjectivity of the function ξ, we prove that H is redundant and incomplete in comparison with G, i.e., the SSRCF observer is more efficient than the SSRCF constructibility for the implementation of b+1RSCTs. We show that the redundancy of the set H depends on the memory m and on the number of inputs b of the considered b+1RSCT. In addition, the difference between G and ξ(H) contains encoders with very good performance when used in a TC structure. This difference is substantial for m ≈ b > 1. The results on the realizability of the b+1RSCT also allow some considerations on b+dRSCTs with b, d > 1, for which we propose the SSRCF controllability. These results could be useful in the design of TCs based on exhaustive search. Thus, the proposed implementation form permits the design of new TCs that cannot be conceived based on the current form. It is possible, even probable, that among these new TCs one can find better performance than in current communication standards, such as LTE, DVB, or deep-space communications.
 
Article
This paper proposes a hybrid gate-level leakage model for use with the Monte Carlo (MC) analysis approach, which combines a lookup table (LUT) model with a first-order exponential-polynomial model (first-order model, herein). For the process parameters having strongly nonlinear relationships with the logarithm of the leakage current, the proposed model uses the LUT approach for the sake of modeling accuracy. For the other process parameters, it uses the first-order model for increased efficiency. During the library characterization for each type of logic gate, the proposed approach determines the process parameters for which it will use the LUT model. It also determines the number of LUT data points that can maximize analysis efficiency with acceptable accuracy, based on a user-defined threshold. The proposed model was implemented for gate-level MC leakage analysis using three graphics processing units. In experiments, the proposed approach exhibited average errors of <5% in both mean and standard deviation with reference to SPICE-level MC leakage analysis. In comparison, MC analysis with the first-order model exhibited more than 90% errors. In CPU time, the proposed hybrid approach took only two to five times longer runtimes. In comparison with the full LUT model, the proposed hybrid model was up to one hundred times faster while increasing the average errors by only 3%. Finally, the proposed approach completed a leakage analysis of an OpenSparc T2 core of 4.5 million gates with a runtime of $<5~\mathrm{min}$.
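The hybrid idea can be pictured with the following Python sketch, assuming one strongly nonlinear parameter handled by an interpolated lookup table and first-order sensitivities for the remaining parameters; the table values and sensitivity coefficients are invented for illustration, not characterized library data.

# Minimal sketch of a hybrid LUT + first-order model for the log-leakage of a
# single gate; all numbers are illustrative placeholders, not library data.
import numpy as np

rng = np.random.default_rng(3)

# LUT over the strongly nonlinear parameter (standardized, in units of sigma).
lut_x = np.linspace(-3.0, 3.0, 7)
lut_log_i = np.array([-1.8, -1.1, -0.5, 0.0, 0.7, 1.6, 2.7])  # log10(I / I_nom)

# First-order sensitivities of log10(I) to the remaining, near-linear parameters.
lin_sens = np.array([0.10, -0.05, 0.02])

# Monte Carlo over standardized process parameters ~ N(0, 1).
samples = rng.normal(size=(100_000, 4))
log_i = np.interp(samples[:, 0], lut_x, lut_log_i) + samples[:, 1:] @ lin_sens
leakage = 10.0 ** log_i
print(leakage.mean(), leakage.std())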
 
Article
This paper presents the latest progress on cloud RAN (C-RAN) in the areas of centralization and virtualization. A C-RAN system centralizes the baseband processing resources into a pool and virtualizes soft baseband units on demand. The major challenges for C-RAN, including front-haul and virtualization, are analyzed and potential solutions are proposed. Extensive field trials verify the viability of various front-haul solutions, including common public radio interface compression, single-fiber bidirection, and wavelength-division multiplexing. In addition, C-RAN's facilitation of coordinated multipoint (CoMP) implementation is demonstrated, with 50%–100% uplink CoMP gain observed in field trials. Finally, a test bed is established based on a general-purpose platform with assisted accelerators. It is demonstrated that this test bed can efficiently support multiple RATs, i.e., Time-Division Duplexing Long Term Evolution, Frequency-Division Duplexing Long Term Evolution, and Global System for Mobile Communications, and achieves performance similar to that of traditional systems.
 
Top-cited authors
T.S. Rappaport
  • New York University
Shu Sun
  • New York University
Mathew Samimi
  • Polytechnic Institute of New York University
Rimma Mayzus
  • Stevens Institute of Technology
Mohsen Guizani
  • Mohamed bin Zayed University of Artificial Intelligence