Project log
Machine-type communications and large-scale information processing architectures are among the key (r)evolutionary enhancements of emerging fifth-generation (5G) mobile cellular networks. Massive data acquisition and processing will make the 5G network an ideal platform for large-scale system monitoring and control, with applications in future smart transportation, connected industry, power grids, etc. In this work, we investigate the capability of such a 5G network architecture to provide the state estimate of an underlying linear system from inputs obtained via a large-scale deployment of measurement devices. Assuming that the measurements are communicated via a densely deployed cloud radio access network (C-RAN), we formulate and solve the problem of estimating the system state from the set of signals collected at C-RAN base stations. Our solution, based on the Gaussian belief propagation (GBP) framework, allows for large-scale and distributed deployment within the emerging 5G information processing architectures. The presented numerical study demonstrates the accuracy, convergence behavior, and scalability of the proposed GBP-based solution to the large-scale state estimation problem.
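To make the message-passing structure concrete, here is a minimal Python sketch of synchronous Gaussian belief propagation applied to the normal equations of a toy linear measurement model. The function name `gabp`, the ridge term, and the random test matrices are illustrative assumptions, not the paper's C-RAN formulation.

```python
import numpy as np

def gabp(A, b, iters=300):
    """Synchronous Gaussian belief propagation for A x = b, with A symmetric
    and (here) diagonally dominant. Each directed edge i->j carries a scalar
    precision message P[i, j] and an information message h[i, j]; at
    convergence the per-variable marginal means solve A x = b."""
    n = A.shape[0]
    P = np.zeros((n, n))
    h = np.zeros((n, n))
    nbrs = [np.flatnonzero((A[i] != 0) & (np.arange(n) != i)) for i in range(n)]
    for _ in range(iters):
        P_new, h_new = np.zeros_like(P), np.zeros_like(h)
        for i in range(n):
            for j in nbrs[i]:
                # combine local potential with all incoming messages except j's
                alpha = A[i, i] + P[:, i].sum() - P[j, i]
                beta = b[i] + h[:, i].sum() - h[j, i]
                P_new[i, j] = -A[i, j] ** 2 / alpha
                h_new[i, j] = -A[i, j] * beta / alpha
        P, h = P_new, h_new
    # marginal means from the local potential plus all incoming messages
    return (b + h.sum(axis=0)) / (np.diag(A) + P.sum(axis=0))

# Toy linear state estimation: z = H x + noise, solved via the normal
# equations; the ridge term keeps A diagonally dominant, a standard
# sufficient condition for GBP convergence.
rng = np.random.default_rng(0)
H = rng.normal(size=(40, 8))
z = H @ rng.normal(size=8) + 0.01 * rng.normal(size=40)
A = H.T @ H + 40.0 * np.eye(8)
b = H.T @ z
print(np.abs(gabp(A, b) - np.linalg.solve(A, b)).max())  # ~0 if converged
```

Because each message only combines quantities local to one variable and its neighbors, the same updates can be distributed across processing nodes, which is what makes the scheme attractive for large deployments.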
Semi-supervised binary classifier learning is a fundamental machine learning task where only partial binary labels are observed, and the labels of the remaining data need to be interpolated. Leveraging recent advances in graph signal processing (GSP), binary classifier learning has been posed as a signal restoration problem regularized using a graph smoothness prior, where the undirected graph consists of a set of vertices and a set of weighted edges connecting vertices with similar features. In this paper, we improve the performance of such a graph-based classifier by simultaneously optimizing the feature weights used in the construction of the similarity graph. Specifically, we first interpolate missing labels by formulating a boolean quadratic program with a graph-signal smoothness objective and then relaxing it to a convex semi-definite program, solvable in polynomial time. Next, we optimize the feature weights used for construction of the similarity graph by reusing the smoothness objective, but with a convex set constraint for the weight vector. The resulting convex but non-differentiable problem is solved via an iterative proximal gradient descent algorithm. The two steps are solved alternately until convergence. Experimental results show that our alternating classifier / graph learning algorithm outperforms existing graph-based methods and support vector machines with various kernels.
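As a lightweight illustration of the graph smoothness prior at the heart of this approach, the sketch below interpolates missing labels with the classical continuous (harmonic) relaxation of the smoothness objective x^T L x, rather than the paper's semi-definite relaxation; the function name and the toy Gaussian-kernel similarity graph are our own assumptions.

```python
import numpy as np

def smooth_label_interpolation(W, labels):
    """Interpolate missing binary labels by minimizing the graph smoothness
    term x^T L x with the known labels held fixed (continuous relaxation).

    W      : symmetric nonnegative similarity matrix
    labels : array with +1/-1 for observed nodes and 0 for unknown nodes
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    known = labels != 0
    unknown = ~known
    # Harmonic solution: L_uu x_u = -L_ul x_l, then threshold back to +/-1.
    x_u = np.linalg.solve(L[np.ix_(unknown, unknown)],
                          -L[np.ix_(unknown, known)] @ labels[known])
    x = labels.astype(float).copy()
    x[unknown] = x_u
    return np.sign(x)

# Two noisy clusters; similarity graph from a Gaussian kernel on 1-D features.
rng = np.random.default_rng(1)
f = np.concatenate([rng.normal(-2, 0.5, 15), rng.normal(2, 0.5, 15)])
W = np.exp(-(f[:, None] - f[None, :]) ** 2)  # feature-driven edge weights
np.fill_diagonal(W, 0)
y = np.zeros(30)
y[0], y[15] = -1.0, 1.0                      # one observed label per cluster
print(smooth_label_interpolation(W, y))      # recovers the two clusters
```

The dependence of W on the features in this toy example is exactly the hook the paper exploits: the same smoothness objective, viewed as a function of the feature weights, drives the alternating graph-learning step.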
In digital signal processing, shift-invariant filters can be represented as a polynomial expansion of a shift operation, that is, the Z-transform representation. When extended to graph signal processing (GSP), this would mean that a shift-invariant graph filter can be represented as a polynomial of the adjacency (shift) matrix of the graph. However, the characteristic and minimal polynomials of the adjacency matrix must be identical for this property to hold. While it has been suggested that this condition might be ignored, since it is always possible to find a polynomial transform that maps the original adjacency matrix to another adjacency matrix satisfying the condition, this letter shows that a filter that is shift invariant with respect to the original graph may no longer be shift invariant under the modified graph, and vice versa. We introduce the notion of a "shift-enabled graph" for graphs that satisfy the aforementioned condition, and present a concrete example of a graph that is not shift-enabled together with a shift-invariant filter that is not a polynomial of the shift matrix. The result provides a deeper understanding of shift-invariant filters in GSP and shows that further investigation of shift-enabled graphs is needed to make GSP applicable to practical scenarios.
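The shift-enabled condition (minimal polynomial equal to the characteristic polynomial, i.e., a non-derogatory adjacency matrix) can be tested numerically: a matrix is non-derogatory exactly when some vector makes its Krylov matrix full rank, and a random vector suffices with probability one. The sketch below, with illustrative names, contrasts the path graph P3 (distinct eigenvalues, hence shift-enabled) with the complete graph K3 (repeated eigenvalue -1, hence not).

```python
import numpy as np

def is_shift_enabled(A, trials=5, tol=1e-8):
    """Numeric test: A is shift-enabled (minimal polynomial equals the
    characteristic polynomial) iff some vector v makes the Krylov matrix
    [v, Av, ..., A^(n-1) v] full rank; a random v works almost surely."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    for _ in range(trials):
        v = rng.normal(size=n)
        K = np.column_stack([np.linalg.matrix_power(A, k) @ v
                             for k in range(n)])
        if np.linalg.matrix_rank(K, tol=tol) == n:
            return True
    return False

# Path graph P3: eigenvalues -sqrt(2), 0, sqrt(2) are distinct.
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
# Complete graph K3: eigenvalue -1 repeats, so (the matrix being symmetric)
# the minimal polynomial has degree 2 < 3.
K3 = np.ones((3, 3)) - np.eye(3)
print(is_shift_enabled(P3), is_shift_enabled(K3))  # True False
```

For symmetric adjacency matrices the test reduces to checking for repeated eigenvalues, but the Krylov-rank form also covers directed graphs with non-diagonalizable shift matrices.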
Most current non-intrusive load monitoring (NILM) algorithms disaggregate one appliance at a time, remove the appliance contribution towards the total load, and then move on to the next appliance. On one hand, this is effective since it avoids multi-class classification, and analytical models for each appliance can be developed independently of other appliances, and thus potentially transferred to unseen houses that have different sets of appliances. On the other hand, however, these methods can significantly under- or over-estimate the total consumption since they do not minimise the difference between the measured aggregate readings and the sum of estimated individual loads. By considering this difference, we propose a post-processing approach for improving the accuracy of event-based NILM. We pose an optimisation problem to refine the original disaggregation result and propose a heuristic to solve a (combinatorial) boolean quadratic problem through relaxing zero-one constraint sets to compact zero-one intervals. We propose a method to set the regularization term, based on the appliance working power. We demonstrate high performance of the proposed post-processing method compared with the simulated annealing method and original disaggregation results, for three houses in the REFIT dataset using two state-of-the-art event-based NILM methods.
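A minimal sketch of the relax-and-round idea for a single time step, assuming known appliance working powers: the boolean on/off vector is relaxed to the box [0, 1], the measured aggregate is fit as a weighted sum of appliance powers with a penalty keeping the solution near the initial estimate, and the result is rounded back to binary. The L2 proximity penalty and the value of `lam` are illustrative stand-ins, not the paper's exact regularizer.

```python
import numpy as np
from scipy.optimize import lsq_linear

def refine_states(aggregate, powers, s0, lam=1.0):
    """Post-process one time step of event-based NILM output by solving a
    box-relaxed boolean quadratic problem, then rounding back to on/off."""
    # Stack the aggregate-matching row and the proximity rows into one
    # bounded least-squares problem: ||aggregate - p^T s||^2 + lam ||s - s0||^2.
    A = np.vstack([powers[None, :], np.sqrt(lam) * np.eye(len(powers))])
    b = np.concatenate([[aggregate], np.sqrt(lam) * s0])
    s = lsq_linear(A, b, bounds=(0.0, 1.0)).x   # relaxed zero-one intervals
    return (s > 0.5).astype(int)

powers = np.array([2000.0, 150.0, 700.0])   # appliance working powers (W)
s0 = np.array([1.0, 1.0, 0.0])              # initial disaggregation result
print(refine_states(aggregate=2850.0, powers=powers, s0=s0))  # -> [1 1 1]
```

In the toy call, the initial estimate misses the 700 W appliance; minimizing the aggregate mismatch turns it on, which is precisely the correction the post-processing step is designed to make.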
We study the detection of random signals corrupted by noise that switch their values (states) over time within a finite set of possible values, where the switchings occur at unknown points in time. We model such signals by means of a random duration model that assigns to each possible state a probability mass function controlling the statistics of the durations of that state's occurrences. Assuming two possible signal states and Gaussian noise, we derive the optimal likelihood ratio test and show that it has a computationally tractable form: a product of matrices, with the number of matrices in the product equal to the number of process observations. Each matrix in the product has dimension equal to the sum of the duration spreads of the two states, and it can be decomposed as a product of a diagonal random matrix controlled by the process observations and a sparse constant matrix which governs the transitions in the sequence of states. Using this result, we show that the Neyman-Pearson error exponent is equal to the top Lyapunov exponent of the corresponding random matrices. Using the theory of large deviations, we derive a lower bound on the error exponent. Finally, we show by means of numerical simulations that this bound is tight.
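The error-exponent result suggests a simple numerical recipe: estimate the top Lyapunov exponent of the random matrix products by renormalizing the running product at every step, so the norm never overflows. The sketch below uses a generic diagonal-times-sparse-constant matrix model with made-up parameters, not the paper's duration-model matrices.

```python
import numpy as np

def top_lyapunov(sample_matrix, dim, n=50_000, seed=0):
    """Estimate lim (1/n) log ||M_n ... M_1 v|| for i.i.d. random matrices
    M_t = sample_matrix(rng), renormalizing v each step to avoid overflow."""
    rng = np.random.default_rng(seed)
    v = np.ones(dim) / np.sqrt(dim)
    log_norm = 0.0
    for _ in range(n):
        v = sample_matrix(rng) @ v
        s = np.linalg.norm(v)
        log_norm += np.log(s)
        v /= s
    return log_norm / n

# Illustrative model: a sparse constant matrix tracking state bookkeeping,
# multiplied by a diagonal matrix of per-state Gaussian likelihood ratios
# driven by one noisy observation per step.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
mu = np.array([0.0, 0.5, 1.0])           # hypothetical per-state signal levels

def sample_matrix(rng):
    y = rng.normal()                     # one observation under noise-only
    lr = np.exp(mu * y - mu ** 2 / 2.0)  # Gaussian likelihood ratios per state
    return np.diag(lr) @ T

print(top_lyapunov(sample_matrix, dim=3))
```

The per-step renormalization is the standard trick for Lyapunov-exponent estimation; without it the matrix product under- or over-flows long before the average stabilizes.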
Low-cost depth sensors, such as Microsoft Kinect, have potential for non-contact health monitoring that is robust to ambient lighting conditions. However, captured depth images typically suffer from high acquisition noise, and hence processing them to estimate biometrics is difficult. In this paper, we propose to capture depth video of a human subject using Kinect 2.0 to estimate his/her heart rate and rhythm; as blood is pumped from the heart to circulate through the head, tiny oscillatory head motion due to Newtonian mechanics can be detected for periodicity analysis. Specifically, we first restore a captured depth video via a joint bit-depth enhancement / denoising procedure, using a graph-signal smoothness prior for regularization. Second, we track an automatically detected head region throughout the depth video to deduce 3D motion vectors. The detected vectors are fed back to the depth restoration module in a loop to ensure that the motion information in the two modules is consistent, improving the performance of both restoration and motion tracking. Third, the computed 3D motion vectors are projected onto their principal component for 1D signal analysis, composed of trend removal, band-pass filtering, and wavelet-based motion denoising. Finally, the heart rate is estimated via Welch power spectrum analysis, and the heart rhythm is computed via peak detection. Experimental results show accurate estimation of the heart rate and rhythm using our proposed algorithm, as compared to the rate and rhythm estimated by a portable oximeter.
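The 1D analysis chain described above (principal-component projection, trend removal, band-pass filtering, Welch spectrum) maps naturally onto standard tools. The Python sketch below follows those steps on a synthetic head-motion signal; the 0.8-2.0 Hz cardiac band, the Butterworth filter order, PCA via SVD, and the test signal are our assumptions, and the wavelet-denoising stage is omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend, welch

def heart_rate_from_motion(motion, fs=30.0):
    """Estimate heart rate (bpm) from a (T, 3) array of 3D head-motion
    vectors: project onto the first principal component, remove the slow
    trend, band-pass to the plausible cardiac band, pick the Welch peak."""
    x = motion - motion.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    s1 = detrend(x @ vt[0])                        # PC projection + detrend
    b, a = butter(4, [0.8, 2.0], btype="bandpass", fs=fs)
    s1 = filtfilt(b, a, s1)                        # 48-120 bpm cardiac band
    f, psd = welch(s1, fs=fs, nperseg=min(len(s1), 512))
    return 60.0 * f[np.argmax(psd)]                # spectral peak -> bpm

# Synthetic check: a 1.2 Hz (72 bpm) oscillation plus drift and noise.
fs = 30.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
motion = np.outer(np.sin(2 * np.pi * 1.2 * t), [0.2, 0.05, 1.0])
motion += 0.5 * np.outer(t / t[-1], [1.0, 1.0, 1.0])   # slow head drift
motion += 0.05 * rng.normal(size=motion.shape)          # acquisition noise
print(heart_rate_from_motion(motion, fs))               # close to 72 bpm
```

The band-pass and detrending stages matter more than they look: the slow postural drift in the synthetic signal carries more raw energy than the cardiac oscillation, so skipping them would move the spectral peak to near-DC.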