Conference Paper

Channel Charting for Beam Management in Sub-THz Systems

Article
While 5G is being tested worldwide and is anticipated to be rolled out gradually in 2019, researchers around the world are beginning to turn their attention to what 6G might be in 10+ years' time, and there are already initiatives in various countries focusing on research into possible 6G technologies. This article aims to extend the vision of 5G to more ambitious scenarios in a more distant future and speculates on the visionary technologies that could provide the step changes needed to enable 6G.
Article
This paper considers the design of optimal resource allocation policies in wireless communication systems, which are generically modeled as a functional optimization problem with stochastic constraints. These optimization problems have the structure of a learning problem in which the statistical loss appears as a constraint, motivating the development of learning methodologies to attempt their solution. To handle the stochastic constraints, training is undertaken in the dual domain. It is shown that this can be done with small loss of optimality when using near-universal learning parameterizations. In particular, since deep neural networks (DNNs) are near-universal, their use is advocated and explored. The DNNs are trained with a model-free primal-dual method that simultaneously learns a DNN parameterization of the resource allocation policy and optimizes the primal and dual variables. Numerical simulations demonstrate the strong performance of the proposed approach on a number of common wireless resource allocation problems.
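
As a deliberately simplified illustration of the primal-dual idea described above, the sketch below trains a two-parameter power policy instead of a DNN: it ascends a finite-difference gradient of the empirical Lagrangian in the primal variables and adjusts the dual variable according to the average-power constraint violation. The toy rate model, constraint, and all names are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (illustrative assumptions, not the paper's model): one link with
# fading gain h, policy p(h; theta) = softplus(theta1*h + theta0), objective
# maximize E[log2(1 + h*p)] subject to E[p] <= P_MAX.
P_MAX, LR_PRIMAL, LR_DUAL = 1.0, 0.05, 0.05
theta = np.zeros(2)   # primal (policy) parameters; a DNN would replace these
lam = 1.0             # dual variable for the average-power constraint

def policy(h, th):
    return np.log1p(np.exp(th[1] * h + th[0]))   # softplus keeps power >= 0

for _ in range(2000):
    h = rng.exponential(1.0, size=64)            # batch of channel states

    def lagrangian(th):                          # empirical Lagrangian
        p = policy(h, th)
        return np.mean(np.log2(1 + h * p) - lam * (p - P_MAX))

    # model-free flavor: finite-difference gradient ascent on the primal side
    g = np.array([(lagrangian(theta + 1e-4 * e) - lagrangian(theta - 1e-4 * e))
                  / 2e-4 for e in np.eye(2)])
    theta += LR_PRIMAL * g
    # dual update: raise lam while the average-power constraint is violated
    lam = max(0.0, lam + LR_DUAL * (np.mean(policy(h, theta)) - P_MAX))
```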
Article
We propose channel charting (CC), a novel framework in which a multi-antenna network element learns a chart of the radio geometry in its surrounding area. The channel chart captures the local spatial geometry of the area so that points that are close in space are also close in the channel chart, and vice versa. CC works in a fully unsupervised manner, i.e., learning is based only on channel state information (CSI) that is passively collected at a single point in space, but from multiple transmit locations in the area over time. The method then extracts channel features that characterize large-scale fading properties of the wireless channel. Finally, the channel charts are generated with tools from dimensionality reduction, manifold learning, and deep neural networks. The network element performing CC may be, for example, a multi-antenna base station in a cellular system, and the charted area the cell it serves. Logical relationships related to the position and movement of a transmitter, e.g., a user equipment (UE), in the cell can then be deduced directly by comparing measured radio channel characteristics to the channel chart. The unsupervised nature of CC enables a range of new applications in UE localization, network planning, user scheduling, multipoint connectivity, hand-over, cell search, user grouping, and other cognitive tasks that rely on CSI and UE movement relative to the base station, without the need for information from global navigation satellite systems.
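
A minimal sketch of the CC pipeline is given below, assuming synthetic CSI and using plain PCA as the dimensionality-reduction stage; the paper's feature design and its manifold-learning/autoencoder variants are considerably richer than this.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for passively collected CSI: N transmit locations, M BS antennas.
N, M = 500, 32
H = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))

# 1) Channel features insensitive to fast-fading phase: magnitudes of the
#    per-sample antenna cross-products (a crude large-scale-fading feature).
F = np.abs(np.einsum('nm,nk->nmk', H, H.conj())).reshape(N, -1)

# 2) Unsupervised dimensionality reduction to a 2-D chart (PCA for brevity).
F = F - F.mean(axis=0)
U, s, _ = np.linalg.svd(F, full_matrices=False)
chart = U[:, :2] * s[:2]    # row n: 2-D chart coordinate of location n
```

Points whose true positions are close should land close together in `chart`, which is the property the applications listed above (hand-over, user grouping, and so on) rely on.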
Conference Paper
In frequency division duplex massive MIMO systems, one critical challenge is that the mobiles need to feed back a large downlink channel matrix to the base station, creating large signaling overhead. Estimating a large downlink channel matrix at the mobile may also be costly in terms of power and memory consumption. Prior work addresses these issues using appropriate angle parameterization and compressed sensing techniques, but this approach involves solving a challenging, and sometimes extremely large, sparse inverse problem, which is difficult to solve to global optimality and often leads to unaffordable memory and computational costs. In this work, we propose an alternative framework that exploits the fact that double-directional channels for mmWave massive MIMO usually have low rank. The base station estimates the downlink channel by recovering a low-rank matrix, utilizing samples of the channel matrix compressed and fed back from the mobiles. This way, the mobile users can avoid performing resource-consuming tasks. In addition, the number of feedback measurements can be much smaller than the size of the channel matrix without losing channel recovery guarantees. Further, the low-rank estimation problem at the base station has a manageable size that scales gracefully with the channel size. Based on the new model, we propose two methods for channel estimation, based on iterative optimization and deep learning, respectively. Compared with the state of the art, the optimization method obtains a 10x improvement and the deep learning approach more than a 10000x improvement in computational complexity, while achieving high estimation quality in the very-low-sample regime.
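
The sketch below illustrates the recovery step under stated assumptions: random linear compressions of the channel matrix are fed back, and the base station fits a rank-constrained factorization by gradient descent (a Burer-Monteiro-style stand-in; the paper's iterative and deep-learning methods are more elaborate, and the sizes, step size, and iteration count here are untuned).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy low-rank downlink channel (sizes and rank are illustrative).
NT, NR, RANK, NMEAS = 32, 16, 2, 400
H = rng.standard_normal((NT, RANK)) @ rng.standard_normal((RANK, NR))

# Mobiles feed back a few compressed samples y_k = <A_k, H>.
A = rng.standard_normal((NMEAS, NT, NR))
y = np.einsum('kij,ij->k', A, H)

# BS recovers H as a product of thin factors, H ~ U V.
U = 0.1 * rng.standard_normal((NT, RANK))
V = 0.1 * rng.standard_normal((RANK, NR))
lr = 5e-3
for _ in range(2000):
    r = np.einsum('kij,ij->k', A, U @ V) - y     # measurement residuals
    G = np.einsum('k,kij->ij', r, A) / NMEAS     # grad of 0.5*mean(r^2) wrt UV
    U, V = U - lr * (G @ V.T), V - lr * (U.T @ G)

print(np.linalg.norm(U @ V - H) / np.linalg.norm(H))  # relative error
```

Note that NMEAS = 400 is below the NT*NR = 512 entries of the matrix, reflecting the abstract's point that the feedback can be smaller than the channel itself.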
Article
UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP as described has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
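
Since the authors distribute UMAP as the umap-learn Python package, a minimal usage example follows; the data here is a random placeholder, and the parameter values are simply the library defaults spelled out.

```python
# pip install umap-learn
import numpy as np
import umap

X = np.random.default_rng(3).standard_normal((1000, 50))  # placeholder data

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2)
embedding = reducer.fit_transform(X)   # shape (1000, 2)
```

`n_components` is unrestricted, which is the point made above about UMAP being viable as a general-purpose dimension reduction technique rather than only a 2-D/3-D visualization tool.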
Article
Massive MIMO is considered one of the key enablers of next-generation 5G networks. With a high number of antennas at the BS, both spectral and energy efficiency can be improved. Unfortunately, the downlink channel estimation overhead scales linearly with the number of antennas. This does not create complications in Time Division Duplex (TDD) systems, since the channel estimate for the uplink direction can be directly utilized for link adaptation in the downlink direction. However, such channel reciprocity is infeasible in Frequency Division Duplex (FDD) systems, where the uplink and downlink use different physical transmission channels. With the aim of reducing the amount of Channel State Information (CSI) feedback in FDD systems, the promising method of two-stage beamforming transmission was introduced. The performance of this transmission scheme is, however, highly influenced by the user grouping and selection mechanisms. In this paper, we first introduce a new similarity measure coupled with a novel clustering technique to achieve appropriate user partitioning. We also use graph theory to develop a low-complexity group scheduling scheme that outperforms currently existing methods in both sum rate and throughput fairness. This performance gain is demonstrated through computer simulations.
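
As a hedged sketch of the grouping stage only (the paper's similarity measure, clustering technique, and graph-based scheduler are its own contributions and are not reproduced here), the snippet below groups users by chordal distance between their dominant channel subspaces, a common proxy for second-order channel similarity.

```python
import numpy as np

rng = np.random.default_rng(4)

K, M, D, G = 20, 32, 2, 4      # users, BS antennas, subspace dim, groups
subspaces = []
for _ in range(K):
    Q, _ = np.linalg.qr(rng.standard_normal((M, 8)))  # toy per-user channel
    subspaces.append(Q[:, :D])                        # dominant D-dim subspace

def chordal(U1, U2):
    # Chordal distance between orthonormal bases: D - ||U1^T U2||_F^2.
    return D - np.linalg.norm(U1.T @ U2, 'fro') ** 2

dist = np.array([[chordal(subspaces[i], subspaces[j]) for j in range(K)]
                 for i in range(K)])

# Greedy partitioning: seed a group, then pull in the most similar users,
# so each group shares a compatible outer (statistical) beamformer.
unassigned, groups = set(range(K)), []
while unassigned:
    seed = unassigned.pop()
    near = sorted(unassigned, key=lambda j: dist[seed][j])[:K // G - 1]
    groups.append([seed] + near)
    unassigned -= set(near)
```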
Conference Paper
In ultra-dense heterogeneous networks, caching popular contents at small base stations is considered an effective way to reduce latency and redundant data transmission. Optimization of caching placement/replacement and content delivery can be computationally heavy, especially for large-scale networks. The provision of both time-efficient and high-quality solutions is challenging. Conventional iterative optimization methods, either optimal or heuristic, typically require a large number of iterations to achieve satisfactory performance, resulting in considerable computational delay. This may limit their applications in practical network operations where online decisions have to be made. In this paper, we provide a viable alternative to the conventional methods for caching optimization, from a deep learning perspective. The idea is to train the optimization algorithms through a deep neural network (DNN) in advance, instead of directly applying them in real-time caching or scheduling. This allows significant complexity reduction in the delay-sensitive operation phase, since the computational burden is shifted to the DNN training phase. Numerical results demonstrate that the DNN is highly computationally efficient. By training the designed DNN over a massive number of instances, the solution quality for energy-efficient content delivery can be progressively approximated to around 90% of the optimum.
Article
For decades, optimization has played a central role in addressing wireless resource management problems such as power control and beamformer design. However, these algorithms often require a considerable number of iterations for convergence, which poses challenges for real-time processing. In this work, we propose a new learning-based approach for wireless resource management. The key idea is to treat the input and output of a resource allocation algorithm as an unknown non-linear mapping and to use a deep neural network (DNN) to approximate it. If the non-linear mapping can be learned accurately and effectively by a DNN of moderate size, then such a DNN can be used for resource allocation in almost real time, since passing the input through a DNN to get the output requires only a small number of simple operations. In this work, we first characterize a class of 'learnable algorithms' and then design DNNs to approximate some algorithms of interest in wireless communications. We use extensive numerical simulations to demonstrate the superior ability of DNNs to approximate two considerably complex algorithms designed for power allocation in wireless transmit signal design, while giving orders-of-magnitude speedup in computation time.
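
The sketch below shows the supervised "learn the mapping" recipe in its simplest form: label training channels with any conventional allocator (the toy teacher below is a stand-in, not either of the paper's algorithms) and fit a small network to the input-output pairs; at deployment a single forward pass replaces the iterative solver.

```python
import torch
import torch.nn as nn

K = 10                                    # number of links (illustrative)

def teacher(h):                           # placeholder allocator: all power
    p = torch.zeros_like(h)               # to the strongest link
    p[torch.arange(h.shape[0]), h.argmax(dim=1)] = 1.0
    return p

net = nn.Sequential(nn.Linear(K, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, K), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):                     # offline training phase
    h = torch.rand(256, K)                # batch of channel realizations
    loss = nn.functional.mse_loss(net(h), teacher(h))
    opt.zero_grad(); loss.backward(); opt.step()

p_fast = net(torch.rand(1, K))            # near-real-time deployment phase
```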
Article
Millimeter wave (mmWave) MIMO will likely use hybrid analog and digital precoding, which uses a small number of RF chains to reduce the energy consumption of mixed-signal components such as analog-to-digital converters, as well as baseband processing complexity. However, most hybrid precoding techniques consider a fully-connected architecture requiring a large number of phase shifters, which is also energy-intensive. In this paper, we focus on the more energy-efficient hybrid precoding with a sub-connected architecture, and propose a successive interference cancellation (SIC)-based hybrid precoding with near-optimal performance and low complexity. Inspired by the idea of SIC for multi-user signal detection, we first propose to decompose the total achievable rate optimization problem with non-convex constraints into a series of simple sub-rate optimization problems, each of which considers only one sub-antenna array. Then, we prove that maximizing the achievable sub-rate of each sub-antenna array is equivalent to simply seeking a precoding vector sufficiently close (in terms of Euclidean distance) to the unconstrained optimal solution. Finally, we propose a low-complexity algorithm to realize SIC-based hybrid precoding, which avoids the need for singular value decomposition (SVD) and matrix inversion. Complexity evaluation shows that SIC-based hybrid precoding is only about 10% as complex as the recently proposed spatially sparse precoding in typical mmWave MIMO systems. Simulation results verify the near-optimal performance of SIC-based hybrid precoding.
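
A loose sketch of the greedy, one-sub-array-at-a-time idea is given below under stated assumptions: each sub-array's constant-modulus precoder takes the phases of the leading right singular vector of the current effective channel, whose contribution is then deflated before the next sub-array is handled. Note that the paper's actual low-complexity recursion avoids SVDs entirely and is not reproduced here; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

NR, NSUB, MS = 4, 4, 8          # rx antennas, sub-arrays, antennas per array
H = (rng.standard_normal((NR, NSUB * MS)) +
     1j * rng.standard_normal((NR, NSUB * MS))) / np.sqrt(2)

F = np.zeros((NSUB * MS, NSUB), dtype=complex)  # block-diagonal analog precoder
Heff = H.copy()
for s in range(NSUB):
    Hs = Heff[:, s * MS:(s + 1) * MS]
    _, _, Vh = np.linalg.svd(Hs)
    v1 = Vh[0].conj()                           # leading right singular vector
    f = np.exp(1j * np.angle(v1)) / np.sqrt(MS) # phase-only, constant modulus
    F[s * MS:(s + 1) * MS, s] = f
    g = Hs @ f                                  # effective receive signature
    # SIC-style deflation: remove this stream's contribution from the channel
    Heff = Heff - np.outer(g, g.conj() @ Heff) / np.linalg.norm(g) ** 2
```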
Article
Recent works on massive multiple-input multiple-output (MIMO) have shown that a potential breakthrough in capacity gains can be achieved by deploying a very large number of antennas at the base station. In order to achieve optimal performance of massive MIMO systems, accurate transmit-side channel state information (CSI) should be available at the base station. While transmit-side CSI can be obtained by employing channel reciprocity in time division duplexing (TDD) systems, explicit feedback of CSI from the user terminal to the base station is needed in frequency division duplexing (FDD) systems. In this paper, we propose an antenna-grouping-based feedback reduction technique for FDD-based massive MIMO systems. The proposed algorithm, dubbed antenna group beamforming (AGB), maps multiple correlated antenna elements to a single representative value using pre-designed patterns. The proposed method uses a header within the overall feedback resources to select a suitable group pattern and the payload to quantize the reduced-dimension channel vector. Simulation results show that the proposed method achieves a significant feedback overhead reduction over the conventional approach of vector-quantizing the whole channel vector under the same target sum-rate requirement.
Conference Paper
The correlation matrix distance (CMD), a previously introduced measure for characterizing non-stationary MIMO channels, is analyzed with regard to its capability to predict performance degradation in MIMO transmission schemes. For this purpose, we consider the performance reduction that a prefiltering MIMO transmission scheme suffers due to non-stationary changes of the MIMO channel. We show that changes in the spatial structure of the channel corresponding to high values of the CMD also show up as a significant reduction in performance of the considered MIMO transmission scheme. Such significant changes in the spatial structure of the mobile radio channel are shown to appear even for small movements within an indoor environment. Stationarity can therefore not always be assumed for indoor MIMO radio channels.
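
The CMD itself is compact enough to state directly: for two spatial correlation matrices R1 and R2 it is d = 1 - tr(R1 R2) / (||R1||_F ||R2||_F), which is 0 when the spatial structures are identical up to scaling and approaches 1 when they are maximally different. A direct NumPy transcription:

```python
import numpy as np

def cmd(R1, R2):
    """Correlation matrix distance between two spatial correlation matrices."""
    num = np.trace(R1 @ R2).real   # trace is real for Hermitian R1, R2
    return 1.0 - num / (np.linalg.norm(R1, 'fro') * np.linalg.norm(R2, 'fro'))
```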
Article
In this paper, deep power control (DPC), the first transmit power control framework based on a convolutional neural network (CNN), is proposed. In DPC, the transmit power control strategy to maximize either spectral efficiency (SE) or energy efficiency (EE) is learned by means of a CNN. While conventional power control schemes require a considerable number of computations, in DPC the transmit power of users can be determined using far fewer computations, enabling real-time processing. We also propose a form of DPC that can be performed in a distributed manner with local channel state information (CSI), allowing the signaling overhead to be greatly reduced. Through simulations, we show that DPC can achieve almost the same or even higher SE and EE than a conventional power control scheme, with a much lower computation time.
Article
We consider detection based on deep learning and show that it is possible to train detectors that perform well without any knowledge of the underlying channel models. Moreover, when the channel model is known, we demonstrate that it is possible to train detectors that do not require channel state information (CSI). In particular, a technique we call the sliding bidirectional recurrent neural network (SBRNN) is proposed for detection where, after training, the detector estimates the data in real time as the signal stream arrives at the receiver. We evaluate this algorithm, as well as other neural network (NN) architectures, using the Poisson channel model, which is applicable to both optical and chemical communication systems. In addition, we evaluate the performance of this detection method applied to data sent over a chemical communication platform, where the channel is difficult to model analytically. We show that the SBRNN is computationally efficient and can perform detection under various channel conditions without knowing the underlying channel model. We also demonstrate that the bit error rate (BER) performance of the proposed SBRNN detector is better than that of a Viterbi detector with imperfect CSI, as well as that of other previously proposed NN detectors.
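
A minimal bidirectional recurrent detector in this spirit is sketched below (PyTorch, one bit logit per received symbol); the sliding-window estimation scheme and the Poisson-channel specifics of the SBRNN are omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class BiRNNDetector(nn.Module):
    def __init__(self, feat_dim=1, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)     # one bit logit per symbol

    def forward(self, x):                       # x: (batch, seq, feat_dim)
        h, _ = self.rnn(x)                      # h: (batch, seq, 2*hidden)
        return self.out(h).squeeze(-1)          # (batch, seq) logits

det = BiRNNDetector()
rx = torch.randn(8, 100, 1)                     # placeholder received samples
logits = det(rx)                                # train with BCEWithLogitsLoss
```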
Article
This paper gives an overview of the majorization-minimization (MM) algorithmic framework, which can provide guidance in deriving problem-driven algorithms with low computational cost. A general introduction to MM is presented, including a description of the basic principle and its convergence results. Extensions, acceleration schemes, and connections to other algorithmic frameworks are also covered. To bridge the gap between theory and practice, upper bounds for a large number of basic functions, derived from the Taylor expansion, convexity, and special inequalities, are provided as ingredients for constructing surrogate functions. With the prerequisites established, the application of MM to specific problems is illustrated through a wide range of examples in signal processing, communications, and machine learning.
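
As a one-equation taste of the surrogate constructions such an overview catalogs: if f has an L-Lipschitz gradient, the descent lemma supplies a quadratic majorizer at the current iterate x_t,

```latex
g(x \mid x_t) \;=\; f(x_t) + \nabla f(x_t)^{\top}(x - x_t)
               + \tfrac{L}{2}\,\lVert x - x_t \rVert_2^2 \;\ge\; f(x),
\qquad g(x_t \mid x_t) = f(x_t),
```

and minimizing g(x | x_t) over x reproduces the gradient step x_{t+1} = x_t - (1/L) \nabla f(x_t), showing gradient descent to be an instance of MM.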
Article
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has low memory requirements, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The method exhibits invariance to diagonal rescaling of the gradients by adapting to the geometry of the objective function. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. We demonstrate that Adam works well in practice when compared experimentally to other stochastic optimization methods.
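
The update itself is short enough to state; the following NumPy transcription follows the published algorithm (exponentially decayed first- and second-moment estimates with bias correction).

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step; m, v are running moment estimates and t counts from 1."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment (uncentered) estimate
    m_hat = m / (1 - b1 ** t)              # bias corrections for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The division by sqrt(v_hat) is what yields the invariance to diagonal rescaling of the gradients mentioned above: scaling a gradient coordinate by c scales m by c and v by c^2, leaving the step unchanged.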
Article
The problem of dimension reduction is introduced as a way to overcome the curse of dimensionality when dealing with vector data in high-dimensional spaces and as a modelling tool for such data. It is defined as the search for a low-dimensional manifold that embeds the high-dimensional data. A classification of dimension reduction problems is proposed. A survey of several techniques for dimension reduction is given, including principal component analysis, projection pursuit and projection pursuit regression, principal curves, and methods based on topologically continuous maps, such as Kohonen's maps or the generative topographic mapping. Neural network implementations of several of these techniques are also reviewed, such as the projection pursuit learning network and the BCM neuron with an objective function. Several appendices complement the mathematical treatment of the main text.
Article
Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network. Algorithms for such networks need to be robust against changes in topology. Additionally, nodes in sensor networks operate under limited computational, communication, and energy resources. These constraints have motivated the design of gossip algorithms: schemes which distribute the computational burden and in which a node communicates with a randomly chosen neighbor. We analyze the averaging problem under the gossip constraint for an arbitrary network graph, and find that the averaging time of a gossip algorithm depends on the second largest eigenvalue of a doubly stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm then corresponds to minimizing this eigenvalue, which can be posed as a semidefinite program.
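
A toy simulation makes the averaging behavior concrete: under randomized pairwise gossip on a ring (the topology and step count are arbitrary choices here), every node's value converges to the global average, which each pairwise exchange preserves.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 50
x = rng.standard_normal(n)                 # initial values at the nodes
target = x.mean()                          # gossip preserves the average

for _ in range(20000):
    i = int(rng.integers(n))
    j = (i + rng.choice([-1, 1])) % n      # a uniformly chosen ring neighbor
    x[i] = x[j] = 0.5 * (x[i] + x[j])      # pairwise averaging step

print(np.abs(x - target).max())            # max deviation from the true mean
```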