Enabling high data-rate uplink cellular connectivity for drones is challenging, since a flying drone is likely to have line-of-sight propagation to base stations that terrestrial UEs do not. When drones transmit at high data rates (e.g., for video streaming), this can cause uplink inter-cell interference and degrade uplink performance for neighboring ground UEs. We address this problem from a cellular operator's standpoint to support drone-sourced video streaming of a point of interest. We propose a low-complexity, closed-loop control system for Open-RAN architectures that jointly optimizes the drone's location in space and its transmission directionality to support video streaming while minimizing its uplink interference impact on the network. We prototype and experimentally evaluate the proposed control system on a dedicated outdoor multi-cell RAN testbed, in the first measurement campaign of its kind. Furthermore, we perform a large-scale simulation assessment of the proposed control system using the actual cell deployment topologies and cell load profiles of a major US cellular carrier. The proposed Open-RAN control scheme achieves an average 19% network capacity gain over traditional BS-constrained control solutions while satisfying the drone's application data-rate requirements (e.g., streaming an HD video).
With the emergence of 5G, network densification, and richer, more demanding applications, the Radio Access Network (RAN)---a key component of the cellular network infrastructure---will become increasingly complex. To tackle this complexity, the RAN must automate its deployment, optimization, and operation while leveraging novel data-driven technologies, ultimately improving end-user Quality of Experience (QoE). In this article, we disaggregate the traditional monolithic RAN control plane and introduce a RAN Intelligent Controller (RIC) platform that decouples the control and data planes of the RAN, driving an intelligent and continuously evolving radio network by fostering network openness and empowering network intelligence with AI-enabled applications. We provide functional and software architectures of the RIC and discuss its design challenges. We elaborate on how the RIC can enable near-real-time network optimization in 5G for the Dual Connectivity use case using machine-learning control loops.
Mobile devices aggregate various types of data, from sensitive corporate documents to personal content. While users want to access all this content on a single device, via a unified user experience and through any mobile app, protecting this data is challenging. Even though different data types have different security and privacy needs, mobile operating systems include few, if any, mechanisms for fine-grained data protection. We present SWIRLS, an Android-based mobile OS that provides a policy-based information-flow data protection abstraction for mobile apps to support BYOD (bring-your-own-device) use cases. SWIRLS attaches security policies to individual pieces of data and enforces these policies as the data flows through the device. Unlike current BYOD solutions such as VMs, which introduce duplication overhead, SWIRLS provides a single environment in which the same applications access content from different security contexts, while monitoring for malicious data leakage. SWIRLS leverages a two-level hybrid information-flow tracking (IFT) mechanism: fine-grained tracking of intra-application flows, and a higher-level, process-based IFT for application isolation. Our evaluation presents BYOD data-protection use cases such as limiting document sharing, preventing leakage based on document classification, and security policies based on geo-fencing. SWIRLS imposes only low battery-consumption and performance overhead.
The fifth generation of cellular networks (5G) will rely on edge cloud deployments to satisfy the ultra-low latency demand of future applications. In this paper, we argue that such deployments can also be used to enable advanced data-driven and Machine Learning (ML) applications in mobile networks. We propose an edge-controller-based architecture for cellular networks and evaluate its performance with real data from hundreds of base stations of a major U.S. operator. In this regard, we provide insights on how to dynamically cluster and associate base stations and controllers according to the global mobility patterns of the users. We then describe how the controllers can be used to run ML algorithms that predict the number of users in each base station, and a use case in which these predictions are exploited by a higher-layer application to route vehicular traffic according to network Key Performance Indicators (KPIs). We show that prediction accuracy improves when the ML algorithms rely on the controllers' view, and hence on the spatial correlation introduced by user mobility, compared to predictions based only on the local data of each individual base station.
Real graphs often contain edge and node weights, representing, for instance, penalty, distance or uncertainty. We study the problem of keyword search over weighted node-labeled graphs, in which a query consists of a set of keywords and an answer is a subgraph whose nodes contain the keywords. We evaluate answers using three ranking strategies: optimizing edge weights, optimizing node weights, and a bi-objective combination of both node and edge weights. We prove that optimizing node weights and the bi-objective function are NP-hard. We propose an algorithm that optimizes edge weights and has an approximation ratio of two for the unique node enumeration paradigm. To optimize node weights and the bi-objective function, we propose transformations that distribute node weights onto the edges. We then prove that our transformations allow our algorithm to also optimize node weights and the bi-objective function with the same approximation ratio of two. Notably, the proposed transformations are compatible with existing algorithms that only optimize edge weights. We empirically show that in many natural examples, incorporating node weights (both keyword holders and middle nodes) produces more relevant answers than ranking methods based only on edge weights. Extensive experiments over real-life datasets verify the effectiveness and efficiency of our solution.
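As a toy illustration of the idea of distributing node weights onto edges for a bi-objective cost, here is one simple blend; the specific formula below is our own example for exposition, not necessarily the transformation proposed in the paper:

```python
def distribute_node_weights(edges, node_w, lam=0.5):
    """Fold node weights into edge weights for a bi-objective cost.

    Illustrative only (not necessarily the paper's exact transformation):
    each edge absorbs half the weight of each endpoint, blended with its
    own edge weight by lam in [0, 1] (lam=1 -> pure edge weights,
    lam=0 -> pure node weights).
    """
    return {
        (u, v): lam * w + (1 - lam) * 0.5 * (node_w[u] + node_w[v])
        for (u, v), w in edges.items()
    }
```

Once the weights are folded onto the edges this way, an edge-weight-only algorithm can be run unchanged, which mirrors the compatibility property the abstract highlights.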
Adaptive bitrate streaming (ABR) has become the de facto technique for video streaming over the Internet. Despite a flurry of techniques, achieving high-quality ABR streaming over cellular networks remains a tremendous challenge. ABR streaming can be naturally modeled as a control problem. There has been some initial work on using PID, a widely used feedback control technique, for ABR streaming. Existing studies, however, either use PID control directly without fully considering the special requirements of ABR streaming, leading to suboptimal results, or conclude that PID is not a suitable approach. In this paper, we take a fresh look at PID-based control for ABR streaming. We design a framework called PIA (PID-control based ABR streaming) that strategically leverages PID control concepts and incorporates several novel strategies to account for the various requirements of ABR streaming. We evaluate PIA using simulation based on LTE network traces, as well as a real DASH implementation. The results demonstrate that PIA outperforms state-of-the-art schemes in providing high average bitrate with significantly lower bitrate changes (reduction up to 40%) and stalls (reduction up to 85%), while incurring very small runtime overhead. We further design PIA-E to improve the performance of PIA in the initial playback phase.
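As background on the control concept PIA builds on, here is a minimal, generic discrete PID loop for bitrate selection; the gains, the buffer target, and the throughput-scaling rule are illustrative assumptions, not PIA's actual design:

```python
class PidBitrateController:
    """Minimal sketch of PID-style bitrate control (not PIA itself):
    steer the playback buffer toward a target level by scaling the
    bitrate chosen for the next chunk. All gains are hypothetical."""

    def __init__(self, kp=0.6, ki=0.01, kd=0.3, target_buffer=15.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_buffer      # desired buffer level, seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def next_bitrate(self, buffer_level, throughput_estimate, ladder):
        """Pick a bitrate from `ladder` (ascending, kbps) given the
        current buffer level (s) and estimated throughput (kbps)."""
        error = buffer_level - self.target   # positive -> room to go up
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Scale the throughput estimate by the control signal, then
        # clamp the choice to the highest feasible rung of the ladder.
        desired = throughput_estimate * (1.0 + 0.05 * u)
        feasible = [r for r in ladder if r <= desired]
        return feasible[-1] if feasible else ladder[0]
```

A drained buffer drives the controller toward lower rungs, and a full buffer toward higher ones; PIA's contribution, per the abstract, is in adapting this loop to ABR's specific requirements rather than applying it directly.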
Traffic for internet video streaming has been increasing rapidly and is expected to grow further with higher-definition video and emerging applications such as 360-degree video and augmented/virtual reality. While efficient management of heterogeneous cloud resources to optimize the quality of experience is important, existing work in this problem space has often left out important factors. In this paper, we present a model of a representative present-day system architecture for video streaming applications, typically composed of a centralized origin server and several CDN sites. Our model comprehensively considers the following factors: limited caching space at the CDN sites, allocation of a CDN for a video request, choice of different ports from the CDN, and central storage and bandwidth allocation. With this model, we focus on minimizing a performance metric, the stall duration tail probability (SDTP), and present a novel yet efficient algorithm to solve the formulated optimization problem. Theoretical bounds with respect to the SDTP metric are also analyzed and presented. Our extensive simulation results demonstrate that the proposed algorithms significantly improve the SDTP metric compared to the baseline strategies. A small-scale video streaming system implementation in a real cloud environment further validates our results.
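The SDTP metric itself is straightforward to estimate empirically as the fraction of requests whose total stall time exceeds a threshold; a minimal sketch (the function name is ours):

```python
def sdtp(stall_durations, sigma):
    """Empirical stall duration tail probability: the fraction of
    requests whose total stall time exceeds sigma seconds.
    `stall_durations` is one total stall time per video request."""
    if not stall_durations:
        return 0.0
    return sum(d > sigma for d in stall_durations) / len(stall_durations)
```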
Concern about how to aggregate sensitive user data without compromising individual privacy is a major barrier to greater availability of data. Differential privacy has emerged as an accepted model for releasing sensitive information while giving a statistical guarantee of privacy. Many different algorithms are possible to address different target functions. We focus on the core problem of count queries and seek to design mechanisms to release data associated with a group of n individuals. Prior work has focused on designing mechanisms by raw optimization of a loss function, without regard to the consequences for the results. This can lead to mechanisms with undesirable properties, such as never reporting some outputs (gaps) and overreporting others (spikes). We tame these pathological behaviors by introducing a set of desirable properties that mechanisms can obey. Any combination of these can be satisfied by solving a linear program (LP) that minimizes a cost function, with constraints enforcing the properties. We focus on a particular cost function, provide explicit constructions that are optimal for certain combinations of properties, and show a closed form for their cost. In the end, there are only a handful of distinct optimal mechanisms to choose between: one is the well-known (truncated) geometric mechanism; the second is a novel mechanism that we introduce here; and the remainder are found as the solutions to particular LPs. These all avoid the bad behaviors we identify. We demonstrate in a set of experiments on real and synthetic data which mechanism is preferable in practice, for different combinations of data distributions, constraints, and privacy parameters.
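The truncated geometric mechanism mentioned above has a simple closed-form output distribution: two-sided geometric noise with ratio alpha = exp(-epsilon) is added to the true count, and the tail mass is folded onto the boundary outputs 0 and n. A minimal sketch (function names are ours):

```python
import math
import random

def truncated_geometric_pmf(true_count, n, epsilon):
    """PMF over outputs 0..n of the (truncated) geometric mechanism.
    Interior outputs get mass proportional to alpha^|k - true_count|;
    the geometric tails are folded onto the boundary outputs 0 and n."""
    alpha = math.exp(-epsilon)
    pmf = []
    for k in range(n + 1):
        if k == 0:
            pmf.append(alpha ** true_count / (1 + alpha))
        elif k == n:
            pmf.append(alpha ** (n - true_count) / (1 + alpha))
        else:
            pmf.append((1 - alpha) / (1 + alpha) * alpha ** abs(k - true_count))
    return pmf

def release_count(true_count, n, epsilon, rng=random):
    """Draw one differentially private release of the count."""
    pmf = truncated_geometric_pmf(true_count, n, epsilon)
    return rng.choices(range(n + 1), weights=pmf, k=1)[0]
```

Note that every output in 0..n has strictly positive probability, so this mechanism exhibits neither the gaps nor the spikes the abstract warns about.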
LTE evolved Multimedia Broadcast/Multicast Service (eMBMS) is an attractive solution for video delivery to very large groups in crowded venues. However, the deployment and management of eMBMS systems are challenging, due to the lack of real-time feedback from the user equipment (UEs). Therefore, we present the Dynamic Monitoring (DyMo) system for low-overhead feedback collection. DyMo leverages eMBMS to broadcast stochastic group instructions to all UEs. These instructions indicate the reporting rates as a function of the observed quality of service (QoS). This simple feedback mechanism collects only a very limited number of QoS reports from the UEs. The reports are used for network optimization, thereby ensuring high QoS to the UEs. We present the design aspects of DyMo and evaluate its performance analytically and via extensive simulations. Specifically, we show that DyMo infers the optimal eMBMS settings with extremely low overhead while meeting strict QoS requirements under different UE mobility patterns and in the presence of network component failures. For instance, DyMo can detect the eMBMS signal-to-noise ratio experienced by the 0.1th percentile of the UEs with a root mean square error of 0.05% with only 5 to 10 reports per second, regardless of the number of UEs.
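To make the stochastic-group-instruction idea concrete, here is a sketch of a UE-side reporting rule; the instruction format and names below are our own assumptions for illustration, not DyMo's actual protocol:

```python
import random

def should_report(observed_snr_db, instruction, rng=random):
    """UE-side decision rule (illustrative, not DyMo's exact scheme):
    a broadcast instruction maps SNR ranges to reporting probabilities,
    and each UE reports independently with the matching probability.

    `instruction` is assumed to be a list of (snr_threshold_db, prob)
    pairs with ascending thresholds; UEs with SNR above every
    threshold (good QoS) never report.
    """
    for threshold, prob in instruction:
        if observed_snr_db <= threshold:
            return rng.random() < prob
    return False
```

Because each UE decides independently, choosing the probability as roughly (target reports) / (estimated group size) keeps the expected report volume bounded no matter how many UEs are in the venue, which is how a handful of reports per second can suffice.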
Objective: The current state of the art for compartment modeling of dynamic PET data can be described as a two-stage approach. In Stage 1, individual estimates of kinetic parameters are obtained by fitting models using standard techniques, such as nonlinear least squares (NLS), to each individual's data one subject at a time. Population-level effects, such as the difference between diagnostic groups, are analyzed in Stage 2 using standard statistical methods, treating the individual estimates as if they were observed data. While this approach is generally valid, fitting data across subjects simultaneously makes it possible to increase the efficiency and precision of the analysis, allow more complex models to be fit, and permit parameter-specific investigation. We explore the application of nonlinear mixed-effects (NLME) models for estimation and inference in this setting. Methods: In the NLME framework, subjects are modeled simultaneously through the inclusion of subject-level random effects for each kinetic parameter; meanwhile, population parameters are estimated directly in a joint model. Results: Simulation results indicate that NLME outperforms the two-stage approach in estimating group-level effects and also has improved power to detect differences across groups. We applied our NLME approach to clinical PET data and found effects not detected by the two-stage approach. Conclusion: The proposed NLME approach is more accurate, and correspondingly more powerful, than the two-stage approach in compartment modeling of PET data. Significance: The NLME method can broaden the methodological scope of PET modeling because of its efficiency and stability.
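The NLME hierarchy described above can be written in standard mixed-model notation; the symbols below are generic (not tied to a specific compartment model or to the paper's exact parameterization):

```latex
% Within-subject model: observation j of subject i, with kinetic model f
y_{ij} = f(t_{ij};\, \theta_i) + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

% Between-subject model: subject-specific kinetic parameters combine
% fixed population effects (e.g., a diagnostic-group covariate X_i)
% with subject-level random effects b_i
\theta_i = \theta + X_i \beta + b_i,
  \qquad b_i \sim \mathcal{N}(0, \Omega)
```

Estimating $\theta$, $\beta$, $\Omega$, and $\sigma^2$ jointly across subjects is what distinguishes NLME from the two-stage approach, where each $\theta_i$ is estimated in isolation and then treated as observed data.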
Causal consistency has emerged as an attractive middle ground for architecting cloud storage systems, as it allows for high availability and low latency while supporting stronger-than-eventual-consistency semantics. However, causally consistent cloud storage systems have seen limited deployment in practice. A key factor is that these systems employ full replication of all the data in all the data centers (DCs), incurring high cost. A simple extension of current causal systems to support partial replication by clustering DCs into rings incurs availability and latency problems. We propose Karma, the first system to enable causal consistency for partitioned data stores while achieving the cost advantages of partial replication without the availability and latency problems of the simple extension. Our evaluation with 64 servers emulating 8 geo-distributed DCs shows that Karma (i) incurs much lower cost than a fully replicated causal store (due to the lower replication factor); and (ii) offers higher availability and better performance than the above partial-replication extension at similar cost.
Erasure-coded storage systems have gained considerable adoption recently since they can provide the same level of reliability with significantly lower storage overhead compared to replicated systems. However, background traffic in such systems – e.g., repair, rebalance, backup, and recovery traffic – often has large volume and consumes significant network resources. Independently scheduling such tasks and selecting their sources can easily create interference among data flows, causing severe deadline violations. We show that well-known heuristic scheduling algorithms fail to consider important constraints and thus deliver unsatisfactory performance. In this paper, we argue that an optimal scheduling algorithm, which aims to maximize the number of background tasks completed before their deadlines, must simultaneously consider task deadlines, network topology, chunk placement, and time-varying resource availability. We first show that the corresponding optimization problem is NP-hard. We then propose a novel algorithm, called Linear Programming for Selected Tasks (LPST), to maximize the number of successful tasks and improve overall utilization of the datacenter network. It jointly schedules tasks and selects their sources based on a notion of Remaining Time Flexibility, which measures the slackness of a task's starting time. We evaluate the efficacy of our algorithm using extensive simulations and validate the results with experiments in a real cloud environment. Our results show that, under certain scenarios, LPST can perform 7x to 10x better than heuristics that blindly treat the infrastructure as a collection of homogeneous resources, and 21.7 to 65.9 percent better than algorithms that take only the network topology into account.
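The Remaining Time Flexibility notion can be sketched as the slack between now and the latest time a task can still start and finish by its deadline; the field names and the greedy ordering below are our own illustration, not the full LPST algorithm:

```python
from dataclasses import dataclass

@dataclass
class BackgroundTask:
    """A background transfer task (fields are illustrative)."""
    name: str
    deadline: float       # absolute time by which the task must finish
    est_duration: float   # estimated transfer time at available bandwidth

def remaining_time_flexibility(task, now):
    """Slack of the task's start time: latest feasible start minus now.
    Negative means the task can no longer meet its deadline."""
    return task.deadline - task.est_duration - now

def order_by_urgency(tasks, now):
    """Drop infeasible tasks; schedule the least flexible first."""
    feasible = [t for t in tasks if remaining_time_flexibility(t, now) >= 0]
    return sorted(feasible, key=lambda t: remaining_time_flexibility(t, now))
```

LPST itself, per the abstract, goes beyond such a greedy ordering by jointly optimizing task selection and source selection against topology, chunk placement, and time-varying resource availability.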
The Chord distributed hash table (DHT) is well-known and often used to implement peer-to-peer systems. Chord peers find other peers, and access their data, through a ring-shaped pointer structure in a large identifier space. Despite claims of proven correctness, i.e., eventual reachability, previous work has shown that the Chord ring-maintenance protocol is not correct under its original operating assumptions. Previous work has not, however, discovered whether Chord could be made correct under the same assumptions. The contribution of this paper is to provide the first specification of correct operations and initialization for Chord, an inductive invariant that is necessary and sufficient to support a proof of correctness, and two independent proofs of correctness. One proof is informal and intuitive, and applies to networks of any size. The other proof is based on a formal model in Alloy, and uses fully automated analysis to prove the assertions for networks of bounded size. The two proofs complement each other in several important ways.
A method of interacting with data at a wireless communication device is provided. The wireless communication device has access to a first set of capabilities. Data is received at the wireless communication device via a wireless transmission. The data represents visual content that is viewable via a display device. A graphical user interface, including a delayed action selector, is provided via the display device. An input is received within a limited period of time after displaying the delayed action selector. The input is associated with a command to delay execution of an action with respect to the data until the wireless communication device has access to a second set of capabilities. The action is not supported by the first set of capabilities but is supported by the second set of capabilities. An indication of receipt of the input is provided at the wireless communication device.
Methods, devices, and computer program products for providing instant messaging in conjunction with an audiovisual, video, or audio program are provided. The methods include providing an audiovisual, video, or audio program to a user. Viewer/listener input is received requesting activation of a program-based instant messaging function. A viewer/listener identifier corresponding to the viewer/listener is associated with a program identifier that uniquely identifies the audiovisual, video, or audio program being provided to the user to thereby generate a program viewer/listener record. The program viewer/listener record is transmitted to an electronic database. A list of other users who are viewing or listening to the program in addition to the viewer/listener is acquired from the electronic database. The list of other users is transmitted to the viewer/listener.
A system that incorporates teachings of the present disclosure may include, for example, a set top box (STB) comprising a controller programmed to receive measurement data stored in a first wireless device serving as a portable monitoring gateway that collects subscriber data, and to store and analyze that measurement data at the STB, using the measurement data received from the first wireless device and, optionally, stored measurement data from a remote server or a local storage space, to provide analyzed results. Other embodiments are disclosed.
A Virtual Single Account (VSA) system and method that provides a mobile user with automatic authentication and connection to a remote network via local access networks with a single password, where the local access networks may be independent of the remote network. A mobile user has a single authentication credential for one VSA that is utilized by a VSA client installed on a mobile computing device. The VSA client automatically authenticates and connects the user's mobile device to the current local access network and to the target remote network, such as the user's office network. All authentication credentials are encrypted using a key generated from the user's single VSA password. The VSA client derives the key from the submitted VSA password and decrypts the authentication credentials required to connect the mobile device to the current local access network and thereafter to the office network.
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for using alternate recognition hypotheses to improve whole-dialog understanding accuracy. The method includes receiving an utterance as part of a user dialog, generating an N-best list of recognition hypotheses for the user dialog turn, selecting an underlying user intention based on a belief distribution across the generated N-best list and at least one contextually similar N-best list, and responding to the user based on the selected underlying user intention. Selecting an intention can further be based on confidence scores associated with recognition hypotheses in the generated N-best lists, and also on the probability of a user's action given their underlying intention. A belief or cumulative confidence score can be assigned to each inferred user intention.
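The belief distribution across N-best lists can be sketched as pooling confidence scores for each hypothesized intention and normalizing; the input format and pooling rule below are our own assumptions for illustration, not the patented method:

```python
def intention_belief(nbest_lists):
    """Combine confidence scores from several N-best lists (the current
    turn plus contextually similar lists) into a normalized belief over
    hypothesized user intentions.

    Illustrative sketch only: each list is assumed to be a sequence of
    (intention, confidence) pairs, and scores are pooled additively.
    """
    scores = {}
    for nbest in nbest_lists:
        for intention, conf in nbest:
            scores[intention] = scores.get(intention, 0.0) + conf
    total = sum(scores.values())
    return {i: s / total for i, s in scores.items()} if total else {}
```

The intention with the highest belief would then drive the system's response, rather than simply trusting the top recognition hypothesis of the current turn.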
Methods and systems of filtering spam messages for cellular network subscribers are provided that may include receiving a message at a decoy subscriber number. The methods and systems may further be adapted to determine whether the message received at the decoy subscriber number is likely to be spam. If so, the message may be output to a filtering service for further analysis. If, in the final analysis, a message is determined to be spam, new rules may be created and distributed to front-end spam and/or virus engines to restrict such traffic from reaching subscribers.
A method, computer readable medium and apparatus for correlating measures of wireless traffic are disclosed. For example, the method obtains the wireless traffic, and processes the wireless traffic by a plurality of probe servers, where each of the plurality of probe servers generates a plurality of feeds, wherein the plurality of feeds comprises a data feed and a control feed. The method correlates the plurality of feeds from the plurality of probe servers, where the data feed and the control feed of each of the plurality of probe servers are correlated with at least one other probe server of the plurality of probe servers to provide a correlated control plane and a correlated data plane, and extracts at least partial path information of a flow from the correlated control plane. The method then correlates performance information from the correlated data plane for the flow.