Conference Paper

A Time Dependent Performance Model for Multihop Wireless Networks with CBR Traffic

Grad. Telecommun. & Networking Program, Univ. of Pittsburgh, Pittsburgh, PA, USA
DOI: 10.1109/PCCC.2010.5682301 Conference: 2010 IEEE 29th International Performance Computing and Communications Conference (IPCCC)
Source: IEEE Xplore

ABSTRACT In this paper, we develop a performance modeling technique for analyzing the time-varying network-layer queueing behavior of multihop wireless networks with constant bit rate traffic. Our approach is a hybrid of fluid-flow queueing modeling and a time-varying connectivity matrix. Network queues are modeled using fluid-flow-based differential equation models which are solved using numerical methods, while node mobility is modeled through deterministic or stochastic modeling of the adjacency matrix elements. Numerical and simulation experiments show that the new approach provides reasonably accurate results with significant improvements in computation time compared to standard simulation tools.
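To make the modeling approach concrete, below is a minimal sketch, not the authors' exact formulation, of a fluid-flow queue model coupled to a time-varying connectivity matrix. It assumes an M/M/1-type fluid approximation dx_i/dt = lambda_i(t) - mu_i * x_i / (1 + x_i) for each node's queue, one-hop forwarding along adjacency-matrix links, and simple forward-Euler integration; the rates, topology, and link-failure time are invented for illustration.

    import numpy as np

    # Sketch of a fluid-flow queue model driven by a time-varying connectivity
    # matrix (illustrative assumptions, not the paper's exact model).
    N = 4                                   # number of nodes (assumed)
    mu = np.full(N, 10.0)                   # service rates in packets/s (assumed)
    cbr = np.array([2.0, 3.0, 0.0, 1.5])    # exogenous CBR arrival rates (assumed)

    def adjacency(t):
        """Time-varying connectivity: the link 1 -> 2 breaks after t = 5 s."""
        A = np.array([[0, 1, 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]], dtype=float)
        if t > 5.0:
            A[1, 2] = 0.0                   # mobility breaks the link; node 1's output is no longer forwarded
        return A

    def dxdt(x, t):
        """Right-hand side of the coupled fluid-flow ODEs for all node queues."""
        A = adjacency(t)
        out_rate = mu * x / (1.0 + x)       # M/M/1-type fluid departure rate
        forwarded = A.T @ out_rate          # traffic forwarded from upstream neighbors
        return cbr + forwarded - out_rate

    # Forward-Euler integration of the queue-length trajectories.
    dt, T = 0.01, 10.0
    x = np.zeros(N)
    for step in range(int(T / dt)):
        x = np.maximum(x + dt * dxdt(x, step * dt), 0.0)

    print("queue lengths at t = 10 s:", np.round(x, 3))

In the paper's setting, the adjacency matrix elements would instead be driven by a deterministic or stochastic mobility model, and the coupled differential equations solved with a standard numerical integrator rather than this bare Euler loop.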

Full-text available from David Tipper, Jun 12, 2015
  • ABSTRACT: People are facing a flood of data today. Data are being collected at an unprecedented scale in many areas, such as networking, image processing, virtualization, scientific computation, and algorithms; such massive collections are now called Big Data, an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process them with traditional data-processing applications. In this article, the authors present an approach that uses a network simulator and image-processing tools to train students to learn, analyze, manipulate, and apply Big Data, thereby developing students' hands-on Big Data skills and critical-thinking abilities. The authors used a novel image-based rendering algorithm with user intervention to generate a realistic 3D virtual world. The learning outcomes are significant.
  • ABSTRACT: With the motivation of seamlessly extending wireless sensor networks to the external environment, service-oriented architecture has emerged as a promising solution. However, sensor nodes are failure prone, which can render the whole wireless sensor network seriously faulty. When a particular node fails, the service running on it should be migrated to substitute sensor nodes that are in normal status. Currently, two kinds of approaches exist to identify substitute sensor nodes. The most common approach is to prepare redundancy nodes, though the tasks of maintaining the redundancy nodes, i.e., relocating the new node, place an extra burden on the wireless sensor network. More recently, approaches that avoid redundancy nodes have emerged, but they select substitutes only from the sensor node's perspective, i.e., migrating the service of the faulty node to its nearest sensor node, and usually neglect the requirements of the application level. Even the few works that do consider the application level operate at packet granularity and do not fit well at service granularity. In this paper, we aim to remove these limitations in wireless sensor networks with a service-oriented architecture. Instead of deploying redundancy nodes, the proposed mechanism replaces the faulty sensor node with consideration of similarity on the application level as well as on the sensor level: on the application level, we apply a Bloom filter for its high efficiency and low space cost, while on the sensor level, we design an objective criterion based on the coefficient of variation for choosing the substitute. (A rough sketch of this selection idea appears after this list.)
    Journal of Network and Systems Management, 01/2014. DOI: 10.1007/s10922-014-9302-z
  • ABSTRACT: With the explosive growth of Web-based cameras and mobile devices, billions of photographs are uploaded to the Internet. We can trivially collect a huge number of photo streams for various goals, such as 3D scene reconstruction and other big data applications. However, this is not an easy task because the retrieved photos are neither aligned nor calibrated. Furthermore, with the occlusion of unexpected foreground objects such as people and vehicles, it is even more challenging to find feature correspondences and reconstruct realistic scenes. In this paper, we propose a structure-based image completion algorithm for object removal that produces visually plausible content with consistent structure and scene texture. We use an edge-matching technique to infer the potential structure of the unknown region. Driven by the estimated structure, texture synthesis is performed automatically along the estimated curves. We evaluate the proposed method on different types of images, from highly structured indoor environments to natural scenes. Our experimental results demonstrate satisfactory performance that can potentially support subsequent big data processing such as 3D scene reconstruction and location recognition.
    IEEE INFOCOM 2014 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 04/2014
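As referenced in the second abstract above, the substitute-node selection combines application-level similarity (Bloom filters over hosted services) with a sensor-level check (coefficient of variation). The rough sketch below illustrates the idea only; the Bloom filter parameters, service sets, resource samples, and combined score are hypothetical placeholders, not that paper's actual criteria.

    import hashlib
    import statistics

    M, K = 64, 3   # Bloom filter size in bits and number of hash functions (assumed)

    def bloom(items):
        """Build a small Bloom filter (an integer bit mask) over a set of service names."""
        bits = 0
        for item in items:
            for k in range(K):
                h = int(hashlib.sha256(f"{k}:{item}".encode()).hexdigest(), 16)
                bits |= 1 << (h % M)
        return bits

    def similarity(a, b):
        """Jaccard-style overlap of the set bits of two Bloom filters."""
        union = bin(a | b).count("1")
        return bin(a & b).count("1") / union if union else 0.0

    def coeff_of_variation(samples):
        """Dispersion of a candidate's resource samples; lower means more stable."""
        mean = statistics.mean(samples)
        return statistics.pstdev(samples) / mean if mean else float("inf")

    # Hypothetical faulty node and candidate substitutes: (hosted services, resource samples).
    faulty_services = {"temperature", "humidity", "alarm"}
    candidates = {
        "node-7": ({"temperature", "humidity"}, [0.8, 0.7, 0.9]),
        "node-9": ({"alarm", "light"}, [0.4, 0.9, 0.2]),
    }

    target = bloom(faulty_services)
    # Placeholder score: favor high application-level similarity and low sensor-level variation.
    best = max(candidates.items(),
               key=lambda kv: similarity(target, bloom(kv[1][0])) - coeff_of_variation(kv[1][1]))
    print("chosen substitute:", best[0])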