In the past decade, there has been a huge proliferation of wireless local area networks (WLANs) based on the IEEE 802.11 WLAN standard. As 802.11 connectivity becomes more ubiquitous, multihop communications will be increasingly used for access point range extension and coverage enhancement. In this paper, we present a design for an IEEE 802.11-based power saving access point (PSAP), intended for use in multihop battery and solar/battery powered applications. These types of APs have many practical applications and can be deployed quickly and inexpensively to enhance coverage in settings such as campuses, building complexes, and fast deployment scenarios. Unlike conventional wired access points, in this type of system, power saving on the AP itself is an important objective. A key design constraint is that the proposed PSAP be backward compatible with a wide range of IEEE 802.11 functionality and existing wired access points. In this paper, we introduce the protocols required to achieve this compatibility, show the constraints imposed by this restriction, and present performance results for the proposed system.
Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, safeguarding the location privacy of mobile clients against abuse. This paper describes a scalable architecture for protecting location privacy against the various threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients, such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data.
Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.
A Mobile Ad hoc NETwork (MANET) is a group of mobile nodes that form a multihop wireless network. The topology of the network can change randomly due to unpredictable mobility of nodes and propagation characteristics. Previously, it was assumed that the nodes in the network were assigned IP addresses a priori. This may not be feasible as nodes can enter and leave the network dynamically. A dynamic IP address assignment protocol like DHCP requires centralized servers that may not be present in MANETs. Hence, we propose a distributed protocol for dynamic IP address assignment to nodes in MANETs. The proposed solution guarantees unique IP address assignment under a variety of network conditions including message losses, network partitioning and merging. Simulation results show that the protocol incurs low latency and communication overhead for an IP address assignment.
In this paper, we present the challenges in supporting multimedia, in particular VoIP services, over multihop wireless networks using commercial IEEE 802.11 MAC DCF hardware, and propose a novel software solution, called Layer 2.5 SoftMAC. Our proposed SoftMAC resides between the IEEE 802.11 MAC layer and the IP layer to coordinate real-time (RT) multimedia and best-effort (BE) data packet transmission among neighboring nodes in a multihop wireless network. To effectively ensure acceptable VoIP service, channel busy time and collision rate need to be kept below appropriate levels. To this end, our SoftMAC architecture employs three key mechanisms: 1) distributed admission control for regulating the load of RT traffic, 2) rate control for minimizing the impact of BE traffic on RT traffic, and 3) nonpreemptive priority queuing for providing high-priority service to VoIP traffic. To evaluate the efficacy of these mechanisms, extensive simulations are conducted using the network simulator NS2. We also implement our proposed SoftMAC as a Windows network driver interface specification (NDIS) driver and build a multihop wireless network testbed with 32 wireless nodes equipped with IEEE 802.11a/b/g combo cards. Our evaluation and testing results demonstrate the effectiveness of our proposed software solution. Our proposed collaborative SoftMAC framework can also provide good support for A/V streaming in home networks where the network consists of hybrid WLAN (wireless LAN) and Ethernet.
We analyze various critical transmitting/sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions: for a given property of the network, there is a critical threshold, corresponding to the minimum amount of communication effort or power expenditure by individual nodes, above (respectively, below) which the property exists with high (respectively, low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity/coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors, and their transmitting/sensing ranges. More specifically, we consider the following problems: assuming that n nodes, each capable of sensing events within a radius of r, are randomly and uniformly distributed in a 3-dimensional region R of volume V, how large must the sensing range R<sub>SENSE</sub> be to ensure a given degree of coverage of the region to monitor? For a given transmission range R<sub>TRANS</sub>, what is the minimum (respectively, maximum) degree of the network? What is then the typical hop diameter of the underlying network? Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks.
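The coverage threshold described above admits a simple back-of-the-envelope form: coverage with high probability roughly requires each node's sensing ball to have volume on the order of V·log(n)/n. Below is a minimal Python sketch under that standard first-order approximation; the function name, constants, and example volumes are illustrative and not taken from the paper.

```python
import math

def critical_sensing_range(n, volume):
    """First-order estimate of the sensing radius needed for n nodes,
    deployed uniformly at random in a 3-D region of the given volume,
    to cover it with high probability: each sensing ball should have
    volume on the order of volume * log(n) / n."""
    ball_volume = volume * math.log(n) / n
    return (3.0 * ball_volume / (4.0 * math.pi)) ** (1.0 / 3.0)

# Denser deployments get by with a smaller sensing range.
r_1k = critical_sensing_range(1_000, 1e6)
r_10k = critical_sensing_range(10_000, 1e6)
```

The log(n) factor is what separates coverage-type thresholds from mere connectivity in such phase-transition results.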
This paper presents the findings of an extensive measurement study on multiple commercial third-generation (3G) networks. We have investigated the performance of those 3G networks in terms of their data throughput, latency, video and voice call handling capacities, and their ability to provide service guarantees to different traffic classes under saturated and lightly loaded network conditions. Our findings point to the diverse nature of the network resources allocation mechanisms and the call admission control policies adopted by different operators. It is also found that the 3G network operators seem to have extensively customized their network configurations in a cell-by-cell manner according to the individual site's local demographics, projected traffic demand, and the target coverage area of the cell. As such, the cell capacity varies widely not only across different operators but also across different measurement sites of the same operator. The results also show that it is practically impossible to predict the actual capacity of a cell based on known theoretical models and standard parameters, even when supplemented by key field measurements such as the received signal-to-noise ratio E<sub>c</sub>/N<sub>0</sub>.
Providing good TCP performance over wireless links is a key issue for future wireless networks, in which Internet access will be one of the most important data services. A number of schemes have been proposed in the literature to improve TCP performance over wireless links. In this paper, we study the performance of a particular combination of link-layer protocol (e.g., radio link protocol, or RLP) and MAC retransmissions to support TCP connections over third-generation (3G) wireless CDMA networks. We specifically investigate two metrics - the packet error rate and the delay introduced by RLP and MAC retransmissions - both of which are important for TCP performance. For independent and identically distributed (i.i.d.) error channels, we propose an analytical model for RLP performance with MAC retransmission. The segmentation of TCP/IP packets into smaller RLP frames, as well as the RLP buffering process, is modeled using a Markov chain. For correlated fading channels, we introduce an analytical metric called RLP retransmission efficiency. We show that: 1) the RLP frame size has a significant impact on the overall 3G system performance, 2) MAC-layer retransmissions significantly improve TCP performance, and 3) the RLP retransmission scheme performs better in highly correlated channels, while the MAC retransmission scheme performs better in weakly correlated channels. Simulation results confirm these conclusions.
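The interaction between RLP segmentation and retransmission can be illustrated with a small i.i.d.-channel calculation. This is only a sketch: the paper's Markov-chain model also captures RLP buffering and correlated fading, which this ignores, and the numeric inputs are invented for the example.

```python
def tcp_packet_error(frame_err, frames_per_packet, rlp_retx):
    """Residual TCP/IP packet error rate after RLP recovery on an
    i.i.d. channel: a packet of frames_per_packet RLP frames is lost
    only if some frame fails its initial transmission and all
    rlp_retx retransmissions."""
    frame_fail = frame_err ** (rlp_retx + 1)
    return 1.0 - (1.0 - frame_fail) ** frames_per_packet

with_rlp = tcp_packet_error(0.1, 10, 2)      # two RLP retransmissions
without_rlp = tcp_packet_error(0.1, 10, 0)   # no link-layer recovery
```

Even two link-layer retransmissions cut the packet error rate seen by TCP by well over an order of magnitude, which is why the frame size (and hence frames_per_packet) matters so much.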
Wireless networks beyond 2G aim at supporting real-time applications such as VoIP. Before a user can start a VoIP session, the end-user terminal has to establish the session using signaling protocols such as H.323 and the session initiation protocol (SIP) in order to negotiate media parameters. The time interval to perform the session setup is called the session setup time. It can be affected by the quality of the wireless link, measured in terms of frame error rate (FER): lost packets must be retransmitted, which lengthens the session setup time. Therefore, such protocols should have their session setup time optimized against loss. One way to do so is by choosing the appropriate retransmission timer and underlying protocols. In this paper, we focus on the SIP session setup delay and propose optimizing it using an adaptive retransmission timer. We also evaluate SIP session setup performance with various underlying protocols (transmission control protocol (TCP), user datagram protocol (UDP), and radio link protocols (RLPs)) as a function of the FER. For a 19.2 Kbps channel, the SIP session setup time can reach 6.12 s with UDP and 7 s with TCP when the FER is up to 10 percent. The use of RLP (1, 2, 3) and RLP (1, 1, 1, 1, 1, 1) brings the session setup time down to 3.4 s under UDP and 4 s under TCP for the same FER and channel bandwidth. We also compare SIP and H.323 performance using an adaptive retransmission timer: SIP outperforms H.323, especially for an FER higher than 2 percent.
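For intuition, the cost of loss on session setup can be sketched with the standard RFC 3261 UDP retransmission schedule (Timer T1 doubling on each attempt). This is a simplified expectation, not the paper's model: it ignores propagation delay, response-side behavior, and the adaptive timer the paper proposes.

```python
def expected_setup_delay(p, t1=0.5, max_attempts=7):
    """Expected retransmission delay (s) for one SIP request over UDP
    with RFC 3261-style exponential backoff: attempt k fires after
    t1 * (2**k - 1) seconds and succeeds with probability (1-p)*p**k,
    where p is the probability a request (or its response) is lost."""
    return sum((1 - p) * p**k * t1 * (2**k - 1) for k in range(max_attempts))
```

With a fixed T1 = 500 ms, even a 10 percent loss rate already adds tens of milliseconds on average per request, and the penalty grows sharply with p because each further attempt doubles the wait, which is the motivation for adapting the timer.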
This paper addresses the problem of frequency domain packet scheduling (FDPS) incorporating spatial division multiplexing (SDM) multiple input multiple output (MIMO) techniques on the 3GPP Long Term Evolution (LTE) downlink. We impose the LTE MIMO constraint of selecting only one MIMO mode (spatial multiplexing or transmit diversity) per user per transmission time interval (TTI). First, we address the optimal MIMO mode selection (multiplexing or diversity) per user in each TTI in order to maximize the proportional fair (PF) criterion adapted to the additional frequency and spatial domains. We prove that both single-user (SU-) and multi-user (MU-) MIMO FDPS problems under the LTE requirement are NP-hard. We therefore develop two types of approximation algorithms (ones with full channel feedback and the others with partial channel feedback), all of which guarantee provable performance bounds for both SU- and MU-MIMO cases. Based on 3GPP LTE system model simulations, our approximation algorithms that take into account both spatial and frequency diversity gains outperform the exact algorithms that do not exploit the potential spatial diversity gain. Moreover, the approximation algorithms with partial channel feedback achieve comparable performance (with only 1-6% performance degradation) to the ones with full channel feedback, while significantly reducing the channel feedback overhead by nearly 50%.
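To illustrate the proportional-fair criterion behind the per-TTI mode selection, here is a toy sketch: a single resource block, greedy selection, and no approximation bounds. The dictionary-based interface and mode labels are invented for the example and are not the paper's algorithm.

```python
def pf_mode_select(inst_rates, avg_thpt):
    """Per-TTI scheduling under a proportional-fair criterion: for each
    user, pick the MIMO mode (spatial multiplexing 'sm' or transmit
    diversity 'td') with the higher instantaneous rate, then schedule
    the user maximizing instantaneous_rate / average_throughput."""
    best = {u: max(modes, key=modes.get) for u, modes in inst_rates.items()}
    user = max(best, key=lambda u: inst_rates[u][best[u]] / avg_thpt[u])
    return user, best[user]

# A user with a modest rate but a starved average wins the PF metric.
chosen = pf_mode_select(
    {'u1': {'sm': 10.0, 'td': 6.0}, 'u2': {'sm': 4.0, 'td': 5.0}},
    {'u1': 10.0, 'u2': 1.0},
)
```

The LTE constraint discussed above (one mode per user per TTI across all resource blocks) is exactly what makes the real frequency-domain problem NP-hard, in contrast to this single-block caricature.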
Fourth generation (4G) wireless networks will provide high-bandwidth connectivity with quality-of-service (QoS) support to mobile users in a seamless manner. In such a scenario, a mobile user will be able to connect to different wireless access networks such as a wireless metropolitan area network (WMAN), a cellular network, and a wireless local area network (WLAN) simultaneously. We present a game-theoretic framework for radio resource management (that is, bandwidth allocation and admission control) in such a heterogeneous wireless access environment. First, a noncooperative game is used to obtain the bandwidth allocations to a service area from the different access networks available in that service area (on a long-term basis). The Nash equilibrium for this game gives the optimal allocation which maximizes the utilities of all the connections in the network (that is, in all of the service areas). Second, based on the obtained bandwidth allocation, to prioritize vertical and horizontal handoff connections over new connections, a bargaining game is formulated to obtain the capacity reservation thresholds so that the connection-level QoS requirements can be satisfied for the different types of connections (on a long-term basis). Third, we formulate a noncooperative game to obtain the amount of bandwidth allocated to an arriving connection (in a service area) by the different access networks (on a short-term basis). Based on the allocated bandwidth and the capacity reservation thresholds, an admission control is used to limit the number of ongoing connections so that the QoS performances are maintained at the target level for the different types of connections.
Layer-based video coding, together with adaptive modulation and coding, is a promising technique for providing real-time video multicast services on heterogeneous mobile devices. With the rapid growth of data communications for emerging applications, reducing the energy consumption of mobile devices is a major challenge. This paper addresses the problem of resource allocation for video multicast in fourth-generation wireless systems, with the objective of minimizing the total energy consumption for data reception. First, we consider the problem when scalable video coding is applied. We prove that the problem is NP-hard and propose a 2-approximation algorithm to solve it. Then, we investigate the problem under multiple description coding, and show that it is also NP-hard and cannot be approximated in polynomial time with a ratio better than 2, unless P=NP. To solve this case, we develop a pseudo-polynomial-time 2-approximation algorithm. The results of simulations conducted to compare the proposed algorithms with a brute-force optimal algorithm and a conventional approach are very encouraging.
In today's IEEE 802.11 wireless LANs (WLANs), e.g., the popular IEEE 802.11b, stations support multiple transmission rates and use them adaptively, depending on the underlying channel condition, via link adaptation. It is well known that when some stations use low transmission rates due to bad channel conditions, the performance of the stations using high rates is heavily degraded; this phenomenon is often referred to as the performance anomaly. In this paper, we model a WLAN incorporating stations with multiple transmission rates in order to demonstrate the performance anomaly analytically. Note that all previously proposed models of the IEEE 802.11 assume a single transmission rate. We also develop possible remedies to improve the performance. Our solution is basically to control access parameters such as the initial backoff window, the frame size, and the maximum backoff stage, depending on the employed transmission rate. Through simulations, we demonstrate that our analytical model is accurate and that the proposed mechanism can indeed remedy the performance anomaly, increasing the aggregate throughput by up to six times.
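The anomaly itself follows from per-packet fairness: each station delivers roughly one packet per "round" of channel access, so one slow station stretches everybody's round. A stripped-down Python model makes this visible; all MAC overheads are ignored and the rates are illustrative, so this is a caricature, not the paper's analytical model.

```python
def per_station_throughput(rates_bps, payload_bits=8000):
    """Per-packet-fair DCF caricature: stations transmit in turn, so a
    round lasts sum(payload/r_i) seconds and each station delivers one
    payload per round. Every station gets the same throughput, dragged
    down by the slowest rate."""
    round_time = sum(payload_bits / r for r in rates_bps)
    return payload_bits / round_time

fast_only = per_station_throughput([11e6, 11e6, 11e6])  # all at 11 Mbps
mixed = per_station_throughput([11e6, 11e6, 1e6])       # one at 1 Mbps
```

A single 1 Mbps station pulls every 11 Mbps station's throughput below 1 Mbps, which is exactly the effect the proposed per-rate access-parameter control counteracts.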
In this research, we first investigate the cross-layer interaction between TCP and routing protocols in IEEE 802.11 ad hoc networks. On-demand ad hoc routing protocols respond to network events such as channel noise, mobility, and congestion in the same manner, which, in association with TCP, deteriorates the quality of an existing end-to-end connection. The poor end-to-end connectivity degrades TCP's performance in turn. Based on the well-known TCP-friendly equation, we conduct a quantitative study on the TCP operating range using static routing and long-lived TCP flows, and show that the additive-increase, multiplicative-decrease (AIMD) behavior of the TCP window mechanism is too aggressive for a typical multihop IEEE 802.11 network with a low bandwidth-delay product. Then, to address these problems, we propose two complementary mechanisms: the TCP fractional window increment (FeW) scheme and the ROute-failure notification using BUlk-losS Trigger (ROBUST) policy. The TCP FeW scheme is a preventive solution used to reduce congestion-driven wireless link loss. The ROBUST policy is a corrective solution that enables on-demand routing protocols to suppress overreactions induced by the aggressive TCP behavior. It is shown by computer simulation that these two mechanisms result in a significant improvement of TCP throughput without modifying the basic TCP window or wireless MAC mechanisms.
IEEE 802.11 works properly only if the stations respect the MAC protocol. We show in this paper that a greedy user can substantially increase his share of bandwidth, at the expense of the other users, by slightly modifying the driver of his network adapter. We explain how easily this can be done, in particular with the new generation of adapters. We then present DOMINO (detection of greedy behavior in the MAC layer of IEEE 802.11 public networks), a piece of software to be installed in or near the access point. DOMINO can detect and identify greedy stations without requiring any modification of the standard protocol. We illustrate these concepts with simulation results and with the description of a prototype that we have recently implemented.
We present an analytical model for the IEEE 802.11 DCF in multihop wireless networks that considers hidden terminals and accurately works for a large range of traffic loads. An energy model, which considers energy consumption due to collisions, retransmissions, exponential backoff and freezing mechanisms, and overhearing of nodes, and the proposed IEEE 802.11 DCF analytical model are used to analyze the energy consumption of various relaying strategies. The results show that the energy-efficient relaying strategy depends significantly on the traffic load. Under light traffic, energy spent during idle mode dominates, making any relaying strategy nearly optimal. Under moderate traffic, energy spent during idle and receive modes dominates and multihop transmissions become more advantageous where the optimal hop number varies with processing power consumed at relay nodes. Under very heavy traffic, where multihopping becomes unstable due to increased collisions, direct transmission becomes more energy efficient. The choice of relaying strategy is observed to affect energy efficiency more for large and homogeneous networks where it is beneficial to use multiple short hops each covering similar distances. The results indicate that a cross-layered relaying approach, which dynamically changes the relaying strategy, can substantially save energy as the network traffic load changes in time.
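The traffic-dependent trade-off above has a familiar energy-model core: splitting a distance into k hops reduces path-loss energy but adds per-hop fixed costs. The sketch below uses an assumed first-order radio model; the exponent and cost constants are placeholders, not the paper's measured values.

```python
def relay_energy(dist, hops, alpha=3.0, e_amp=1.0, e_proc=0.5):
    """Energy per bit for relaying over `hops` equal-length hops with
    path-loss exponent alpha: each hop spends e_amp*(dist/hops)**alpha
    on the radio plus a fixed processing cost e_proc at the relay."""
    return hops * (e_amp * (dist / hops) ** alpha + e_proc)

direct = relay_energy(8, 1)      # single long hop: path loss dominates
four_hops = relay_energy(8, 4)   # multihop wins on path loss
too_many = relay_energy(8, 64)   # fixed per-hop costs dominate
```

The interior optimum in hop count is why the paper finds the best relaying strategy varies with the processing power consumed at relay nodes and with the traffic load.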
The IEEE 802.11 standard for wireless local area networks (WLANs) employs a medium access control (MAC), called distributed coordination function (DCF), which is based on carrier sense multiple access with collision avoidance (CSMA/CA). The collision avoidance mechanism utilizes the random backoff prior to each frame transmission attempt. The random nature of the backoff reduces the collision probability, but cannot completely eliminate collisions. It is known that the throughput performance of the 802.11 WLAN is significantly compromised as the number of stations increases. In this paper, we propose a novel distributed reservation-based MAC protocol, called early backoff announcement (EBA), which is backward compatible with the legacy DCF. Under EBA, a station announces its future backoff information in terms of the number of backoff slots via the MAC header of its frame being transmitted. All the stations receiving the information avoid collisions by excluding the same backoff duration when selecting their future backoff value. Through extensive simulations, EBA is found to achieve a significant increase in the throughput performance as well as a higher degree of fairness compared to the 802.11 DCF.
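The core EBA idea, choosing a backoff value that dodges the slots neighbors have announced, can be sketched in a few lines. This is simplified: real EBA announces the backoff information in the MAC header and retains full DCF semantics, and the fallback rule here is an assumption of the sketch.

```python
import random

def pick_backoff(cw, announced, rng=random):
    """EBA-style backoff selection: draw uniformly from [0, cw-1] but
    exclude slots already announced by neighbors, avoiding collisions
    that plain DCF would only discover by transmitting."""
    free = [s for s in range(cw) if s not in announced]
    if not free:                    # every slot claimed: plain DCF draw
        return rng.randrange(cw)
    return rng.choice(free)
```

Because colliding slot choices are ruled out in advance rather than resolved by collision-and-retry, the saved airtime translates directly into the throughput gain reported above.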
We implement a new software-based multihop TDMA MAC protocol (Soft-TDMAC) with microsecond synchronization using a novel system interface for development of 802.11 overlay TDMA MAC protocols (SySI-MAC). SySI-MAC provides a kernel independent message-based interface for scheduling transmissions and sending and receiving 802.11 packets. The key feature of SySI-MAC is that it provides near deterministic timers and transmission times, which allows for implementation of highly synchronized TDMA MAC protocols. Building on SySI-MAC's predictable transmission times, we implement Soft-TDMAC, a software-based 802.11 overlay multihop TDMA MAC protocol. Soft-TDMAC has a synchronization mechanism, which synchronizes all pairs of network clocks to within microseconds of each other. Building on pairwise synchronization, Soft-TDMAC achieves tight network-wide synchronization. With network-wide synchronization independent of data transmissions, Soft-TDMAC can schedule arbitrary TDMA transmission patterns. For example, Soft-TDMAC allows schedules that decrease end-to-end delay and take end-to-end rate demands into account. We summarize hundreds of hours of testing Soft-TDMAC on a multihop testbed, showing the synchronization capabilities of the protocol and the benefits of flexible scheduling.
IEEE 802.11 WLAN has high data rates (e.g., 11 Mbps for 802.11b and 54 Mbps for 802.11g), while the voice streams of VoIP typically have low data-rate requirements (e.g., 29.2 Kbps). One may, therefore, expect WLAN to be able to support a large number of VoIP sessions (e.g., 200 and 900 sessions in 802.11b and 802.11g, respectively). Prior work by one of the authors, however, indicated that 802.11 is extremely inefficient for VoIP transport: only 12 and 60 VoIP sessions can be supported in an 802.11b and an 802.11g WLAN, respectively. This paper shows that the bad news does not stop there. When there are multiple WLANs in the vicinity of each other (a common situation these days), the already low VoIP capacity can be further eroded in a significant manner. For example, in a 5 × 5, 25-cell multi-WLAN network, the VoIP capacities for 802.11b and 802.11g are only 1.63 and 10.34 sessions per AP, respectively. This paper investigates several solutions to improve the VoIP capacity. Based on a conflict graph model, we propose a clique-analytical call admission scheme, which increases the VoIP capacity by 52 percent, from 1.63 to 2.48 sessions per AP, in 802.11b. For 802.11g, the call admission scheme increases the capacity by 37 percent, from 10.34 to 14.14 sessions per AP. If the three orthogonal frequency channels available in 802.11b and 802.11g are used to reduce interference among adjacent WLANs, the clique-analytical call admission scheme can boost the capacity to 7.39 VoIP sessions per AP in 802.11b and 44.91 sessions per AP in 802.11g. Last but not least, this paper expounds for the first time the use of coarse-grained time-division multiple access (CoTDMA) in conjunction with the basic 802.11 CSMA to eliminate the performance-degrading exposed-node and hidden-node problems in 802.11.
A two-layer coloring problem (which is distinct from the classical graph coloring problem) is formulated to assign coarse time slots and frequency channels to VoIP sessions, taking into account the intricacies of the carrier-sensing operation of 802.11. We find that CoTDMA can further increase the VoIP capacity in the multi-WLAN scenario by an additional 35 percent, so that 10 and 58 sessions per AP can be supported in 802.11b and 802.11g, respectively.
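The clique-analytical admission idea above reduces to a per-clique budget check on the conflict graph. Here is a minimal sketch; the data layout and the single `clique_capacity` parameter are simplifications invented for the example, not the paper's exact scheme.

```python
def admit(sessions, cliques, clique_capacity):
    """Clique-based call admission: the session mix is admissible only
    if, in every clique of mutually interfering APs, the total number
    of VoIP sessions stays within the clique's capacity."""
    return all(sum(sessions[ap] for ap in clique) <= clique_capacity
               for clique in cliques)

# AP 'b' interferes with both 'a' and 'c'; each clique tolerates 2 sessions.
ok = admit({'a': 1, 'b': 1, 'c': 0}, [{'a', 'b'}, {'b', 'c'}], 2)
overload = admit({'a': 2, 'b': 1, 'c': 0}, [{'a', 'b'}, {'b', 'c'}], 2)
```

Budgeting per clique rather than per AP is what captures the capacity erosion between neighboring WLANs that simple per-AP admission misses.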
In this paper, we propose an enhancement to the IEEE 802.11 distributed coordination function (DCF). The enhancement improves the level of channel spatial reuse; thus, it improves overall network data throughput in dense deployments. Our modification, named the location-enhanced DCF (LED), incorporates location information in DCF frame exchange sequences so that stations sharing the communication channel are able to make better interference predictions and blocking assessments. Hence, more concurrent transmissions can be conducted in densely deployed wireless LANs. The potential performance enhancement of LED is studied both analytically and via ns-2 simulations. The results show that the LED method achieves significant throughput improvements over the original DCF.
Under a multirate network scenario, the IEEE 802.11 DCF MAC fails to provide airtime fairness for all competing stations, since the protocol is designed to ensure max-min throughput fairness. As such, the maximum achievable throughput of any station is bounded by the slowest transmitting peer. In this paper, we present an analytical model to study the delay and throughput characteristics of such networks so that the rate anomaly problem of IEEE 802.11 DCF multirate networks can be mitigated. We call our proposal time-fair CSMA (TFCSMA); it utilizes a baseline property for estimating a target throughput for each competing station so that its minimum contention window can be adjusted in a distributed manner. As opposed to previous work in this area, TFCSMA is ideally suited for practical scenarios where stations frequently adapt their data rates to changing channel conditions. In addition, TFCSMA also accounts for packet errors due to the time-varying properties of the wireless channel. We thoroughly compare the performance of our proposed protocol with IEEE 802.11 and other existing protocols under different network scenarios and traffic conditions. Our comprehensive simulations validate the efficacy of our method in providing high throughput and time-fair channel allocation.
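The airtime-fairness intuition can be conveyed with a contention-window scaling rule: a slower station holds the channel longer per frame, so it should contend proportionally less often. This illustrates the principle only; TFCSMA's actual rule adjusts the window toward an estimated target throughput, and the floor value here is an assumption.

```python
def adjusted_cw(cw_base, rate_bps, ref_rate_bps, cw_floor=16):
    """Scale a station's minimum contention window in proportion to
    its per-frame airtime (ref_rate / rate), so channel-time shares,
    rather than packet counts, are equalized across rates."""
    return max(cw_floor, int(cw_base * ref_rate_bps / rate_bps))
```

A 1 Mbps station in an 11 Mbps cell ends up with an eleven-fold larger window, transmitting one-eleventh as often for frames that take eleven times longer, so its airtime share matches that of its fast peers.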
We propose an approximate model for a nonsaturated IEEE 802.11 DCF network. This model captures the significant influence of an arbitrary node transmit buffer size on the network performance. We find that increasing the buffer size can improve the throughput slightly but can lead to a dramatic increase in the packet delay without necessarily a corresponding reduction in the packet loss rate. This result suggests that there may be little benefit in provisioning very large buffers, even for loss-sensitive applications. Our model outperforms prior models in terms of simplicity, computation speed, and accuracy. The simplicity stems from using a renewal theory approach for the collision probability instead of the usual multidimensional Markov chain, and it makes our model easier to understand, manipulate and extend; for instance, we are able to use our model to investigate the important problem of convergence of the collision probability calculation. The remarkable improvement in the computation speed is due to the use of an efficient numerical transform inversion algorithm to invert generating functions of key parameters of the model. The accuracy is due to a carefully constructed model for the service time distribution. We verify our model using ns-2 simulation and show that our analytical results based on an M/G/1/K queuing model are able to accurately predict a wide range of performance metrics, including the packet loss rate and the waiting time distribution. In contradiction to claims by other authors, we show that 1) a nonsaturated DCF model like ours that makes use of decoupling assumptions for the collision probability and queuing dynamics can produce accurate predictions of metrics other than just the throughput, and 2) the actual service time and waiting time distributions for DCF networks have truncated heavy-tailed shapes (i.e., appear initially straight on a log-log plot) rather than exponential shapes. 
Our work will help developers select appropriate buffer sizes for 802.11 devices, and will help system administrators predict the performance of applications.
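The diminishing returns of large buffers are visible even in the simplest Markovian special case of the M/G/1/K model used above, the M/M/1/K queue. This closed form assumes exponential service, unlike the carefully constructed service-time distribution in the paper, so it is a baseline illustration only.

```python
def mm1k_loss(rho, k):
    """Blocking probability of an M/M/1/K queue with offered load rho:
    P_loss = (1 - rho) * rho**k / (1 - rho**(k + 1)) for rho != 1,
    and 1 / (k + 1) at rho == 1."""
    if rho == 1.0:
        return 1.0 / (k + 1)
    return (1 - rho) * rho**k / (1 - rho**(k + 1))
```

At rho = 1, quadrupling the buffer from K = 10 to K = 40 only cuts loss from about 9 percent to about 2.4 percent, while queueing delay grows roughly fourfold, echoing the conclusion that very large buffers buy little even for loss-sensitive traffic.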
We propose a packet-level model to investigate the impact of channel error on transmission control protocol (TCP) performance over IEEE-802.11-based multihop wireless networks. A Markov renewal approach is used to analyze the behavior of TCP Reno and TCP Impatient NewReno. Compared to previous work, our main contributions are as follows: 1) modeling multiple lossy links; 2) investigating the interactions among the TCP, Internet Protocol (IP), and media access control (MAC) layers, specifically the impact of the 802.11 MAC protocol and the dynamic source routing (DSR) protocol on TCP throughput performance; 3) considering the spatial reuse property of the wireless channel, so that the model takes into account different proportions between the interference range and the transmission range; and 4) adopting a more accurate and realistic analysis of the fast recovery process, and showing how throughput and the risk of successive fast retransmits and timeouts depend on the packet error probability. The analytical results are validated against simulation results obtained using GloMoSim. The results show that the impact of channel error is reduced significantly by packet retransmission on a per-hop basis and the small bandwidth-delay product of ad hoc networks: TCP throughput always degrades by less than about 10 percent for packet error rates ranging from 0 to 0.1. Our model also provides a theoretical basis for designing an optimum long retry limit for IEEE 802.11 in ad hoc networks.
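The headline robustness result, throughput degrading by less than about 10 percent, stems largely from per-hop MAC recovery. A back-of-the-envelope version under i.i.d. losses follows; the retry limit and hop count are example values, and the real model above accounts for much more (routing reactions, fast recovery, spatial reuse).

```python
def end_to_end_delivery(p_err, retry_limit, hops):
    """Probability a TCP packet crosses all hops when each 802.11 hop
    retransmits a lost frame up to retry_limit times: the per-hop
    failure probability collapses to p_err ** (retry_limit + 1)."""
    per_hop_ok = 1.0 - p_err ** (retry_limit + 1)
    return per_hop_ok ** hops

with_mac_retx = end_to_end_delivery(0.1, 7, 4)  # 802.11-style retry limit
no_retx = end_to_end_delivery(0.1, 0, 4)
```

With seven retries per hop, a 10 percent frame error rate becomes a per-hop failure probability of 1e-8, so TCP almost never sees the channel loss directly.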
The lifetime of a mobile ad hoc network (MANET) depends on the durability of the mobile hosts' battery resources. In the IEEE 802.11 Power Saving Mode, a host must wake up at every beacon interval to check whether it should remain awake. Such a scheme fails to adjust a host's sleep duration to its traffic, reducing its power efficiency. This paper presents new MAC protocols for power saving in a single-hop MANET. The essence of these protocols is a quorum-based sleep/wake-up mechanism, which conserves energy by allowing a host to sleep for more than one beacon interval if few transmissions are involved. The proposed protocols are simple and energy-efficient. Simulation results show that our protocols conserve more energy and extend the lifetime of a MANET.
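One standard quorum construction behind such sleep/wake schemes (not necessarily the paper's exact protocol) is the grid quorum: number n² beacon intervals as an n×n grid and stay awake in one full row plus one full column. Any two such sets intersect, so any two hosts always share an awake interval in which to rendezvous.

```python
def grid_quorum(n_side, row, col):
    """Awake beacon intervals for a host using a grid quorum over
    n_side * n_side intervals: one full row plus one full column.
    Any two row+column sets intersect, guaranteeing a rendezvous."""
    return {i for i in range(n_side * n_side)
            if i // n_side == row or i % n_side == col}

q1 = grid_quorum(4, 0, 0)  # two hosts with different row/column picks
q2 = grid_quorum(4, 3, 3)
```

Each host is awake in only 2n-1 of the n² intervals (7 of 16 here), versus every interval under plain 802.11 PSM, which is where the energy saving comes from.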
Optimizing spectral reuse is a major issue in large-scale IEEE 802.11 wireless networks. Power control is an effective means for doing so. Much previous work simply assumes that each transmitter should use the minimum transmit power needed to reach its receiver, and that this would maximize the network capacity by increasing spectral reuse. It turns out that this is not necessarily the case, primarily because of hidden nodes. This paper shows that, in a network with power control, avoiding hidden nodes can achieve higher overall network capacity compared with the minimum-transmit-power approach. It is not always best to use the minimum transmit powers even from the network capacity viewpoint. Specifically, we propose and investigate two distributed adaptive power control algorithms that minimize mutual interferences among links while avoiding hidden nodes. Different power control schemes have different numbers of exposed nodes and hidden nodes, which in turn result in different network capacities and fairness. Although there is usually a fundamental trade-off between network capacity and fairness, we show that, interestingly, this is not always the case. In addition, our power control algorithms can operate at desirable network-capacity-fairness trade-off points and can boost the capacity of ordinary non-power-controlled 802.11 networks by two times while eliminating hidden nodes.
There is a vast literature on the throughput analysis of the IEEE 802.11 media access control (MAC) protocol. However, very little has been done on investigating the interplay between the collision avoidance mechanisms of the 802.11 MAC protocol and the dynamics of upper layer transport protocols. In this paper, we tackle this issue from an analytical, simulative, and experimental perspective. Specifically, we develop Markov chain models to compute the distribution of the number of active stations in an 802.11 wireless local area network (WLAN) when long-lived transmission control protocol (TCP) connections compete with finite-load user datagram protocol (UDP) flows. By embedding these distributions in the MAC protocol modeling, we derive approximate but accurate expressions of the TCP and UDP throughput. We validate the model accuracy through performance tests carried out in a real WLAN for a wide range of configurations. Our analytical model and the supporting experimental outcomes show that 1) the total TCP throughput is basically independent of the number of open TCP connections and the aggregate TCP traffic can be equivalently modeled as two saturated flows; and 2) in the saturated regime, n UDP flows obtain about n times the aggregate throughput achieved by the TCP flows, which is independent of the overall number of persistent TCP connections.
Since 2005, IEEE 802.11-based networks have been able to provide a certain level of Quality of Service through service differentiation, thanks to the IEEE 802.11e amendment. However, no mechanism or method has been standardized to accurately evaluate the amount of resources remaining on a given channel. Such an evaluation would, however, be a good asset for bandwidth-constrained applications. In multi-hop ad hoc networks, this evaluation becomes even more difficult. Consequently, despite the various contributions around this research topic, the estimation of the available bandwidth still represents one of the main issues in this field. In this article, we propose an improved mechanism to estimate the available bandwidth in IEEE 802.11-based ad hoc networks. Through simulations, we compare the accuracy of our estimation to that of other state-of-the-art QoS protocols: BRuIT, AAC, and QoS-AODV.
Two well-known problems that can cause performance degradations in IEEE 802.11 wireless networks are the exposed-node (EN) and hidden-node (HN) problems. Although there have been isolated and incidental studies of EN and HN, a comprehensive treatment has not been attempted. The contributions of this paper are threefold: First, we provide rigorous mathematical definitions for EN and HN in wireless networks (including wireless local area networks (WLANs) with multiple access points (APs) and ad hoc networks). Second, we relate EN to the nonscalability of network throughput and HN to unfair throughput distributions. Third, we provide schemes to eliminate EN and HN, respectively. We show that the standard 802.11 technology is not scalable because, due to EN, more APs do not yield higher total throughput. By removing EN, our schemes make it possible to achieve scalable throughput commensurate with the seminal theoretical results in  and . In addition, by removing HN, our schemes solve the performance problems triggered by HN, including throughput unfairness/starvation and rerouting instability.
The performance of the Distributed Coordination Function (DCF) of the IEEE 802.11 protocol has been shown to depend heavily on the number of terminals accessing the distributed medium. The DCF uses a carrier sense multiple access scheme with collision avoidance (CSMA/CA), where the backoff parameters are fixed and determined by the standard. While those parameters were chosen to provide good protocol performance, they fail to provide optimum utilization of the channel in many scenarios. In particular, under heavy-load scenarios, the utilization of the medium can drop tenfold. Most of the optimization mechanisms proposed in the literature are based on adapting the DCF backoff parameters to an estimate of the number of competing terminals in the network. However, existing estimation algorithms are either inaccurate or too complex. In this paper, we propose an enhanced version of the IEEE 802.11 DCF that employs an adaptive estimator of the number of competing terminals based on sequential Monte Carlo methods. The algorithm uses a Bayesian approach, optimizing the backoff parameters of the DCF based on the predictive distribution of the number of competing terminals. We show that our algorithm is simple yet highly accurate even at small time scales. We implement our proposed new DCF in the ns-2 simulator and show that it outperforms existing methods. We also show that its accuracy can be used to improve the results of the protocol even when the terminals are not in saturation mode. Moreover, we show that there exists a Nash equilibrium strategy that prevents rogue terminals from changing their parameters for their own benefit, making the algorithm safely applicable in a completely distributed fashion.
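The sequential Monte Carlo idea can be sketched with a toy particle filter. The collision model p_coll(n) = 1 - (1 - tau)^(n-1), the fixed per-station transmission probability tau, and all numeric settings here are illustrative assumptions, not the paper's estimator.

```python
import random

def smc_estimate(observations, n_particles=500, tau=0.05, max_n=50):
    """Toy particle filter for the number of competing stations n.

    Each particle hypothesizes an n; a per-slot collision indicator is
    weighted against p_coll(n) = 1 - (1 - tau)**(n - 1), where tau is an
    assumed fixed per-station transmission probability.
    """
    particles = [random.randint(1, max_n) for _ in range(n_particles)]
    for obs in observations:                 # obs = 1 if the slot saw a collision
        weights = []
        for n in particles:
            p = 1.0 - (1.0 - tau) ** (n - 1)
            weights.append(max(p if obs else 1.0 - p, 1e-9))
        # resample in proportion to the weights, then jitter (random walk)
        particles = random.choices(particles, weights=weights, k=n_particles)
        particles = [min(max_n, max(1, n + random.choice((-1, 0, 1))))
                     for n in particles]
    return sum(particles) / n_particles      # posterior-mean estimate of n
```

With a few hundred slot observations the posterior mean settles near the station count whose collision probability matches the observed collision rate.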
The quality-of-service (QoS) guarantees enabled by the new IEEE 802.11a/e Wireless LAN (WLAN) standard are specifically targeting the real-time transmission of multimedia content over the wireless medium. Since video data consume the largest part of the available bitrate compared to other media, optimization of video streaming for this new standard is a significant factor for the successful deployment of practical systems. Delay-constrained streaming of fully-scalable video over IEEE 802.11a/e WLANs is of great interest for many multimedia applications. The new medium access control (MAC) protocol of IEEE 802.11e is called the Hybrid Coordination Function (HCF) and, in this paper, we will specifically consider the problem of video transmission over HCF Controlled Channel Access (HCCA). A cross-layer optimization across the MAC and application layers of the OSI stack is used in order to exploit the features provided by the combination of the new HCCA standard with new versatile scalable video coding algorithms. Specifically, we propose an optimized and scalable HCCA-based admission control for delay-constrained video streaming applications that leads to a larger number of stations being simultaneously admitted (without quality reduction to any video flow). Subsequently, given the allocated transmission opportunity, each station deploys an optimized Application-MAC-PHY adaptation, scheduling, and protection strategy that is facilitated by the fine-grain layering provided by the scalable bitstream. Given that each video flow needs to always comply with the predetermined (a priori negotiated) traffic specification parameters, this cross-layer strategy enables graceful quality degradation whenever the channel conditions or the video sequence characteristics change.
For instance, it is demonstrated that the proposed cross-layer protection and bitstream adaptation strategies facilitate QoS token rate adaptation under link adaptation mechanisms that utilize different physical layer transmission rates. The expected gains offered by the optimized solutions proposed in this paper are established theoretically, as well as through simulations.
Link adaptation to dynamically select the data transmission rate at a given time has been recognized as an effective way to improve the goodput performance of IEEE 802.11 wireless local-area networks (WLANs). Recently, with the introduction of the new high-speed 802.11a physical layer (PHY), it is even more important to have a well-designed link adaptation scheme work with the 802.11a PHY such that its multiple transmission rates can be exploited. In this paper, we first present a generic method to analyze the goodput performance of an 802.11a system under the distributed coordination function (DCF) and express the expected effective goodput as a closed-form function of the data payload length, the frame retry count, the wireless channel condition, and the selected data transmission rate. Then, based on the theoretical analysis, we propose a novel MPDU (MAC protocol data unit)-based link adaptation scheme for 802.11a systems. It is a simple table-driven approach, and the basic idea is to preestablish a best PHY mode table by applying the dynamic programming technique. The best PHY mode table is indexed by the system status triplet that consists of the data payload length, the wireless channel condition, and the frame retry count. At runtime, a wireless station determines the most appropriate PHY mode for the next transmission attempt by a simple table lookup, using the most up-to-date system status as the index. Our in-depth simulation shows that the proposed MPDU-based link adaptation scheme significantly outperforms the single-mode schemes and the autorate fallback (ARF) scheme (used in Lucent Technologies' WaveLAN-II networking devices) in terms of the average goodput, the frame drop rate, and the average number of transmission attempts per data frame delivery.
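A minimal sketch of the table-driven approach follows, assuming a toy success-probability model: the rate set is 802.11a-like, but the channel model and all numbers are invented for illustration. Offline, the best PHY mode is precomputed for every (payload, channel, retry) triplet; at runtime the sender only performs a table lookup. The retry index is kept in the key as in the paper's triplet, though this toy model does not use it.

```python
# Rate set is 802.11a-like; the success-probability model is invented.
RATES_MBPS = [6, 12, 24, 54]

def p_success(rate, snr_db, payload_bytes):
    # crude monotone model: higher rates need more SNR, longer frames fail more
    per_bit = min(10.0 ** (-(snr_db - rate / 6.0) / 2.0), 1.0)
    return (1.0 - per_bit) ** (payload_bytes * 8)

def build_table(payloads, snrs, max_retry):
    """Offline step: best PHY mode for every (payload, SNR, retry) triplet,
    maximizing expected goodput = rate * P_success."""
    table = {}
    for pl in payloads:
        for snr in snrs:
            for retry in range(max_retry):   # kept in the key as in the paper
                best = max(RATES_MBPS, key=lambda r: r * p_success(r, snr, pl))
                table[(pl, snr, retry)] = best
    return table

# Runtime step: a plain dictionary lookup with the current system status.
table = build_table(payloads=[256, 1500], snrs=[10, 20, 30], max_retry=4)
best_rate = table[(1500, 20, 0)]
```

Even with this crude model, the table reproduces the expected qualitative behavior: robust low rates win on poor channels with long frames, and the fastest rate wins on good channels.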
In mobile devices, the wireless network interface card (WNIC) consumes a significant portion of overall system energy. One way to reduce energy consumed by a device is to transition its WNIC to a lower-power sleep mode when data is not being received or transmitted. In this paper, we investigate client-centered techniques for energy-efficient communication, using IEEE 802.11b, within the network layer. The basic idea is to conserve energy by keeping the WNIC in high-power mode only when necessary. We track each connection, which allows us to determine inactive intervals during which to transition the WNIC to sleep mode. Whenever necessary, we also shape the traffic from the client side to maximize sleep intervals by convincing the server to send data in bursts. This trades lower WNIC energy consumption for an increase in transmission time. Our techniques are compatible with standard TCP and do not rely on any assistance from the server or network infrastructure. Results show that during Web browsing, our client-centered technique saved 21 percent energy compared to PSM and incurred less than a 1 percent increase in transmission time compared to regular TCP. For a large file download, our scheme saved 27 percent energy on average with a transmission time increase of only 20 percent.
We develop performance models for delay-sensitive uplink media streaming over a wireline-cum-WiFi network. Since the wireless channel is normally a bottleneck for such streaming, we modify the traditional 802.11e block acknowledgment (B-ACK) scheme to work with wireless fountain coding (WFC)-a packet-level coding scheme which codes packets in a similar manner to intrasession random network coding but delivers them in a manner similar to fountain coding. By using this modified B-ACK scheme, protocol complexity and wireless link-layer delay are potentially reduced. We analytically quantify this delay and use it to derive end-to-end packet loss/late probabilities when automatic repeat request (ARQ) and forward error correction (FEC) are jointly employed at the application-layer. We develop an integrated ns-3/EvalVid simulator to validate our models and compare them with the case when the traditional 802.11e B-ACK scheme is employed. Through simulations of video streaming, we observe that the modified B-ACK scheme does not always perform better than the traditional B-ACK scheme in terms of end-to-end packet loss/late probability and video distortion under certain conditions of the wireless channel. This observation leads us to propose a hybrid scheme that switches between the modified and traditional B-ACK strategies according to the conditions of the wireless channel and the number of packets to transmit in a block. Via simulations, we show the benefits of the hybrid scheme when compared to the traditional IEEE 802.11e B-ACK scheme under different network settings.
The emergence of video streaming over wireless home networks creates renewed interest in the design and analysis of new MAC protocols toward QoS provisioning for video applications. IEEE 802.11e Hybrid coordination function Controlled Channel Access (HCCA) exhibits good QoS provisioning for constant bit rate (CBR) video streams in a single collision domain. However, its performance degrades significantly for variable bit rate (VBR) video streams, particularly in multicollision domains. In addition, HCCA has the disadvantage of high complexity. In this paper, we introduce a deterministic backoff (DEB) method into the HCCA mechanism, which achieves virtual polling via carrier sense on the wireless channel. DEB intentionally sets each station's backoff counter to a different value; thus, stations access the shared wireless channel in different time slots, which avoids network collisions. By properly controlling each station's backoff counter, DEB achieves polling like HCCA, but in a more flexible and efficient way. It also considerably mitigates inter-AP interference due to its carrier sense nature. Results show that, compared to HCCA, DEB always exhibits improvement in performance, particularly in multicollision domains where the improvement is remarkable.
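The core of DEB, assigning each station a distinct backoff counter so that carrier sense alone serializes channel access, can be sketched as follows. The spacing value and the ID-ordered assignment are illustrative assumptions, not the paper's exact policy.

```python
def deb_backoff(station_ids, spacing=2):
    """Deterministic backoff sketch: give every station a distinct counter so
    that, via carrier sense alone, stations reach zero in different slots
    (virtual polling with no collisions among them)."""
    return {sid: i * spacing for i, sid in enumerate(sorted(station_ids))}

slots = deb_backoff(["sta3", "sta1", "sta2"])
# distinct counters -> no two stations decrement to zero in the same slot
assert len(set(slots.values())) == len(slots)
```

Because access order is determined by the assigned counters rather than by a central poller, the scheme behaves like polling without HCCA's scheduling machinery.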
We analyze the MAC access delay of the IEEE 802.11e EDCA mechanism under saturation. We develop a detailed analytical model to evaluate the influence of all EDCA differentiation parameters, namely AIFS, CWmin, CWmax and TXOP limit, as well as the backoff multiplier beta. Explicit expressions for the mean, standard deviation and generating function of the access delay distribution are derived. By applying numerical inversion on the generating function, we are able to efficiently compute values of the distribution. Comparison with simulation confirms the accuracy of our analytical model over a wide range of operating conditions. We derive simple asymptotics and approximations for the mean and standard deviation of the access delay, which reveal the salient model parameters for performance under different differentiation mechanisms. We also use the model to numerically study the differentiation performance and find that beta differentiation, though rejected during the standardization process, is an effective differentiation mechanism that has some advantages over the other mechanisms.
Bandwidth allocation schemes have been well studied for mobile cellular networks. However, there is no study about this aspect reported for IEEE 802.11 contention-based distributed wireless LANs. In cellular networks, bandwidth is deterministic in terms of the number of channels by frequency division, time division, or code division. On the contrary, bandwidth allocation in contention-based distributed wireless LANs is extremely challenging due to its contention-based nature, packet-based network, and, most importantly, the fact that only one channel is available, competed for by an unknown number of stations. As a consequence, guaranteeing bandwidth and allocating bandwidth are both challenging issues. In this paper, we address these difficult issues. We propose and study nine bandwidth allocation schemes, called sharing schemes, with guaranteed Quality of Service (QoS) for integrated voice/video/data traffic in IEEE 802.11e contention-based distributed wireless LANs. A guard period, reserved for best-effort data traffic, is proposed to prevent bandwidth allocation from overprovisioning. Our study and analysis show that the guard period is a key concept for QoS guarantees in a contention-based channel. The proposed schemes are compared and evaluated via extensive simulations.
The IEEE 802.15.4 standard specifies physical layer (PHY) and medium access control (MAC) sublayer protocols for low-rate and low-power communication applications. In this protocol, every 4-bit symbol is encoded into a sequence of 32 chips that are actually transmitted over the air. Taken as a whole, the 32 chips are also called a pseudonoise code (PN-Code). Due to complex channel conditions such as attenuation and interference, the transmitted PN-Code will often be received with some of its chips corrupted. In this paper, we conduct a systematic analysis of these errors occurring at the chip level. We find that there are notable error patterns corresponding to different cases. We then show that recognizing these patterns enables us to identify the channel condition in great detail. We believe that understanding in this way what happened to the transmission can potentially benefit channel coding, routing, and error correction protocol design. Finally, we propose Simple Rule, a simple yet effective method based on the chip error patterns to infer the link condition with an accuracy of over 96 percent in our evaluations.
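The chip-level analysis can be illustrated as follows. The two PN codes below are hypothetical placeholders, not the actual 802.15.4 chip sequences, and a full decoder would compare against all 16 codes rather than two.

```python
# Hypothetical 32-chip PN codes for two symbols (NOT the real 802.15.4
# sequences; a full decoder would compare against all 16 codes).
PN_CODES = {
    0x0: int("01" * 16, 2),
    0x1: int("0011" * 8, 2),
}

def hamming(a, b):
    return bin(a ^ b).count("1")

def chip_errors(received):
    """Map a received 32-chip word to the closest PN code and report which
    chip positions (0 = first transmitted chip) were corrupted."""
    symbol = min(PN_CODES, key=lambda s: hamming(PN_CODES[s], received))
    diff = PN_CODES[symbol] ^ received
    pattern = [i for i in range(32) if (diff >> (31 - i)) & 1]
    return symbol, pattern
```

Aggregating such per-frame error-position patterns over time is the kind of raw material the paper's analysis works from, e.g. to separate attenuation-like from interference-like corruption.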
This paper studies efficient and simple data broadcast in IEEE 802.15.4-based ad hoc networks (e.g., ZigBee). Since finding the minimum number of rebroadcast nodes in general ad hoc networks is NP-hard, current broadcast protocols either employ heuristic algorithms or assume extra knowledge such as position or a two-hop neighbor table. However, the ZigBee network is characterized by low data rate and low cost. It cannot provide position or two-hop neighbor information, but it still requires an efficient broadcast algorithm that can reduce the number of rebroadcast nodes with limited computation complexity and storage space. To this end, this paper proposes self-pruning and forward node selection algorithms that exploit the hierarchical address space in ZigBee networks. Only one-hop neighbor information is needed; a partial list of two-hop neighbors is derived without exchanging messages between neighboring nodes. The ZigBee forward node selection algorithm finds the minimum rebroadcast node set with polynomial computation time and memory space. Using the proposed localized algorithms, it is proven that the entire network is covered. Simulations are conducted to evaluate the performance improvement in terms of the number of rebroadcast nodes, number of duplicate receptions, coverage time, and communication overhead.
The IEEE 802.16 standard defines the air interface specifications for broadband access in wireless metropolitan area networks. Although the medium access control signaling has been well-defined in the IEEE 802.16 specifications, resource management and scheduling, which are crucial components for guaranteeing quality-of-service performance, still remain open issues. In this paper, we propose adaptive queue-aware uplink bandwidth allocation and rate control mechanisms in a subscriber station for polling service in IEEE 802.16 broadband wireless networks. While the bandwidth allocation mechanism adaptively allocates bandwidth for polling service in the presence of higher priority unsolicited grant service, the rate control mechanism dynamically limits the transmission rate for the connections under polling service. Both of these schemes exploit the queue status information to guarantee the desired quality of service (QoS) performance for polling service. We present a queuing analytical framework to analyze the proposed resource management model, from which various performance measures for polling service in both steady and transient states can be obtained. We also analyze the performance of best-effort service in the presence of unsolicited grant service and polling service. The proposed analytical model would be useful for performance evaluation and engineering of radio resource management alternatives in a subscriber station so that the desired quality-of-service performance for polling service can be achieved. Analytical results are validated by simulations, and typical numerical results are presented.
IEEE 802.16 is a standard for broadband wireless communication in metropolitan area networks (MAN). To meet the QoS requirements of multimedia applications, the IEEE 802.16 standard provides four different scheduling services: unsolicited grant service (UGS), real-time polling service (rtPS), non-real-time polling service (nrtPS), and Best Effort (BE). The paper is aimed at verifying, via simulation, the effectiveness of rtPS, nrtPS, and BE (but not UGS) in managing traffic generated by data and multimedia sources. Performance is assessed for an IEEE 802.16 wireless system working in point-to-multipoint (PMP) mode, with frequency division duplex (FDD), and with full-duplex subscriber stations (SSs). Our results show that the performance of the system, in terms of throughput and delay, depends on several factors. These include the frame duration, the mechanisms for requesting uplink bandwidth, and the offered load partitioning, i.e., the way traffic is distributed among SSs, connections within each SS, and traffic sources within each connection. The results also highlight that rtPS is a very robust scheduling service for meeting the delay requirements of multimedia applications.
The IEEE 802.16 WirelessMAN standard provides a comprehensive quality-of-service (QoS) control structure to enable flow isolation and service differentiation over the common wireless network interface. By specifying a particular set of service parameters, the media access control (MAC) mechanisms defined in the standard are able to offer predefined QoS provisioning on a per-connection basis. However, the design of efficient, flexible, and yet robust MAC scheduling algorithms for such QoS provisioning still remains an open topic. This paper proposes a new QoS control scheme for single-carrier point-to-multipoint mode wireless metropolitan area network (WirelessMAN) systems, which enables the predefined service parameters to control the service provided to each uplink and downlink connection. By MAC-PHY cross-layer resource allocation, the proposed scheme is robust against particular wireless link degradation. Detailed simulation experiments are presented to study the performance and to validate the effectiveness of the proposed QoS control scheme.
In this paper, we define a new problem that has not been addressed in the past: the trade-off between energy efficiency and throughput for multicast services in 802.16e or similar mobile networks. In such networks, the mobile host can reduce its energy consumption by entering sleep mode when it is not supposed to receive or transmit information. For unicast applications, the trade-off between delay and energy efficiency has been extensively researched. However, for mobile hosts running multicast (usually push-based) applications, it is much more difficult to determine when data should be transmitted by the base station and when each host should enter sleep mode. In order to maximize the channel throughput while limiting the energy consumption, a group of hosts needing similar data items should be active during the same time intervals. We define this as an optimization problem and present several algorithms for it. We show that the most efficient solution is the one that employs cross-layer optimization by dividing the hosts into groups according to the quality of their downlink physical (PHY) channels.
This paper presents an energy conservation scheme, Maximum Unavailability Interval (MUI), to improve the energy efficiency for the Power Saving Class of Type II in IEEE 802.16e. By applying the Chinese Remainder Theorem, the proposed MUI is guaranteed to find the maximum Unavailability Interval, during which the transceiver can be powered down. We also propose new mathematical techniques to reduce the computational complexity when solving the Chinese Remainder Theorem problem. Because the computational complexity is reduced significantly, the proposed MUI can be practically implemented in real systems. The proposed MUI is fully compatible with the 802.16e standard. It provides a systematic way to determine the start frame number, one of the important parameters defined in the standard. In addition to analyzing the computational complexity, simulations and experiments are conducted to evaluate the performance of the proposed algorithms.
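The Chinese Remainder Theorem core of MUI can be sketched for two Power Saving Classes; the paper's actual start-frame computation and its complexity-reduction techniques are more involved than this two-moduli solver.

```python
def crt_pair(a1, m1, a2, m2):
    """Solve x = a1 (mod m1) and x = a2 (mod m2) for coprime m1, m2."""
    k = ((a2 - a1) * pow(m1, -1, m2)) % m2   # modular inverse (Python 3.8+)
    return (a1 + k * m1) % (m1 * m2)

# Illustrative: two sleep schedules whose listening windows fall at
# frame = 2 (mod 5) and frame = 3 (mod 7); find a frame satisfying both.
start = crt_pair(2, 5, 3, 7)
assert start % 5 == 2 and start % 7 == 3
```

Solving such congruences is what lets the scheme pick a start frame number that aligns the listening windows, maximizing the contiguous interval during which the transceiver can stay powered down.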
This paper is concerned with caching support of access points (APs) for fast handoff within IEEE 802.11 networks. A common flavor of current schemes is to let a mobile station preauthenticate or distribute the security context of the station proactively to neighboring APs. Each target AP caches the received context beforehand and can save itself backend-network authentication if the station reassociates. We present an approach to improving cache effectiveness under the least recently used (LRU) replacement policy, additionally allowing for distinct cache miss penalties indicative of authentication delay. We leverage the widely used LRU caching techniques to effect a new model where high-penalty cache entries are prevented from being prematurely evicted under the conventional replacement policy, so as to save frequent, expensive authentications with remote sites. This is accomplished by introducing software-generated reference requests that trigger cache hardware machinery in APs to refresh certain entries in an automated manner. Performance evaluations are conducted using simulation and analytical modeling. Performance results show that our approach, when compared with the base LRU scheme, reduces authentication delay by more than 51 percent and cache miss ratio by over 28 percent on average. Quantitative and qualitative discussions indicate that our approach is applicable in pragmatic settings.
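A minimal sketch of the idea, assuming a software-managed LRU with a per-entry penalty value (the paper works with AP cache hardware and authentication-delay penalties): high-penalty entries receive synthetic reference requests so that plain LRU replacement does not evict them first.

```python
from collections import OrderedDict

class PenaltyAwareLRU:
    """Sketch of an LRU cache where high-miss-penalty entries are refreshed
    by synthetic reference requests so they are not evicted prematurely
    (an illustration of the idea, not the paper's AP implementation)."""

    def __init__(self, capacity, penalty_threshold):
        self.capacity = capacity
        self.threshold = penalty_threshold
        self.cache = OrderedDict()          # key -> miss penalty

    def access(self, key, penalty=0):
        if key in self.cache:
            self.cache.move_to_end(key)     # genuine reference
            return True                     # hit
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict the current LRU entry
        self.cache[key] = penalty
        return False                        # miss

    def refresh_high_penalty(self):
        # software-generated references: touch expensive entries so the
        # LRU machinery treats them as recently used
        for key, penalty in list(self.cache.items()):
            if penalty >= self.threshold:
                self.cache.move_to_end(key)
```

Periodically invoking refresh_high_penalty() steers eviction toward cheap-to-refetch entries, which is the mechanism the paper automates via hardware-triggered reference requests.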
Cooperative (or distributed) sensing has been recognized as a viable means to enhance incumbent signal detection by exploiting the diversity of sensors. However, it is challenging to secure such distributed sensing, due mainly to the unique features of dynamic spectrum access networks: the openness of low-layer protocol stacks in software-defined radio devices and the absence of interactions/coordination between primary and secondary devices. To meet this challenge, we propose an attack-tolerant distributed sensing protocol (ADSP) for DTV signal detection in IEEE 802.22 WRANs, under which sensors in close proximity are grouped as a cluster, and sensors within a cluster cooperate to safeguard the integrity of sensing. The heart of ADSP is a novel filter based on shadow-fading correlation, by which the fusion center cross-validates reports from the sensors to identify and penalize abnormal sensing reports. By realizing this correlation filter, ADSP significantly reduces the impact of an attack on the performance of distributed sensing, while incurring minimal processing and communication overheads. ADSP also guarantees the detectability requirements of 802.22 to be met even in the presence of sensing report manipulation attacks by scheduling sensing within the framework of sequential hypothesis testing. The efficacy of ADSP is validated on a realistic 2D shadow-fading field. Our extensive simulation-based study shows that ADSP reduces the false-alarm rate by 99.2 percent while achieving 97.4 percent of the maximum achievable detection rate, and meets the detection requirements of IEEE 802.22 in various attack scenarios.
Context prediction is the task of inferring information about the progression of an observed context time series based on its previous behaviour. Prediction methods can be applied at several abstraction levels in the context processing chain. In a theoretical analysis as well as by means of experiments we show that the nature of the input data, the quality of the output, and finally the flow of processing operations used to make a prediction, are correlated. A comprehensive discussion of basic concepts in context prediction domains and a study on the effects of the context abstraction level on the context prediction accuracy in context prediction scenarios is provided. We develop a set of formulae that link scenario-dependent parameters to a probability for the context prediction accuracy. It is demonstrated that the results achieved in our theoretical analysis can also be confirmed in simulations as well as in experimental studies.
It is well known that IEEE 802.11 provides a physical layer multirate capability and, hence, MAC layer mechanisms are needed to exploit this capability. Several solutions have been proposed to achieve this goal. However, these solutions only consider how to exploit good channel quality for the direct link between the sender and the receiver. Since IEEE 802.11 supports multiple transmission rates in response to different channel conditions, data packets may be delivered faster through a relay node than through the direct link if the direct link has low quality and low rate. In this paper, we propose a novel MAC layer relay-enabled distributed coordination function (DCF) protocol, called rDCF, to further exploit the physical layer multirate capability. We design a protocol to assist the sender, the relay node, and the receiver to reach an agreement on which data rate to use and whether to transmit the data through a relay node. Considering various issues, such as, bandwidth utilization, channel errors, and security, we propose techniques to further improve the performance of rDCF. Simulation results show that rDCF can significantly reduce the packet delay, improve the system throughput, and alleviate the impact of channel errors on fairness.
In wireless ad hoc networks (WANets), multihop routing may result in a radio knowing the content of transmissions of nearby radios. This knowledge can be used to improve spatial reuse in the network, thereby enhancing network throughput. Consider two radios, Alice and Bob, that are neighbors in a WANet not employing spread-spectrum multiple access. Suppose that Alice transmits a packet to Bob for which Bob is not the final destination. Later, Bob forwards that packet on to the destination. Any transmission by Bob not intended for Alice usually causes interference that prevents Alice from receiving a packet from any of her neighbors. However, if Bob is transmitting a packet that he previously received from Alice, then Alice knows the content of the interfering packet, and this knowledge can allow Alice to receive a packet from one of her neighbors during Bob's transmission. In this paper, we develop overlapped transmission techniques based on this idea and analyze several factors affecting their performance. We then develop a MAC protocol based on the IEEE 802.11 standard to support overlapped transmission in a WANet. The resulting overlapped CSMA (OCSMA) protocol improves spatial reuse and end-to-end throughput in several scenarios.
The concept of cooperative relaying promises gains in robustness and energy efficiency in wireless networks. Although protocols for cooperative relay selection were proposed recently, their analysis was made without consideration of the energy required for receiving. Such an analysis is unfair, as relaying requires more receptions than direct source-destination transmission. We address this gap and propose two refinements of cooperative relaying. Using "relay selection on demand," relays are only selected if required by the destination. Using "early retreat," each potential relay assesses the channel state and decides whether to participate in the relay selection process or not. Simulation results show that these enhancements reduce the overall energy consumption significantly.
We present an analytical framework to assess the link layer throughput of multichannel Opportunistic Spectrum Access (OSA) ad hoc networks. Specifically, we focus on analyzing various combinations of collaborative spectrum sensing and Medium Access Control (MAC) protocol abstractions. We decompose collaborative spectrum sensing into layers, parametrize each layer, classify existing solutions, and propose a new protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient distribution of sensing results under the "k out of N" fusion rule. In the case of multichannel MAC protocols, we evaluate the two main approaches to control channel design: (i) dedicated and (ii) hopping channel. We propose to augment these protocols with options for handling secondary user (SU) connections preempted by a primary user (PU): (i) connection buffering until PU departure and (ii) connection switching to a vacant PU channel. By comparing and optimizing different design combinations, we show that (i) it is generally better to buffer preempted SU connections than to switch them to vacant PU channels and (ii) TTDMA is a promising design option for the collaborative spectrum sensing process when k does not change over time.
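The 'k out of N' fusion rule itself is simple to state in code; the sensor model below (independent sensors sharing a common local detection probability) is an illustrative assumption used only to show how the fused detection probability follows from the rule.

```python
from math import comb

def fusion_decision(reports, k):
    """'k out of N' rule: declare the primary user present when at least k
    of the N local sensing reports are positive."""
    return sum(reports) >= k

def detection_prob(pd, n, k):
    """Fused detection probability assuming n independent sensors that each
    detect the primary user with probability pd (illustrative model)."""
    return sum(comb(n, i) * pd**i * (1 - pd)**(n - i) for i in range(k, n + 1))

assert fusion_decision([1, 0, 1, 1, 0], k=3) is True
```

Small k makes the fused detector aggressive (any few positives suffice), while k = N makes it conservative; this is the trade-off a protocol like TTDMA must distribute sensing results to support.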