We present a prototype for a new architecture, MCN (multihop
cellular network), implemented over a wireless LAN platform. MCN
preserves the virtue of traditional single-hop cellular networks, where
the service infrastructure is built from many base stations, but it also
adds the flexibility of ad hoc networks, where wireless transfer through
mobile stations over multiple hops is allowed. MCN can reduce the number
of required base stations or improve throughput performance. On IEEE
802.11 compliant wireless LAN products, a bridging protocol, our BMBP
(base-driven multihop bridging protocol), runs between mobile stations
and access points to build bridging tables. The demonstration shows that
MCN is a feasible architecture for wireless LANs.
Reliable data transmission over a wireless multi-hop network, called an
ad hoc network, has proven to be non-trivial. TCP (transmission control
protocol), a widely used end-to-end reliable transport protocol in wired
networks, is not entirely suitable for a wireless ad hoc network because
of TCP's congestion control schemes. In particular, the TCP sender
regards the network as congested when it detects packet losses or
timeouts. However, in a wireless ad hoc network a route disconnection
caused by node movement is mistaken for congestion. The conventional TCP
congestion control mechanism therefore cannot be applied directly,
because a route disconnection must be handled differently from network
congestion. We propose a new mechanism that improves TCP performance in
a wireless ad hoc network by letting each node buffer packets during a
route disconnection and reestablishment. Additionally, we incorporate
new measures for the reliable transmission of important control
messages. Our simulation results confirm these advantages.
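To make the buffering idea concrete, the following is a minimal sketch, assuming a per-node forwarding queue, a routing-layer callback that signals route loss and recovery, and a hypothetical send() primitive; it illustrates the general approach, not the paper's protocol.

```python
from collections import deque

class BufferingNode:
    """Holds packets while the route is down instead of dropping them, so the
    TCP sender does not misread the outage as congestion."""

    def __init__(self, send):
        self.send = send          # callable that transmits to the next hop
        self.route_up = True
        self.buffer = deque()

    def forward(self, packet):
        if self.route_up:
            self.send(packet)
        else:
            self.buffer.append(packet)   # buffer during the disconnection

    def on_route_broken(self):
        self.route_up = False

    def on_route_reestablished(self):
        self.route_up = True
        while self.buffer:               # flush buffered packets in order
            self.send(self.buffer.popleft())

# usage sketch: node = BufferingNode(send=radio.transmit); node.forward(pkt)
```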
In an IP-based network, automated dynamic assignment of IP addresses is preferable. In most wired networks, a node relies on a centralized server, using the dynamic host configuration protocol (DHCP), to obtain a dynamic IP address. However, the DHCP-based approach cannot be employed in a mobile ad hoc network (MANET) because the availability of a centralized DHCP server cannot be guaranteed: a MANET may become partitioned due to host mobility, so a DHCP server may be unreachable. A general approach to this issue is to let a mobile host pick a tentative address randomly and then use a duplicate address resolution (DAR) protocol to resolve any duplicate addresses. In this paper, a distributed dynamic host configuration protocol designed to configure nodes in a MANET is presented. The proposed protocol can not only detect duplicate addresses but also resolve the problems they cause. We show that the proposed protocol works correctly and is more general than earlier approaches. An enhanced version of the DAR scheme is also proposed to handle duplicate MAC addresses. The proposed approach enables nodes in a MANET to provide services to other networks and prevents packets from being delivered to incorrect destinations.
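As a rough illustration of the tentative-address-plus-DAR idea described above, here is a minimal sketch; the broadcast() and wait_for_conflict_reply() primitives, the link-local address pool, and the retry policy are hypothetical placeholders, not the paper's protocol.

```python
import random
import uuid

# Hypothetical link-local pool from which tentative addresses are drawn.
ADDRESS_POOL = [f"169.254.{i}.{j}" for i in range(1, 255) for j in range(1, 255)]

def acquire_address(broadcast, wait_for_conflict_reply, retries=3, timeout_s=2.0):
    """Pick a random tentative address and keep it only if no node objects."""
    node_id = uuid.uuid4().hex          # distinguishes this node's DAR requests
    for _ in range(retries):
        tentative = random.choice(ADDRESS_POOL)
        broadcast({"type": "DAR_REQUEST", "addr": tentative, "id": node_id})
        if not wait_for_conflict_reply(tentative, node_id, timeout_s):
            return tentative            # nobody objected: keep the address
    raise RuntimeError("could not obtain a conflict-free address")
```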
In the near future, packet networks should support applications that cannot predict their traffic requirements in advance but still have tight quality of service requirements, e.g., guaranteed bandwidth, jitter, and packet loss. These dynamic characteristics mean that the sources can be made to modify their data transfer rates according to network conditions. Depending on the customer's needs, the network operator can differentiate incoming connections and handle them differently in the buffers and at the interfaces. In this paper, a dynamic QoS-aware scheduling algorithm is presented and investigated in the single-node case. The purpose of the algorithm is, in addition to fair resource sharing among traffic classes with different priorities, to maximize the revenue of the service provider. The algorithm is derived from a linear revenue target function, and a closed-form, globally optimal formula is presented. The method is computationally inexpensive while still producing maximal revenue. Due to its simplicity, the algorithm can operate in highly nonstationary environments. In addition, it is nonparametric and deterministic in the sense that it uses only information about the number of users and their traffic classes, not about call density functions or duration distributions. A call admission control (CAC) mechanism based on hypothesis testing is also used.
The BcN architecture for VPWS is introduced as a service model. The network elements forming the BcN data plane and a centralized controller providing the BcN control plane functionality are briefly described. A mapping architecture that transports the VPWS signal is presented. We examine the propagation of the maintenance signal under network failure conditions. The main purpose of this architecture is to establish the relationship between the maintenance signals in each layer. Finally, we briefly review MPLS OAM as specified in ITU-T Rec. Y.1711 and Ethernet OAM as recommended in ITU-T Rec. Y.1731.
In this paper, we report our investigation into the effect of using fixed multibeam antennas on the performance of the slotted ALOHA protocol with capture in a mobile communications environment with Rayleigh and Log-normal fading. We consider the configuration where multiple receivers are present at each base station and calculate the capture probability, its asymptotic value as the number of colliding packets tends to infinity, and throughput when multiple receivers are used. The results demonstrate that by using fixed multibeam antennas, we can achieve higher performance in terms of capture probability and throughput when compared to a conventional antenna system using the slotted ALOHA protocol.
The performance implications of retransmission diversity packet
combining on the RLC (radio link control)/MAC (medium access control)
layer and transport layer protocol performance are investigated for
three different heuristic-based RLC/MAC layer access control schemes in
a CDMA S-ALOHA network under frequency selective Rayleigh fading. The
transport layer protocol implements a two-level error recovery mechanism
for reliable data transmission. Two different transport layer timer
control mechanisms are considered. Implications of some physical layer
parameters on system performance are discussed. It is observed that, for
two-level error recovery through a reliable transport protocol, the
achieved throughput depends on the transport protocol timer control
mechanism, and that a suitable mechanism can be identified for an
underlying RLC/MAC layer access control scheme and a particular physical
layer design.
The regrowth of OQPSK power spectral sidelobes from AM/AM and AM/PM amplifier nonlinearity is analyzed. The time-domain expression for the amplifier output shows how spectral regrowth depends on the cubic coefficient of the Taylor series of the amplifier nonlinearity as well as on the input amplitude ripple. Closed-form spectrum calculations show that the spectral sidelobes produced by AM/PM take the same form as those produced by AM/AM. The rate of growth of the AM/PM sidelobes is, however, not as great as for AM/AM.
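To make the dependence on the cubic term concrete, here is a minimal memoryless third-order model of the nonlinearity; the coefficients a_1 and a_3 are illustrative symbols rather than the paper's notation, and the identity simply shows why the cubic term widens the spectrum and scales with the input amplitude ripple.

```latex
% Illustrative memoryless third-order (Taylor-series) model of the amplifier:
%   y(t) = a_1 x(t) + a_3 x^3(t)
% For a bandpass input x(t) = A(t) cos(w_c t + phi(t)), the standard identity
% cos^3(u) = (3 cos u + cos 3u)/4 gives
\[
  y(t) \approx a_1 A(t)\cos\bigl(\omega_c t + \phi(t)\bigr)
  + a_3\Bigl[\tfrac{3}{4}A^3(t)\cos\bigl(\omega_c t + \phi(t)\bigr)
  + \tfrac{1}{4}A^3(t)\cos\bigl(3\omega_c t + 3\phi(t)\bigr)\Bigr],
\]
% so the in-band distortion term (3/4) a_3 A^3(t) cos(w_c t + phi(t)) has an
% envelope A^3(t) that is roughly three times as wide in bandwidth as A(t);
% this is the sidelobe regrowth governed by the cubic coefficient a_3 and by
% the amplitude ripple of A(t).
```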
We propose to use turbo codes for wireless communication systems
with multiple transmit and receive antennas over Rayleigh fading
channels. We show that a simple, arbitrarily picked, turbo coded
modulation scheme with a sub-optimal decoding algorithm outperforms the
space-time codes significantly. We present examples for both block and
fast fading channels, and observe gains as high as 8 dB at a bit error
rate of 10^-5 for large interleaver lengths, suitable for data
communications. Furthermore, we show that, depending on the channel
model, the turbo code block size can be chosen small enough to be
suitable for speech applications, and still offer a significant
performance improvement in terms of bit and frame error rates.
We propose a parallel and distributed routing algorithm with a
hierarchical connection management architecture for the ATM/B-ISDN
transport networks. In the proposed routing algorithm, a hierarchical
connection management architecture is assumed in which each subnetwork
has its own routing functions to find the shortest route for the
requested subnetwork connections, and the routing information is merged
by the upper-level domain subnetwork to find the shortest path in the
domain. This subnetwork routing is performed in every subnetwork in the
hierarchy, providing maximal parallel and distributed processing
capability. The proposed parallel and distributed routing algorithm can
reduce the routing time in a large network, such as a public B-ISDN.
We provide a cross-layer analysis of wireless TCP systems. The effects of error correlation on the behavior of the link retransmission strategy and on the end-to-end throughput of the TCP layer are investigated. Based on the cross-layer analysis, a refinement of the link layer protocol is proposed that deliberately exploits channel correlation information, which leads to improved performance of wireless TCP systems.
We propose a framework to study how to route packets efficiently
in multipath communication networks. Two traffic congestion control
techniques, namely, flow assignment and packet scheduling, have been
investigated. The flow assignment mechanism defines an optimal splitting
of data traffic on multiple disjoint paths. The resequencing delay and
the usage of the resequencing buffer can be reduced significantly by
properly scheduling the sending order of all packets, for example,
according to their expected arrival times at the destination. We
consider a multiple-node M/M/1 tandem network with a delay line as the
path model. When the end-to-end path delays are all Gaussian
distributed, our analytical results show that the techniques are very
effective in reducing the average end-to-end path delay, the average
packet resequencing delay, and the average resequencing buffer occupancy
for various path configurations. These promising results can form a
basis for designing future adaptive multipath protocols.
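A minimal sketch of the packet-scheduling idea described above, assuming the flow-assignment step has already split packets across disjoint paths, a per-path estimate of the end-to-end delay is available, and only the sending order remains to be chosen; estimate_delay and the delay values are hypothetical.

```python
def schedule_by_expected_arrival(assignments, now, estimate_delay):
    """assignments: list of (packet_id, path_id) pairs from flow assignment.
    Returns packet ids ordered so that expected arrival times at the
    destination are non-decreasing, which reduces resequencing at the sink."""
    return [pkt for pkt, path in
            sorted(assignments, key=lambda a: now + estimate_delay(a[1]))]

# usage sketch: two disjoint paths with expected delays of 30 ms and 80 ms
delays = {0: 0.030, 1: 0.080}
order = schedule_by_expected_arrival(
    [("p1", 0), ("p2", 1), ("p3", 0), ("p4", 1)],
    now=0.0, estimate_delay=lambda path: delays[path])
print(order)   # ['p1', 'p3', 'p2', 'p4']
```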
In cellular mobile communications, how to achieve optimum system capacity with a limited frequency spectrum is one of the main research issues. Many dynamic channel assignment (DCA) schemes have been proposed and studied to increase the capacity of cellular systems. Reuse partitioning (RP) is another technique for achieving higher capacity by reducing the overall reuse distance. In this paper, a new network-based DCA scheme using the RP technique is proposed, namely dynamic reuse partitioning with interference information (DRP-WI). The scheme aims to minimize the effect of assigned channels on the availability of channels for use in the interfering cells and to reduce their overall reuse distances. Simulation results confirm the effectiveness of the DRP-WI scheme. Under both uniform and nonuniform traffic distributions, DRP-WI exhibits outstanding performance in improving system capacity. It provides about 100% capacity improvement compared to the conventional fixed channel assignment scheme.
The capacity and the interference statistics of the sectors of a cigar-shaped W-CDMA microcell are studied. A multi-microcell model is used to analyze the uplink when the users are inside an underground train. The microcells are assumed to lie along a long underground tunnel. The capacity and the interference statistics of the microcells are studied for different propagation exponents, antenna side-lobe levels, and bend losses. The capacities for the best and worst cases are given.
It is well known that MIMO systems offer the promise of achieving very high spectral efficiencies (many tens of bits/s/Hz) in a mobile environment. The gains in MIMO capacity are sensitive to the presence of spatial and temporal correlation introduced by the radio environment. In this paper we examine how MIMO capacity is influenced by a number of factors, e.g.: a) temporal correlation, b) various combinations of low/high spatial correlation at either end, and c) combined spatial and temporal correlation. In all cases we compare the channel capacity with that achievable under independent fading. We investigate the behaviour of "capacity fades", examine how often the capacity experiences such fades, develop a method to determine level crossing rates and average fade durations, and relate these to the number of antennas.
We consider a power-controlled CDMA system with N nodes and F flow types, under the constraint that each node uses the same power level for all flows. For the uplink case with F=1, the optimum sequences that minimize the total power are found and proved. For the uplink problem with N=2, the necessary and sufficient condition for a solution to exist is found and proved, and an iterative algorithm to find the optimal solution is provided. For the uplink problem with arbitrary N, an iterative algorithm for finding the optimal solution is provided and its convergence is proved. For the downlink case, the power assignment problem is solved and some properties of the optimum sequences are proved.
We propose a new channel reservation protocol using a counter for
detection of a source conflict in a WDM single-hop network with
non-equivalent propagation delays. A source conflict occurs when a
source node obtains the right to transmit two or more messages to their
destination nodes on different wavelengths in the same time slot. In the
proposed protocol, a source node can detect a source conflict before the
assignment of wavelengths by examining information about the last
message that succeeded in reservation. We approximately analyze the
throughput, taking the effect of source conflicts into account. We also
show by computer simulation that the proposed protocol can reduce the
mean message delay dramatically without degrading the throughput
performance as the offered load becomes large.
This paper deals with the problem of channel identification for
single input multiple output (SIMO) slow fading channels using
clustering algorithms. The received data vectors of the SIMO model are
spread in clusters because of the AWGN. Each cluster is centered around
the ideal channel output labels without noise. Starting from the Markov
SIMO channel model, simultaneous maximum-likelihood estimation of the
input vector and the channel coefficients reduces to one of obtaining
the values of this pair that minimizes the sum of the Euclidean norms
between the received and the estimated output vectors. The Viterbi
algorithm can be used for this purpose provided the trellis diagram of
the Markov model can be labeled with the noiseless channel outputs. The
problem of identification of the ideal channel outputs, which is the
focus of this paper, is then equivalent to designing a vector quantizer
(VQ) from a training set corresponding to the observed noisy channel
outputs. The Linde-Buzo-Gray (1980) type clustering algorithms could be
used to obtain the noiseless channel output labels from the noisy
received vectors. This paper looks at two critical issues with regard to
the use of VQ for channel identification. The first concerns the
applicability of the technique in general; we present theoretical
results showing the conditions under which the technique may be
applicable. The second aims at overcoming the codebook initialization
problem by proposing a novel approach that attempts to make the first
phase of the channel estimation faster than classical codebook
initialization methods.
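To make the clustering step concrete, here is a minimal Lloyd/LBG-style sketch that estimates the noiseless channel-output labels from the noisy received vectors; the farthest-point initialization is a simple stand-in for the faster initialization the paper proposes, and the toy SIMO constellation is illustrative.

```python
import numpy as np

def farthest_point_init(received, num_labels):
    """Pick well-separated received vectors as the initial codebook."""
    centroids = [received[0]]
    for _ in range(num_labels - 1):
        dists = np.min(np.linalg.norm(
            received[:, None, :] - np.array(centroids)[None, :, :], axis=2), axis=1)
        centroids.append(received[dists.argmax()])
    return np.array(centroids)

def lbg_centroids(received, num_labels, iters=50):
    centroids = farthest_point_init(received, num_labels)
    for _ in range(iters):
        dists = np.linalg.norm(received[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                # nearest-centroid assignment
        for k in range(num_labels):
            members = received[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)  # centroid update
    return centroids

# toy SIMO example: 4 ideal (noiseless) channel outputs on 2 receive antennas,
# each observed 200 times in AWGN; the centroids land near the ideal points
ideal = np.array([[1.5, 0.5], [0.5, -1.5], [-0.5, 1.5], [-1.5, -0.5]])
rng = np.random.default_rng(1)
noisy = np.repeat(ideal, 200, axis=0) + 0.1 * rng.standard_normal((800, 2))
print(np.round(lbg_centroids(noisy, num_labels=4), 2))
```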
Cooperative transmission protocols are typically designed to achieve the largest diversity gain and the largest network capacity simultaneously. The concept of the diversity-multiplexing tradeoff (DMT) in MIMO systems put forward by Zheng and Tse has been extended to this field. However, the concept of multiplexing gain in the DMT hinders a clear understanding of the asymptotic interplay among transmission rate, frame error probability (FEP), and signal-to-noise ratio (SNR), and it also fails to predict FEP curves accurately. Two improved methods have been put forward. One is due to Narasimhan, who proposes a finite-SNR diversity-multiplexing tradeoff that gives a tighter lower bound on the FEP curves by applying nonlinear programming in MIMO systems; the other is due to Azarian and El Gamal, who propose a rule called the throughput-reliability tradeoff (TRT), which avoids the limitations of the multiplexing-gain concept and elucidates the linearly asymptotic trends exhibited by the FEP curves in block-fading MIMO channels. The finite-SNR diversity-multiplexing tradeoff has already been applied to cooperative relay channels. However, this method is computationally expensive because nonlinear programming is used, especially in large networks. In this paper, we use the TRT rule to characterize the relationship between transmission rate, FEP, and SNR in decode-and-forward (DF) cooperative protocols, and we exhibit the FEP curves predicted by the TRT. To do this, we first propose a symbol-based slotted decode-and-forward (SSDF) protocol as the infrastructure. Network information theory is also used to bound the capacity of the protocol.
Space-time coding is a bandwidth and power efficient method of
communication over fading channels that realizes the benefits of
multiple transmit and receive antennas. This novel technique has
attracted much attention. However, currently the only analytical guide
to the performance of space-time codes is an upper bound which could be
quite loose in many cases. In this paper, an exact pairwise error
probability is derived for space-time codes operating over Rayleigh
fading channels. Based on this expression, an analytical estimate for
the bit error probability is computed, taking into account dominant
error events. Simulation results indicate that the estimates are of high
accuracy in a broad range of signal-to-noise ratios.
In cognitive radio systems, unlicensed users can use frequency bands when the licensed users are not present. Hence, reliable detection of available spectrum is the foundation of cognitive radio technology. To ensure the unimpaired operation of licensed users and to improve spectrum sensing performance, a novel cooperative spectrum sensing algorithm based on credibility is proposed. In particular, closed-form expressions for the probabilities of detection and false alarm are derived for the proposed algorithm, and an expression for the average overhead used for cooperation is given in the performance analysis. The conclusions are verified by computer simulations.
Ultra-wideband (UWB) radios have attracted great interest for their potential application in short-range, high-data-rate wireless communications. A high received signal-to-noise ratio and compliance with the FCC spectral mask call for judicious design of UWB pulse shapers. In this paper, even- and odd-order derivatives of the Gaussian pulse are used as base waveforms to produce two synthesized pulses. Our method achieves high spectral utilization efficiency in terms of normalized effective signal power (NESP). The waveform design problem can be converted into a linear programming problem, which can be solved efficiently. The waveform based on even-order derivatives is orthogonal to the one based on odd-order derivatives.
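For illustration, the sketch below generates the Gaussian-derivative base waveforms numerically and combines them; the combining weights are arbitrary stand-ins for the weights the paper obtains from its linear program against the FCC mask, and the final line checks the even/odd orthogonality numerically.

```python
import numpy as np

def gaussian_derivatives(orders, sigma=0.5e-9, t_span=4e-9, n=2048):
    """Return the time grid and normalized derivatives of a Gaussian pulse."""
    t = np.linspace(-t_span, t_span, n)
    pulses = {0: np.exp(-t**2 / (2.0 * sigma**2))}
    for k in range(1, max(orders) + 1):
        pulses[k] = np.gradient(pulses[k - 1], t)   # k-th derivative, numerically
    return t, {k: pulses[k] / np.max(np.abs(pulses[k])) for k in orders}

t, d = gaussian_derivatives(orders=[2, 3, 4, 5, 6, 7])
even_pulse = 0.5 * d[2] - 0.3 * d[4] + 0.2 * d[6]   # synthesized from even orders
odd_pulse = 0.6 * d[3] - 0.4 * d[5] + 0.1 * d[7]    # synthesized from odd orders
# Even-order derivatives are even functions and odd-order ones are odd functions,
# so the two synthesized pulses are (numerically) orthogonal:
print(np.sum(even_pulse * odd_pulse) * (t[1] - t[0]))   # ~ 0
```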
We propose a delayed multiple copy retransmission (DMCR) scheme for data communication in wireless networks, in which multiple copies of a lost link layer frame are retransmitted at the link layer one by one with a delay in between. The number of copies gradually increases as the number of retransmissions increases. For implementing the DMCR scheme in a typical mobile communication system, an interleaving scheme is also proposed. Moreover, a simplified method called polling is suggested for when the number of allowed retransmissions is very limited. We compare our scheme with the previous non-delayed retransmission scheme in terms of both channel capacity and total transmission time. Numerical results show that DMCR achieves better performance. The effect of the delay time on end-to-end TCP throughput is investigated as well.
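A minimal sketch of what such a retransmission schedule could look like, assuming the number of copies simply equals the attempt number and a fixed inter-copy delay; the paper's actual growth rule and delay values may differ.

```python
def dmcr_schedule(max_attempts, inter_copy_delay_ms):
    """Yield (attempt, copy_index, send_offset_ms) for each retransmitted copy."""
    offset = 0.0
    for attempt in range(1, max_attempts + 1):
        copies = attempt                    # more copies as attempts accumulate
        for copy in range(copies):
            yield attempt, copy, offset
            offset += inter_copy_delay_ms   # copies are spaced out, not back-to-back

for entry in dmcr_schedule(max_attempts=3, inter_copy_delay_ms=5.0):
    print(entry)
```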
The layer 3 switch enables us to transmit IP datagrams using the
cut-through technique. There are mainly two schemes of connection setup:
one is flow-driven and the other is topology-driven. In
this paper, we analyze the cut-through rate, the datagram waiting time
and the mis-ordered rate as performance measures of both cases and
compare these performances. In the analysis, by using the interrupted
Bernoulli process (IBP), we model the arrival process of the IP flow and
the IP datagram from each source. Furthermore, we investigate the
impacts of the arrival rate and the average IP flow length on
performance.
To provide network services consistently under various network failures, enterprise networks increasingly exploit path diversity through multi-homing. As a result, multi-homed non-transit autonomous systems (ASes) have surpassed single-homed networks in number. In this paper, we address a problem that inevitably arises when networks with multiple entry points deploy stateful inspection firewalls at their borders. We formulate this phenomenon as a state-sharing problem among multiple firewalls under asymmetric routing. To solve this problem, we propose a stateful inspection protocol that requires very low processing and messaging overhead. Our protocol consists of two phases: 1) generation of a TCP SYN cookie marked with the firewall identification number upon a SYN packet arrival, and 2) state sharing triggered by a SYN/ACK packet arrival in the absence of the trail of its initial SYN packet. We demonstrate that our protocol is scalable, robust, and simple enough to be deployed in high-speed networks. It also works transparently under any client-server configuration. Last but not least, we present experimental results from a prototype implementation.
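As a toy illustration of the cookie-marking idea, the sketch below embeds a firewall identifier in the low-order bits of a hash-based 32-bit cookie; real TCP SYN cookies follow a specific bit layout (timestamp and MSS encoding) that is not reproduced here, and the field sizes are assumptions.

```python
import hashlib

FW_ID_BITS = 3                                    # supports up to 8 border firewalls

def make_marked_cookie(src, sport, dst, dport, secret, fw_id):
    """Hash the connection 4-tuple with a secret and mark the low bits with fw_id."""
    data = f"{src}:{sport}-{dst}:{dport}-{secret}".encode()
    h = int.from_bytes(hashlib.sha256(data).digest()[:4], "big")
    return (h & ~((1 << FW_ID_BITS) - 1)) | (fw_id & ((1 << FW_ID_BITS) - 1))

def firewall_of(cookie):
    """Recover which firewall issued the cookie when its SYN/ACK comes back."""
    return cookie & ((1 << FW_ID_BITS) - 1)

cookie = make_marked_cookie("10.0.0.7", 51512, "192.0.2.10", 443, "s3cret", fw_id=5)
print(hex(cookie), firewall_of(cookie))           # low bits identify firewall 5
```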
The controlled load service defined within the IETF's Integrated
Services architecture for QoS in the Internet requires sources to
regulate their traffic while the network provides a soft guarantee on
performance. Packets sent in violation of the traffic specification are
marked so that the network may give them lower priority. We have defined
the requirements of a scheduler serving packets belonging to the
controlled load service. Besides efficiency and throughput goals, we
define another important requirement: to bound the additional delay of
unmarked packets caused by the transmission of marked packets. For any given desired
bound α on this additional delay, we present the CL(α)
scheduler which achieves the bound while also achieving a per-packet
work complexity of O(1). We also provide analytical proofs of these
results on the CL(α) scheduler. The principle used in this
algorithm can also be used to schedule flows with multilevel priorities,
such as in some real-time video streams as well as in other emerging
service models of the Internet that mark packets to identify drop
precedences.
In this paper, a multilevel-quantized soft-limiting (SL-MQ)
detector for frequency hopping spread spectrum multiple access (FH-SSMA)
systems is proposed and analyzed. Numerical and simulation results in
frequency selective Rayleigh fading channels show that compared to the
hard-limiting (HL) detector, the new SL-MQ detector with M=4 can improve
the system capacity by almost 10% at a bit error rate of 10^-3.
Furthermore, the performance of the SL-MQ detector has low sensitivity
to the exact value of the amplitude threshold, so it can tolerate an
inaccurate estimate of the optimum threshold in practice.
The differentiated services architecture provides router
mechanisms for aggregate traffic, and edge mechanisms for individual
flows, that together can be used to build services with varying delay
and loss behavior. We compare the loss and delay behavior that can be
provided using services based on combinations of two router mechanisms
(threshold dropping and priority scheduling) and two packet marking
mechanisms (edge-discarding and edge-marking). We compare the
delay and loss behavior of the two router mechanisms coupled with
edge-discarding for a wide range of traffic arrivals. We observe that
priority scheduling provides lower expected delays to preferred traffic
than threshold dropping. In addition, we find that considerable
additional link bandwidth is needed with threshold dropping to provide
the same delay behavior as priority scheduling. We further observe little
difference in the loss incurred by preferred traffic under both router
mechanisms, except when sources are extremely bursty, in which case
threshold dropping performs better. We examine the throughput of a TCP
connection that uses a service built upon threshold dropping and
edge-marking. Our analysis shows that a significant improvement in
throughput can be achieved. However, we find that in order to fully
achieve the benefit of such a packet marking, the TCP window must take
the edge-marking mechanism into consideration.
In this paper, we propose a new scheme for small group multicast in mobile ad hoc networks (MANETs) named extended explicit multicast (E2M), which is built on top of Xcast and introduces mechanisms that make it scalable with the number of group members in a given multicast session. E2M is based on the novel concept of dynamic selection of Xcast forwarders (XFs) between a source and its potential destinations. The XF selection is carried out based on group membership and the processing overhead involved in supporting the Xcast procedure at a given node. If the number of members in a given session is small, E2M works just like the basic Xcast scheme, with no intermediate XFs. As group membership increases, the overhead involved in supporting an Xcast session also increases. Therefore, to reduce the overhead and provide scalability, in E2M nodes may dynamically decide to become an XF. This scheme, which can work with only a few E2M-aware nodes in the network, provides the transparency of stateless multicast, reduces header processing overhead, and makes Xcast scalable with the number of group members.
Maximum likelihood detection with QR decomposition and the M-algorithm (QRM-MLD) has been presented as a sub-optimum multiple-input multiple-output (MIMO) detection scheme that provides almost the same performance as the optimum maximum likelihood (ML) MIMO detection scheme but with reduced complexity. However, due to the lack of parallelism and regularity in its decoding structure, the conventional QRM-MLD, which uses a tree structure, still has very high complexity for very large scale integration (VLSI) implementation. In this paper, we modify the tree structure of the conventional QRM-MLD into a trellis structure in order to obtain high operational parallelism and regularity, and we then apply the Viterbi algorithm to the QRM-MLD to ease the burden of VLSI implementation. We show through selected numerical examples that, by using the QRM-MLD with our proposed trellis structure, we can reduce the complexity significantly compared to the tree-structure-based QRM-MLD, while the performance degradation of our proposed scheme is negligible.
The application of mathematical analysis to the study of wireless ad hoc networks has met with limited success due to the
complexity of mobility, traffic models, and the dynamic topology. A scenario-based UMTS TDD opportunistic cellular system with
ad hoc behaviour that operates over a licensed UMTS FDD cellular network is considered. In this paper, using ad hoc behavior
in an opportunistic radio, we examine how the overall system performance is affected in terms of interference and routing. To this end,
we develop a simulation tool for the analysis and assessment of a UMTS TDD opportunistic radio system with
ad hoc behaviour coexisting with a UMTS FDD primary cellular network.
Supporting quality of service (QoS) guarantees for diverse multimedia
services is a primary concern for WiMAX (IEEE 802.16) networks, and a
scheduling scheme that satisfies QoS requirements has become increasingly
important for wireless communications. We propose a downlink scheduling
scheme called adaptive priority-based downlink scheduling (APDS) for
providing QoS guarantees in IEEE 802.16 networks. APDS comprises two major
components: priority assignment and resource allocation. Priority
assignments and bandwidth dispatching are adjusted dynamically according to
the QoS requirements of the different service-type connections. We consider both starvation avoidance
and resource management. Simulation results show that our APDS methodology
outperforms the representative scheduling approaches in QoS satisfaction and
maintains fairness in starvation prevention.
Multiple-input, multiple-output (MIMO) technology provides high data rate and
enhanced QoS for wireless communications. Since the benefits from MIMO result
in a heavy computational load in detectors, the design of low-complexity
sub-optimum receivers is currently an active area of research.
Lattice-reduction-aided detection (LRAD) has been shown to be an effective
low-complexity method with near-ML performance. In this paper we advocate the
use of systolic array architectures for MIMO receivers, and in particular we
exhibit one of them based on LRAD. The "LLL lattice reduction algorithm" and
the ensuing linear detections or successive spatial-interference cancellations
can be located in the same array, which is considerably hardware-efficient.
Since the conventional form of the LLL algorithm is not immediately suitable
for parallel processing, two modified LLL algorithms are considered here for
the systolic array. The LLL algorithm with full-size reduction (FSR-LLL) is
one version that is more suitable for parallel processing. Another variant is the
all-swap lattice-reduction (ASLR) algorithm for complex-valued lattices, which
processes all lattice basis vectors simultaneously within one iteration. Our
novel systolic array can operate both algorithms with different external logic
controls. In order to simplify the systolic array design, we replace the
Lovász condition in the definition of an LLL-reduced lattice with the looser
Siegel condition. Simulation results show that for LR-aided linear detections,
the bit-error-rate performance is still maintained with this relaxation.
Comparisons between the two algorithms in terms of bit-error-rate performance,
and average FPGA processing time in the systolic array are made, which shows
that ASLR is a better choice for a systolic architecture, especially for
systems with a large number of antennas.
This paper considers a cooperative OFDMA-based cognitive radio network where
the primary system leases some of its subchannels to the secondary system for a
fraction of time in exchange for the secondary users (SUs) assisting the
transmission of primary users (PUs) as relays. Our aim is to determine the
cooperation strategies among the primary and secondary systems so as to
maximize the sum-rate of SUs while maintaining quality-of-service (QoS)
requirements of PUs. We formulate a joint optimization problem of PU
transmission mode selection, SU (or relay) selection, subcarrier assignment,
power control, and time allocation. By applying the dual method, this mixed-integer
programming problem is decomposed into parallel per-subcarrier subproblems,
with each determining the cooperation strategy between one PU and one SU. We
show that, on each leased subcarrier, the optimal strategy is to let an SU
exclusively act as a relay or transmit for itself. This result is fundamentally
different from conventional spectrum leasing in single-channel systems,
where an SU must transmit for a fraction of the time for itself if it helps the PU's
transmission. We then propose a subgradient-based algorithm to find the
asymptotically optimal solution to the primal problem in polynomial time.
Simulation results demonstrate that the proposed algorithm can significantly
enhance the network performance.
We introduce a novel packet retransmission technique that improves the efficiency of automatic repeat request (ARQ) protocols in the context of satellite broadcast/multicast systems. The proposed coded ARQ technique, similarly to fountain coding, transmits redundant packets formed as linear combinations of the packets composing the source block. Differently from fountain codes, the packets entering the linear combinations are selected on the basis of the retransmission requests coming from the user terminals. The selection is performed in such a way that, at the terminals, the source packets can be recovered iteratively by means of simple back-substitutions. This work aims at providing a simple and efficient alternative to reliable multicast protocols based on erasure correction coding techniques.
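The back-substitution idea can be illustrated with a small sketch over byte arrays, assuming XOR-based combinations and a sender that orders the repair packets so each one introduces at most one packet a terminal is still missing; the packet sizes and index sets are arbitrary examples.

```python
import numpy as np

def xor_combine(packets, index_set):
    """XOR together the packets whose indices are in index_set."""
    out = np.zeros_like(packets[0])
    for i in index_set:
        out ^= packets[i]
    return out

def back_substitute(received, repairs):
    """received: dict {index: packet} of source packets the terminal already has.
    repairs: list of (index_set, combined_packet), ordered so each entry contains
    at most one still-missing source packet (triangular structure)."""
    for index_set, combo in repairs:
        missing = [i for i in index_set if i not in received]
        if len(missing) != 1:
            continue                      # nothing new recoverable from this one
        j = missing[0]
        pkt = combo.copy()
        for i in index_set:
            if i != j:
                pkt ^= received[i]        # cancel the already-known packets
        received[j] = pkt
    return received

# toy example: 4 source packets; the terminal lost packets 1 and 3
rng = np.random.default_rng(0)
src = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(4)]
terminal = {0: src[0], 2: src[2]}
repairs = [({0, 1}, xor_combine(src, {0, 1})),        # recovers packet 1
           ({1, 2, 3}, xor_combine(src, {1, 2, 3}))]  # then recovers packet 3
recovered = back_substitute(terminal, repairs)
assert all(np.array_equal(recovered[i], src[i]) for i in range(4))
```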
We study the problem of assigning $K$ identical servers to a set of $N$
parallel queues in a time-slotted queueing system. The connectivity of each
queue to each server is randomly changing with time; each server can serve at
most one queue and each queue can be served by at most one server during each
time slot. Such a queueing model has been used in addressing resource
allocation problems in wireless networks. It has been previously proven that
Maximum Weighted Matching (MWM) is a throughput-optimal server assignment
policy for such a queueing system. In this paper, we prove that for a system
with i.i.d. Bernoulli packet arrivals and connectivities, MWM minimizes, in
stochastic ordering sense, a broad range of cost functions of the queue lengths
such as total queue occupancy (which implies minimization of average queueing
delays). Then, we extend the model by considering imperfect services where it
is assumed that the service of a scheduled packet fails randomly with a certain
probability. We prove that the same policy is still optimal for the extended
model. We finally show that the results are still valid for more general
connectivity and arrival processes which follow conditional permutation
invariant distributions.
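A minimal per-slot sketch of the MWM rule analyzed above, assuming queue lengths are used as the matching weights and SciPy's Hungarian-algorithm solver as the matching engine; the queue lengths and connectivities are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mwm_assignment(queue_lengths, connectivity):
    """queue_lengths: (N,) array; connectivity: (N, K) 0/1 array.
    Returns the (queue, server) pairs chosen by MWM in this time slot."""
    # weight of assigning server k to queue n: queue length if connected, else 0
    weights = connectivity * queue_lengths[:, None]
    rows, cols = linear_sum_assignment(weights, maximize=True)
    # keep only useful assignments (connected servers, non-empty queues)
    return [(n, k) for n, k in zip(rows, cols) if weights[n, k] > 0]

# one time slot with N = 4 queues and K = 2 servers
rng = np.random.default_rng(1)
q = np.array([3, 0, 5, 2])                  # current queue lengths
conn = rng.integers(0, 2, size=(4, 2))      # Bernoulli connectivities
print(mwm_assignment(q, conn))
```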
Energy harvesting has emerged as a powerful technology for complementing
current battery-powered communication systems in order to extend their
lifetime. In this paper a general framework is introduced for the optimization
of communication systems in which the transmitter is able to harvest energy
from its environment. Assuming that the energy arrival process is known
non-causally at the transmitter, the structure of the optimal transmission
scheme, which maximizes the amount of transmitted data by a given deadline, is
identified. Our framework includes models with continuous energy arrival as
well as battery constraints. A battery that suffers from energy leakage is
studied further, and the optimal transmission scheme is characterized for a
constant leakage rate.
This paper analyzes a broadcasting technique for wireless multi-hop sensor networks that uses a form of cooperative diversity called opportunistic large arrays (OLAs). We propose a method for autonomous scheduling of the nodes, which limits the nodes that relay and saves as much as 32% of the transmit energy compared to other broadcast approaches, without requiring Global Positioning System (GPS), individual node addressing, or inter-node interaction. This energy-saving is a result of cross-layer interaction, in the sense that the Medium Access Control (MAC) and routing functions are partially executed in the Physical (PHY) layer. Our proposed method is called OLA with a transmission threshold (OLA-T), where a node compares its received power to a threshold to decide if it should forward. We also investigate OLA with variable threshold (OLA-VT), which optimizes the thresholds as a function of level. OLA-T and OLA-VT are compared with OLA broadcasting without a transmission threshold, each in their minimum energy configuration, using an analytical method under the orthogonal and continuum assumptions. The trade-off between the number of OLA levels (or hops) required to achieve successful network broadcast and transmission energy saved is investigated. The results based on the analytical assumptions are confirmed with Monte Carlo simulations.
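A minimal sketch of an OLA-T style forwarding test, under the assumption that a node relays only when it can decode the broadcast and its received power lies within a transmission-threshold band just above the decoding threshold; the threshold values are illustrative and the exact rule in the paper may differ.

```python
def should_forward(received_power_dbm: float,
                   decoding_threshold_dbm: float,
                   transmission_threshold_db: float) -> bool:
    """Relay only if decodable and close enough to the decoding boundary."""
    decodable = received_power_dbm >= decoding_threshold_dbm
    near_boundary = (received_power_dbm
                     < decoding_threshold_dbm + transmission_threshold_db)
    return decodable and near_boundary

# example: decoding threshold -90 dBm, transmission threshold 5 dB
print(should_forward(-87.0, -90.0, 5.0))   # True  -> relays
print(should_forward(-70.0, -90.0, 5.0))   # False -> stays quiet, saving energy
```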
In randomly deployed networks, such as sensor networks, an important problem
for each node is to discover its neighbor nodes so that connectivity amongst
nodes can be established. In this paper, we consider this problem by
incorporating physical layer parameters, in contrast to most of the previous
work, which assumed a collision channel. Specifically, the pilot signals that
nodes transmit are successfully decoded if the strength of the received
signal relative to the interference is sufficiently high. Thus, each node
must extract signal parameter information from the superposition of an
unknown number of received signals. This problem falls naturally in the
purview of random set theory (RST), which generalizes standard probability
theory by assigning sets, rather than values, to random outcomes. The
contributions of the paper are twofold: first, we introduce the realistic
effect of physical layer considerations into the evaluation of the
performance of logical discovery algorithms; such an introduction is
necessary for the accurate assessment of how an algorithm performs. Second,
given the double uncertainty of the environment (that is, the lack of
knowledge of the number of neighbors along with the lack of knowledge of the
individual signal parameters), we adopt the viewpoint of RST and demonstrate
its advantage relative to the classical matched filter detection method.
In this paper, we investigate two new candidate transmission schemes,
Non-Orthogonal Frequency Reuse (NOFR) and Beam-Hopping (BH). They operate in
different domains (frequency and time/space, respectively), and we want to know
which domain shows overall best performance. We propose a novel formulation of
the Signal-to-Interference plus Noise Ratio (SINR) which allows us to prove the
frequency/time duality of these schemes. Further, we propose two novel capacity
optimization approaches assuming per-beam SINR constraints in order to use the
satellite resources (e.g. power and bandwidth) more efficiently. Moreover, we
develop a general methodology to include technological constraints due to
realistic implementations, obtain the main factors that prevent the two
technologies from being duals of each other in practice, and formulate the technological
gap between them. The Shannon capacity (upper bound) and current
state-of-the-art coding and modulations are analyzed in order to quantify the
gap and to evaluate the performance of the two candidate schemes. Simulation
results show significant improvements in terms of power gain, spectral
efficiency, and traffic matching ratio when compared with conventional systems,
which are designed based on uniform bandwidth and power allocation. The results
also show that the BH system has a less complex design and performs
better than the NOFR system, especially for non-real-time services.
Three areas of ongoing research in channel coding are surveyed, and recent
developments are presented in each area: spatially coupled Low-Density
Parity-Check (LDPC) codes, non-binary LDPC codes, and polar coding.
Massive multiple-input multiple-output (MIMO) is a promising approach for
cellular communication due to its energy efficiency and high achievable data
rate. These advantages, however, can be realized only when channel state
information (CSI) is available at the transmitter. Since there are many
antennas, CSI is too large to feed back without compression. To compress CSI,
prior work has applied compressive sensing (CS) techniques and the fact that
CSI can be sparsified. The adopted sparsifying bases fail, however, to reflect
the spatial correlation and channel conditions or to be feasible in practice.
In this paper, we propose a new sparsifying basis that reflects the long-term
characteristics of the channel, and needs no change as long as the spatial
correlation model does not change. We propose a new reconstruction algorithm
for CS, and also suggest dimensionality reduction as a compression method. To
feed back compressed CSI in practice, we propose a new codebook for the
compressed channel quantization assuming no other-cell interference. Numerical
results confirm that the proposed channel feedback mechanisms show better
performance in point-to-point (single-user) and point-to-multi-point
(multi-user) scenarios.
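The dimensionality-reduction option mentioned above can be sketched as a projection onto the dominant eigenvectors of the long-term spatial correlation matrix; the correlation model, the number of retained dimensions, and the reconstruction step below are illustrative assumptions, not the paper's codebook design.

```python
import numpy as np

def correlation_basis(R, r):
    """Return the r dominant eigenvectors of the spatial correlation matrix R."""
    eigvals, eigvecs = np.linalg.eigh(R)               # ascending eigenvalues
    return eigvecs[:, np.argsort(eigvals)[::-1][:r]]

def compress(h, U_r):
    return U_r.conj().T @ h                            # r coefficients to feed back

def reconstruct(c, U_r):
    return U_r @ c                                     # estimate of the full channel

# toy example: 64 base-station antennas, exponential spatial correlation
M, r, rho = 64, 8, 0.95
R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
U_r = correlation_basis(R, r)
rng = np.random.default_rng(0)
L = np.linalg.cholesky(R + 1e-9 * np.eye(M))
h = L @ (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h_hat = reconstruct(compress(h, U_r), U_r)
print("relative error:", np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```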
This paper considers a two-user Gaussian interference channel with energy
harvesting transmitters. Unlike conventional battery-powered wireless
nodes, energy harvesting transmitters have to adapt their transmission to
the availability of energy at each instant. In this setting, the optimal
power allocation problem to maximize the sum throughput with a given deadline
is formulated. The convergence of the proposed iterative coordinate descent
method for the problem is proved and the short-term throughput maximizing
offline power allocation policy is found. Examples for interference regions
with known sum capacities are given with directional water-filling
interpretations. Next, stochastic data arrivals are addressed. Finally,
online and/or distributed near-optimal policies are proposed. The
performance of the proposed algorithms is demonstrated through simulations.
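For reference, the classical single-user water-filling allocation, which the directional water-filling interpretation mentioned above generalizes across time and interference regions, can be sketched as follows; the channel gains, power budget, and bisection tolerance are illustrative.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Allocate total_power over parallel channels with gains g_i to maximize
    sum_i log(1 + g_i * p_i); the optimum is p_i = max(0, mu - 1/g_i)."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()   # bracket the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        power = np.maximum(0.0, mu - 1.0 / gains)
        if power.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

p = water_filling(gains=[2.0, 1.0, 0.25], total_power=3.0)
print(p, p.sum())   # stronger channels receive more power; total is ~3.0
```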
This paper considers the scenario in which a set of nodes share a common
channel. Some nodes have a rechargeable battery and the others are plugged into a
reliable power supply and, thus, have no energy limitations. We consider two
source-destination pairs and apply the concept of cognitive radio communication
in sharing the common channel. Specifically, we give high-priority to the
energy-constrained source-destination pair, i.e., primary pair, and
low-priority to the pair which is free from such constraint, i.e., secondary
pair. In contrast to the traditional notion of cognitive radio, in which the
secondary transmitter is required to relinquish the channel as soon as the
primary is detected, the secondary transmitter not only utilizes the idle slots
of primary pair but also transmits along with the primary transmitter with
probability $p$. This is possible because we consider the general multi-packet
reception model. Given the requirement on the primary pair's throughput, the
probability $p$ is chosen to maximize the secondary pair's throughput. To this
end, we obtain two-dimensional maximum stable throughput region which describes
the theoretical limit on the rates that we can push into the network while
keeping the queues in the network stable. The result is obtained both for
the case in which the capacity of the battery at the primary node is
infinite and for the case in which it is finite.
Advances in wireless sensor network (WSN) technology have made available
small, low-cost sensor nodes with the capability of sensing various types of
physical and environmental conditions, data processing, and wireless
communication. In a WSN, the sensor nodes have a limited transmission range,
and their processing and storage capabilities as well as their energy
resources are limited. The Triple Umpiring System (TUS) has already proved
its better performance in wireless sensor networks. Clustering provides an
effective way to prolong the lifetime of a WSN. In this paper, we modify
Ad hoc On-demand Distance Vector (AODV) routing by incorporating
signal-to-noise ratio (SNR) based dynamic clustering. The proposed scheme,
Efficient and Secure Routing Protocol for Wireless Sensor Networks through
SNR-based dynamic Clustering mechanisms (ESRPSDC), can partition the nodes
into clusters, select the Cluster Head (CH) among the nodes based on energy,
and have Non-Cluster-Head (NCH) nodes join a specific CH based on SNR values.
Error recovery is implemented during inter-cluster routing itself in order to
avoid end-to-end error recovery. Security is achieved by isolating malicious
nodes using sink-based routing pattern analysis. Extensive simulation studies
using the Global Mobile Simulator (GloMoSim) show that this hybrid ESRP
significantly improves energy efficiency and the packet reception rate (PRR)
compared to SNR-unaware routing algorithms such as Low-Energy Adaptive
Clustering Hierarchy (LEACH) and Power-Efficient Gathering in Sensor
Information Systems (PEGASIS).
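A minimal sketch of the clustering step described above, assuming cluster heads are the nodes with the highest residual energy and each non-cluster-head joins the reachable cluster head with the highest measured SNR; the Node structure and SNR values are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    residual_energy: float   # joules
    snr_to: dict             # node_id -> measured SNR in dB

def elect_cluster_heads(nodes, num_heads):
    """Pick the num_heads nodes with the most residual energy as CHs."""
    ranked = sorted(nodes, key=lambda n: n.residual_energy, reverse=True)
    return {n.node_id for n in ranked[:num_heads]}

def join_clusters(nodes, head_ids):
    """Each non-CH node joins the reachable CH with the highest measured SNR."""
    membership = {}
    for n in nodes:
        if n.node_id in head_ids:
            continue
        reachable = {h: n.snr_to[h] for h in head_ids if h in n.snr_to}
        if reachable:
            membership[n.node_id] = max(reachable, key=reachable.get)
    return membership

nodes = [
    Node(1, 5.0, {2: 18.0, 3: 12.0}),
    Node(2, 9.0, {}),
    Node(3, 8.5, {}),
    Node(4, 4.2, {2: 7.0, 3: 15.5}),
]
heads = elect_cluster_heads(nodes, num_heads=2)   # {2, 3}
print(heads, join_clusters(nodes, heads))         # node 1 -> CH 2, node 4 -> CH 3
```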
This paper presents a computationally efficient implementation of a Hamming
code decoder on a graphics processing unit (GPU) to support real-time
software-defined radio (SDR), which is a software alternative for realizing
wireless communication. The Hamming code algorithm is challenging to
parallelize effectively on a GPU because it works on sparsely located data
items with several conditional statements, leading to non-coalesced,
long-latency global memory accesses and severe thread divergence. To address these
issues, we propose an optimized implementation of the Hamming code on the GPU
to exploit the higher parallelism inherent in the algorithm. Experimental
results using a compute unified device architecture (CUDA)-enabled NVIDIA
GeForce GTX 560 with 335 cores revealed that the proposed approach
achieved a 99x speedup over the equivalent CPU-based implementation.
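As a CPU-side reference for the decoding step, the sketch below performs Hamming(7,4) syndrome decoding vectorized over a batch of received words, mirroring the data-parallel structure a GPU kernel would exploit; the (7,4) parameters are an illustrative assumption, since the abstract does not fix the code length.

```python
import numpy as np

# Parity-check matrix whose j-th column is the binary representation of j+1,
# so the syndrome directly encodes the error position (1-indexed).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def decode_batch(received):
    """received: (batch, 7) array of hard-decision bits. Corrects one bit error
    per word and returns the corrected codewords."""
    syndromes = (received @ H.T) % 2                  # (batch, 3)
    error_pos = syndromes @ np.array([4, 2, 1])       # 0 means no error detected
    corrected = received.copy()
    rows = np.nonzero(error_pos)[0]
    corrected[rows, error_pos[rows] - 1] ^= 1         # flip the erroneous bit
    return corrected

# toy example: one clean word and one word with a single flipped bit
codeword = np.array([1, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
assert not ((codeword @ H.T) % 2).any()               # it is a valid codeword
noisy = np.vstack([codeword, codeword])
noisy[1, 4] ^= 1                                      # corrupt bit 5 of word 2
print(decode_batch(noisy))                            # both rows equal codeword
```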
In this paper, a new decoding scheme for low-density parity-check (LDPC)
codes using the concept of simple product code structure is proposed based on
combining two independently received soft-decision data for the same codeword.
LDPC codes act as horizontal codes of the product codes and simple algebraic
codes are used as vertical codes to help decoding of the LDPC codes. The
decoding capability of the proposed decoding scheme is defined and analyzed
using the parity-check matrices of the vertical codes, and in particular the
combined decodability is derived for the case of single parity-check (SPC) and
Hamming codes being used as vertical codes. It is also shown that the proposed
decoding scheme achieves much better error-correcting capability in the high
signal-to-noise ratio (SNR) region with low additional decoding complexity, compared
with a conventional decoding scheme.
Rateless codes have been shown to be able to provide greater flexibility and
efficiency than fixed-rate codes for multicast applications. In the following,
we optimize rateless codes for unequal error protection (UEP) for multimedia
multicasting to a set of heterogeneous users. The proposed designs have the
objectives of providing either guaranteed or best-effort quality of service
(QoS). A randomly interleaved rateless encoder is proposed whereby users only
need to decode symbols up to their own QoS level. The proposed coder is
optimized based on measured transmission properties of standardized raptor
codes over wireless channels. It is shown that a guaranteed QoS problem
formulation can be transformed into a convex optimization problem, yielding a
globally optimal solution. Numerical results demonstrate that the proposed
optimized random interleaved UEP rateless coder's performance compares
favorably with that of other recently proposed UEP rateless codes.
This paper studies non-asymptotic model selection for the general case of arbitrary design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence among the columns of a design matrix, termed the worst-case coherence and the average coherence. It uses these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization, such as the lasso, fail due to rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries.
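A minimal sketch of the one-step thresholding rule analyzed above, assuming a design matrix with unit-norm columns and a user-chosen threshold; the threshold value and problem sizes below are illustrative, not the paper's prescriptions.

```python
import numpy as np

def ost_support(X, y, lam):
    """Return the estimated support: columns whose correlation with y exceeds lam."""
    correlations = np.abs(X.T @ y)
    return np.flatnonzero(correlations > lam)

# toy example: n = 256 measurements, p = 512 variables, 4 nonzero entries
rng = np.random.default_rng(0)
n, p, k = 256, 512, 4
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)                    # unit-norm columns
beta = np.zeros(p)
support_true = rng.choice(p, size=k, replace=False)
beta[support_true] = 1.0                          # well-separated nonzero entries
y = X @ beta + 0.05 * rng.standard_normal(n)
# with this threshold, OST typically recovers the true support
print(sorted(ost_support(X, y, lam=0.6)), sorted(support_true))
```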