
International Journal Of Communication Systems

Published by Wiley
Online ISSN: 1099-1131
Discipline: Communication Technology
Aims and scope

The International Journal of Communication Systems provides a forum for R&D, open to researchers from all types of institutions and organisations worldwide, aimed at the increasingly important area of communication technology. The Journal's emphasis is particularly on the issues impacting behaviour at the system, service and management levels. Published twelve times a year, it provides coverage of advances that have a significant potential to impact the immense technical and commercial opportunities in the communications sector. The International Journal of Communication Systems strives to select a balance of contributions that promotes technical innovation allied to practical relevance across the range of system types and issues.

 


Recent publications
Article
During the past few years, wireless sensor networks (WSNs) have attracted substantial interest because of their capabilities, namely, self‐healing and self‐organizing under restricted network resources. Numerous WSN‐based routing algorithms perform well, but per‐packet energy consumption and information loss remain high. Because of the power restriction of sensor nodes, energy consumption is regarded as the major issue that must be minimized to improve network lifetime. Hence, to address these issues, this paper proposes an adaptive spider monkey optimization (ASMO)‐based method for selecting an effective optimal routing path among sensor nodes with enhanced network lifetime. The proposed ASMO algorithm selects energy‐aware routes based on energy consumption and chooses routes with minimum end‐to‐end delay using a minimum hop‐count criterion, thus enhancing the network life span. The results of the proposed method are compared with other methods such as DMEERP, EGSMRP, EEIABC, and RACO. The results reveal that the proposed approach consumes less energy and hence achieves a longer network lifetime than the other approaches.
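The abstract does not describe ASMO's internals; as a rough illustration of the kind of energy‐aware, hop‐count‐aware route scoring it describes, here is a minimal Python sketch. The weights, the link‐cost table, and all function names are invented for illustration, not taken from the paper.

```python
# Hypothetical route-scoring sketch: lower fitness is better, combining
# total per-link energy cost with hop count (weights are assumptions).
def route_fitness(route, energy_cost, w_energy=0.7, w_hops=0.3):
    total_energy = sum(energy_cost[(a, b)] for a, b in zip(route, route[1:]))
    hops = len(route) - 1
    return w_energy * total_energy + w_hops * hops

def best_route(candidates, energy_cost):
    # A metaheuristic would search this space; here we just take the minimum.
    return min(candidates, key=lambda r: route_fitness(r, energy_cost))

# Toy topology: two candidate routes from source S to destination D.
energy_cost = {("S", "A"): 2.0, ("A", "D"): 2.0,
               ("S", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0}
routes = [["S", "A", "D"], ["S", "B", "C", "D"]]
print(best_route(routes, energy_cost))  # ['S', 'B', 'C', 'D']
```

The longer route wins here because its total link energy (3.0) outweighs the extra hop under the assumed weights, which mirrors the energy-versus-delay trade-off the abstract describes.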
 
Article
In wireless sensor networks (WSNs), restricted battery power, balanced energy consumption, and collaborative data processing are key challenges that need to be handled with utmost care. Clustering in WSNs is an optimal methodology for conserving the energy of sensor nodes and attaining efficient data processing, which contributes toward the maximization of network lifetime. An efficient cluster head (CH) selection scheme is essential for achieving superior collaborative data processing in WSNs. Swarm‐intelligent metaheuristic CH selection approaches are well suited for designing energy‐efficient schemes that select optimal CHs from nodes in a fair way. In this paper, a Hybrid Grasshopper and Differential Evolution‐based Optimization Algorithm (HGDEOA) is proposed with the objective of attaining energy stability and prolonging network lifetime. HGDEOA incorporates an adaptive strategy into differential evolution (DE) to improve global searching capability during optimization, enhancing convergence efficiency and speed, calculation precision, and population diversity. This integration of GOA into DE prevents premature convergence, since the deviation among the scaled individuals makes the population more randomly distributed, helping retain population diversity. The simulation results confirm that the proposed HGDEOA sustains residual energy by 19.32%, improves throughput by 16.21%, prolongs network lifetime by 18.76%, and maintains stability by 18.94% when compared to the benchmarked approaches used for investigation.
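HGDEOA's exact update rules are not given in the abstract; the toy sketch below shows only the generic ingredient it builds on, a DE step whose scale factor decays over generations (a simple form of the adaptive strategy mentioned). The objective, schedule, and population setup are invented for illustration.

```python
import random

random.seed(1)

def sphere(x):
    """Toy 1-D objective to minimize."""
    return x * x

def de_optimize(pop, objective, generations=50, f_max=0.9, f_min=0.4, cr=0.8):
    pop = list(pop)
    for g in range(generations):
        # Adaptive scale factor: large F early (exploration), small F late.
        F = f_max - (f_max - f_min) * g / generations
        for i in range(len(pop)):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = a + F * (b - c)                 # DE/rand/1 mutation
            trial = mutant if random.random() < cr else pop[i]
            if objective(trial) <= objective(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=objective)

pop = [random.uniform(-10, 10) for _ in range(20)]
best0 = min(pop, key=sphere)
best = de_optimize(pop, sphere)
print(sphere(best) <= sphere(best0))  # True: greedy selection never worsens the best
```

The decaying scale factor is one common way to balance diversity against convergence speed; HGDEOA's grasshopper-derived component would replace or augment this schedule.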
 
Article
A wideband hybrid microstrip patch antenna with a high bandwidth dimension ratio (BDR) is reported in this work. The electrical dimension of the presented antenna is 0.268λ × 0.201λ × 0.01λ, where λ is the maximum operating wavelength of the antenna. The patch consists of one circle and four parallelograms overlapping each other, so that it resembles a two‐dimensional “pine tree.” The fabricated antenna exhibits a return loss below −10 dB from 2 to 15.40 GHz, a percentage bandwidth of 154%, and a BDR of 2859. A peak gain of 3.3 dBi is achieved at 8 GHz. To further improve the peak gain over the entire frequency band, the proposed antenna is combined with a mono‐layer stop‐band frequency selective surface (FSS). Without disturbing the original bandwidth, the antenna is mounted on top of 16 FSS unit cells arranged in a 4 × 4 matrix. When the FSS is placed at a suitable minimum distance, the combined structure achieves a maximum gain of 7.22 dBi. The physical dimension of the proposed FSS‐combined antenna is 64 × 64 × 35.2 mm, which is well suited for long‐range wireless applications. Both the antenna and the FSS are designed and simulated using Computer Simulation Technology (CST) software and fabricated on an FR4 substrate with a loss tangent (tan δ) of 0.02, a dielectric constant of 4.4, and a thickness of 1.6 mm. All simulation results have been validated using a Vector Network Analyzer (VNA), and the measured results show good agreement with the simulated results.
 
Article
The Internet of Things (IoT) is expected to connect devices with unique identifiers over a network to create an equilibrium system with high speeds and volumes of data, while presenting an interoperability challenge. An IoT data management system is indispensable for attaining effective and efficient performance, because IoT sensors generate and collect large amounts of data. IoT data management has been analyzed from various perspectives in numerous studies. In this study, a Systematic Literature Review (SLR) method is used to investigate the various topics and key areas that have recently emerged in IoT data management, classifying and evaluating studies published between 2015 and 2021. The classification comprises five categories: data processing, data smartness application, data collection, data security, and data storage. Studies in each field are then compared based on the proposed classification; each study is examined for novel findings, simulation/implementation, data set, application domain, experimental results, advantages, and disadvantages. In addition, the criteria for evaluating the selected articles in each domain of IoT data management are examined. Big data accounts for the highest percentage of the data processing field in IoT data management, at 34%; fast data processing, distributed data, and artificial intelligence data account for 22%, and data uncertainty analysis accounts for 11%. Finally, the studies highlight the challenges of IoT data management and its future directions.
 
Article
Efficient geographic routing techniques in multi‐hop wireless networks are popular and attractive for their effective routing mechanism; here, location information is used for routing instead of topology information. In recent years, most proposed position‐based routing protocols have been designed for 2D space, but in real‐life scenarios, nodes are usually deployed in 3D space, which is more complicated to work with. Enhancing end‐to‐end throughput while reducing energy consumption and end‐to‐end delay is the primary consideration when designing routing protocols for a 3D geographical multi‐hop wireless network. This paper proposes a projection‐based routing technique, where routes obtained in 3D multi‐hop wireless networks are projected onto 2D planes and onto a reference line (RL). The main goal of this study is to simplify the operational difficulties of 3D space and to enhance network performance in terms of throughput, energy consumption, and delay. The proposed technique presents various projection methods for the routes obtained in 3D space. The projections are made on (i) the RL joining source to destination; (ii) the planes XY, YZ, and ZX; and (iii) a particular 3D plane. The effectiveness of the proposed technique is demonstrated through simulation, considering end‐to‐end delay, throughput, and energy consumption as network parameters.
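The two projection primitives involved are standard geometry; a minimal sketch (with invented coordinates) of projecting a 3D route point onto the source-to-destination RL and onto the XY plane:

```python
def project_on_line(p, s, d):
    """Orthogonal projection of point p onto the line through s and d."""
    sd = [d[i] - s[i] for i in range(3)]
    sp = [p[i] - s[i] for i in range(3)]
    t = sum(a * b for a, b in zip(sp, sd)) / sum(a * a for a in sd)
    return [s[i] + t * sd[i] for i in range(3)]

def project_on_xy(p):
    """Project onto the XY plane by dropping the z coordinate."""
    return [p[0], p[1], 0.0]

# Source at the origin, destination on the x-axis, one relay off-axis.
print(project_on_line([1.0, 1.0, 0.0], [0.0, 0.0, 0.0], [2.0, 0.0, 0.0]))  # [1.0, 0.0, 0.0]
print(project_on_xy([1.0, 2.0, 3.0]))  # [1.0, 2.0, 0.0]
```

Projections onto YZ, ZX, or an arbitrary plane follow the same pattern with different dropped or recombined coordinates.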
 
Article
Efficient routing of generated packets through the network, with minimal overhead in path discovery and subsequent route maintenance, is a fundamental objective of modern wireless network operation. Recently, the application of autonomic computational learning techniques to the design and optimization of routing protocols in ad hoc networks has been gaining substantial research interest. The commonly deployed soft computing methodology of fuzzy inference systems is capable of handling the uncertain and imprecise networking information associated with the frequently changing state of mobile technologies. In this paper, we propose a novel fuzzy logic‐based ad hoc on‐demand distance vector (FL‐AODV) routing protocol employing a multivariate cross‐layer design architecture to optimize multiple performance parameters in wireless ad hoc networks. The fuzzy optimization framework applies the header length from the data link and physical layers, the route timeout from the network layer, and the node mobility speed from the application layer as inputs to the fuzzification interface. The bit rate for the application layer and the communication‐range parameter for the data link layer are obtained as fuzzy outputs from the defuzzifier. The designed adaptive routing protocol is extensively assessed through simulation under varying node mobility conditions. Various network performance attributes, including reception cache hits, packet delivery ratio, packet errors, ping loss rate, mean throughput, and delay, are computed and analyzed for a comprehensive comparison between the presented FL‐AODV and classical AODV routing mechanisms. Finally, we compare our fuzzy routing model with previous algorithms to demonstrate its efficacy in terms of the key performance metrics of throughput and delay.
 
Article
Nowadays, vehicles have become increasingly intelligent and are equipped with highly sophisticated systems that allow them to communicate with each other and with roadside units (RSUs). To ensure efficient data dissemination in vehicular ad hoc networks (VANETs), it is recommended to choose a Vehicle‐to‐Infrastructure (V2I) architecture in which RSUs are installed at intersections. Nevertheless, it is not advisable to place an RSU at every intersection because of their high cost. It is therefore appropriate to reduce the number of RSUs by choosing intersection locations that maximize the covered surface of the urban area and minimize the area of overlapping zones. Deploying an optimal number of RSUs in an urban area under these requirements is an NP‐hard problem, since the number of combinations grows very large with the number of intersections. For this purpose, we use metaheuristic approaches: the standard version of genetic algorithms (GA‐Basic) and its improved version (GA‐Improved), and the standard version of simulated annealing (SA‐Basic) and its improved version (SA‐Improved). The proposed approaches are evaluated using the OMNeT++ simulator. The results show that the GA‐Improved approach deploys a reduced number of RSUs (37.5%) while guaranteeing acceptable routing performance, compared to the GA‐Basic, SA‐Basic, and SA‐Improved approaches, which deploy 46.25%, 65%, and 52.50%, respectively, for the same routing performance.
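As a hedged sketch of the simulated-annealing idea (the coverage model, radius, cost weight, and cooling schedule below are all invented toy values, not the paper's), one can anneal over subsets of intersections, rewarding coverage and penalizing RSU count:

```python
import math
import random

random.seed(0)

# Toy urban grid: candidate RSU sites (intersections) and demand points.
intersections = [(0, 0), (2, 0), (4, 0), (0, 2), (2, 2), (4, 2)]
demand = [(x * 0.5, y * 0.5) for x in range(9) for y in range(5)]
RADIUS, COST = 1.5, 3.0   # assumed coverage radius and per-RSU penalty

def covered(sel):
    return sum(1 for p in demand
               if any(math.dist(p, intersections[i]) <= RADIUS for i in sel))

def score(sel):
    return covered(sel) - COST * len(sel)   # coverage minus deployment cost

def anneal(steps=2000, t0=5.0):
    sel = set(range(len(intersections)))    # start from full deployment
    best, best_s = set(sel), score(sel)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9     # linear cooling
        cand = set(sel)
        cand.symmetric_difference_update({random.randrange(len(intersections))})
        delta = score(cand) - score(sel)
        if delta >= 0 or random.random() < math.exp(delta / t):
            sel = cand                      # accept improving / some worsening moves
            if score(sel) > best_s:
                best, best_s = set(sel), score(sel)
    return best, best_s

sel, s = anneal()
print(s >= score(set(range(len(intersections)))))  # True: never worse than full deployment
```

The SA-Improved and GA variants in the paper refine the move and selection operators; the accept-worsening-moves-at-high-temperature mechanism shown here is the common core of any SA approach to this placement problem.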
 
Article
The Internet of Mobile Things (IoMT) has become very popular recently. The routing protocol for low power and lossy networks (RPL) is standardized for static topologies; however, mobility is inherent to many IoT deployments, and mobility‐aware routing is a promising candidate for effectively handling hand‐off time issues, data transmission delay, overhead, and low packet delivery rate (PDR). This study presents a comprehensive account of mobility‐aware RPL‐based routing protocols to validate and compare their experimental results; notably, classification methods are used in many of the surveyed articles. The aim is to present the significant research efforts to improve the performance of RPL objective functions (OFs) in terms of hand‐off time, PDR, delay, overhead, and so forth. In this regard, a complete analysis of the existing routing protocols in IoMT is presented to compare the results, with the main focus on approaches that propose new OFs for supporting mobility in RPL. Two main categories are considered for studying RPL‐based routing protocol mechanisms: the mobile sink and the static sink. The studies on the mobile sink are divided into three groups: single‐metric‐based OF, composite‐metric OF, and hybrid routing protocols. The works based on the static sink are categorized into four groups: fuzzy logic‐based OF, trickle timer‐based OF, composite metrics‐based OF, and modification of control messages‐based OF approaches. This paper presents a detailed comparison of the mechanisms in each category and highlights the pros, cons, open issues, and evaluated metrics of each paper, as well as the challenges of mobility in RPL‐based routing protocol mechanisms in IoMT for future studies.
 
Article
Named data networking (NDN) is gaining momentum as a future Internet architecture. NDN is a type of information‐centric networking (ICN) that attempts to change the current Internet architecture from host/location‐centric to content‐centric, where data retrieval is performed by content name, irrespective of the location of the data. A mechanism to advertise name‐based prefixes between different domains is necessary to accelerate NDN deployment. In IP‐based routing, the border gateway protocol (BGP) is the de facto inter‐domain routing protocol and plays a vital role in Internet communication by enabling different Internet domains to exchange routing information. In its current form, BGP can advertise and process IP‐based prefixes, but it cannot advertise or process NDN name‐based prefixes; accordingly, BGP needs to be extended to support NDN technology. This paper proposes an NDN extension for BGP, referred to as N‐BGP, that offers a solution for exchanging name‐based routes in current BGP networks. The proposed extension turns the traditional BGP speaker into a hybrid one that can understand, advertise, receive, process, and store both IP‐based and name‐based routes simultaneously and efficiently, without disturbing or breaking current Internet operation; that is, it can coexist with traditional speakers. We also validate and evaluate the proposed solution in a hybrid environment, and the results show that N‐BGP can exchange and process both name‐based and IP‐based routes efficiently.
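To make the idea concrete, a hybrid speaker must at minimum support longest-prefix matching over hierarchical names alongside IP prefixes. The toy RIB and lookup below are purely illustrative (the table contents and AS labels are invented, not the paper's data structures):

```python
# Hypothetical name-based RIB: name components mapped to a next-hop AS.
rib = {
    ("com",): "AS100",
    ("com", "example"): "AS200",
    ("com", "example", "video"): "AS300",
}

def longest_prefix_match(name):
    """Return the route for the longest matching name prefix, or None."""
    comps = tuple(name.strip("/").split("/"))
    for i in range(len(comps), 0, -1):   # try longest prefix first
        if comps[:i] in rib:
            return rib[comps[:i]]
    return None

print(longest_prefix_match("/com/example/video/clip1"))  # AS300
print(longest_prefix_match("/com/other"))                # AS100
```

Unlike fixed-length IP prefixes, NDN names have an unbounded number of variable-length components, which is the core reason BGP's IP-oriented prefix handling needs extending.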
 
Article
A Software‐Defined Network (SDN)‐based group mobility model named MoMo and a resource management algorithm based on a mutation lion optimization algorithm (MLOA) are proposed to alleviate handover and signaling cost and to address network congestion. Using the global view of the SDN‐controlled network, the proposed algorithm takes network conditions and end‐user quality‐of‐service requirements into account to achieve network quality; the global view and the optimization of resource allocation decisions are handled by MLOA. The proposed system operates in four phases: an initial phase, a handover preparation phase, a resource management phase, and a handover phase. The SDN‐based mobile environment consists of cellular base stations (BSs) and Wi‐Fi access points (APs) placed randomly inside each cell; BSs use licensed communication, while APs use unlicensed communication. BSs, APs, and switches follow an OpenFlow‐based protocol, enabling easy control by SDN controllers over a secured channel. It is assumed that every BS and AP is strongly connected to the OpenFlow switches. For a given cellular network, these switches are co‐located with the base stations; femtocell switches are directly connected to the Internet, whereas macrocell switches are part of the core network. For the APs, the switches are located inside the ISP and connected to many APs. They are controlled in a centralized manner using SDN, and the proposed controller is simply a program running on a server, which can be placed anywhere in the network, even in a distant data center.
 
Article
Hierarchical routing is a kind of group‐based routing that creates a virtual hierarchy among network sensor nodes. This class of routing technique is generally designed for large‐scale networks and aims to increase network lifetime efficiently by partitioning the whole network into clusters. However, traditional clustering techniques show some limitations and do not take into consideration the self‐organizing, dynamic topology inherent in wireless sensor networks (WSNs). These limitations can lead to an unbalanced cluster head distribution that affects the energy consumption of the whole network. In this paper, we propose a grid‐based k‐means clustering protocol (named GBK), which combines grid‐based routing with the k‐means algorithm to overcome the above‐mentioned weaknesses. From the supervised zone area size parameter, the base station determines the optimal grid size based on our optimization study. Afterwards, the k‐means algorithm is executed in every grid cell, generating one cluster head per cell, where the node nearest to the grid cell centroid is elected. An enhancement of the GBK algorithm, named GBK‐R, is also proposed to extend network stability through a node scoring calculation that takes as parameters the node's remaining energy in addition to its distance to the centroid. Our proposed GBK and GBK‐R algorithms enhance network stability and increase network lifetime, as demonstrated by our performance evaluation study. In addition, the GBK clustering algorithm provides better network topology control and better control of the random nature of node distribution by generating cluster heads with bounded localization.
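The per-cell election step is simple to sketch. In the toy Python below, the coordinates, energies, and the GBK-R weighting are invented for illustration: the CH is the node nearest the cell centroid, and the GBK-R variant scores nodes by both distance and remaining energy.

```python
def elect_cluster_head(nodes):
    """Elect the node nearest the centroid of its grid cell (GBK-style)."""
    cx = sum(x for x, _ in nodes) / len(nodes)
    cy = sum(y for _, y in nodes) / len(nodes)
    return min(nodes, key=lambda n: (n[0] - cx) ** 2 + (n[1] - cy) ** 2)

def elect_cluster_head_r(nodes, energy, w=0.5):
    """GBK-R-style election: trade off distance to centroid against
    remaining energy (weight w is an assumed parameter)."""
    cx = sum(x for x, _ in nodes) / len(nodes)
    cy = sum(y for _, y in nodes) / len(nodes)
    def score(n):
        dist = ((n[0] - cx) ** 2 + (n[1] - cy) ** 2) ** 0.5
        return w * dist - (1 - w) * energy[n]   # lower score wins
    return min(nodes, key=score)

nodes = [(0.0, 0.0), (4.0, 0.0), (1.0, 1.0)]
print(elect_cluster_head(nodes))  # (1.0, 1.0)
```

With `k = 1` per cell, k-means reduces to computing the cell centroid, which is why the election can be written this compactly; the full protocol adds grid sizing and re-election rounds.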
 
Article
This work presents the design of a novel semi‐circular arc‐shaped two‐element multiple‐input‐multiple‐output (MIMO) antenna with improved isolation. The antenna is obtained by cutting and subtracting a circular disk from another circular disk, forming the shape of a semi‐circular arc. The antenna element is replicated to form a two‐element MIMO antenna system on an FR4 substrate of size 36 × 26 mm² with an edge‐to‐edge separation of 12.4 mm. The developed MIMO antenna resonates in the frequency range 3.26 to 6.97 GHz, giving a maximum element‐to‐element isolation of around 26 dB in the operating bandwidth. The antenna also gives a maximum radiation efficiency of 98% with a peak gain of 5.3 dBi in the operating band. The proposed compact MIMO antenna covers a wide variety of wireless applications such as 5G sub‐6 GHz and wireless local area network (WLAN) applications, including the n77/n78/n79, WiFi‐5, V2X/DSRC, WiFi‐6, and INSAT‐C bands. Characteristic mode analysis (CMA) is performed to characterize the performance of the antenna over the specified operating bands by studying parameters such as modal significance (MS), characteristic angle (CA), modal currents, and modal patterns. The obtained MIMO parameters, such as envelope correlation coefficient (ECC), diversity gain (DG), mean effective gain (MEG), and total active reflection coefficient (TARC), reveal that the proposed antenna is a suitable choice for a MIMO environment.
 
Article
A novel‐shaped dual‐band circularly polarized (CP) monopole antenna is proposed. Two parasitic strips are fabricated under the uniquely shaped monopole element to obtain two broad operating bands. The proposed antenna is fabricated on an FR4 substrate with dimensions of 40 mm × 30 mm, εr = 4.4, h = 1.6 mm, and loss tangent tan δ = 0.02. The parasitic elements are responsible for improving the current distributions Ix and Iy on the surface. Two CP modes are developed by orthogonal distributions of the Ex and Ey fields with the same magnitude; the successive CP modes merge, and a broad axial ratio (AR) bandwidth is achieved. The RHCP and LHCP waves radiate along the broadside direction at the lower and higher frequency bands, respectively. The achieved impedance bandwidths are 1.5 GHz (7.05–8.55 GHz) and 4.45 GHz (11.65–16.1 GHz), and the AR bandwidths are 1.4 GHz (7.1–8.5 GHz) and 0.2 GHz (14.6–14.8 GHz). A very good gain response with two peak gains of 5.5 dBi and 5.9 dBi is achieved. The proposed antenna is applicable to International Telecommunication Union (ITU) satellite communication (7.25–7.75 GHz, 7.9–8.4 GHz) and Ku‐band satellite applications (11.7–12.2 GHz and 14–14.5 GHz).
 
Article
The problem of extending the lifespan of wireless sensor networks (WSNs) based on the Internet of Things (IoT) has been widely investigated over the last 20 years. This paper proposes an Optimized J‐RMAC (optimized joint routing and media access control) protocol to extend network lifetime in IoT‐based WSNs. Initially, all sensor nodes report their position and coverage information to the sink, which uses this information to pick a list of active nodes based on energy usage and active time. Then, a k‐covered network is formed to execute the routing task by selecting the active nodes with the largest sensing areas. A multi‐objective seagull optimization algorithm (MO‐SOA) determines routing paths between source and destination by considering two objective functions: the energy consumption cost and the end‐to‐end delay of a routing path. After that, the contention window of the nodes on the routing path is adjusted using a new iterative adaptive adjustment process of the contention window with adjustment parameters (IAACW‐AP) to avoid message conflicts. The proposed protocol is simulated in the NS2 simulator, and its performance is compared with existing strategies in terms of network lifetime, packet delivery ratio, communication overhead, energy consumption, and delay.
 
Article
In this paper, the design of a reconfigurable printed antenna for 5G portable devices based on a miniaturized structure is proposed. The proposed antenna is structured as a printed monopole with a coplanar waveguide port (CWP). The ground of the proposed CWP is designed as an L‐shaped reflector in order to increase the directivity of the antenna toward a certain direction. A matching circuit based on a first‐order fractal Minkowski structure is inductively attached to the antenna design to increase the antenna bandwidth. To control the antenna performance, the matching circuit is connected to the L‐shaped reflector through four PIN diodes. The effects of different switching scenarios on the antenna performance are tested numerically and experimentally for validation. When all diodes are switched ON, the antenna exhibits two frequency bands with S11 < −10 dB, from 3.5 to 3.7 GHz and from 5.08 to 6.9 GHz, with a gain of about 3.47 dBi at 3.6 GHz and 3.69 dBi around 5.1 GHz. The other switching scenarios are also tested and presented in this work. It is observed that the switching of the PIN diodes significantly affects the antenna directivity and radiation patterns. The antenna performance is parametrically analyzed using CST MWS, based on a numerical technique and an analytical circuit model. The proposed antenna is fabricated and tested for comparison against the theoretical results, and excellent agreement is obtained between the simulated and measured results.
 
Article
The Internet of Everything (IoE) is one of the emerging technologies in the advancement of digital life and innovation, but a major issue that must be addressed is security, especially in end‐to‐end device communication. The proposed approach focuses on reliable end‐to‐end device communication using post‐quantum location‐aware encryption, aiming to achieve confidentiality and integrity in an IoE environment. Although post‐quantum techniques have been shown to be an evolving solution for reliable data communication, their robustness against Man‐In‐The‐Middle and Sybil attacks remains a wide‐open and largely unexplored subject. Numerous traditional encryption algorithms are in use today, while post‐quantum encryption is being studied as a quantum‐safe substitute. The proposed approach also demonstrates a robust attack model that focuses on communication network threats, namely, Man‐In‐The‐Middle and Sybil attacks. The overall performance of the network is evaluated by analyzing parameters such as the number of nodes in the network, message size, execution time, and memory consumption of the nodes, with the aim of achieving appreciable accuracy in an IoE environment.
 
Article
Technological progress in the area of Wireless Body Area Networks (WBANs) has made it possible to design various sensing modules for detecting the many physical attributes of a patient's body. However, the limited battery of these sensing devices has restricted the scope of WBANs, so it is imperative to design an energy‐efficient algorithm to curb the large energy consumption of these sensor nodes. In this paper, we therefore propose a Novel Energy Efficient hybrid Meta‐Heuristic Approach (NEEMA) for WBANs. We adopt a hybrid approach combining the Tunicate Swarm Algorithm (TSA) and the Genetic Algorithm (GA), named T‐GA, to deliver fast convergence and strong exploitation and exploration capabilities. We follow a clustering approach, selecting the Cluster Head (CH) with the help of the fitness function of T‐GA, and consider novel parameters for CH selection that help preserve the overall energy of the sensor nodes. Our proposed work focuses on multi‐hop communication among patients wearing body sensor nodes, so that data reach the healthcare facility over multiple hops. A Relay Head (RH) node forwards data to the next RH and finally to the sink; RH selection follows the same method as CH selection. The experimental outcomes show that NEEMA outperforms state‐of‐the‐art algorithms and is highly beneficial for various WBAN applications.
 
Article
The Internet of Things (IoT) is a trending extension of Internet connectivity that connects physical devices over heterogeneous networks to meet application requirements in various domains, that is, health, agriculture, industry, and so forth. Stream data generated by IoT devices are processed either on the IoT device itself or on external resources such as Cloud/Fog, subject to real‐time constraints. A problem arises in real‐time data transmission when a huge amount of stream data is generated by IoT devices, travels through the network, and must be processed on low‐speed IoT infrastructure. In this scenario, there is a strong requirement for a model for real‐time stream data handling that enables speedy transmission under the real‐time communication constraints of IoT devices. IoT devices are also very sensitive to energy consumption during stream data transmission, and low‐battery devices become a barrier to real‐time data handling. To overcome these limitations, a model is proposed to compress and encrypt the data stream during IoT network transmission. In this model, delta encoding provides compression in the form of deltas, and a lightweight stream cipher encryption technique is proposed to meet security requirements. For the experimentation, a test‐bed is set up using a Raspberry Pi (IoT device), a CPU, and collectors to collect the data stream. The proposed approach is compared with a baseline model, that is, LDPC encoding, on temperature sensor datasets. As an outcome, the data are compressed by up to 37.59%, and the average transmission time and average power consumption are reduced by 72.57% and 68.86%, respectively.
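Delta encoding itself is simple to illustrate; the sketch below shows the encode/decode pair, leaving the stream-cipher stage aside. The integer "tenths of a degree" sample format is an assumption for the example, not the paper's actual wire format.

```python
def delta_encode(samples):
    """First sample verbatim, then successive differences (deltas)."""
    return samples[:1] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Reconstruct the stream by cumulative summation."""
    out = deltas[:1]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

readings = [215, 215, 216, 216, 218]   # e.g., 21.5 degC stored as integer tenths
enc = delta_encode(readings)
print(enc)                              # [215, 0, 1, 0, 2]
assert delta_decode(enc) == readings    # lossless round trip
```

Because slowly varying sensor readings produce many small (often zero) deltas, the encoded stream packs into fewer bits than the raw samples, which is the mechanism behind the compression gains reported above.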
 
Article
Incorporating multiple antennas in a downlink non‐orthogonal multiple access (NOMA) system helps achieve better performance by exploiting different diversity schemes. To this end, we propose a general framework of transmit antenna selection (TAS)‐assisted orthogonal space‐time block code (OSTBC) transmission for multiple NOMA users. We derive exact expressions for the probability density function of the TAS‐OSTBC processed signal‐to‐noise ratio (SNR) at the output of the maximum‐ratio combiner of the NOMA users. Using the well‐known moment generating function approach, we evaluate the error performance of TAS‐OSTBC‐assisted NOMA users over a generalized η−μ fading channel in the presence of perfect and imperfect successive interference cancelation. We investigate the proposed system in different multi‐antenna scenarios and accordingly observe better error performance for the users as the number of transmit and receive antennas increases. We perform an asymptotic analysis, which establishes full‐diversity order at high SNR. For numerical evaluation, we truncate the infinite series in the derived results and examine the upper bound of the truncation error. We evaluate the nth moment of the output SNR and obtain the channel quality estimation index as a performance measure of the proposed NOMA system. Moreover, we investigate the influence of the power coefficients and the fading parameters, η and μ, on the error performance of TAS‐OSTBC‐assisted NOMA users. We also compare the derived error performance with existing schemes and observe that the proposed system shows remarkable improvement in the bit error rate (BER) of the NOMA users. For validation, we execute simulations that closely match the derived analytical results.
 
Article
The fifth generation of mobile technology is referred to as “5G,” the next significant phase of mobile communications standards after the 4G standards. With 5G technology, high‐bandwidth consumers are able to use their phones in innovative ways, and when 5G is delivered over a VoIP‐enabled device, individuals encounter record levels of call volume and data transmission. In this paper, we review smart 5G antennas for Internet of Things (IoT) applications. Beamforming is a 5G active antenna technique that uses directional radio links to concurrently and selectively supply high bandwidth to certain mobile devices. Multi‐antenna systems are required when using higher frequency ranges: the higher the frequency, the poorer the propagation conditions for electromagnetic waves, and multi‐antenna arrays and beamforming can mitigate this to some extent. Beamforming allows radio signals to be transmitted and received in a spatially targeted manner, and the more dipoles (antenna elements) are available, the better the beamforming works. In contrast to earlier eras of wireless networking, such as the Global System for Mobile Communication (GSM), the Universal Mobile Telecommunication System (UMTS), and 4G/LTE, 5G does not require major technological advances: additional systems and equipment are added to the existing LTE technology to increase data capacity and reduce latency. The 5G NR infrastructure depends heavily on active antenna arrays, which enable multi‐user multiple‐input multiple‐output (MIMO) technologies; for targeted radio contact with the receiver, these antenna modules use beamforming.
 
Article
The multi‐band transmission (MBT) technique is one of the most cutting‐edge approaches to increase the capacity of fiber‐optic transmission systems and meet the growing bandwidth demand. The MBT technique considers transmission in the C + L‐ and U‐bands based on Erbium‐doped fiber amplifiers (EDFAs). Therefore, we propose a high‐gain, low‐noise‐figure EDFA for L + U‐band amplification, with a gain bandwidth of more than 82 nm in the spectral range of 1578–1660 nm, based on a single standard S‐band forward pump source. The L + U‐band amplification is obtained by pumping a gain medium that is heavily doped with Er3+. Numerical simulations show that efficient amplification is obtained over the L + U‐band, with an average small‐signal gain of around 39 dB covering the 1578–1660 nm wavelength range, while incorporating a short 12 m length of Erbium‐doped fiber (EDF) at an optimized pump power of 300 mW. A maximum noise figure of 4.46 dB is observed at 1593 nm. In addition, the power conversion efficiency (PCE) of different C‐band pump wavelengths has also been evaluated, and the findings show that the PCE of the 1560 nm pump wavelength is 70% higher than that of the standard 1480 nm pump wavelength, at the expense of cost efficiency. Finally, the system‐level performance of the designed L + U‐band EDFA is investigated in a five‐channel wavelength division multiplexed (WDM) link. The study demonstrates an L + U‐band EDFA employing a single standard S‐band forward pump and a single short piece of EDF; an average gain of around 39 dB in the wavelength range of 1575–1660 nm is achieved with a maximum noise figure of around 4.46 dB at 1593 nm. The proposed L + U‐band EDFA is a step forward towards the realization of future multi‐band transmission systems for deployment in the L + U‐band.
 
Article
A low‐profile eight‐element printed dipole antenna (PDA) array backed by a broadband rhomboid artificial magnetic conductor (AMC) is introduced for wireless communication systems. By loading a 4 × 27 AMC reflector into the eight‐element PDA array, a low‐profile wideband structure with enhanced radiation properties is achieved. The measured S‐parameters show a broad bandwidth from 4.75 to 7.05 GHz in the C‐band, with enhanced gains for the eight elements (more than 8 dBi) and suitable isolation between the array elements of more than 23 dB for multi‐input multi‐output (MIMO) systems. The suggested PDA, with a pair of microstrip meandered folded poles excited by an E‐shaped microstrip feedline, expands the bandwidth in the range of 5.85–6.95 GHz (S11 ≤ −10 dB). The novel AMC unit cell is realized using rhomboid coupled parasitic patches. The rhomboid AMC design operates at 6.26 GHz with an AMC bandwidth of 5.20–7.24 GHz (32.8%). Then, the suggested rhomboid AMC surface is inserted into the PDA as a reflector to exhibit a −10 dB measured impedance bandwidth from 4.94 to 6.93 GHz (more than 33%) for wireless local area network (WLAN) and worldwide interoperability for microwave access (WiMAX) applications. Compared to the PDA without AMC, the suggested PDA with AMC exhibits a size reduction of 34%, enhanced gain up to 8 dBi, and excellent impedance matching (at least −20 dB) with directional radiation patterns.
 
Article
In recent times, the expeditious growth of the Internet of Things (IoT) has offered applications that ease day‐to‐day activities with minimum human effort. Once an IoT application is installed, the connected devices perform their tasks without human intervention. Hence, performance optimization and security enhancement are vital to minimize end‐to‐end communication delay, improve kernel‐level security, mitigate faults adaptively, and provide suitable backup options in case of node failure. This paper proposes a quality of service (QoS)‐aware, fault‐proof, secure Q‐learning‐based IoT (QIoT) kernel‐level protocol that integrates multipath aggregation and fuzzy authentication for security, with multichannel communication for improved QoS. In particular, this protocol integrates a source‐level clustering mechanism based on Q‐learning that aims to reduce route search delay. To provide fault tolerance, the kernel is equipped with a real‐time fault‐tolerance mechanism that is activated in case of node‐level faults. Due to the integration of Q‐learning, computational overheads are reduced by over 15% when compared with the Zephyr, AliOS, and RTX kernels. This reduction in computational overheads facilitates lightweight behavior of the kernel, which in turn reduces other QoS parameters such as energy consumption, throughput overhead, and routing overhead. The proposed QIoT kernel‐level protocol is compared with standard kernel modules, and performance evaluation showcases an improvement in authentication security by 8%, end‐to‐end delay by 5%, energy efficiency by 25%, and fault mitigation by 18%, supporting the use of the proposed kernel for real‐time deployments.
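As a rough illustration of the Q‐learning idea used for route search (a minimal sketch with made‐up next‐hop delays, not the QIoT kernel's actual mechanism), an epsilon‐greedy agent can learn which next hop minimizes delay from noisy feedback:

```python
import random

# Toy setting: each candidate next hop has a fixed average delay that is
# unknown to the learner; lower delay means higher reward.
random.seed(1)
true_delay = {"hop_a": 5.0, "hop_b": 2.0, "hop_c": 8.0}  # assumed values

q = {hop: 0.0 for hop in true_delay}   # Q-value per candidate next hop
alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate

for _ in range(2000):
    # Epsilon-greedy choice of next hop
    if random.random() < epsilon:
        hop = random.choice(list(q))
    else:
        hop = max(q, key=q.get)
    reward = -true_delay[hop] + random.gauss(0, 0.5)  # noisy delay feedback
    q[hop] += alpha * (reward - q[hop])               # Q-learning update

best = max(q, key=q.get)  # the learned lowest-delay route
```

After enough trials the agent's Q‐values rank the hops by average delay, so the greedy choice converges to the fastest route without the node ever knowing the delays in advance.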
 
Article
Miniaturization of Internet of Things (IoT)‐enabled devices has led to advancements in Intelligent Transportation Systems. Some of the challenges for efficient traffic management are the connection of vehicles through an IP‐based transportation infrastructure, reliable and flexible traffic control management, and maintaining quality of service for video‐streaming applications in the Internet of Vehicles, among others. Today there is a need for an efficient transportation system with improved traffic safety and lower traveling costs. An intelligent traffic control system using the Ant Colony Optimization (ACO) algorithm is proposed and analyzed in this paper. The proposed algorithm is compared with existing state‐of‐the‐art algorithms in terms of average waiting time and average traveling time.
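The pheromone mechanics at the heart of ACO can be sketched in a few lines. This toy example (the travel times and constants are assumed, and it is not the paper's traffic controller) shows how evaporation plus time‐weighted deposits makes the fastest route accumulate the most pheromone:

```python
# Three candidate routes with assumed fixed travel times (arbitrary units).
travel_time = [12.0, 7.0, 15.0]
pheromone = [1.0] * 3
rho, Q = 0.1, 10.0  # evaporation rate, pheromone deposit constant

for _ in range(200):
    for i, t in enumerate(travel_time):
        # Evaporate old pheromone, then one ant per route deposits an
        # amount inversely proportional to the route's travel time.
        pheromone[i] = (1 - rho) * pheromone[i] + Q / t

# The steady state of the update approaches (Q / t_i) / rho, so the
# fastest route ends up carrying the most pheromone.
best = pheromone.index(max(pheromone))
```

The fixed point of the update is (Q / t_i) / rho, so the 7‑unit route ends with roughly twice the pheromone of its rivals and is chosen greedily; in a full ACO, ants would also sample routes probabilistically in proportion to this pheromone.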
 
Article
Tropospheric attenuations can be significant in the millimeter wave (mmWave) frequency bands; hence, accurate prediction modeling of tropospheric attenuation is important for reliable mmWave communication. Several models have been established by the International Telecommunication Union (ITU), yet estimation accuracy is limited by the large spatial scales used for model input parameters. In this paper, we address this issue by applying local precipitation data to analyze tropospheric attenuation statistics, and we compare the results to those obtained using ITU regional input rain data. Specifically, tropospheric attenuation is predicted via simulations using the ITU method at 30, 60, and 90 GHz in four distinct geographic locations with different climate types. From our simulations, we gather statistics for annual average rain attenuation, worst‐month rain attenuation, and rain attenuation per decade. Our results indicate that when using local measured rain data, for a 1 km link distance, mean rain event attenuation increases from 0.5 to 2 dB. Local rain data yield larger attenuations at essentially all percentages of time not exceeded (essentially corresponding to all probability values): for example, for 0.1% of time not exceeded, in Columbia, SC, rain attenuation at 30 GHz increases to 9 dB with local rain data, compared to 5 dB with ITU's regional data, corresponding to rain rates of 38.2 and 17.5 mm/h, respectively; at the same probability and location, the 90 GHz attenuation increases by 10 dB, from 10 to 20 dB, when local rain data are used. Fog attenuations are also appreciable, reaching 8 dB at 90 GHz. Moreover, for the example locations, peak rain attenuations have increased at a rate of approximately 2 dB/decade over the past 50 years. Our results indicate that actual tropospheric attenuations may be substantially larger than those predicted by the ITU model when using regional rain rate data.
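The ITU prediction chain builds on a power‐law specific attenuation, gamma = k·R^alpha in dB/km (ITU‐R P.838). The sketch below uses placeholder coefficients, not the tabulated P.838 values, purely to show how a higher local rain rate propagates into link attenuation:

```python
def rain_attenuation_db(rain_rate_mm_h, k, alpha, path_km):
    """ITU-style power-law rain attenuation over a path.

    Specific attenuation is gamma = k * R^alpha (dB/km), multiplied by the
    path length. k and alpha depend on frequency and polarization and must
    be taken from the ITU-R P.838 tables; the values used below are
    illustrative placeholders only.
    """
    return k * rain_rate_mm_h ** alpha * path_km

# Regional vs local rain rates from the Columbia, SC example (mm/h),
# over the 1 km link considered above:
a_regional = rain_attenuation_db(17.5, k=0.24, alpha=0.95, path_km=1.0)
a_local = rain_attenuation_db(38.2, k=0.24, alpha=0.95, path_km=1.0)
```

With these illustrative coefficients, raising the rain rate from 17.5 to 38.2 mm/h roughly doubles the 1 km attenuation, the same direction of effect as the 5 dB versus 9 dB comparison reported above.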
 
Article
Heterogeneous networks (HetNets) have been a trending topic of interest for researchers in 5G technology. A HetNet comprises a macro cell network assisted by small cell networks such as pico cells and femto cells. This additional hardware distributes the user equipment (UE) load of the main base station (MBS) at the cost of a surge in overall system power consumption. Optimized power consumption, coupled with enhanced cell throughput via a dynamic small cell ON/OFF strategy, improves the energy efficiency (EE) in dense HetNets. This paper proposes two algorithms to switch small cells ON/OFF based on the cell throughput contribution rate (CTCR), the ratio of actual cell throughput to the maximum cell throughput with full utilization of the allotted bandwidth. In the first method, the threshold to decide small cell ON/OFF is carefully defined considering two important factors, the distance between the MBS and the small base station (SBS) and the ratio of small cell density to UE density, such that lightly loaded small cells closer to the MBS are the candidates chosen for sleep mode. In the second method, a correction factor is introduced in the computation of the CTCR. It is a logarithmic function of the SBS distance relative to the macro cell coverage radius, which steps up the threshold for SBSs close to the MBS. These are more suitable sleep‐state candidates, as their UEs can be served directly by the MBS with large user throughput. Simulation results show that the EE of the proposed amended CTCR method is 8% higher than that of the proposed CTCR method and 30.66% better than the conventional load‐based sleep control method.
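The CTCR quantities can be sketched as follows; the paper's exact logarithmic correction is not reproduced here, so the form below is an assumption purely to illustrate how a distance term lowers the effective CTCR of SBSs near the MBS:

```python
import math

def ctcr(actual_tput, max_tput):
    """Cell throughput contribution rate: actual / maximum cell throughput."""
    return actual_tput / max_tput

def amended_ctcr(actual_tput, max_tput, sbs_distance, macro_radius):
    """Distance-corrected CTCR sketch (the logarithmic form is assumed):
    SBSs close to the MBS get a smaller correction factor, so their
    amended CTCR falls below the sleep threshold sooner."""
    correction = math.log(1 + sbs_distance / macro_radius) / math.log(2)
    return ctcr(actual_tput, max_tput) * correction

# Two SBSs with identical load (20 of 100 Mbit/s): one at half the macro
# radius, one at the cell edge.
near = amended_ctcr(20.0, 100.0, sbs_distance=250.0, macro_radius=500.0)
far = amended_ctcr(20.0, 100.0, sbs_distance=500.0, macro_radius=500.0)
```

At equal load, the nearby SBS gets the lower amended CTCR and is therefore the first candidate for sleep mode, matching the intuition that its UEs can be offloaded to the MBS.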
 
Article
This study presents the average symbol error probability (ASEP) performance of single‐input single‐output communication with and without imperfect phase error (IPE) over Beaulieu‐Xie fading channels. The Beaulieu‐Xie fading model was proposed recently and is a popular characterization for line‐of‐sight and non‐line‐of‐sight environments. The proposed theoretical analysis, which is based on the probability density function, is sufficiently general and applicable to several real‐life scenarios. For both cases, closed‐form exact, approximate, and asymptotic expressions are derived for the ASEP of different modulation types. Finally, exact simulations are presented to confirm the accuracy of the theoretical findings and to provide insights into the performance of the considered scheme.
 
Article
Drones, or unmanned aerial vehicles (UAVs), are becoming a trending area for researchers worldwide. UAVs' contribution to day‐to‐day life is increasing, whether in military zones, disaster management, the healthcare sector, smart cities, the Internet of Things (IoT), urban air mobility, and more. However, UAVs' limited computational capability and low‐energy sources pose significant challenges for real‐time data processing, storage, networking, and security, which are critical in emergencies such as floods, earthquakes, and cyclones. UAVs are increasingly used to satisfy user requirements and services. As the demand for UAV‐aided heterogeneous wireless networks in critical emergencies increases, fog computing offers several benefits for fulfilling users' demands in terms of low latency, support, data storage, mobility, availability, scalability, and so on. This study aims to present a comprehensive treatment of the technical aspects of fog computing, its security issues, privacy concerns, and risks, along with their solutions. This paper suggests a collaborative UAV‐Fog architecture based on a four‐tier network consisting of smart things, local UAVs, UAV‐Fog, and a cloud server to manage UAV data, and also describes some of the security issues faced by this cloud infrastructure. Further, this research article sheds new light on UAV‐Fog scenarios, including deployments, applications, opportunities, challenges, and their major security threats and countermeasures. Afterward, we design a taxonomy of UAV‐Fog collaboration with the respective approaches.
 
Article
Scheduling at the core node plays an important role in the transmission performance of optical burst switched (OBS) networks. There are two approaches to scheduling an incoming burst: with void filling and without void filling, where scheduling with void filling is more efficient thanks to its ability to exploit the idle bandwidth between scheduled bursts. However, scheduling success depends on factors such as arrival time, burst length, last available unused time (LAUT), void start time, void end time, and so forth. This paper proposes a scheduling data analysis approach to find the factors that influence scheduling efficiency, thereby suggesting an FDL‐based solution to reduce data loss. The results of analysis and simulation show that the derived factors, including the last overlap and the near‐last overlap, have the most significant influence on the scheduling success rate, and that the FDL‐based solution is best suited for loss reduction.
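A minimal void‐filling check can be written as an interval‐containment test (a sketch of the general technique, not the paper's analysis pipeline): a burst is schedulable on a channel if one of the idle voids fully contains its arrival‐to‐end interval:

```python
def can_schedule(burst_start, burst_end, voids):
    """Void-filling check: return the first idle gap (void) between
    already-scheduled bursts that contains the whole burst interval,
    or None if the burst cannot be scheduled without overlap."""
    for void_start, void_end in voids:
        if void_start <= burst_start and burst_end <= void_end:
            return (void_start, void_end)
    return None

# Idle gaps on one wavelength channel as (start, end), arbitrary time units
voids = [(0, 4), (10, 18), (25, 30)]
slot = can_schedule(12, 16, voids)  # fits inside the (10, 18) void
miss = can_schedule(3, 6, voids)    # straddles a scheduled burst: rejected
```

A real scheduler would additionally pick among multiple feasible voids (e.g., minimizing the residual gap), which is where factors such as the LAUT and the void end time come into play.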
 
Article
The essential design concern of a sensor network is to balance the energy consumption of sensor nodes (SNs) to prolong the network lifetime. Many research works report that clustering techniques efficiently utilize the network's energy resource by organizing SNs into clusters, which reduces data transmission. An extensively used category of cluster‐based protocols is the probabilistic clustering technique, in which a preset optimum likelihood facilitates the cluster head (CH) selection procedure. These clustering techniques suffer from non‐uniform dissemination of CHs, which leads to uneven load balance and uneven energy consumption of SNs during network activities such as data transmission and reception. This causes an energy‐hole problem and reduces network lifetime. To solve these issues, we focus on designing balanced cluster‐based data aggregation and formulate a method that increases the energy efficiency of probabilistic clustering techniques by optimizing the number of clusters and the dissemination of CHs in the sensor network. The simulation analysis shows that the proposed technique performs significantly better than existing works.
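A representative probabilistic CH rule is the classic LEACH threshold, shown here as background for this family of protocols and not as this paper's method: a node that has not served as CH in the current epoch elects itself with probability T(n), which rises as the epoch progresses:

```python
def leach_threshold(p, current_round):
    """Classic LEACH cluster-head election threshold for a node that has
    not yet served as CH in the current epoch.

    p is the desired fraction of CHs per round; the epoch length is 1/p
    rounds, and T(n) = p / (1 - p * (r mod 1/p)) grows toward 1 so that
    every node serves exactly once per epoch on average.
    """
    epoch = int(1 / p)
    return p / (1 - p * (current_round % epoch))

# With p = 0.1, the threshold starts at 0.1 and rises each round:
t0 = leach_threshold(0.1, 0)  # round 0 of the epoch
t9 = leach_threshold(0.1, 9)  # last round: remaining nodes elected for sure
```

The non‐uniform CH placement criticized above stems from exactly this kind of rule: the threshold controls only how many CHs arise, not where, which is the gap the proposed optimization of CH dissemination targets.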
 
Article
In a traffic monitoring system, the transportation department generates a traffic map to report real-time traffic conditions, so as to provide more efficient service for drivers. However, if the vehicle obtains the local traffic conditions that the driver is most concerned about from a remote server, it will consume heavy bandwidth and incur increased response delay. Generating and broadcasting local traffic conditions through fog computing is a feasible way to reduce communication costs, and most existing traffic monitoring systems based on fog computing are implemented using Bilinear Pairing operations. In this paper, we propose the LPTM scheme based on an Elliptic Curve Cryptosystem (ECC): an optimized Timed Efficient Stream Loss-tolerant Authentication (TESLA) protocol is adopted to achieve efficient and secure communication, and we also use an identity-based signature scheme with partial message recovery (PMR-IBS) to effectively shorten the length of the fog node broadcast message. Detailed security proofs show that the requirements of security and privacy are all achieved, and better simulation performance is presented in both computation and communication overhead.
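TESLA's security rests on a one‐way hash key chain: keys are generated by repeated hashing and disclosed in reverse order, so any receiver holding the published anchor can verify a late‐disclosed key. A minimal sketch of that chain (illustrative only, not LPTM's optimized variant) using SHA‐256:

```python
import hashlib

def make_key_chain(seed, length):
    """TESLA-style one-way key chain: K[i] = H(K[i+1]).

    Keys are disclosed in reverse order of generation; any disclosed key
    can be verified by hashing it forward to the published anchor K[0].
    """
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] is the public anchor, chain[-1] the seed
    return chain

def verify(anchor, disclosed_key, interval):
    """Hash the disclosed key 'interval' times and compare to the anchor."""
    k = disclosed_key
    for _ in range(interval):
        k = hashlib.sha256(k).digest()
    return k == anchor

chain = make_key_chain(b"secret-seed", 5)
ok = verify(chain[0], chain[3], 3)       # genuine key for interval 3
bad = verify(chain[0], b"forged-key", 3)  # forgery fails the check
```

Because the hash is one‐way, an attacker who sees disclosed keys still cannot compute the not‐yet‐disclosed ones, which is what lets a fog node authenticate broadcasts with only symmetric operations.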
 
Article
The ability to move and hover has made rotary‐wing unmanned aerial vehicles (UAVs) suitable platforms to act as flying communications relays (FCRs), aiming to provide on‐demand, temporary wireless connectivity when there is no network infrastructure available or a need to reinforce the capacity of existing networks. However, since UAVs rely on their on‐board batteries, which can drain quickly, they typically need to land frequently for recharging or battery replacement, limiting their endurance and the flying network's availability. The problem is exacerbated when a single FCR UAV is used. The FCR UAV's energy is spent on two main tasks: communications and propulsion. The literature has focused on optimizing both the flying network performance and energy efficiency from the communications point of view, overlooking the energy spent on UAV propulsion; yet the energy spent on communications is typically negligible compared with the energy spent on propulsion. In this article, we propose energy‐aware relay positioning (EREP), an algorithm for positioning the FCR that takes into account the energy spent on UAV propulsion. Building upon the conclusion that hovering is not the most energy‐efficient state, EREP defines the trajectory and speed that minimize the energy spent by the FCR UAV on propulsion, without compromising, in practice, the quality of service offered by the flying network. The EREP algorithm is evaluated using simulations. The obtained results show gains of up to 26% in FCR UAV endurance for negligible throughput and delay degradation.
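The claim that hovering is not the most energy‐efficient state follows from standard rotary‐wing power models, in which the induced power term falls sharply with forward speed. The sketch below uses a commonly cited three‐term model (blade profile + induced + parasite power); the parameter values are typical ones assumed for illustration, not taken from the article:

```python
import math

def propulsion_power(v, p0=79.86, pi=88.63, u_tip=120.0, v0=4.03,
                     d0=0.6, rho=1.225, s=0.05, a=0.503):
    """Rotary-wing propulsion power (W) versus forward speed v (m/s).

    Three terms: blade profile power (grows with v^2), induced power
    (falls with v), and parasite power (grows with v^3). All parameter
    values here are assumed typical figures, not measured data.
    """
    blade = p0 * (1 + 3 * v ** 2 / u_tip ** 2)
    induced = pi * math.sqrt(max(0.0, math.sqrt(1 + v ** 4 / (4 * v0 ** 4))
                                 - v ** 2 / (2 * v0 ** 2)))
    parasite = 0.5 * d0 * rho * s * a * v ** 3
    return blade + induced + parasite

hover = propulsion_power(0.0)    # hovering: blade + full induced power
cruise = propulsion_power(10.0)  # moderate forward speed costs less
```

With these numbers, flying at a moderate speed costs noticeably less power than hovering, which is why EREP prefers a slow moving trajectory over simply hovering at the relay position.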
 
Article
A well‐optimized, well‐performing communication network protocol is necessary to build successful underwater acoustic sensor networks (UWASNs). For both wired and wireless communication, medium access control (MAC) has a great effect on network performance and optimization. However, unlike land‐based MAC protocols, underwater MAC protocols face various challenges and issues, such as high propagation delay, limited bandwidth for communication signals, large attenuation in network signals, and high noise levels. It is therefore very challenging to build a well‐optimized underwater MAC protocol. Also, in UWASNs, sensor nodes are generally divided into sub‐networks to reduce the propagation delay of data signals, but this creates the problem of non‐uniform traffic load across sensor nodes. Considering these issues, a dynamic hold time MAC (DHT‐MAC) protocol is proposed here. In this protocol, depending on the distance from the central node, sensor nodes are divided into two sub‐network zones (parent nodes and child nodes). Depending on the traffic load and propagation delay, the child nodes can change their respective parent nodes dynamically. An advantage of the proposed method is that if any parent node stops working, its child nodes connect to the nearest parent node. When collecting the data signals, it has been observed that child nodes have a light traffic load compared to parent nodes. Therefore, the dynamic cooperative transmission MAC (DCT‐MAC) protocol, a contention‐based MAC protocol, is used in child nodes, and as parent nodes have a high traffic load, a reservation‐based MAC protocol is used there.
 
Figures: (1) SARAS system model: a D2D communication considering multiple interferences underlaying a heterogeneous network with eavesdropper ε; (2) CR NOMA system with power allocation and SIC; (3) sum rate comparison of SARAS with different schemes when n(C) = 6 and n(di) = 20, 50, and 60; (4) sum secrecy capacity comparison of SARAS with different schemes when n(C) = 6, and the CMU's average secrecy capacity under different D2D transmit powers (17, 23, 25, and 50 dBm) with varying numbers of D2D groups
Article
Device-to-device (D2D) communication is a key technology of fifth-generation (5G) networks, which allows devices in proximity to communicate one-to-one without a base station (BS). Moreover, it ameliorates channel gain, communication latency, energy efficiency, and spectral efficiency. However, mitigating interference (between D2D and cellular mobile users (CMUs)) and the eavesdropper effect (at D2D links) is a prime concern in D2D communication. These issues can be resolved with an efficient allocation of radio resources, and cognitive radio (CR) and non-orthogonal multiple access (NOMA) are viable solutions. Motivated by this, in this paper we propose a joint CR and NOMA-based scheme, i.e., SARAS, for secure resource allocation in D2D communication. It aims to maximize the overall sum rate and secrecy capacity of D2D users with assured quality of service (QoS) and quality of experience (QoE). CR is used for pairing between strong and weak D2D users (i.e., a D2D pair), and NOMA's successive interference cancellation (SIC) technique helps reduce the interference effect between cellular and D2D users. We use a coalition game for efficient resource utilization, which transfers D2D users from one coalition to another based on the preference order and channel conditions. This lands D2D users in the best coalition with favorable channel conditions. We executed the proposed SARAS scheme in the MATLAB simulation tool, considering varying numbers of D2D users and CMUs. Simulation results show that the proposed scheme outperforms the traditional OFDMA, random, and nearest approaches with respect to parameters such as sum rate and secrecy capacity.
 
Article
  • Sri Pravallika Narjala
  • Anitha V R
  • RamaNaidu K
The design, simulation, and measurement of a broadband epsilon‐near‐zero (ENZ) metasurface‐based gradient refractive index (GRIN) Luneberg lens are presented, with different antennas demonstrating gain enhancement and highly directive beams. The unit‐cell meta‐structure consists of a conducting ring with monopoles at 45°. The horizontal radius of the conducting ring is varied to vary the refractive index of the structure. Unit cells with varying refractive index are arranged in a 2D plane to produce a planar Luneberg lens structure. The lens is demonstrated to produce a gain improvement of 6–13.2 dBi for different configurations and antennas, including a microstrip patch antenna and a horn antenna, in both simulation and measurement, while maintaining the total efficiency of the antenna used with the lens.
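The GRIN profile being approximated is the classic Luneburg law, n(r) = sqrt(2 − (r/R)²), which focuses an incident plane wave to a point on the opposite rim. A two‐line sketch of the profile (generic background, not the article's specific metasurface design):

```python
import math

def luneburg_index(r, radius):
    """Classic Luneburg lens gradient: n(r) = sqrt(2 - (r/R)^2).

    The metasurface unit cells approximate this profile by varying their
    geometry (here, the ring radius) across the lens aperture.
    """
    return math.sqrt(2.0 - (r / radius) ** 2)

n_center = luneburg_index(0.0, 1.0)  # sqrt(2) at the lens centre
n_edge = luneburg_index(1.0, 1.0)    # 1.0 at the rim, matched to free space
```

The index falling smoothly to 1 at the rim is what makes the lens inherently broadband and reflection‐matched to free space, properties the ENZ unit cells aim to reproduce.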
 
Article
The continuous growth of wireless connectivity and the emergence of the concept of the Internet of Everything in the future sixth‐generation (6G) network require a new communication paradigm. Different from prior works that focus on performance optimization for communication networks, in this paper we analyze the energy efficiency of unmanned aerial vehicle (UAV) communication aided by reconfigurable intelligent surfaces (RIS), a new disruptive technology: the UAV offers line‐of‐sight (LoS) link transmission and flexibility, while the RIS has low power consumption, improving communication reliability and network coverage. First, when the central limit theorem (CLT) assumption is used and the number of RIS reflecting elements is large, we derive the closed‐form expression of the average signal‐to‐interference‐plus‐noise ratio (SINR) of each user based on the new BS‐RIS‐user communication link and RIS‐user/BS association. Following this, we derive the energy efficiency (EE) in closed form based on random access between the BS and the user. Finally, we show the performance of the EE under certain assumptions. In particular, the offered results demonstrate that the use of the RIS can significantly improve the EE, since the RIS helps improve the SINR while requiring very low transmission power; meanwhile, a UAV equipped with an RIS can also improve the EE, since the RIS can be used as a mobile relay to take advantage of the LoS link transmission.
 
Article
Enhancement of network performance is a big challenge today for the emerging technology based on IEEE 802.16. Wireless technology is an alternative to wired technology, and network availability everywhere is an important challenge in day‐to‐day life. Due to the huge population, network availability is a big challenge, and it becomes even more difficult to obtain network coverage in rural, hilly, lake, and seashore areas. This work proposes relay stations in WiMAX (Worldwide Interoperability for Microwave Access) networks, along with various bandwidth allocation algorithms, to increase signal strength over long distances through retransmission of the original signal by the relay station and to extend coverage with a higher data rate and higher user capacity. The availability of WiMAX networks for channel bandwidth assignment to users for data and services is very important in today's scenario. This paper focuses on evaluating the performance of bandwidth allocation algorithms in light WiMAX networks with and without relay stations. The performance evaluation is done by increasing the number of nodes under various bandwidth allocation techniques, with and without relay stations. In this work, the round robin and strict priority algorithms are used for channel bandwidth allocation in a light WiMAX network, and performance is analyzed through throughput, goodput, and packet drop rate, which will help to implement and configure the suggested optimal parameters in a production environment. This work enhances throughput and goodput, reduces the packet drop rate, and increases user capacity.
 
Article
Providing appropriate coverage is essential for the effective functioning of many applications in Wireless Sensor Networks. Therefore, the efficiency of node deployment algorithms in supplying the requested coverage is of high significance. In this paper, inspired by the equilibrium of molecules, a novel node deployment algorithm, called Smart Self‐organising Node Deployment (SSND), is proposed to provide maximum coverage. Unlike other proposed algorithms, which provide coverage through the collective movement of nodes with massive energy consumption, SSND moves one sensor in every neighbourhood at each step to reduce the sensor nodes' movement and hence the energy consumption. The chosen sensor nodes at each time step are determined in a distributed manner by an eligibility function, reducing non‐essential movements while improving the accuracy of the reported locations of neighbours. Our extensive simulation study shows that SSND can achieve up to 30% coverage improvement compared to other algorithms in most scenarios and provides an adequate trade‐off between coverage and energy consumption.
 
Article
Technological advancements in the area of the Internet of Things have fostered the development of multi‐hop architectures for applications covering large network areas. However, in such applications, the sensor devices communicate through multi‐hop routing techniques, burdening the relay nodes. This leads to a hot‐spot problem, as the nodes passing on the data, that is, the relay nodes, consume their energy at a large rate. To solve this issue, in this paper we propose a novel optimized routing technique to mitigate the hot‐spot problem (NORTH) for wireless sensor network (WSN)‐based IoT. We employ the tunicate swarm algorithm (TSA) to optimize the cluster‐based routing, specifically the selection of the cluster head (CH) of each cluster, using several novel parameters. These parameters include energy status, the distance of a node from the sink and other nodes, load balancing, node proximity, and the average energy stock of the network. We investigate two network scenarios, that is, when the sink is placed inside the network and otherwise, to give an optimized solution for every case. Further, to mitigate the hot‐spot problem, a relay node is selected in each cluster with the same mechanism as the CH, which performs the task of data forwarding. The simulation analysis of NORTH reveals the supremacy of the proposed work over recently proposed algorithms, based on various performance metrics, namely, network longevity, stability duration, throughput, and the network's remaining energy.
 
Figures: wireless sensor network architecture; hop-count internal wormhole attack; high-power-transmission external wormhole attack; the SLR methodology and article selection; taxonomy of schemes to counteract the wormhole attack
Article
Wireless sensor networks (WSNs) consist of hundreds of small sensors that collect and report data. During data sharing, these WSNs become vulnerable to numerous security threats, including severe ones such as the wormhole attack. Mitigating this risk is a real challenge, especially in a low‐resource environment such as a WSN. In this work, using a systematic literature review, we survey a large body of research (28 studies) focused on mitigating wormhole security attacks. Through this study, we evaluate many proposed security schemes and their impact on WSN performance. We compare schemes for effectiveness and pinpoint their limitations. We also analyze the various parameters and metrics used in them and highlight the open research challenges in the field.
 
Article
A high‐capacity free‐space optics‐based terrestrial communication link using spectrally efficient space division multiplexing and polarization division multiplexing techniques is proposed in this research work. Orthogonal frequency division multiplexing with coherent detection is proposed for the transmission of quadrature‐amplitude‐modulated high‐speed signals. Eight independent 50‐Gbit/s data streams are transported by two orthogonal polarization states (X and Y) of four distinct Hermite–Gaussian spatial modes to increase the transmission rate of the link to 400 Gbit/s over a single channel. Through numerical simulations, we report feasible transport of 400‐Gbit/s information at 10 km under clear weather with faithful performance. Further, the impact of atmospheric scintillation and foggy weather is evaluated using simulations. The results show that the maximum range reduces to 6 km under clear weather in the presence of strong atmospheric scintillation. Moreover, maximum ranges of 2000, 1700, and 1300 m are reported for light, moderate, and heavy fog conditions, respectively. In summary, a spectrally efficient, high‐capacity hybrid space division multiplexed–polarization division multiplexed–orthogonal frequency division multiplexed coherent‐detection free‐space optics transmission link is proposed and investigated; 400 Gbit/s is faithfully transmitted over a range varying from 1.3 to 10 km depending on the external environmental conditions, and the range reduces from 10 to 6 km in the presence of strong turbulence.
 
Article
In this paper, we study high‐spectrum‐efficiency transmission in a vehicle‐to‐everything (V2X) system that allows the roadside unit (RSU) to broadcast its safety information to vehicles belonging to a dedicated group. Since low latency and high reliability are tightly required in V2X applications, we exploit the advantages of cognitive radio (CR) and non‐orthogonal multiple access (NOMA) to form a system, namely, CR NOMA‐V2X. The considered system provides transmissions from the RSU to vehicles with lower access latency and enhanced packet reception probability. In the proposed scheme, the vehicles need channel state information (CSI) for the signal decoding order, but imperfect CSI occurs in practical scenarios. To evaluate the resulting performance degradation of the CR NOMA‐V2X system, we introduce approximate closed‐form expressions for the outage probability. Numerical results confirm the validity of our derivations.
 
Article
Non‐orthogonal multiple access (NOMA) is a potential technology for fifth‐generation (5G) cellular systems. The main task of user pairing (UP) in NOMA is the selection of paired users according to their channel gains. In this paper, an efficient UP algorithm is proposed to enhance the performance metrics of NOMA systems, such as capacity, fairness, and outage probability. Firstly, the impact of the paired users' channel gains on the performance of NOMA systems is investigated and discussed. The proposed UP is based on the minimization of capacity loss in order to maximize the NOMA network capacity. The proposed UP also aims to reduce the divergence between paired users' channel gains to avoid capacity loss and the degradation of user fairness and user data rate. Performance evaluation results reveal that the proposed UP significantly outperforms existing UP algorithms in maximizing capacity and improving fairness and outage probability. It also achieves very small outage probability values, ranging from 1% to 10% of the values achieved by the other UP algorithms.
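The paper's pairing criterion is defined through its own capacity‐loss metric; as background, a common baseline UP heuristic (not the proposed algorithm) sorts users by channel gain and pairs the i‐th strongest with the i‐th weakest, which is a minimal sketch of channel‐gain‐based pairing:

```python
def pair_users(channel_gains):
    """Conventional strong-weak NOMA pairing: sort users by channel
    gain and pair the i-th strongest with the i-th weakest.

    This is a common baseline, not the capacity-loss-minimizing
    pairing proposed in the paper.  Returns (strong_index,
    weak_index) tuples; assumes an even number of users.
    """
    order = sorted(range(len(channel_gains)),
                   key=lambda i: channel_gains[i], reverse=True)
    half = len(order) // 2
    return [(order[i], order[-1 - i]) for i in range(half)]

gains = [0.9, 0.1, 0.5, 0.3, 0.7, 0.2]   # illustrative channel gains
print(pair_users(gains))                  # [(0, 1), (4, 5), (2, 3)]
```

The proposed algorithm differs from this baseline by explicitly bounding the gain divergence within each pair rather than maximizing it.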
 
Article
The preceding decade has seen the successful rollout of 5G and the convergence of the broadcast, broadband, and telecom sectors. The near‐future bandwidth demands of services such as advanced immersive multimedia are even more challenging, which has led to the advent of 6G. Access technologies must provide virtually unlimited data rates to support the majority of 6G applications. Optical wireless communication (OWC), with its inherent advantages, is a potential enabler in this scenario. However, phenomena such as atmospheric turbulence seriously degrade the performance of such systems. For multimedia services, end‐user perception is the ultimate quality indicator; to quantify this perception, full‐reference quality metrics are employed. In this paper, digital video broadcasting terrestrial (DVB‐T) videos of varying complexity are transmitted over an OWC‐passive optical network (PON) architecture at variable channel lengths. For performance enhancement, 2 × 2 repetitive‐coding MIMO is employed with a maximal ratio combining receiver. A total of six video quality assessment (VQA) metrics are evaluated for the system with respect to the channel parameter, the Rytov variance. All VQA metrics improve when the MIMO technique is used, and this improvement is more prominent for the more complex videos. For the limiting case, a BER of 10−3 gives a limiting link distance of 1350 m (QoS evaluation), whereas the VQA metrics (QoE evaluation) restrict the limiting distance to 1280 m; the 2 × 2 MIMO technique extends this distance by 20–25 m.
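Maximal ratio combining, used at the receiver here, weights each branch observation by its channel gain so that the branch SNRs add; a minimal two‐branch sketch with real‐valued gains (a simplifying assumption, not the paper's full OWC channel model):

```python
def mrc_combine(channel_gains, received, noise_var=1.0):
    """Maximal ratio combining over receive branches.

    Each branch observation r_i = h_i * s + n_i is weighted by h_i
    (real-valued gains assumed for simplicity); the post-combining
    SNR is the sum of the per-branch SNRs h_i^2 / noise_var.
    """
    combined = sum(h * r for h, r in zip(channel_gains, received))
    norm = sum(h * h for h in channel_gains)
    estimate = combined / norm            # estimate of the symbol s
    snr_out = norm / noise_var            # combined SNR (linear scale)
    return estimate, snr_out

# Noise-free illustration: both branches carry the symbol s = 1.0.
h = [0.8, 0.3]
r = [0.8 * 1.0, 0.3 * 1.0]
est, snr = mrc_combine(h, r)
print(est, snr)   # 1.0 and 0.73
```

The SNR gain over a single branch is what drives the 20–25 m link‐distance extension reported above.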
 
Article
The most common Internet of Things (IoT) scenarios involve devices with limited energy resources that need to be connected to the Internet via wireless networks. This has driven the recent development of low‐power wide‐area networks (LPWANs) and the rise of Long Range (LoRa) technology. The LoRa protocol has a simple modulation scheme that ensures low power consumption, high convergence, and resistance against interference. In most LPWAN technologies, several physical‐layer challenges arise, such as low data rates, spectral inefficiency, and increased interference. As a physical‐layer solution, cognitive radio (CR) offers a possible way of resolving these challenges, as CR allows wireless networks to operate without the need for a dedicated spectrum. Given the variety of end‐user requirements, developing a public communication network that can support such diverse and heterogeneous applications costs less than developing a dedicated communication network for each application. This paper proposes a Cognitive LoRa (C‐LoRa) protocol that utilizes unlicensed and licensed frequencies as well as interference mitigation to improve the QoS of LoRa. To extract the priority list of traffic patterns, C‐LoRa incorporates the Analytic Hierarchy Process (AHP) algorithm. The priority list enables real‐time applications to receive optimal spectrum allocation. C‐LoRa can be efficiently implemented as a public communication infrastructure for heterogeneous IoT devices. The addition of licensed channels improves the overall QoS and decreases the average waiting time in queues. The platform layer of C‐LoRa consists of a cognitive engine that sends traffic priority lists to cognitive spectrum allocators. The IoT application servers are connected to the cloud platform layer via SNMP, HTTP, and other desired protocols. Access gateways equipped with a cognitive spectrum allocator are always connected to a power supply and serve as a transparent bridge to the cognitive engine at the platform layer, converting RF packets to IP packets and vice versa.
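The Analytic Hierarchy Process derives a priority vector from a pairwise comparison matrix; the sketch below uses the common geometric‐mean (row) approximation of the principal eigenvector. The traffic classes and comparison values are illustrative, not taken from the paper:

```python
import math

def ahp_priorities(pairwise):
    """Approximate AHP priority weights via the geometric mean of
    each row of the pairwise comparison matrix, then normalize."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Illustrative 3x3 comparison of traffic classes (Saaty's 1-9 scale):
# emergency vs periodic vs bulk; matrix[i][j] = importance of i over j.
matrix = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
weights = ahp_priorities(matrix)
ranking = sorted(range(3), key=lambda i: weights[i], reverse=True)
print([round(w, 3) for w in weights], ranking)
```

The resulting ranking plays the role of the traffic priority list that the cognitive engine forwards to the spectrum allocators.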
 
Article
Underwater Wireless Communication is a rapidly growing technology in the research domain. Researchers have been working towards developments in underwater communication by investigating climatic changes, prediction of natural disasters, the marine environment, monitoring of aquatic life, oceanographic transmission, and data collection. The main objective of underwater communication is to achieve lossless, high‐rate transmission with minimal power consumption. Unmanned Underwater Vehicles (UUVs) and Autonomous Underwater Vehicles (AUVs) are equipped with remote instruments and sensors to enable exploration of natural resources in the undersea environment. This research article is based on a study of the various modes of communication, architectures, and protocol layers involved in underwater communication. Furthermore, an underwater hybrid connection is established by opto‐acoustic signals, and the various performance characteristics, channel behaviors, and issues in the underwater scenario are investigated. A vision of underwater wireless optical and acoustic communication is discussed along with its existing real‐time applications. The existing optical and acoustic approaches and their novelty, the various modes of communication, their 2D and 3D architectures with the protocol layers, and channel behaviors are analyzed, and the challenges to achieving efficient underwater transmission are listed. A simulation is performed to compare optical and acoustic behaviors in terms of temperature, salinity, pressure, and depth‐dependent velocity variations in the underwater environment.
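The acoustic side of such a comparison hinges on how sound speed varies with temperature, salinity, and depth. A widely used simplified empirical model is Medwin's formula, sketched below as an illustration; the paper does not state which sound‐speed model its simulation uses:

```python
def sound_speed_medwin(temp_c, salinity_ppt, depth_m):
    """Underwater sound speed (m/s) via Medwin's simplified
    empirical formula, valid roughly for 0-35 degC, 0-45 ppt,
    and depths up to about 1000 m."""
    t, s, z = temp_c, salinity_ppt, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)

# Deeper water carries sound faster at the same temperature/salinity:
c_surface = sound_speed_medwin(10.0, 35.0, 0.0)
c_deep = sound_speed_medwin(10.0, 35.0, 500.0)
print(round(c_surface, 1), round(c_deep, 1))   # 1490.0 1498.0
```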
 
Article
Vehicular ad hoc networks (VANETs) are receiving increased attention, and several routing protocols have already been proposed by the research community. In this context, social aspects and human mobility are important performance enablers that are nevertheless mostly overlooked. In this work, we propose the socially aware CLWPR (SCALE) protocol, which significantly enhances cross‐layer weighted position‐based routing (CLWPR), a routing protocol for urban VANET environments, by combining social properties such as trust, influence, and users' individual mobility patterns in its core design in order to support an efficient content dissemination scheme. SCALE relies on both online and offline social metrics, quantifying relationships on social networking platforms and opportunistic contacts of nodes due to physical proximity, respectively; nodes with close online and offline social relationships are favored as next‐forwarder nodes. Subsequently, we evaluate SCALE against other routing protocols in distributed vehicular networks, employing representative urban scenarios with synthetic and real traffic. The proposed approach improves Packet Delivery Ratio (PDR) over similar protocols by an average of 37% in scenarios with the synthetic dataset and 58% with the real dataset, and improves throughput by an average of 45% with the synthetic dataset and 61% with the real dataset.
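The core idea of favoring socially close nodes as next forwarders can be sketched as a weighted combination of an online and an offline metric. The mixing weight, metric names, and values below are illustrative assumptions; SCALE's actual metric is defined in the paper:

```python
def score_forwarder(online_closeness, offline_contacts, alpha=0.5):
    """Toy next-forwarder score combining an online social metric
    (e.g. closeness on a social platform, normalized to [0, 1])
    with an offline one (normalized opportunistic contact rate).

    alpha is an illustrative mixing weight, not SCALE's actual
    weighting.
    """
    return alpha * online_closeness + (1.0 - alpha) * offline_contacts

def pick_next_forwarder(candidates):
    """candidates: {node_id: (online_closeness, offline_contacts)}."""
    return max(candidates, key=lambda n: score_forwarder(*candidates[n]))

neighbors = {"v1": (0.9, 0.2), "v2": (0.4, 0.8), "v3": (0.7, 0.7)}
print(pick_next_forwarder(neighbors))   # 'v3' (highest combined score)
```

A node strong on both dimensions (v3 here) beats nodes strong on only one, which mirrors the stated preference for close online and offline relationships.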
 
Article
Node localization technology can identify and track nodes, making observed data more relevant; for example, information received at the sink node would be useless to the client if it did not include localization data from the sensor region. Localization is the process of determining the locations of unknown sensor nodes, named destination nodes, from the known locations of anchor nodes, based on measurements such as time difference of arrival, time of arrival, angle of arrival, triangulation, and maximum likelihood. The purpose of node localization is to assign coordinates to all sensor nodes that are arbitrarily placed in the monitoring region with unknown locations. Localization of nodes is essential for attributing events to their origin and supports group sensor querying, routing, and network coverage. In this paper, data transmission among the nodes is decided by comparing the received signal strength indicator (RSSI) value with a supervised‐learning value: if the RSSI value is less than the supervised‐learning value, data transmission takes place; otherwise, no transmission occurs. This paper proposes a hybrid localization scheme that effectively uses K‐fold optimization with supervised learning and gives good results for distance error and RSSI/energy efficiency. The proposed scheme can effectively detect the optimal path for data transmission, localize destination nodes, and enhance overall performance using threshold‐based decision making.
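The transmission rule stated above is a simple threshold test; a minimal sketch, using a fixed illustrative threshold in place of the value a trained supervised‐learning model would produce:

```python
def should_transmit(rssi_dbm, learned_threshold_dbm):
    """Threshold decision from the abstract: transmit only when the
    measured RSSI is below the supervised-learning value.
    (The threshold would come from the trained model; a fixed
    illustrative value is used here.)"""
    return rssi_dbm < learned_threshold_dbm

threshold = -70.0                       # illustrative learned value (dBm)
readings = [-82.0, -65.0, -71.5]        # sample RSSI measurements (dBm)
decisions = [should_transmit(r, threshold) for r in readings]
print(decisions)   # [True, False, True]
```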
 
Article
Spectral efficiency (SE) is one of the eminent requirements in 5G mobile networks. Cell‐free (CF) massive MIMO is deemed a key technology for providing substantial SE in 5G compared with the cellular and small‐cell approaches. Most prior studies have assessed SE performance assuming a uniform distribution of access points (APs). However, 5G networks are typically dense, irregularly distributed, and mostly constrained by channel impairments; therefore, assuming a uniform distribution of APs is unrealistic. This paper considers a practical network deployment by modeling an irregular, adaptive AP distribution based on the Poisson point process, over Rician fading channels. The downlink (DL) SE of the CF massive MIMO system is then accurately investigated, accounting for the irregular AP distribution and fast channel variation for both perfect and imperfect channel state information (CSI) cases. The simulation results show that the DL SE of the CF massive MIMO system is considerably affected by the irregular deployment of APs compared with the uniform distribution, especially when the phase noise effect is intense: the SE gain is reduced by 37.9% in the DL transmissions compared with the uniform model. The results also prove that the DL SE gain improves remarkably when the APs are widely distributed within the network; however, the gained DL SE degrades as the user density and the length of the uplink training period increase.
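AP locations under a homogeneous Poisson point process can be generated by drawing a Poisson‐distributed point count for the area and placing the points uniformly; a minimal sketch with illustrative intensity and area values (not the paper's simulation parameters):

```python
import math
import random

def ppp_points(intensity, width, height, seed=7):
    """Sample AP locations from a homogeneous Poisson point process
    on a width x height area: the number of points is Poisson with
    mean intensity * area, and locations are uniform on the area."""
    rng = random.Random(seed)
    area = width * height
    # Poisson draw via Knuth's method (fine for moderate means).
    target, k, p = math.exp(-intensity * area), 0, 1.0
    while p > target:
        p *= rng.random()
        k += 1
    n = k - 1
    return [(rng.uniform(0, width), rng.uniform(0, height))
            for _ in range(n)]

aps = ppp_points(intensity=0.01, width=100.0, height=100.0)
print(len(aps))   # Poisson-distributed count, mean 100
```

Unlike a uniform grid, the realized AP count and spacing both vary from realization to realization, which is what makes the irregular model a harder but more realistic setting for SE analysis.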
 
Journal metrics
30 days
Submission to first decision
21%
Acceptance rate
$3,350 / £2,200 / €2,800
APC
1.882 (2021)
Journal Impact Factor™
3.7 (2021)
CiteScore
Top-cited authors
Laurence Tianruo Yang
  • St. Francis Xavier University
Feng Xia
  • Federation University Australia
Lizhe Wang
Joel Rodrigues
  • Senac Faculty of Ceará
Aura Ganz
  • University of Massachusetts Amherst