Article

The OMNeT++ discrete event simulation system

Authors:
András Varga

Abstract

The paper introduces OMNeT++, a C++-based discrete event simulation system.


... Similarly to gem5, OMNeT++ is also a well-known open-source discrete-event simulator, which has gained widespread popularity as a network simulation platform both in the scientific community and in an industrial setting [59]. Simulation models in OMNeT++ ...
... A basic sequential simulation, which runs on a single CPU, is provided out-of-the-box. However, it features tools to facilitate improved scalability through simulation campaigns and parallel discrete-event simulation [59]. More specifically, OMNeT++ provides the tools to perform a parameter study on a model, which is used to explore the parameter space, or to perform repetitions using different seeds for a random number generator to increase the statistical accuracy. ...
... Nevertheless, a single processor will not be sufficient to complete large-scale scientific simulations or to explore a vast design space in a manageable time frame. OMNeT++ refers to batch-queuing, cluster computing or grid computing middleware for the adoption of (distributed) compute clusters [59]. The parallel discrete-event simulation support of OMNeT++ allows the developer to indicate separate subprocesses within a model. ...
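To make the module-based modelling style mentioned in these excerpts concrete, a minimal OMNeT++ simple module might look as follows. This is a sketch against the standard cSimpleModule API; the module name, gate names, and the delay parameter are illustrative choices, not taken from the cited paper.

```cpp
// Minimal OMNeT++ simple module sketch: receives a message on gate "in",
// waits a configurable processing delay, and forwards it on gate "out".
// In a complete model, the gates and the "delay" parameter would be
// declared in an accompanying NED file and set in omnetpp.ini.
#include <omnetpp.h>

using namespace omnetpp;

class Relay : public cSimpleModule
{
  protected:
    virtual void handleMessage(cMessage *msg) override
    {
        if (msg->isSelfMessage()) {
            // Delay elapsed: forward the queued message.
            send(msg, "out");
        }
        else {
            // Re-schedule the incoming message to ourselves after "delay",
            // modelling a processing latency read from the module parameter.
            scheduleAt(simTime() + par("delay").doubleValue(), msg);
        }
    }
};

Define_Module(Relay);
```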
Thesis
Full-text available
Industrial Cyber-Physical Systems (CPS) drive industry sectors worldwide, combining physical and software components into sophisticated interconnected systems. Distributed CPS (dCPS) further enhance these systems by interconnecting multiple distributed subsystems through intricate, complex networks. Researchers and industrial designers need to carefully consider various design options that have the potential to impact system behaviour, cost, and performance during the development of dCPS. However, the increased size and complexity present manufacturing companies with new challenges when designing their next-generation machines. Furthermore, objectively evaluating these machines' vast number of potential arrangements can be resource-intensive. One of the approaches designers can utilise to aid themselves with early directions in the design process is Design Space Exploration (DSE). Nevertheless, the vast amount of potential design points (a single system configuration) in the design space (collection of all possible design points) poses a significant challenge to scalably and efficiently reach an exact or reasonable solution during the design process. This thesis addresses the scalability challenge in the design process employed by researchers and designers of the next-generation complex dCPS. A baseline of understanding is constructed of the state-of-the-art, its complexity, research directions, and challenges in the context of DSE for dCPS and related research fields. To facilitate scalable and efficient DSE for dCPS, an evaluation environment is proposed, implemented, and evaluated. The research considers key design considerations for developing a distributed evaluation workflow that can dynamically be adapted to enable efficient and scalable exploration of the vast design space of complex, distributed Cyber-Physical Systems. Evaluation of the proposed environment employs a set of system models, representing design points within a DSE process, to assess the solution and its behaviour, performance, capability, and applicability in addressing the scalability challenge in the context of DSE for dCPS. During the evaluation, the performance and behaviour are investigated in three areas: (i) Simulation Campaign, (ii) Task Management Configuration, and (iii) Parallel Discrete-Event Simulation (PDES). Throughout the evaluation, it is demonstrated that the proposed environment is capable of providing scalable and efficient evaluation of design points in the context of DSE for dCPS. Furthermore, the proposed solution enables designers and researchers to tailor it to their environment through dynamic complex workflows and interactions, workload-level and task-level parallelism, and simulator and compute environment agnosticism. The outcomes of this research contribute to advancing the research field towards scalable and efficient evaluation for DSE of dCPS, supporting designers and researchers developing their next-generation dCPS. Nevertheless, further research can be conducted on the impact of a system's behavioural characteristics on the performance and behaviour of the proposed solution when using the PDES methodology. Additionally, the interaction between external applications and the proposed solution could be investigated to support and enable further complex interactions and requirements.
... Smart grid data communication network [3]. challenge in conducting this research is the implementation and adaptation of wireless technologies within the OMNeT++ [4] and ns-3 [5] network simulators, while accounting for the unique characteristics of NAN scenarios. ...
... It is important to mention that there are studies that compare different technologies using simulators in terms of various metrics. To address these limitations, this paper utilizes well-established simulators in the scientific community, such as OMNeT++ [4] or ns-3 [5], and integrates the exact location of smart meters for a more realistic simulation of network behavior. By utilizing the location of smart meters in Montreal, we provide a more precise evaluation of network performance and potential challenges, facilitating the design and optimization of the smart grid data communication network infrastructure. ...
Article
Smart Grids play a crucial role in managing electric power in a more efficient, secure, and sustainable manner. These grids rely on information and communication technologies to facilitate two-way communication between energy providers and consumers, enabling better coordination in the production and distribution of electricity. This article aims to compare the main technologies that can be used in Neighborhood Area Networks (NAN) for Smart Grid data communication networks. NANs are critical to Smart Grid infrastructure, providing advanced energy monitoring and control solutions like smart metering and demand management. The article surveys commonly used technologies such as IEEE 802.15.4g, IEEE 802.11s, LoRa, and LTE, highlighting their pros and cons. Unlike many existing works in the literature, this article does not limit itself to a theoretical comparison but also conducts network simulations using widely-used scientific tools such as ns-3 and OMNeT++. The article evaluates the technologies in two scenarios, a grid-like scenario, and a real-world scenario using actual smart meter locations in Montreal. The simulations’ results are analyzed using widely-used metrics such as Packet Delivery Ratio (PDR), network transit time, and compliant factor, and offer insights into selecting the right technology for NANs in Smart Grids based on various network scenarios.
... A popular simulation environment which makes use of this approach is, again, the OMNeT++ simulator [52]. The PDES implementation of OMNeT++ actually makes use of the algorithm proposed by Chandy, Misra, and Bryant discussed earlier [51]. Another well-known discrete event simulation library called SystemC also provides conservative PDES functionality, called Time-Decoupled SystemC [55]. ...
... Nevertheless, it also supports using optimistic synchronisation. However, this requires writing significantly more complex code and the implementation of a more complicated simulation kernel [51]. Even when this is all in place, optimistic synchronisation may be slow when excessive rollbacks frequently occur. ...
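The conservative rule referenced in these excerpts reduces to a simple invariant: a logical process (LP) may only execute events whose timestamps do not exceed the minimum clock over its input channels, and null messages stamped with the local clock plus a lookahead let neighbours advance. Below is a didactic sketch of that invariant, assuming simplified Event and InputChannel types; it is not OMNeT++'s actual kernel code.

```cpp
// Illustrative sketch of the Chandy-Misra-Bryant (null message) rule used
// by conservative PDES. A didactic fragment, not a real simulation kernel.
#include <algorithm>
#include <limits>
#include <vector>

struct Event { double time; };

struct InputChannel {
    // Timestamp of the most recent (real or null) message on this channel;
    // the sender guarantees nothing earlier will arrive later.
    double clock = 0.0;
};

// Lower bound on the timestamp of any future incoming event.
double safeHorizon(const std::vector<InputChannel>& inputs) {
    double horizon = std::numeric_limits<double>::infinity();
    for (const auto& ch : inputs)
        horizon = std::min(horizon, ch.clock);
    return horizon;
}

// Execute pending local events (kept sorted by time) only up to the safe
// horizon; a real LP would then emit null messages stamped
// localClock + lookahead on each output channel so neighbours can advance.
void step(std::vector<Event>& pending, const std::vector<InputChannel>& inputs,
          double& localClock, double lookahead) {
    double horizon = safeHorizon(inputs);
    while (!pending.empty() && pending.front().time <= horizon) {
        localClock = pending.front().time;  // advance local clock
        pending.erase(pending.begin());     // execute event (work omitted)
    }
    (void)lookahead;  // used when emitting null messages, omitted here
}
```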
Preprint
Full-text available
Industrial Cyber-Physical Systems (CPS) are sophisticated interconnected systems that combine physical and software components driving various industry sectors worldwide. Distributed CPS (dCPS) consist of many multi-core systems connected via complicated networks. During the development of dCPS, researchers and industrial designers need to consider various design options which have the potential to impact the system's behaviour, cost, and performance. The resulting increase in size and complexity poses new challenges for manufacturing companies in designing their next-generation machines. However, objectively evaluating these machines' vast number of potential arrangements can be resource-intensive. One potential alternative is to use simulations to model the systems and provide initial analysis with reduced overhead and costs. This literature review investigates state-of-the-art scalability techniques for system-level simulation environments, i.e. Simulation Campaigns, Parallel Discrete Event Simulations (PDES), and Hardware Accelerators. The goal is to address the challenge of scalable Design Space Exploration (DSE) for dCPS, discussing such approaches' characteristics, applications, advantages, and limitations. The conclusion recommends starting with simulation campaigns as those provide increased throughput, adapt to the number of tasks and resources, and are already implemented by many state-of-the-art simulators. Nevertheless, further research has to be conducted to define, implement, and test a sophisticated general workflow addressing the diverse sub-challenges of scaling system-level simulation environments for the exploration of industrial-size distributed Cyber-Physical Systems.
... Similar to NS2, OMNeT++ [16] is a discrete-event simulator based on C++ and Tcl/Tk. Modules built in OMNeT++ communicate with each other by passing messages, and the interface and functionality of the modules are separated, which supports model reuse. ...
... In particular, a well-known simulator infrastructure is used to emulate urban traffic and the VN. OMNeT++ [41] is a well-known discrete event simulator that can replicate a network. In particular, it provides a block-programming model similar to the ISO/OSI stack, so it is particularly easy to design cross-layer network protocols. ...
Preprint
Full-text available
This thesis addresses the use of Cooperative Intelligent Transport Systems (CITS) to improve road safety and efficiency by enabling vehicle-to-vehicle communication, highlighting the importance of secure and accurate data exchange. To ensure safety, the thesis proposes a Machine Learning-based Misbehavior Detection System (MDS) using Long Short-Term Memory (LSTM) networks to detect and mitigate incorrect or misleading messages within vehicular networks. Trained offline on the VeReMi dataset, the detection model is tested in real-time within a platooning scenario, demonstrating that it can prevent nearly all accidents caused by misbehavior by triggering a defense protocol that dissolves the platoon if anomalies are detected. The results show that while the system can accurately detect general misbehavior, it struggles to label specific types due to varying traffic conditions, implying the difficulty of creating a universally adaptive protocol. However, the thesis suggests that with more data and further refinement, this MDS could be implemented in real-world CITS, enhancing driving safety by mitigating risks from misbehavior in cooperative driving networks.
... First, the vehicular network is implemented using the open-source framework VEINS. The OMNeT++ framework (Varga, 2018) acts as the network simulation platform. Then, the traffic mobility simulation is implemented in SUMO (Lopez et al., 2018). ...
Article
Many studies have shown that air quality in cities is affected due to emissions of carbon from vehicles. As a result, policymakers (e.g., municipalities) intensely search for new ways to reduce air pollution due to its relation to health diseases. With this concern, connected vehicle technologies can leverage alternative on-road emissions control policies. The present investigation studies the impact on air pollution by (i) updating vehicles’ routes to avoid pollution exposure (route choice policy), (ii) updating vehicles’ speed limits (speed control policy), and (iii) considering electric vehicles (EVs). Vehicles are informed in advance about route conditions (i.e., on-road emissions) using the vehicular network. We found that by updating vehicle routes, 7.43% less CO emissions are produced within the evaluated region. Also, we find no evidence of significant emissions reductions in the case of limiting vehicles’ speed. Lastly, with 30% of EV penetration, safe CO emissions levels are reached.
... This section discusses the procedures employed for data collection and processing to create comprehensive datasets representative of the diverse urban scenarios selected, namely Montreal, Barcelona, and Rome. The specifics of simulation settings utilized via the OMNeT++ simulator [30] to generate the requisite datasets for each urban area are detailed in Table 1. Within each simulation run, smart meters send packets to the collector, thereby capturing an array of routing metrics pertaining to link and node statuses. ...
... Thus, the simulations for data collection were run separately over different areas, as illustrated in Fig. 3. The simulations were carried out using the network simulator OMNeT++ [24]. In each simulation, the smart meters sent packets through random routes to the collector. ...
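The traffic pattern described in these excerpts, where each smart meter periodically sends a packet towards a collector, corresponds to a standard OMNeT++ idiom in which a self-message acts as a send timer. The following is a hedged sketch; the module, gate, and parameter names are invented for illustration and not taken from the cited studies.

```cpp
// Sketch of a periodic OMNeT++ traffic source: a self-message serves as
// the send timer; each tick emits one packet towards the collector.
// Names ("sendInterval", "packetSize", "out") are illustrative only.
#include <omnetpp.h>

using namespace omnetpp;

class MeterApp : public cSimpleModule
{
  private:
    cMessage *sendTimer = nullptr;

  public:
    virtual ~MeterApp() { cancelAndDelete(sendTimer); }

  protected:
    virtual void initialize() override
    {
        sendTimer = new cMessage("sendTimer");
        scheduleAt(simTime() + par("sendInterval").doubleValue(), sendTimer);
    }

    virtual void handleMessage(cMessage *msg) override
    {
        if (msg == sendTimer) {
            // Emit one reading and re-arm the timer.
            auto *pkt = new cPacket("reading");
            pkt->setByteLength(par("packetSize").intValue());
            send(pkt, "out");
            scheduleAt(simTime() + par("sendInterval").doubleValue(), sendTimer);
        }
        else {
            delete msg;  // this simple source ignores incoming traffic
        }
    }
};

Define_Module(MeterApp);
```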
Article
Full-text available
This research explores the potential of Machine Learning (ML) to enhance wireless communication networks, specifically in the context of Wireless Smart Grid Networks (WSGNs). We integrated ML into the well-established Routing Protocol for Low-Power and Lossy Networks (RPL), resulting in an advanced version called ML-RPL. This novel protocol utilizes CatBoost, a Gradient Boosted Decision Trees (GBDT) algorithm, to optimize routing decisions. The ML model, trained on a dataset of routing metrics, predicts the probability of successfully reaching a destination node. Each node in the network uses the model to choose the route with the highest probability of effectively delivering packets. Our performance evaluation, carried out in a realistic scenario and under various traffic loads, reveals that ML-RPL significantly improves the packet delivery ratio and minimizes end-to-end delay, making it a promising solution for more efficient and responsive WSGNs.
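The routing decision described in this abstract can be summarised in a few lines: score each candidate route with the trained model and keep the argmax. The sketch below is hypothetical; CandidateRoute, the feature set, and predictDeliveryProbability are stand-ins for the paper's CatBoost model and metric set, not its actual interface.

```cpp
// Hypothetical sketch of the forwarding decision described for ML-RPL:
// each candidate route is scored by a trained model mapping routing
// metrics to a predicted delivery probability; the node keeps the argmax.
#include <vector>

struct CandidateRoute {
    int nextHop;                   // identifier of the candidate next hop
    std::vector<double> features;  // link/node status metrics (placeholders)
};

// Placeholder scorer so the sketch is self-contained; a real node would
// evaluate the trained gradient-boosted decision tree model here.
double predictDeliveryProbability(const std::vector<double>& features) {
    return features.empty() ? 0.0 : features.front();
}

// Pick the route with the highest predicted delivery probability.
int selectNextHop(const std::vector<CandidateRoute>& candidates) {
    int best = -1;
    double bestScore = -1.0;
    for (const auto& c : candidates) {
        double p = predictDeliveryProbability(c.features);
        if (p > bestScore) { bestScore = p; best = c.nextHop; }
    }
    return best;  // -1 when there are no candidates
}
```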
... We developed a discrete event simulation (DES) 12 using the MATLAB-based simulation tool SimEvents (online supplemental appendix 1). Our DES simulates the movement of product candidates in each pooled-funding mechanism design option through the PRND R&D pipeline. ...
Article
Full-text available
Introduction Poverty-related and neglected diseases (PRNDs) cause over three million deaths annually. Despite this burden, there is a large gap between actual funding for PRND research and development (R&D) and the funding needed to launch PRND products from the R&D pipeline. This study provides an economic evaluation of a theoretical global pooled-funding mechanism to finance late-stage clinical trials of PRND products. Methods We modelled three pooled-funding design options, each based on a different level of coverage of candidate products for WHO’s list of PRNDs: (1) vaccines covering 4 PRNDs, (2) vaccines and therapeutics covering 9 PRNDs and (3) vaccines, therapeutics and diagnostics covering 30 PRNDs. For each option, we constructed a discrete event simulation of the 2019 PRND R&D pipeline to estimate required funding for phase III trials and expected product launches through 2035. For each launch, we estimated global PRND treatment costs averted, deaths averted and disability-adjusted life-years (DALYs) averted. For each design option, we calculated the cost per death averted, cost per DALY averted, the benefit–cost ratio (BCR) and the incremental cost-effectiveness ratio (ICER). Results Option 1 averts 18.4 million deaths and 516 million DALYs, has a cost per DALY averted of US$84 and yields a BCR of 5.53. Option 2 averts 22.9 million deaths and 674 million DALYs, has a cost per DALY averted of US$75, an ICER over option 1 of US$49 and yields a BCR of 3.88. Option 3 averts 26.9 million deaths and 1 billion DALYs, has a cost per DALY averted of US$114, an ICER over option 2 of US$186 and yields a BCR of 2.52. Conclusions All 3 options for a pooled-funding mechanism—vaccines for 4 PRNDs, vaccines and therapeutics for 9 PRNDs, and vaccines, therapeutics and diagnostics for 30 PRNDs—would generate a large return on investment, avert a substantial proportion of the global burden of morbidity and mortality for diseases of poverty and be cost-effective.
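For reference, the standard health-economics definitions behind the metrics quoted above are given below; the study's exact operationalisation may differ in detail.

```latex
\[
\text{cost per DALY averted} = \frac{\text{total cost}}{\text{DALYs averted}},
\qquad
\text{BCR} = \frac{\text{monetised benefits}}{\text{total cost}},
\]
\[
\text{ICER}_{2\,\text{vs}\,1}
  = \frac{\text{Cost}_2 - \text{Cost}_1}{\text{DALYs averted}_2 - \text{DALYs averted}_1}.
\]
```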
... In the literature, model training and testing is performed either according to synthetic datasets or according to real traffic traces. Synthetic datasets are commonly generated by event driven simulators (e.g., available in MATLAB, OMNeT++ [109]), by traffic demand models [29], or by distributions describing the traffic behavior. Indicatively, the Poisson distribution is used for simulating arrivals and departure times, while the log-normal distribution is also applied to simulate the holding times or traffic volumes. ...
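The distributional recipe in this excerpt is straightforward to reproduce; for example, with the C++ <random> facilities, exponentially distributed inter-arrival times realise a Poisson arrival process and log-normal draws model holding times. The rate and shape parameters below are arbitrary illustrative values.

```cpp
// Generating a synthetic traffic trace of the kind described above:
// Poisson arrivals (exponentially distributed inter-arrival times) with
// log-normally distributed holding times. Parameters are illustrative.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);  // fixed seed for a reproducible trace

    const double arrivalRate = 2.0;  // mean of 2 arrivals per unit time
    std::exponential_distribution<double> interArrival(arrivalRate);
    std::lognormal_distribution<double> holdingTime(/*m=*/0.0, /*s=*/0.5);

    double t = 0.0;
    for (int i = 0; i < 10; ++i) {
        t += interArrival(rng);           // next Poisson arrival instant
        double hold = holdingTime(rng);   // service/holding duration
        std::printf("arrival %d: t=%.3f hold=%.3f\n", i, t, hold);
    }
    return 0;
}
```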
Article
Full-text available
The unprecedented growth of the global Internet traffic, coupled with the large spatio-temporal fluctuations that create, to some extent, predictable tidal traffic conditions, are motivating the evolution from reactive to proactive and eventually towards adaptive optical networks. In these networks, traffic-driven service provisioning can address the problem of network over-provisioning and better adapt to traffic variations, while keeping the quality-of-service at the required levels. Such an approach will reduce network resource over-provisioning and thus reduce the total network cost. This survey provides a comprehensive review of the state of the art on machine learning (ML)-based techniques at the optical layer for traffic-driven service provisioning. The evolution of service provisioning in optical networks is initially presented, followed by an overview of the ML techniques utilized for traffic-driven service provisioning. ML-aided service provisioning approaches are presented in detail, including predictive and prescriptive service provisioning frameworks in proactive and adaptive networks. For all techniques outlined, a discussion on their limitations, research challenges, and potential opportunities is also presented.
... To evaluate the protocol's performance, we have implemented the proposed MAC protocol in GreenCastalia [62], an extension of the Castalia 3.3 simulator [63]. Castalia is an open-source network simulator and is built with OMNeT++ [64]. It is a widely used and actively maintained network simulator in the WSN research community. ...
Article
Full-text available
The dynamic nature of the energy harvesting rate, arising because of ever changing weather conditions, raises new concerns in energy harvesting based wireless sensor networks (EH-WSNs). Therefore, this drives the development of energy aware EH solutions. Previously, many Medium Access Control (MAC) protocols have been developed for EH-WSNs. However, optimizing MAC protocol performance by incorporating predicted future energy intake is relatively new in EH-WSNs. Furthermore, existing MAC protocols do not fully harness the high harvested energy to perform aggressively despite the availability of sufficient energy resources. Therefore, a prediction-based adaptive duty cycle MAC protocol, called PADC-MAC, has been proposed that incorporates current and future harvested energy information in a mathematical formulation to improve network performance. Furthermore, a machine learning model, namely a nonlinear autoregressive (NAR) neural network, is employed that achieves good prediction accuracy under dynamic harvesting scenarios. As a result, it enables the receiver node to perform more aggressively when there is a sufficient inflow of incoming harvested energy. In addition, PADC-MAC uses a self-adaptation technique that reduces energy consumption. The performance of PADC-MAC is evaluated using GreenCastalia in terms of packet delay, network throughput, packet delivery ratio, energy consumption per bit, receiver energy consumption, and total network energy consumption using realistic harvesting data for 96 consecutive hours under dynamic solar harvesting conditions. The simulation results show that PADC-MAC reduces the average packet delay of the highest-priority packets and of all packets, the energy consumption per bit, and the total energy consumption by more than 10.7%, 7.8%, 81%, and 76.4%, respectively, when compared to three state-of-the-art protocols for EH-WSNs.
... Eventually, we validated an optimized solution by analysing the latencies of data flows in an event-driven simulator of a packet-switched xHaul network implemented in the OMNeT++ v5.6.1 environment [43]. Namely, we applied MaSCA-RA for planning the xHaul network, i.e., to optimize the DU placement and routing of flows, and next we simulated the transmission and routing of the packet flows between the network elements. ...
Article
Full-text available
Packet-switched xHaul networks based on Ethernet technology are considered a promising solution for assuring convergent, cost-effective transport of diverse radio data traffic flows in dense 5G radio access networks (RANs). A challenging optimization problem in such networks is the placement of distributed processing units (DUs), which realize a subset of virtualized baseband processing functions on general-purpose processors at selected processing pool (PP) facilities. The DU placement involves the problem of routing of related fronthaul and midhaul data flows between network nodes. In this work, we focus on developing optimization methods for joint placement of DUs and routing of flows with the goal to minimize the overall cost of PPs activation and processing in the network, which we refer to as the PPC-DUP-FR problem. We account for limited processing and transmission resources as well as for stringent latency requirements of data flows in 5G RAN. The latency constraint makes the problem particularly difficult in a packet-switched xHaul network since it involves the non-linear and dynamic estimation of the latencies caused by buffering of packets in the switches. The latency model that we apply in this work is based on worst-case calculations with improved latency estimations that skip the processing of co-routed but non-affecting flows. We use a mixed-integer programming (MIP) approach to formulate and solve the PPC-DUP-FR optimization problem. Moreover, we develop a heuristic method that provides optimized solutions to larger PPC-DUP-FR problem instances, which are too complex for the MIP method. Numerical experiments performed in different network scenarios indicate the effectiveness of the heuristic in solving the PPC-DUP-FR problem. In particular, the heuristic achieves up to 63% better results than MIP (at the MIP optimality gap equal to 76%) in a medium-size mesh network, in which the MIP problem is unsolvable for higher traffic demands within reasonable runtime limits. In larger networks, MIP is able to provide some results only for the PPC-DUP-FR problem instances with very low traffic demands, whereas the solutions generated by the heuristic are at least 83% better than the ones achieved with MIP. Also, the analysis performed shows a significant impact of the PP cost factors considered and of the level of cost differentiation of PP nodes on the overall PP cost in the network. Finally, simulation results of a case-study packet xHaul network confirm the correctness of the latency model used.
... [14] explicitly points to existing combinations with network simulators: the frameworks iTETRIS Control System (iCS) [13,130], Eclipse MOSAIC [7,148], and Veins [155,156]. iCS is based on the discrete network simulator ns-3 [10], Veins on the discrete network simulator OMNeT++ [158,159], and Eclipse MOSAIC provides a runtime environment that supports both network simulators. Weber et al. [138] extend this, although without focusing on system-level simulation. ...
Thesis
As part of this work, the OMNeT++ simulation framework 5G-Sim-V2I/N was developed for evaluating the performance of QoS requirements in V2I and V2N use cases that are based on data exchange via the 5G Uu air interface. The complete protocol stack of the 5G user plane was modelled so that QoS requirements could be measured at the application level for simulated vehicles and Internet servers. Three use cases were evaluated with the simulation model against the QoS requirements specified by 3GPP and the 5GAA. First, cooperative perception via V2I was examined. The results show that the QoS requirements can be met with a bandwidth of 10 MHz, although this depends on the number of vehicles simultaneously connected to a base station. Two V2N use cases were then examined. In a multi-application scenario, vehicles run up to four applications with different QoS requirements in parallel. The third use case represents a remote driving scenario in which a few remote vehicles are simulated alongside human-driven vehicles and likewise compete for radio resources. The simulation results show that PFQ scheduling, which distributes radio resources fairly, is not sufficient to satisfy QoS requirements. The MAC-layer scheduling procedure, which must ensure a performant distribution of radio resources particularly in base stations, was therefore extended with the 5G QoS model and used in several scheduling variants, computing a scheduling priority for each application that requires resources. The scheduling procedures were compared with PFQ. In both use cases, a combination of standard priority values and individual parameters such as channel quality best provides the conditions for meeting the different QoS requirements. The thesis closes with an outlook on the next mobile communications generation, 6G, and its implications for V2X, as well as possible extensions of the simulation model.
... To evaluate OppNet protocols, we can use simulators such as ONE, which is an integrated solution that can simulate delay-tolerant wireless protocols with synthetic and trace-based mobility models; 25 Adyton, which simulates OppNets only with trace-based mobility models; 26 and the Opportunistic protocol simulator (OPS), 27 which is an extension of the discrete event simulator OMNeT++. 28 OPS can simulate OppNet protocols with synthetic and trace-based mobility models. 27 ...
Article
Full-text available
According to state-of-the-art research, mobile network simulation is preferred over real testbeds, especially to evaluate communication protocols used in Opportunistic Networks (OppNet) or Mobile Ad hoc NETworks (MANET). The main reason behind it is the difficulty of performing experiments in real scenarios. However, in a simulation, a mobility model is required to define users’ mobility patterns. Trace-based models can be used for this purpose, but they are difficult to obtain, and they are not flexible or scalable. Another option is TRAce-based ProbabILiStic (TRAILS). TRAILS mimics the spatial dependency, geographic restrictions, and temporal dependency from real scenarios. In addition, with TRAILS, it is possible to scale the number of mobile users and simulation time. In this paper, we dive into the algorithms used by TRAILS to generate mobility graphs from real scenarios and simulate human mobility. In addition, we compare mobility metrics of TRAILS simulations, real traces, and another synthetic mobility model such as Small Worlds in Motion (SWIM). Finally, we analyze the performance of an implementation of the TRAILS model in computation time and memory consumption. We observed that TRAILS simulations represent the interaction among users of real scenarios with higher accuracy than SWIM simulations. Furthermore, we found that a simulation with TRAILS requires less computation time than a simulation with real traces and that a TRAILS graph consumes less memory than traces.
... In this study, the data aggregation and prediction process are based mainly on the prediction method proposed in our previous work. 22 Since it has not been defined so far in the literature, it forms an integral part of the proposed framework. The data aggregation aims to tackle the problem of traffic parameter accuracy. ...
Article
Full-text available
The arrival of cloud computing technology promises innovative solutions to the problems inherent in existing vehicular ad hoc network (VANET) networks. Because of the highly dynamic nature of these networks in crowded conditions, some network performance improvements are needed to anticipate and disseminate reliable traffic information. Although several approaches have been proposed for the dissemination of data in the vehicular clouds, these approaches rely on the dissemination of data from conventional clouds to vehicles, or vice versa. However, anticipating and delivering data proactively, based on query messages or driven by events, has not been addressed so far by these approaches. Therefore, in this paper, a VANET-Cloud layer is proposed for traffic management and network performance improvements during congested conditions. For the traffic management, the proposed layer integrates the benefits of the connected sensor network (CSN) to collect traffic data and the cloud infrastructure to provide on-demand and automatic cloud services. In this work, traffic services use a data exchange mechanism to propagate the predicted data using a fuzzy aggregation technique. In the evaluation phase, simulation results demonstrate the effectiveness of the proposed VANET-Cloud layer to dramatically improve traffic safety and network performance as compared with recent works.
Article
Full-text available
Railway transportation is a cost-effective and reliable mode of transportation. The construction of an entire railway network is a challenging task that requires careful planning and execution. Infrastructure and schedule are just two of the many elements that must be accurate and affordable for greater performance. Train simulators are computer-based simulations of rail transport operations that can help in the planning, creation, and administration of effective train operations. These simulators can model various aspects of rail transport, such as train movements, signaling systems, and track layouts, allowing railway operators to test and optimize their operations before implementing them in the real world. In this study, 20 simulators were examined with respect to various parameters such as simulator type, category, and mode of operation, together with the top 10 companies offering railway-related services in 2022. The results of this study offer insight into the simulation, creation, and administration of effective train operations.
Article
Full-text available
The literature on the state-of-the-art framework for VANETs, Vehicles in Network Simulation (VEINS), is primarily sparse and fragmented. The combination of VANETs and VEINS can improve road safety, efficiency, and user experience for connected and autonomous vehicles. This research examined existing trends and knowledge gaps to provide actionable insights for technical contexts and researchers. Therefore, this systematic literature evaluation was conducted to create a full classification of the article ecosystem. The literature applies the VEINS framework to simulate and evaluate in-vehicle personalized entertainment recommendations based on real-time traffic data and user preferences. We examine service metrics for VANET-integrated vehicle content exchange. Three databases were consulted throughout this study: Scopus, ScienceDirect, and IEEE Xplore. The databases had extensive VANET-related research built on the VEINS framework. Then, screening was completed based on service considerations. The topic is thoroughly covered in this categorization. The taxonomy proposes categories and subcategories. The initial group includes papers discussing different aspects of VANET-based VEINS framework applications (35/98 total). The second group consists of pieces that focus on the solution (15/98 total). Network-related articles (48/98 total) make up the final section. This work concludes with a discussion of the VEINS framework’s design and bidirectional connectivity. This study could be helpful for researchers working on VANETs and the VEINS framework by highlighting areas where further development is necessary.
Conference Paper
Field-deployed robotic fleets can provide solutions that improve operational efficiency, control operational costs, and provide farmers with transparency over day-to-day scouting operations. The topology of agricultural environments, such as polytunnels, provides a basic configuration that can be exploited to create topological maps aiding operational planning and robot navigation. However, these environments, optimised for human operations or large farming vehicles, pose a major challenge for multiple moving robots to coordinate their navigation while performing tasks. An unmodified farm environment, not tailored for robotic fleet deployments, can cause traffic bottlenecks, thereby affecting the overall efficiency of the fleet. In this work, we propose a Genetic Algorithm-based Topological Optimisation (GATO) algorithm that discretises the search space of topological modifications into finite integer combinations. Each solution is encoded as an integer vector that contains the location information of the topology modification. We evaluate our algorithm through a discrete event simulation of the picking and in-field logistics processes on a commercial strawberry farm, and the results demonstrate its effectiveness in identifying topological modifications that enhance the efficiency of robotic fleet operations.
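The integer-vector encoding described in this abstract can be sketched directly. The types and operators below are hypothetical stand-ins; in particular, the paper's fitness function (the picking and in-field logistics discrete event simulation) is reduced to a placeholder stub.

```cpp
// Hypothetical sketch of the integer-vector encoding described for GATO:
// each genome stores, per topology modification, the index of the discrete
// candidate location it is applied at; mutation re-samples one gene.
#include <random>
#include <vector>

using Genome = std::vector<int>;  // gene = candidate-site index of one modification

Genome randomGenome(int numMods, int numSites, std::mt19937& rng) {
    std::uniform_int_distribution<int> site(0, numSites - 1);
    Genome g(numMods);
    for (int& gene : g) gene = site(rng);
    return g;
}

Genome mutate(Genome g, int numSites, std::mt19937& rng) {
    std::uniform_int_distribution<int> genePick(0, (int)g.size() - 1);
    std::uniform_int_distribution<int> site(0, numSites - 1);
    g[genePick(rng)] = site(rng);  // move one modification to another site
    return g;
}

// Stand-in: a real implementation would run the picking/logistics DES on
// the modified topological map and return the measured fleet efficiency.
double fitness(const Genome& g) {
    double score = 0.0;
    for (int gene : g) score += gene;  // placeholder value only
    return score;
}
```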
Article
Full-text available
With the ever-increasing size of training models and datasets, network communication has emerged as a major bottleneck in distributed deep learning training. To address this challenge, we propose an optical distributed deep learning (ODDL) architecture. ODDL utilizes a fast yet scalable all-optical network architecture to accelerate distributed training. One of the key features of the architecture is its flow-based transmit scheduling with fast reconfiguration. This allows ODDL to allocate dedicated optical paths for each traffic stream dynamically, resulting in low network latency and high network utilization. Additionally, ODDL provides physically isolated and tailored network resources for training tasks by reconfiguring the optical switch using LCoS-WSS technology. The ODDL topology also uses tunable transceivers to adapt to time-varying traffic patterns. To achieve accurate and fine-grained scheduling of optical circuits, we propose an efficient distributed control scheme that incurs minimal delay overhead. Our evaluation on real-world traces showcases ODDL’s remarkable performance. When implemented with 1024 nodes and 100 Gbps bandwidth, ODDL accelerates VGG19 training by 1.6× and 1.7× compared to conventional fat-tree electrical networks and photonic SiP-Ring architectures, respectively. We further build a four-node testbed, and our experiments show that ODDL can achieve comparable training time compared to that of an ideal electrical switching network.
Article
Full-text available
The number of bugs that a software test data set finds determines its effectiveness. A useful technique for assessing the efficacy of a test set is mutation testing. The primary issues with the mutation test are its cost and time requirements. Close to 40% of the injected bugs in the mutation test are effect-less (equivalent). Reducing the number of generated total mutants by decreasing equivalent mutants and reducing the execution time of the mutation test are the main objectives of this study. An error-propagation aware mutation test approach has been suggested in this research. Three steps make up the process. To find a collection of instruction-level characteristics effective on the error propagation rate, the data and instructions of the input program were evaluated in the first step. Utilizing supervised machine learning techniques, an instruction classifier was developed using the prepared dataset in the second step. After classifying the program instructions automatically with the created classifier, the mutation test is performed only on the identified error-propagating instructions; the identified non-error-propagating instructions are not mutated in the proposed mutation testing. The conducted experiments on the set of standard benchmark programs indicate that the proposed method causes about a 19% reduction in the number of generated mutants. Furthermore, the proposed method causes a 32.24% reduction in the live mutants. It should be noted that the proposed method eliminated only the effect-less mutants. The key technical benefit of the suggested solution is that mutation of the instructions that do not propagate errors is avoided. These findings can lead to a performance improvement in the existing mutation-test methods and tools.
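The three-step pipeline described above hinges on one filtering decision: mutate an instruction only if the classifier predicts that it propagates errors. A hypothetical sketch of that step follows; the Instruction type and the classifier call are stand-ins for the paper's toolchain, not its actual interface.

```cpp
// Sketch of the filtering step: mutants are generated only for
// instructions that a trained classifier predicts to be error-propagating.
#include <string>
#include <vector>

struct Instruction {
    std::string text;
    std::vector<double> features;  // instruction-level features (step 1)
};

// Stand-in for the supervised model trained in step 2; a real
// implementation would evaluate the learned classifier here.
bool propagatesError(const Instruction& inst) {
    return !inst.features.empty() && inst.features.front() > 0.5;
}

// Only these instructions are mutated in step 3; the rest are skipped.
std::vector<Instruction> selectMutationTargets(const std::vector<Instruction>& program) {
    std::vector<Instruction> targets;
    for (const auto& inst : program)
        if (propagatesError(inst))
            targets.push_back(inst);
    return targets;
}
```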
Article
Full-text available
Smart cities rely on real-time sensor data to improve services and quality of life. However, the rapid growth of sensor data poses challenges in transmission, storage, and processing. This paper presents a case study on estimating sensor data generation and the role of Opportunistic Networks (OppNets) in data collection in Ahmedabad and Gandhinagar, India. We highlight the challenges of managing large amounts of sensor data, particularly in densely populated cities. We propose OppNets as a promising solution, as they can leverage the mobility of devices to relay data in a distributed manner. We present a detailed analysis of sensor requirements and data generation for different smart city applications and discuss the potential benefits of OppNets for smart city data collection. Our study shows that Ahmedabad and Gandhinagar require approximately 4.6 million and 1.3 million sensors, producing an estimated 2702 Terabytes and 704 Terabytes of sensor data daily, respectively.
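Dividing the quoted totals gives the implied average volume per sensor, a back-of-the-envelope consistency check rather than a figure from the paper:

```latex
\[
\frac{2702\ \text{TB/day}}{4.6\times10^{6}\ \text{sensors}} \approx 587\ \text{MB per sensor per day},
\qquad
\frac{704\ \text{TB/day}}{1.3\times10^{6}\ \text{sensors}} \approx 542\ \text{MB per sensor per day}.
\]
```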
Article
This study proposes a distributed and extensible cross-region vehicle authentication scheme with reputation for improving the security and efficiency of cross-region vehicle authentication. The existing authentication schemes demonstrate the following drawbacks: 1) each vehicle is preloaded with the same system private key, which may be leaked, in which case the entire system would be compromised; 2) other schemes rely on a trusted authority to aid in selecting some cluster head nodes; 3) the existing cross-region authentication schemes are not flexible and scalable since they depend on fixed roadside infrastructure. With the proposed scheme, each vehicle stores a long-term private key that is different from those of other vehicles, thereby avoiding a system crash when a single vehicle is compromised. When a cross-region vehicle enters a new region, it can verify the reputation value of the surrounding vehicles to select the edge computing vehicle. The formal security proof shows that the proposed scheme has adequate security under the real-or-random model. The performance evaluation of our scheme against several related schemes reveals that it generates relatively low computation and communication overhead, is more robust, and achieves minimum packet loss ratio and delay.
Article
Full-text available
Connected and Autonomous Vehicles (CAVs) expect to dramatically improve road safety and efficiency of the transportation system. However, CAVs can be vulnerable to attacks at different levels, e.g., attacks on intra-vehicle networks and inter-vehicle networks. Those malicious attacks not only result in loss of confidentiality and user privacy but also lead to more serious consequences such as bodily injury and loss of life. An intrusion detection system (IDS) is one of the most effective ways to monitor the operations of vehicles and networks, detect different types of attacks, and provide essential information to mitigate and remedy the effects of attacks. To ensure the safety of CAVs, it is extremely important to detect various attacks accurately in a timely fashion. The purpose of this survey is to provide a comprehensive review of available machine learning (ML) based IDS for intra-vehicle and inter-vehicle networks. Additionally, this paper discusses publicly available datasets for CAV and offers a summary of the many current testbeds and future research trends for connected vehicle environments.
Chapter
We present a network emulator for dynamic link networks, i.e., networks whose parameter values vary; for example, satellite communication networks where bandwidth capacity varies. We describe the design of the emulator, which allows replicating any network system, through the use of state-of-the-art virtualization technologies. This paper is also devoted to the verification of the datasets produced by monitoring the network emulation. We propose a model-based design for a dynamic link network emulator and discuss how to extract data for network parameters such as bandwidth, delay, etc. These data can be verified to ensure a number of desired properties. The main goal is to try to guarantee that the emulator behaves as the real physical system. We rely on model checking strategies for the dataset validation; in particular, we utilize a Satisfiability Modulo Theories (SMT) solver. The properties to check can include one or several network parameter values and can contain dependencies between various network instances. Experimental results showcase the pertinence of our emulator and proposed approach. Keywords: Model-based Design, Dynamic link Networks, Emulator, Many-sorted First Order Logic, Satisfiability Modulo Theories
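The dataset checks described in this chapter can be illustrated with the Z3 SMT solver's C++ API: assert the desired property as constraints, assert a monitored sample, and ask for satisfiability. The bounds and sample values below are invented for illustration; the chapter's actual properties involve many-sorted first-order formulas over several network instances.

```cpp
// Sketch of an SMT consistency check using the Z3 C++ API: a monitored
// sample of network parameters is checked against a range property.
// All parameter names, bounds, and values are illustrative only.
#include <iostream>
#include <z3++.h>

int main() {
    z3::context ctx;
    z3::solver solver(ctx);

    z3::expr bandwidth = ctx.real_const("bandwidth_mbps");
    z3::expr delay     = ctx.real_const("delay_ms");

    // Property: on this dynamic link, bandwidth and delay stay in range.
    solver.add(bandwidth >= 10 && bandwidth <= 100);
    solver.add(delay >= 1 && delay <= 50);

    // A monitored sample extracted from the emulation dataset.
    solver.add(bandwidth == ctx.real_val("42.5"));
    solver.add(delay == ctx.real_val("7.3"));

    std::cout << (solver.check() == z3::sat ? "sample satisfies the property"
                                            : "property violated")
              << std::endl;
    return 0;
}
```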
Article
Network models are an essential block of modern networks. For example, they are widely used in network planning and optimization. However, as networks increase in scale and complexity, some models present limitations, such as the assumption of Markovian traffic in queuing theory models, or the high computational cost of network simulators. Recent advances in machine learning, such as Graph Neural Networks (GNN), are enabling a new generation of network models that are data-driven and can learn complex non-linear behaviors. In this paper, we present RouteNet-Fermi, a custom GNN model that shares the same goals as Queuing Theory, while being considerably more accurate in the presence of realistic traffic models. The proposed model predicts accurately the delay, jitter, and packet loss of a network. We have tested RouteNet-Fermi in networks of increasing size (up to 300 nodes), including samples with mixed traffic profiles — e.g., with complex non-Markovian models — and arbitrary routing and queue scheduling configurations. Our experimental results show that RouteNet-Fermi achieves similar accuracy as computationally-expensive packet-level simulators and scales accurately to larger networks. Our model produces delay estimates with a mean relative error of 6.24% when applied to a test dataset of 1,000 samples, including network topologies one order of magnitude larger than those seen during training. Finally, we have also evaluated RouteNet-Fermi with measurements from a physical testbed and packet traces from a real-life network.
Conference Paper
By the time of writing this paper, countries around the world are in a race against time to reduce or stop the spread of the SARS-CoV-2 (COVID-19) virus. Relying on typical measures, from social distancing to partial and full lockdowns, seems to be insufficient. However, involving modern technologies should be sought in order to support governments' measures to decrease the spread of the virus. Vehicular Ad-Hoc Networks (VANET), among many other technologies, can fit into the current scene. Through their Onboard Units (OBU), and by leveraging modern communication technologies, vehicles can play a vital role in reducing the spread of COVID-19 and minimizing the effects of the pandemic. In this paper, a VANET application layer model for monitoring and detecting COVID-19 symptomatic cases is proposed. In the proposed model, a vehicle's OBU senses the driver's and passengers' temperatures for any abnormalities and warns surrounding vehicles as well as any Roadside Units (RSU) within the transmission range with warning messages containing the detected case's temperature and the vehicle's coordinates and identification. The information disseminated from the vehicles to the RSU is used to track the vehicles and take actions accordingly. The proposed model presents an expansion of the current VANET safety and healthcare applications by utilizing the available distant thermometers and current wireless communication. A proof-of-concept model that can monitor, detect, and warn about vehicles with fever-symptomatic cases of COVID-19 in real time was developed and verified using simulation.
Article
Full-text available
The COVID-19 pandemic is currently having disastrous effects on every part of human life everywhere in the world. There have been terrible losses for the entire human race in all nations and areas. It is crucial to take good precautions and prevent COVID-19 because of its high infectiousness and fatality rate. Transportation systems have been identified as one of the key spreading routes. Therefore, improving infection tracking and healthcare monitoring for high-mobility transportation systems is indispensable for pandemic control. In order to enhance driving enjoyment and road safety, 5G-enabled vehicular fog computing may gather and interpret pertinent vehicle data, which opens the door to non-contact autonomous healthcare monitoring. Due to the urgent need to contain the pandemic in transportation, this paper proposes an efficient mutual authentication scheme for COVID-19 vehicles in 5G-enabled vehicular fog computing. The proposed scheme uses two values of a special flag, SF = 0 and SF = 1, denoting normal and COVID-19 vehicles, respectively. The proposed scheme satisfies privacy and security requirements and supports COVID-19 and healthcare solutions. Finally, the performance evaluation section shows that the proposed scheme is more efficient in terms of communication and computation costs as compared to most recent related works.
Article
Vehicular ad hoc network (VANET) is an emerging technology that can significantly improve the efficiency of transportation systems and mitigate traffic accidents by exchanging traffic-related messages or announcements. Nevertheless, there has not been a consensus on how to generate, distribute, and validate trustworthy announcements in such an untrusted wireless environment. Security and privacy, inspiration mechanism, and resource integration are significant challenges for announcement generation and dissemination. In this paper, a secure and trustworthy announcement dissemination scheme is realized for location-based service (LBS) application in VANET. A blockchain-assisted vehicular cloud (VC) architecture is proposed to harvest underutilized heterogeneous resources of vehicles participating in VANET. Moreover, the technologies of blockchain and smart contract are adopted to classify vehicles into different levels automatically by bidding for bonuses. What’s more, vehicles can generate trustworthy announcements with the help of neighbor vehicles by adopting the technology of threshold signature. Meanwhile, the reputation of announcements is evaluated for trust management. Formal security analysis shows that the proposed scheme satisfies fundamental security and privacy requirements in VANET. Experimental results show that the proposed scheme is robust and efficient.
Article
The organization of vehicles into platoons is very promising due to its contributions to intelligent transportation systems, through the reduction in traffic congestion and fuel consumption. As the powerful processing units and other resources embedded in vehicles may not be fully utilized during the entire travel period of vehicular platoons, we claim that the vehicles can collaborate as a single unit to form a federated platoon-based vehicular cloud to meet the high demand for computing resources and services in vehicular environments. In order to make advancements in the deployment of federated platoon-based vehicular cloud, data partitioning and scheduling schemes that distribute data chunks of large and divisible application data among platoon vehicles are proposed considering the characteristics of vehicular resources, network parameters, and the position of vehicles in the platoon. The data partitioning and scheduling schemes, which are modeled based on the different information flow topologies of vehicular platoon, consist of the Bi-Directional Recursive (BD-R), Bi-Directional Interlaced (BD-I), Bi-Directional Lead Recursive (BDL-R), Bi-Directional Lead Interlaced (BDL-I), Bi-Directional Lead Aggregate Recursive (BDLA-R) and the Bi-Directional Lead Aggregate Interlaced (BDLA-I). Performance analysis carried out through realistic simulations showed that while the BDL-R and BDL-I schemes have the best performance in terms of task execution time, the other schemes have the advantage of enforcing priority in unprocessed data transmission and dependency in the aggregation of processed data chunks by each platoon member. Analysis of the impact of the proposed data partitioning and scheduling schemes on platoon string stability will be examined in future studies.