Figure - available from: Discover Internet of Things
The Master-Worker application module

Source publication
Article
Full-text available
Smart devices in various application areas are becoming increasingly prevalent for efficient handling of multiple critical activities. One such area of interest is high-security militarized environments. Due to military zones’ harsh and unpredictable nature, monitoring devices deployed in such environments must operate without power interruption fo...

Citations

... They demonstrate how middleware solutions, especially for those used in tactical settings, might offer features that help lessen some of the drawbacks associated with IoT-enabled military applications. The work done in [114] proposes two application models, "the sequential module" and the "master-worker module," to process data collected by the end devices within the IoMT network. The work explores energy-efficient strategies for managing IoT devices in military environments using fog computing. ...
... In this paper, we performed a comprehensive analysis of solutions developed to optimize communication capabilities in IoBT systems. The proposed techniques deal with specific challenges such as power management [5, 20, 107, 108, 113, 114], network security [45, 119-123, 126], cyber security [91, 124, 125, 129-132], data management [136, 137], and interoperability [97-99, 115]. Our analysis distinguishes the approaches employed and highlights the challenges and performance in the studies considered. ...
... The drawback is that the work does not explicitly discuss the scalability of the proposed protocol, particularly in large-scale networks, where the number of nodes and network complexity can significantly impact performance. The strength of the master-worker module in [114] lies in the considerable savings achieved in energy consumption, making it effective in handling smart devices. The drawback is that the paper only focused on the logical components of the application modules. ...
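The master-worker module discussed in the excerpts above can be illustrated with a small sketch. This is not the cited authors' implementation: the thread-based queue, the worker count, and the placeholder `process()` step (which simply squares a sensor reading) are all invented for illustration of the master-worker pattern itself.

```python
import queue
import threading

def run_master_worker(tasks, num_workers=3):
    """Master enqueues tasks; a pool of worker threads dequeues,
    processes, and collects results. process() is a stand-in for
    the per-device data-processing step."""
    task_q = queue.Queue()
    results = []
    lock = threading.Lock()

    def process(reading):
        # Placeholder for the module's real processing logic.
        return reading * reading

    def worker():
        while True:
            item = task_q.get()
            if item is None:            # poison pill: shut this worker down
                task_q.task_done()
                break
            out = process(item)
            with lock:
                results.append(out)
            task_q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for task in tasks:                  # master distributes the workload
        task_q.put(task)
    for _ in threads:                   # one poison pill per worker
        task_q.put(None)
    task_q.join()
    for t in threads:
        t.join()
    return sorted(results)

print(run_master_worker([1, 2, 3, 4]))  # → [1, 4, 9, 16]
```

In a fog deployment the workers would be separate fog nodes rather than threads, but the queue-and-collect control flow is the same.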
Article
Full-text available
The use of Internet of Things (IoT) technology in military settings has introduced the notion of “Internet of Battle Things” (IoBT), transforming modern warfare by interconnecting various equipment and systems essential for battlefield operations. This connectivity facilitates real-time communication, data sharing, and collaboration among military assets, enhancing situational awareness, decision-making processes, and overall operational effectiveness. The domain for IoBT encompasses a broad range of military assets, from drones and ground vehicles to soldier-worn wearables, sensors, and munitions. These assets are capable of collecting and transmitting critical information from the battlefield, including location data, status updates, environmental conditions, and the movements of adversaries. IoBT networks depend on robust communication networks, secure data transmission protocols, advanced data analytics for processing vast datasets, and seamless integration with command-and-control infrastructures. However, IoBT devices and systems function in dynamic and challenging battlefield conditions which present unique communication challenges. This study aims to review research efforts that provide current state-of-the-art solutions, their limitations, and emerging technologies. We classify these challenges into interoperability, power and energy management, security, and network resilience, while also discussing future research directions to improve communication in IoBT networks.
... Centrally situated in this framework are the Medium Access Control (MAC) protocols. Specifically, the IEEE 802.15.4 standard has been a vanguard, incorporating the CSMA protocol and underlining an exigency for low data rate communications coupled with a pronounced emphasis on energy efficiency [7], [8]. This study embarks on a meticulous examination and critical appraisal of alternatives to the CSMA protocol, spotlighting the Slotted ALOHA protocol. ...
Conference Paper
Full-text available
Given the proliferation of connected devices and the prioritization of real-time data acquisition across various scenarios, enhancing the energy efficiency within Wireless Sensor Networks (WSNs) is of paramount importance. This work has focused on the IEEE 802.15.4 standard and addresses existing medium access control protocols such as CSMA or Slotted ALOHA and proposes refinements in the Slotted ALOHA protocol through incorporating techniques like Binary Exponential Backoff (BEB) and Q-learning. These enhancements have demonstrated to be promising in terms of average delay reduction, energy efficiency and bolstered network throughput. As it facilitates more efficient energy management it constitutes a robust alternative to conventional CSMA in WSN MAC sub-layer protocols.
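The Slotted ALOHA refinement described above can be sketched as a small discrete-event simulation. This is a simplified model, not the paper's protocol: the node count, slot count, and backoff cap are invented, and the Q-learning component is omitted; only the slotted channel with Binary Exponential Backoff (BEB) is shown.

```python
import random

def simulate_slotted_aloha_beb(num_nodes=10, num_slots=10000, max_exp=5, seed=1):
    """Each node schedules its next transmission a random number of
    slots ahead, drawn from its current backoff window. On a collision
    the window doubles (BEB, capped at 2**max_exp); on a success it
    resets. Returns throughput in successful transmissions per slot."""
    random.seed(seed)
    bexp = [1] * num_nodes                       # backoff exponent per node
    next_slot = [random.randrange(1 << 1) for _ in range(num_nodes)]
    successes = 0
    for slot in range(num_slots):
        senders = [i for i in range(num_nodes) if next_slot[i] == slot]
        if len(senders) == 1:                    # exactly one sender: success
            i = senders[0]
            successes += 1
            bexp[i] = 1                          # reset backoff window
            next_slot[i] = slot + 1 + random.randrange(1 << bexp[i])
        else:                                    # collision: back off harder
            for i in senders:
                bexp[i] = min(bexp[i] + 1, max_exp)
                next_slot[i] = slot + 1 + random.randrange(1 << bexp[i])
    return successes / num_slots

print(simulate_slotted_aloha_beb())
```

A Q-learning variant would replace the random draw with a learned slot-selection policy per node, which is the direction the paper takes.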
... For instance, in an electric power grid, the fuel store is a critical entity. Other entities that may critically depend on the electric power grid may be within other infrastructures such as water and gas [22,25,27,30,31]. Due to the interconnection and interdependency of these infrastructures, the failure of some entities in one system may even shut down a subset of entities in another system. ...
... The catastrophe was of enormous proportions. The disabled power stations triggered the shutdown of service points in the internet communication network [22,25,27]. This caused the damage to spread to controllers of other entities that relied on the internet, which caused more power stations to shut down. ...
Article
Full-text available
A wide range of critical infrastructures are connected via wide area networks as well as the Internet-of-Thing (IoT). Apart from natural disasters, these infrastructures, providing services such as electricity, water, gas, and Internet, are vulnerable to terrorist attacks. Clearly, damages to these infrastructures can have dire consequences on economics, health services, security and safety, and various business sectors. An infrastructure network can be represented as a directed graph in which nodes and edges denote operation entities and dependencies between entities, respectively. A knowledgeable attacker who plans to harm the system would aim to use the minimum amount of effort, cost, or resources to yield the maximum amount of damage. Their best strategy would be to attack the most critical nodes of the infrastructure. From the defender’s side, the strategy would be to minimize the potential damage by investing resources in bolstering the security of the critical nodes. Thus, in the struggle between the attacker and defender, it becomes important for both the attacker and defender to identify which nodes are most critically significant to the system. Identifying critical nodes is a complex optimization problem. In this paper, we first present the problem model and then propose a solution for computing the optimal cost attack while considering the failure propagation. The proposed model represents one or multiple interconnected infrastructures. While considering the attack cost of each node, the proposed method computes the optimal attack that a rational attacker would make. Our problem model simulates one of two goals: maximizing the damage for a given attack budget or minimizing the cost for a given amount of damage. Our technique obtains solutions to optimize the objective functions by utilizing integer-linear programming while observing the constraints for each of the specified goals. The paper reports an extensive set of experiments using various graphs. 
The results show the efficacy of our technique in terms of its ability to obtain solutions with fast turnaround times.
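The attack-optimization problem described above can be made concrete with a toy model. The paper solves it with integer-linear programming; the exhaustive search below is only feasible for tiny graphs and is a stand-in for illustration. The example graph, dependency semantics (a node fails if any node it depends on fails), and costs are invented.

```python
from itertools import combinations

def propagate(graph, attacked):
    """graph[v] = set of nodes v depends on. A node fails if it is
    attacked or if any of its dependencies has failed; iterate the
    cascade to a fixed point."""
    failed = set(attacked)
    changed = True
    while changed:
        changed = False
        for v, deps in graph.items():
            if v not in failed and deps and deps & failed:
                failed.add(v)
                changed = True
    return failed

def optimal_attack(graph, cost, budget):
    """Exhaustively find the attack set within budget that maximises
    the number of failed nodes, counting cascading failures."""
    nodes = list(graph)
    best, best_damage = set(), 0
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            if sum(cost[v] for v in subset) <= budget:
                damage = len(propagate(graph, subset))
                if damage > best_damage:
                    best, best_damage = set(subset), damage
    return best, best_damage

# Toy interdependent infrastructure: internet depends on power,
# water controls depend on internet, gas is independent.
g = {"power": set(), "internet": {"power"}, "water": {"internet"}, "gas": set()}
c = {"power": 3, "internet": 2, "water": 2, "gas": 1}
print(optimal_attack(g, c, budget=3))  # → ({'power'}, 3): the cascade wins
```

The rational attacker spends the whole budget on the power node because its failure propagates to the internet and water nodes, which is exactly the cascading effect the paper's ILP constraints encode.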
... Many research studies have tried to address the placement problem from various perspectives. Some have considered minimising the overall application latency [12, 15-18] while placing the application modules on fog nodes, whereas others have considered the energy consumption [19-21] of the fog nodes during placement. Despite the efforts in previous research, there is a need for more robust solutions that take into account latency, energy consumption, and the completion time of the applications. ...
Preprint
Full-text available
In recent years, fog computing has gained significant popularity for its reduced latency (delay), low power consumption, mobility, security and privacy, network bandwidth, and real-time responses. It provides cloud-like services to Internet of Things (IoT) applications at the edge of the network with minimal delay and real-time responses. Fog computing resources are finite, computationally constrained, and powered by battery cells, which require optimal power management. To facilitate the execution of IoT services on fog computing resources, applications are broken down into a group of data-dependent application modules. The application modules communicate and transfer data from one module to another in order to achieve a common goal. With the limitations on computing resource capacity and the rise in demand for these resources for application module processing, there is a need for a robust application module placement strategy. Inefficient application module placement can result in a tremendous hike in latency, a higher completion time, a fast drain on battery cells, and other placement problems. This paper focuses on minimising the average delay, completion time (makespan), and energy usage of the fog system while placing the data-dependent modules of the IoT application on resources in the fog layer. To achieve the said objectives, a hybrid meta-heuristic algorithm based on the Red Deer Algorithm (RDA) and the Harris Hawks Optimisation Algorithm (HHO) is proposed. The optimisation algorithms independently search for a placement solution in the search space and update the best solution based on some probability function. The proposed hybrid algorithm was implemented using the iFogSim simulator and evaluated based on average completion time, average latency, and average energy consumption. The simulation results show the effectiveness of the proposed hybrid meta-heuristic algorithm over the traditional RDA and HHO algorithms.
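The placement problem above can be sketched with a deliberately simple optimizer. Plain random search stands in for the paper's hybrid RDA/HHO metaheuristic, and the cost model (latency + energy + makespan, with invented node speeds, delays, and power figures) is illustrative only.

```python
import random

def placement_cost(placement, modules, nodes):
    """Illustrative weighted sum of per-module network delay and
    energy, plus the makespan (busiest node's total compute time)."""
    node_load = {n: 0.0 for n in nodes}
    latency = energy = 0.0
    for m, n in zip(modules, placement):
        t = m["work"] / nodes[n]["speed"]       # compute time on this node
        node_load[n] += t
        latency += nodes[n]["delay"]
        energy += m["work"] * nodes[n]["power"]
    makespan = max(node_load.values())
    return latency + energy + makespan

def random_search_placement(modules, nodes, iters=2000, seed=7):
    """Sample random module-to-node placements and keep the cheapest.
    A metaheuristic such as RDA or HHO would bias this sampling
    toward promising regions of the search space."""
    random.seed(seed)
    names = list(nodes)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = [random.choice(names) for _ in modules]
        c = placement_cost(cand, modules, nodes)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

modules = [{"work": w} for w in (4.0, 2.0, 6.0)]
nodes = {"fog1": {"speed": 2.0, "delay": 1.0, "power": 0.5},
         "fog2": {"speed": 1.0, "delay": 0.5, "power": 0.2}}
best, cost = random_search_placement(modules, nodes)
print(best, round(cost, 2))
```

The key design point the abstract highlights is the multi-objective cost function: any placement heuristic only performs as well as what its cost model trades off.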
... The enabling of civil-military collaboration for disaster management in a smart city scenario has been discussed in [20]. The use of an IoT-fog architecture for the military sector, with an emphasis on energy conservation, has been discussed in [21]. In the present work, the geospatial information is analysed to reach the affected region from the source during disaster management. ...
Article
Full-text available
Mission-critical applications refer to real-time applications which require fast and secure service provisioning, such as the defense sector and disaster management. This paper proposes a delay-aware and secure service provisioning model for such types of applications. As a use-case, we have considered the defense sector, which is a vital sector for a country's all-round well-being, including security, safety, society, and economy. In the conventional sensor-cloud model, the sensor data is stored and processed in the cloud. However, the sensor nodes have small coverage, and the use of distant cloud servers increases the delay. Therefore, the conventional sensor-cloud model may not be efficient for defense applications. Moreover, data hiding for security purposes is another important aspect of this field. To address these challenges, this paper proposes a mobility-aware sensor-fog paradigm for mission-critical applications based on network coding and steganography, referred to as Mobi-Sense. In Mobi-Sense, steganography is used for hiding the data during transmission. The theoretical results demonstrate that Mobi-Sense outperforms the existing frameworks with respect to delay and power consumption by approximately 40-80%. The simulation results show that Mobi-Sense reduces the delay by approximately 18-40% compared to the conventional sensor-cloud framework for mission-critical applications. An optimal path finding algorithm based on deep learning has been deployed in the context of a disaster scenario. The experimental analysis shows that the proposed optimal path finding method achieves precision and accuracy above 90%. It is observed that our proposed modules outperform existing baselines in terms of accuracy, delay, and power consumption.
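Steganographic data hiding of the kind Mobi-Sense relies on can be illustrated with a generic least-significant-bit (LSB) scheme over a byte stream. This is not the paper's actual construction (which combines steganography with network coding); the length-prefix framing and one-bit-per-byte embedding below are conventional choices shown only to make the idea concrete.

```python
def embed(cover_bytes, secret):
    """Hide `secret` in the least-significant bits of `cover_bytes`,
    one bit per cover byte, prefixed with a 2-byte big-endian length."""
    payload = len(secret).to_bytes(2, "big") + secret
    bits = [(b >> i) & 1 for b in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover_bytes):
        raise ValueError("cover too small for payload")
    out = bytearray(cover_bytes)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit          # overwrite the LSB only
    return bytes(out)

def extract(stego_bytes):
    """Recover the hidden payload: read the 2-byte length prefix,
    then that many bytes, from the LSB stream."""
    bits = [b & 1 for b in stego_bytes]
    def read_bytes(start, n):
        vals = []
        for k in range(n):
            v = 0
            for bit in bits[start + 8 * k : start + 8 * k + 8]:
                v = (v << 1) | bit
            vals.append(v)
        return bytes(vals)
    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length)

cover = bytes(range(256)) * 4                   # stand-in for sensor payload data
stego = embed(cover, b"move at dawn")
print(extract(stego))                           # → b'move at dawn'
```

Because each cover byte changes by at most 1, the carrier remains statistically close to the original, which is what makes the hidden channel hard to notice in transit.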
... The ecological business economy is an economy that achieves a high degree of unity and the sustainable development of rapid economic development and environmental protection [12,13]. In order to study the growth strategy of the ecological business economy, this paper uses deep learning algorithms and theoretical models of environmental regulation to conduct research. ...
Article
Full-text available
The concept of the ecological commercial economy refers to the use of ecological economics principles and system engineering methods to change production and consumption patterns within the scope of the carrying capacity of the ecosystem in order to tap into all of the available resource potential. It develops economically developed and ecologically efficient industries and builds a culture with reasonable systems, a harmonious society, and a healthy ecological environment. This paper aims to use deep learning algorithms to study environmental protection and the optimization of ecological business economic growth from the perspective of sustainable development. In this regard, this paper proposes a theoretical model of environmental regulation, which aids in the study of the sustainable development of the ecological economy. Through experimental analysis, this study determined that the non-renewable resources of the two cities designated M and N dropped from 82% and 99% in 2017 to 78% and 79% in 2021, a decrease of 4 and 20 percentage points, respectively. This shows that the non-renewable resources of the cities in area A generally showed a downward trend. The experimental results show that the deep learning theory and the environmental regulation model play a specific and effective role in the research of the ecological business economy.
... A company cannot address the threats and opportunities of the external environment alone; it must combine its own business objectives and internal conditions to identify appropriate opportunities [14]. Opportunities in the environment can only be opportunities for a company if they are aligned with the resources and core competencies that belong, or will belong, to the company. ...
Article
Full-text available
As the global semiconductor industry has entered a new round of rapid growth, it has also entered a golden cycle of economic growth. Semiconductor companies increase their intrinsic value through financing, industry mergers and acquisitions, and venture capital searches. At the same time, market investors pay more attention to the intrinsic value of companies when looking for good investment targets. Therefore, the systematic risk assessment of the global semiconductor market has become a common concern of market investors and corporate management. In this context, this paper developed a method for assessing the systemic risk of the global semiconductor market: a K-means algorithm based on deep feature fusion. This paper analyzed the algorithm in depth, analyzed the quantum space of tensors, and used the definition of cluster fusion to obtain the relationship between the projection matrices U and V. Experiments were carried out on the improved algorithm, and market research was conducted on a multinational semiconductor company A, mainly covering the basic statistics of the rate of return and the ACF and PACF coefficients of the rate-of-return series. Finally, the stock risk of company A was compared with that of company B over the same period. The experimental results showed that, comparing the compound growth rate, coefficient of variation, and active rate coefficient, the highest compound growth rate was 0.41 (Category 2), the highest coefficient of variation was 2.31 (Category 10), and the highest active rate coefficient was 1.78 (Category 9). The experiments were completed successfully.
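The clustering step underlying the method above can be illustrated with plain Lloyd's K-means on a 1-D series of returns. The paper's variant adds deep feature fusion before clustering, which is omitted here; the return values and cluster count are invented for illustration.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 1-D points: assign each point to
    its nearest centroid, then move each centroid to the mean of its
    cluster; repeat. Returns the sorted centroids."""
    random.seed(seed)
    centroids = random.sample(points, k)        # initialise from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Invented daily rate-of-return observations with three regimes:
# small gains, large gains, and losses.
returns = [0.01, 0.02, 0.015, 0.30, 0.28, 0.31, -0.20, -0.22]
print(kmeans(returns, k=3))
```

Grouping return observations this way is what lets each cluster ("category" in the abstract's terminology) be summarized by risk statistics such as the coefficient of variation.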
Article
Full-text available
Internet-of-Things (IoT) connects various physical objects through the Internet and has wide applications in transportation, military, healthcare, agriculture, and many other areas. These applications are increasingly popular because they address real-time problems. At the same time, the transmission and communication protocols they use have raised serious security concerns for IoT devices, and traditional methods such as signature- and rule-based methods are inefficient for securing these devices. Hence, identifying network traffic behavior and mitigating cyber attacks are important in IoT to provide guaranteed network security. Therefore, we develop an Intrusion Detection System (IDS) based on a deep learning model called Pearson-Correlation Coefficient - Convolutional Neural Network (PCC-CNN) to detect network anomalies. The PCC-CNN model combines the important features obtained from the linear-based extractions, followed by the Convolutional Neural Network. It performs binary classification for anomaly detection and also multiclass classification for various types of attacks. The model is evaluated on three publicly available datasets: NSL-KDD, CICIDS-2017, and IOTID20. We first train and test five different PCC-based machine learning models (Logistic Regression, Linear Discriminant Analysis, K Nearest Neighbour, Classification and Regression Tree, and Support Vector Machine) to evaluate model performance. The KNN and CART models achieve the best, and similar, accuracies of 98%, 99%, and 98% on the three datasets, respectively. On the other hand, we achieve a promising performance with a better detection accuracy of 99.89% and a low misclassification rate of 0.001 with our proposed PCC-CNN model. The integrated model is promising, with misclassification rates (false alarm rates) of 0.02, 0.02, and 0.00 with the binary and multiclass intrusion detection classifiers.
Finally, we compare and discuss our PCC-CNN model in comparison to five traditional PCC-ML models. Our proposed Deep Learning (DL)-based IDS outperforms traditional methods.
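The linear extraction stage of the PCC-CNN pipeline can be sketched as Pearson-correlation feature ranking. The CNN stage is omitted, and the toy traffic records, labels, and `top_k` value below are invented; only the PCC-based selection idea is from the abstract.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(rows, labels, top_k):
    """Rank feature columns by |PCC| against the label and keep the
    top_k column indices -- the linear extraction step that would
    feed the CNN in a PCC-CNN pipeline."""
    num_features = len(rows[0])
    scores = [(abs(pearson([r[j] for r in rows], labels)), j)
              for j in range(num_features)]
    scores.sort(reverse=True)
    return [j for _, j in scores[:top_k]]

# Toy traffic records: column 0 tracks the label, column 1 is noise,
# column 2 anti-correlates with the label (still informative).
rows = [[1, 5, 9], [2, 1, 8], [3, 4, 7], [4, 2, 6]]
labels = [0, 0, 1, 1]
print(select_features(rows, labels, top_k=2))   # keeps columns 0 and 2
```

Taking the absolute value matters: a strongly negatively correlated feature is just as useful to the downstream classifier as a positively correlated one.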