Article

Edge Computing for the Internet of Things: A Case Study

Abstract

The amount of data generated by sensors, actuators and other devices in the Internet of Things (IoT) has substantially increased in the last few years. IoT data are currently processed in the cloud, mostly through computing resources located in distant data centers. As a consequence, network bandwidth and communication latency become serious bottlenecks. This article advocates edge computing for emerging IoT applications that leverage sensor streams to augment interactive applications. First, we classify and survey current edge computing architectures and platforms, then describe key IoT application scenarios that benefit from edge computing. Second, we carry out an experimental evaluation of edge computing and its enabling technologies in a selected use case represented by mobile gaming. To this end, we consider a resource-intensive 3D application as a paradigmatic example and evaluate the response delay in different deployment scenarios. Our experimental results show that edge computing is necessary to meet the latency requirements of applications involving virtual and augmented reality. We conclude by discussing what can be achieved with current edge computing platforms and how emerging technologies will impact the deployment of future IoT applications.
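
The core measurement in the case study is the response delay experienced under different deployment scenarios. As a rough illustration of that kind of comparison (not the authors' testbed), the sketch below measures the round-trip time of a small echo-style request against a nearby edge host and a distant cloud host; the hostnames, port, and payload size are placeholder assumptions.

```python
# Minimal sketch: compare round-trip response delay against an edge host versus a
# distant cloud host. Hostnames, port, and payload size are illustrative placeholders.
import socket
import statistics
import time

def measure_rtt(host: str, port: int, payload: bytes, samples: int = 50) -> float:
    """Return the median round-trip time in milliseconds for an echo-style request."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0) as sock:
            sock.sendall(payload)
            sock.recv(len(payload))          # wait for the echoed response
        rtts.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(rtts)

if __name__ == "__main__":
    frame = b"\x00" * 1024                   # stand-in for one input event / frame request
    for label, host in [("edge", "edge.local"), ("cloud", "cloud.example.com")]:
        print(label, f"{measure_rtt(host, 7000, frame):.1f} ms")
```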

... With the evolving technology landscape, Refs. [28,29] note that the use cases of edge computing are highly likely to change. ...
... It allows instant analysis of the huge amount of data generated by IoT. It permits instantaneous decision-making and protection against data breaches [19,29]. In smart cities, edge computing optimizes real-time traffic fed by sensors, reducing air pollution and travel time [19]. ...
... Initiatives like the European Union's (EU's) Horizon 2020 program fund projects for smart infrastructure development, while private companies contribute technological expertise [10]. These efforts build essential foundations such as high-speed internet access and edge data centers, facilitating the deployment of smart city solutions like intelligent traffic systems and energy-efficient grids [29]. Moreover, public-private collaborations drive the establishment of industry standards and best practices. ...
Chapter
Full-text available
Motivated by global technological advancements, this paper explores the relationship between edge computing and the Internet of Things (IoT) as society approaches the twenty-second century. Utilizing both case studies and impact assessment approaches, the paper emphasizes the evolution of these technologies, their application areas, and their societal implications. Cloud computing has traditionally dominated large-scale data processing and storage, while IoT and edge computing enable ubiquitous computing with a focus on endpoint sensing and near-field computation, respectively. Technological leaps facilitated by edge computing include advancements in sensory applications, artificial intelligence, and nanotechnology, promising transformative impacts across sectors. Examples include automated metering and real-time analytics in homes, as well as improved healthcare through efficient video surveillance, energy management, environmental monitoring, and logistics. Edge computing’s societal impacts span smart city development, enhanced healthcare services, environmental sustainability, and economic growth through innovative business models and job creation. This paper establishes a foundation for the efficient integration of edge computing in IoT discussions as society prepares for the challenges and opportunities of the twenty-second century.
... The proliferation of IoT devices has led to increased dependence on cloud infrastructure for computing purposes. However, the cloud is expensive and also comes with bottlenecks such as network disruptions, unpredictable latency and bandwidth (load), particularly in safety-critical and performance-sensitive applications [1,2]. Edge computing extends computational, network connectivity, and storage capabilities from the cloud to the network's edge. ...
... Symbols used. x_i: position vector of particle i, representing the resource usage for battery, processing load, and bandwidth; v_i: velocity vector of particle i, representing the change in resource usage for battery, processing load, and bandwidth; battery usage of phone i; processing load assigned to phone i; quality of the wireless connection for phone i, affecting bandwidth; w: inertia weight, balancing exploration and exploitation in the PSO algorithm; c_1: cognitive coefficient, guiding particles towards their personal best positions; c_2: social coefficient, guiding particles towards the global best position; r_1, r_2: random numbers uniformly distributed in [0, 1], adding stochasticity to the search process ...
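
These symbols are the ingredients of the standard particle swarm optimization update. Written in that notation (with p_i for a particle's personal best position and g for the global best, shorthand the snippet implies but does not list), each iteration is:

```latex
% Standard PSO update using the symbols above; p_i (personal best position) and
% g (global best position) are our shorthand for the quantities that the c_1 and
% c_2 terms pull towards.
v_i \leftarrow w\,v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (g - x_i), \qquad
x_i \leftarrow x_i + v_i
```
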
Article
Full-text available
Edge computing is a well-known paradigm where sensing devices, including smartphones and other IoT devices, transmit data to powerful edge devices for processing instead of sending it directly to the cloud, primarily to reduce latency. Recent years have seen exponential growth in smartphones, particularly in processing power, RAM, and storage. However, these devices are not used to their fullest capability apart from games, which are video intensive and hence demand resources. Most alarming is the number of smartphones that sit unused due to hardware upgrades and new models in the market. This means an alternative source of computing power is underutilized. A few works have utilized smartphones for matrix computation tasks, but practical use of smartphone computation power has not been studied yet. In this paper, we propose a distributed system of smartphones following a master–slave architecture. The master node employs optimization-based scheduling algorithms to assign tasks to slaves (phones) that form clusters to create a local edge computing platform. We formulate the task assignment and scheduling problem based on constraints including CPU power, RAM, and battery capacity. We implement and study the performance of particle swarm optimization (PSO) and mixed integer linear programming (MILP) algorithms on two compute-intensive applications: video processing/rendering and indoor location estimation. The results of the implementation and optimization analysis motivate the design of an architecture that can leverage the computing power of smartphone CPU cores.
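
As a rough illustration of the MILP side of such task assignment (not the paper's formulation), the sketch below assigns tasks to phones under CPU and RAM capacity constraints while preferring phones with more remaining battery. The task demands, phone capacities, and cost function are made-up values, and PuLP's bundled CBC solver is assumed.

```python
# Minimal MILP sketch of task-to-phone assignment (not the paper's exact model).
# Capacities, demands, and costs are illustrative; requires `pip install pulp`.
import pulp

tasks = {"t1": {"cpu": 2, "ram": 1}, "t2": {"cpu": 1, "ram": 2}, "t3": {"cpu": 3, "ram": 1}}
phones = {"p1": {"cpu": 4, "ram": 3, "battery": 0.9}, "p2": {"cpu": 3, "ram": 2, "battery": 0.5}}

prob = pulp.LpProblem("task_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", [(t, p) for t in tasks for p in phones], cat="Binary")

# Objective: prefer phones with more remaining battery (lower cost for higher battery).
prob += pulp.lpSum((1.0 - phones[p]["battery"]) * x[(t, p)] for t in tasks for p in phones)

for t in tasks:                                   # every task runs on exactly one phone
    prob += pulp.lpSum(x[(t, p)] for p in phones) == 1
for p in phones:                                  # respect CPU and RAM capacity per phone
    prob += pulp.lpSum(tasks[t]["cpu"] * x[(t, p)] for t in tasks) <= phones[p]["cpu"]
    prob += pulp.lpSum(tasks[t]["ram"] * x[(t, p)] for t in tasks) <= phones[p]["ram"]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, p) for t in tasks for p in phones if x[(t, p)].value() == 1])
```
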
... In a fog network, end devices collaboratively pool their resources to carry out various services [7]. The abundance of diverse resources and the extensive deployment of IoT devices make fog computing highly attractive for cyber-physical systems, especially in smart city applications such as intelligent transportation, smart energy, and the industrial Internet [8]. Notably, location-based data and low latency play a crucial role in such scenarios [9]. ...
Article
Full-text available
The proliferation of Internet of Things (IoT) devices has opened new roads for collaborative distributed applications, particularly in smart city environments, where a variety of resources, including sensing, actuation, computation, and storage, are essential for providing effective location-based services. This paper specifically focuses on the sharing of heterogeneous resources among IoT applications in smart cities. By leveraging game-theoretic principles, this study addresses resource allocation through a combinatorial double auction. The solution is rooted in the concept of Social IoT (SIoT), where Internet-connected objects create dynamic social networks based on rules set by their owners. Social relationships, such as ownership and co-location, are leveraged to form groups offering enhanced reliability resource bundles. The proposed solution offers several key economic properties, including incentive compatibility, individual rationality, and a balanced budget, while maintaining low computational complexity. Simulation results demonstrate that the proposed combinatorial double auction mechanism achieves over 70% successful resource allocation for up to 1000 requests, maintains computational efficiency with execution times under 30 s, and ensures economic properties such as incentive compatibility and individual rationality, making it a scalable and practical solution for large-scale smart city IoT applications.
... In edge computing, video frames are processed locally [81], eliminating network transmission delays. The response time improves because: 1) AI-powered edge devices detect movement anomalies in milliseconds. ...
Preprint
Full-text available
This paper presents LAXMI, a secure and lightweight edge–cloud hybrid video analytics pipeline for motion detection and real-time object classification. The end objective is to design a system that can perform computationally efficient scene analysis at the edge but securely transmit full video data to the cloud for archival or additional processing. The pipeline aims to minimize processing delay, minimize bandwidth usage, and guard surveillance content with a combination of optimized machine learning and cryptographic methods. The LAXMI system integrates a temporal gradient-based motion detection (TGMD) algorithm at the edge, which precisely identifies motion segments within continuous video streams. Upon detection of motion, the system conducts YOLOv8 object detection for classification and annotation of related entities such as people, cars, animals, and other moving objects. One of the key novelties is employing a hybrid tokenization strategy where the video is encrypted with AES-256 and the encryption key is securely transported using RSA public-key cryptography. This ensures data confidentiality as well as secure edge-to-cloud communications. The edge server then generates a JWT-based token for authenticating the cloud upload. Performance tests over a four-week period reveal that the edge-based motion detection subsystem achieved 92.7% accuracy and an F1-score of 96.2%, while the YOLOv8 classifier achieved 94.8% accuracy and an F1-score of 94.4%. Further, the edge pipeline consistently outperformed the cloud baseline in processing latency, energy cost, and bandwidth consumption, thereby establishing the effectiveness of the proposed LAXMI architecture in actual smart surveillance applications.
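
The "hybrid tokenization" step (bulk AES-256 encryption of the video with the key transported under RSA) follows the classic envelope-encryption pattern. The sketch below is a generic illustration of that pattern with the cryptography package, not the LAXMI code; the JWT-based upload token is omitted.

```python
# Generic envelope-encryption sketch (AES-256-GCM for the video bytes, RSA-OAEP for
# the AES key). Illustrative only -- not the LAXMI codebase.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The cloud side owns an RSA key pair; the edge holds only the public key.
cloud_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
cloud_public = cloud_private.public_key()

def edge_encrypt(video_bytes: bytes) -> tuple[bytes, bytes, bytes]:
    aes_key = AESGCM.generate_key(bit_length=256)     # fresh AES-256 key per upload
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, video_bytes, None)
    wrapped_key = cloud_public.encrypt(               # RSA-OAEP key transport
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, ciphertext

def cloud_decrypt(wrapped_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    aes_key = cloud_private.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

wrapped, n, ct = edge_encrypt(b"motion segment 0042")
assert cloud_decrypt(wrapped, n, ct) == b"motion segment 0042"
```
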
... In recent years, the exponential growth of data-driven services, enabled by cloud computing and supported by hyper-scale infrastructure, has positioned CDCs as an essential component within global information and communication networks. Their central role in powering Artificial Intelligence (AI), big data analytics [1][2][3][4][5][6], e-commerce [7], streaming platforms [8][9][10][11][12][13], and edge computing [14][15][16][17][18] has made them critical to the digital infrastructure across all economic sectors. However, this massive computational footprint comes at a significant energy and environmental cost. ...
Article
Full-text available
Cloud Data Centers (CDCs) are an essential component of the infrastructure for powering the digital life of modern society, hosting and processing vast amounts of data and enabling services such as streaming, Artificial Intelligence (AI), and global connectivity. Given this importance, their energy efficiency is a top priority, as they consume significant amounts of electricity, contributing to operational costs and environmental impact. Efficient CDCs reduce energy waste, lower carbon footprints, and support sustainable growth in digital services. Consequently, energy efficiency metrics are used to measure how effectively a CDC utilizes energy for computing versus cooling and other overheads. These metrics are essential because they guide operators in optimizing resource use, reducing costs, and meeting regulatory and environmental goals. To this end, this paper reviews more than 25 energy efficiency metrics and more than 250 literature references to CDCs, different energy-consuming components, and configuration setups. Then, some real-world case studies of corporations that use these metrics are presented. Thereby, the challenges and limitations are investigated for each metric, and associated future research directions are provided. Prioritizing energy efficiency in CDCs, guided by these energy efficiency metrics, is essential for minimizing environmental impact, reducing costs, and ensuring sustainable scalability for the digital economy.
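
One of the most widely used metrics in this space is Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy delivered to IT equipment, with Data Center infrastructure Efficiency (DCiE) as its reciprocal. A tiny worked example with made-up energy figures:

```python
# Worked example of two common CDC efficiency metrics (the figures are made up).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Data Center infrastructure Efficiency: reciprocal of PUE, as a percentage."""
    return 100.0 * it_equipment_kwh / total_facility_kwh

print(pue(1_500_000, 1_000_000))   # 1.5 -> 0.5 kWh of overhead per kWh of IT load
print(dcie(1_500_000, 1_000_000))  # ~66.7%
```
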
... Today, the cloud-edge collaboration architecture, as an emerging computing paradigm, aims to integrate cloud computing and edge computing to achieve more efficient data processing. Typically, the edge computing paradigm [26][27][28] can significantly reduce the latency and energy consumption required for data transmission over networks, while the cloud computing paradigm [29][30][31] can perform detailed data analysis on existing data. Combining the characteristics of the edge and cloud computing paradigms to perform computation tasks makes it possible to provide fast and high-precision detection results [32]-[34]. ...
Article
Full-text available
With the improvement of hyperspectral image resolution, existing anomaly detection algorithms find it challenging to quickly process large volumes of hyperspectral data while fully exploiting spectral information. Collaborative cloud-edge computing, as an emerging computing paradigm, aims to integrate the cloud and edge computing paradigms for more efficient data processing. However, existing algorithms cannot be deployed directly in the cloud-edge collaborative environment for rapid detection. To address this issue, this paper designs a collaborative cloud-edge anomaly detection method. First, to enhance the anomaly detection capability and improve the overall detection accuracy, we propose a joint subspace constraint representation (JSCR) model. We also introduce a corresponding joint dictionary construction algorithm aimed at performing subspace learning. Then, different detection task nodes are deployed at the edge and in the cloud based on the characteristics of the cloud-edge computing paradigm. Furthermore, we propose a cloud-edge model solving algorithm. This algorithm reformulates the JSCR model into a new optimization problem involving a small amount of factorized data. It minimizes the large-scale data transmission between the cloud and the edge while reducing the data volume required for model solving at the cloud. The proposed method efficiently leverages the computational resources of both the cloud and the edge to execute anomaly detection. Experimental results demonstrate that, compared to existing hyperspectral anomaly detection algorithms, the proposed algorithm provides more accurate detection results in less time.
... In networks such as smart homes, the volume of data generated is expected to be enormous [1]. However, IoT devices, such as sensors, smartphones, and wearables, typically have limited computational and energy resources [2]. Therefore, these devices must send data elsewhere for processing. ...
Article
Full-text available
In mobile edge computing (MEC), efficient job allocation is essential to optimize system performance and reduce reliance on cloud computing. Edge servers, deployed at base stations, must handle user-submitted jobs without overloading, which would otherwise lead to excessive job transfers to the cloud. Current k-means-based server-placement and job-allocation methods primarily minimize communication costs but fail to handle heterogeneous server performance. This oversight results in load imbalances where low-performance servers become overloaded, increasing unnecessary cloud transfers and network congestion. Such methods also do not address k-means' sensitivity to initialization, which impacts job-distribution efficiency. To overcome these limitations, we propose a joint optimization method integrating edge-server placement and job allocation with the objective of minimizing transfer probability in heterogeneous MEC environments. The method integrates a k-means++-based initial placement algorithm to reduce initialization sensitivity and a dynamic job-reallocation algorithm that adjusts assignments on the basis of transfer probability. Extensive simulations demonstrate that our method reduces job overflow and cloud transfers compared with conventional methods. Real-world millimeter-wave communication experiments also confirm the effectiveness of the proposed method in practical MEC environments.
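
A minimal sketch of the k-means++-seeded placement step on synthetic user coordinates follows (scikit-learn's KMeans with init="k-means++"); the transfer-probability-aware job reallocation that the paper adds on top is not reproduced, and all coordinates and counts are illustrative.

```python
# Minimal sketch of k-means++-seeded edge-server placement on synthetic user
# locations (not the paper's full method, which also reallocates jobs by
# transfer probability). Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
users = rng.uniform(0, 1000, size=(500, 2))            # user positions in a 1 km x 1 km area

k = 8                                                   # number of edge servers / base stations
placement = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(users)

server_sites = placement.cluster_centers_               # candidate base-station coordinates
assignment = placement.labels_                          # which server each user initially uses
load = np.bincount(assignment, minlength=k)
print(server_sites.round(1))
print("initial per-server load:", load)                 # heterogeneity-aware rebalancing would adjust this
```
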
... The edge computing layer plays a pivotal role in enabling real-time processing and transformation of data close to its source, bridging the gap between physical devices and higher-level applications [23]. By operating at the edge of the network, it reduces latency, improves efficiency, and enhances privacy [30]. This approach provides lightweight solutions for local, small-scale data storage and processing, forming the foundation for Edge-based data sharing-as-a-service models. ...
Conference Paper
Full-text available
The increasing reliance on IoT ecosystems demands robust, secure, and context-aware data-sharing mechanisms that operate closer to data sources. Data spaces must leverage trusted edge-based system architectures for near real-time data processing, transformation, and enrichment while ensuring data privacy and security. However, the current International Data Spaces (IDS) Model lacks comprehensive support for Edge-based architectures and flexible, context-driven access control models essential for managing diverse applications within data space ecosystems. To address these gaps, we propose IDS4Edge, an IDS-compliant approach that enables dynamic, context-driven, Edge-based IoT data sharing as a service. IDS4Edge integrates flexible access control policies on top of IDS connectors, tailored to specific IoT application contexts. These policies dynamically adapt in real-time to changes in IoT contexts and contractual agreements, ensuring secure and efficient data sharing at the Edge. We validate our solution through a proof-of-concept implementation, demonstrating how IDS4Edge facilitates trusted, scalable, and real-time data sharing while maintaining compliance with IDS principles. This approach paves the way for enhanced (industrial) IoT applications and advanced data-sharing paradigms, such as Manufacturing-as-a-Service (MaaS).
... One potential solution to address the challenges faced by MEC is the use of local (i.e., on-device) computations [17]. However, despite ongoing advancements in hardware technology, many current IoT devices still lack the capacity to meet the demands of emerging computation-intensive and latency-sensitive applications [18]. ...
Preprint
Full-text available
Extreme Edge Computing (EEC) pushes computing even closer to end users than traditional Multi-access Edge Computing (MEC), harnessing the idle resources of Extreme Edge Devices (EEDs) to enable low-latency, distributed processing. However, EEC faces key challenges, including spatial randomness in device distribution, limited EED computational power necessitating parallel task execution, vulnerability to failure, and temporal randomness due to variability in wireless communication and execution times. These challenges highlight the need for a rigorous analytical framework to evaluate EEC performance. We present the first spatiotemporal mathematical model for EEC over large-scale millimeter-wave networks. Utilizing stochastic geometry and an Absorbing Continuous-Time Markov Chain (ACTMC), the framework captures the complex interaction between communication and computation performance, including their temporal overlap during parallel execution. We evaluate two key metrics: average task response delay and task completion probability. Together, they provide a holistic view of latency and reliability. The analysis considers fundamental offloading strategies, including randomized and location-aware schemes, while accounting for EED failures. Results show that there exists an optimal task segmentation that minimizes delay. Under limited EED availability, we investigate a bias-based EEC and MEC collaboration that offloads excess demand to MEC resources, effectively reducing congestion and improving system responsiveness.
... This led to the integration of AI capabilities directly into IoT systems. Premsankar et al. [13] highlight the limitations of cloud computing in handling the vast amounts of data generated by IoT devices, advocating edge computing as a solution. This supports the claim that traditional cloud-based models are insufficient for real-time decision making. ...
Article
Full-text available
The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) has led to the emergence of the Artificial Intelligence of Things (AIoT), a paradigm that enhances distributed intelligence across interconnected systems. This paper explores the evolutionary shift from AIoT towards the Internet of Artificial Intelligence Agents (IAIA), a transformative framework in which autonomous, networked AI agents engage in collaborative decision-making, real-time learning, and adaptive problem solving. By integrating edge computing, federated learning, and decentralized architectures, IAIA transcends the limitations of conventional IoT and AIoT models, enabling a scalable, resilient, and intelligent ecosystem. We analyze key enablers, including 5G/6G connectivity, blockchain for trust and security, and swarm intelligence for emerging behaviors. Furthermore, we discuss cross-domain applications that span smart cities, healthcare, industrial automation, and next-generation cyber-physical systems. Finally, we outline the critical challenges, such as governance, security, and ethical AI, and propose a roadmap for the widespread adoption of IAIA. This work provides a foundational perspective on the future of intelligent distributed systems, setting the stage for a new era of AI-driven, self-organizing networks.
... For instance, as shown by the ARGO project implemented by the Municipality of Turin (Italy), Edge AI can enable devices such as smart traffic cameras or pollution sensors to detect incidents, traffic, crimes or variations in air quality in real-time, allowing law enforcement to intervene immediately, without the need for data transmission to remote servers (Shi & Dustdar, 2016;Premsankar et al., 2018). ...
Article
Full-text available
Based on the previous recent research and results in this area, notably the EU Public Sector Tech Watch reports and handbooks "European Landscape on the Use of Artificial Intelligence by the Public Sector", "Road to the adoption of AI by the public sector", "A collective effort in exploring the applications of Artificial Intelligence and blockchain in the Public Sector" and a European Parliament briefing on "Artificial Intelligence and Public services", this study on “AI and GenAI adoption by local and regional administrations” (1) analyses the results of the previous studies – which are based on data gathered over the last couple of years – and (2) delves deeper into the opportunities and challenges subnational authorities have with AI and GenAI adoption and the impact this has on the quality of their interactions with citizens, as well as the efficiency of internal and cross-administration processes. The study also (3) investigates whether and how AI and GenAI could contribute to bridging digital divides across different types of territories and/or groups of citizens. It further (4) examines the role of political leaders in promoting AI, scalability, knowledge sharing and the need for cooperation between the technical and political levels to ensure a boost in AI use. In the end, the study offers a number of recommendations for enhancing the adoption and effectiveness of AI and GenAI at the subnational level.
... However, the application of edge computing technology in English education also faces some challenges. The first one is the network security problem [9][10][11][12]. Due to the wide distribution of edge computing devices, educational institutions need to strengthen the security protection of edge devices to prevent the leakage of students' personal information and the loss of educational data due to attacks on the devices. ...
Article
Full-text available
Edge computing technology has led to the emergence of a variety of new businesses, but it also introduces more uncertain security threats and demand for differentiated security services. This paper focuses on the security protection technology of the English education network and proposes a statistically based incremental network traffic feature extraction strategy. The strategy aims to extract and tailor the intruding network features, ensuring that the feature vectors accurately represent real network behaviors while maintaining simplicity. Then, from the perspective of endogenous security and based on the idea of dynamic heterogeneous redundancy, the security of virtualized resources is constructed using the edge computing network's architectural characteristics. Aiming at the security needs of the English education network, an endogenous edge computing network security scheme based on network function virtualization is proposed, and a dynamic heterogeneous redundancy (DHR) security model is established. The model is able to provide anomaly information to the scheduling module, which in turn lowers the ranking of executives under attack in the pending set. This reduces the likelihood of penetration attacks targeting system vulnerabilities, thereby enhancing the system's security reliability. The failure rate of the method in this paper is consistently maintained below 0.52 overall, highlighting excellent security and realizing the design of a secure access method for English education networks.
... The forensic challenges associated with IoT devices extend beyond the local device itself and often involve edge and fog computing environments. Edge computing enables local processing and storage closer to IoT devices [21], reducing latency and improving efficiency. However, it also presents unique forensic challenges, such as data volatility, decentralised storage, and limited access to logs. ...
Article
Full-text available
The proliferation of Internet of Things (IoT) devices presents significant challenges for cybersecurity and digital forensics, particularly as these devices have become increasingly weaponised for malicious activities. This research focuses on the forensic analysis capabilities of Raspberry Pi devices configured with Kali Linux, comparing their forensic capabilities to conventional PC-based forensic investigations. The study identifies key gaps in existing IoT forensic methodologies, including limited tool compatibility, constrained data retention, and difficulties in live memory analysis due to architectural differences. The research employs a testbed-based approach to simulate cyberattacks on both platforms, capturing and analysing forensic artefacts such as system logs, memory dumps, and network traffic. The research findings reveal that while traditional PCs offer extensive forensic capabilities due to superior storage, tool support, and system logging, Raspberry Pi devices present significant forensic challenges, primarily due to their ARM architecture and limited forensic readiness. The study emphasises the need for specialised forensic tools tailored to IoT environments and suggests best practices to enhance forensic investigation capabilities in weaponised IoT scenarios. This research contributes to the field by bridging the gap between theoretical frameworks and real-world forensic investigations, offering insights into the evolving landscape of IoT forensics and its implications for digital evidence collection, analysis, and forensic readiness.
... The forensic challenges associated with IoT devices extend beyond the local device itself and often involve Edge and Fog computing environments. Edge computing enables local processing and storage closer to IoT devices [22], reducing latency and improving efficiency. However, it also presents unique forensic challenges, such as data volatility, decentralised storage, and limited access to logs. ...
Preprint
Full-text available
The proliferation of Internet of Things (IoT) devices has introduced new challenges for digital forensic investigators due to their diverse architectures, communication protocols, and security vulnerabilities. This research paper presents a case study focusing on the forensic investigation of an IoT device, specifically a Raspberry Pi configured with Kali Linux as a hacker machine. The study aims to highlight the differences and challenges in investigating weaponised IoT as well as establish a comprehensive methodology for analysing IoT devices involved in cyber incidents. The investigation begins with the acquisition of digital evidence from the Raspberry Pi device, including volatile memory and disc images. Various forensic tools and utilities are utilised to extract and analyse data, such as Exterro FTK and Magnet AXIOM, as well as open-source tools like Volatility, Wireshark, and Autopsy. The analysis involves examining system artefacts, logfiles, installed applications, and network connections to reconstruct the device's activity and identify potential evidence proving that the user perpetrated security breaches or malicious activities. The results help improve IoT forensic capabilities by showing the best ways to examine IoT devices, especially those set up as hacker machines, and the case study demonstrates how forensic methods can be applied in IoT settings. It helps in creating guidelines, standards, and training for those who work as IoT forensic investigators. In the end, improving forensic readiness in IoT deployments is needed to keep essential assets safe from cyber threats, preserve digital evidence, and keep IoT ecosystems running smoothly, thereby protecting their integrity.
... With the development of the Internet of Things (IoT), 5G communication technology, and artificial intelligence, edge computing has emerged [28][29][30]. Initially, the processing of large amounts of data relied on centralized systems, but as the number of IoT devices surged, edge computing began processing data closer to the source and near the end-users. ...
Article
Full-text available
In shale gas extraction, bottomhole liquid loading reduces gas well efficiency. Traditional time-based plunger lift methods use reservoir energy to remove liquid, but model-based optimization has since emerged. However, these methods, deployed on remote servers, lead to inefficient data transfer and high server loads. This study proposes an Adaptive Particle Swarm Optimization Model Predictive Control (APSO-MPC) for plunger lift optimization, implemented via edge computing. APSO dynamically adjusts inertia weights and learning factors, while a microprocessor-based edge architecture localizes computations at the controller, eliminating transmission delays and reducing server load. Simulations show APSO-MPC improves gas production by 18% compared to traditional methods, while edge computing increases data transmission by 24%, reduces packet loss by 83%, and lowers server memory and computational delays.
... Moreover, the deployment and access of edge devices and their ability to continue service even when communication is slow or temporarily interrupted ensure the scalability and reliability of edge computing [7]. Edge computing has been applied with great success in many areas, for example, IoT [30], autonomous driving [5], smart cities [4], robotics [31], and so on. ...
Preprint
Full-text available
The emergence of 5G and edge computing hardware has brought about a significant shift in artificial intelligence, with edge AI becoming a crucial technology for enabling intelligent applications. With the growing amount of data generated and stored on edge devices, deploying AI models for local processing and inference has become increasingly necessary. However, deploying state-of-the-art AI models on resource-constrained edge devices faces significant challenges that must be addressed. This paper presents an optimization triad for efficient and reliable edge AI deployment, including data, model, and system optimization. First, we discuss optimizing data through data cleaning, compression, and augmentation to make it more suitable for edge deployment. Second, we explore model design and compression methods at the model level, such as pruning, quantization, and knowledge distillation. Finally, we introduce system optimization techniques like framework support and hardware acceleration to accelerate edge AI workflows. Based on an in-depth analysis of various application scenarios and deployment challenges of edge AI, this paper proposes an optimization paradigm based on the data-model-system triad to enable a whole set of solutions to effectively transfer ML models, which are initially trained in the cloud, to various edge devices for supporting multiple scenarios.
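
Of the model-level techniques listed (pruning, quantization, knowledge distillation), post-training quantization is the easiest to show in a few lines. The sketch below is a framework-agnostic toy version of symmetric per-tensor int8 weight quantization, not the paper's method:

```python
# Toy sketch of symmetric per-tensor int8 weight quantization (numpy only).
# Real deployments would use the quantization tooling of their inference framework.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    scale = np.abs(weights).max() / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"4x smaller weights, mean absolute error {err:.5f}")
```
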
... Implementing edge computing in IAQ systems involves deploying computational resources closer to the data sources, typically edge servers or advanced IoT devices. These edge nodes are responsible for data aggregation, preliminary analysis, and decision-making processes traditionally handled by cloud servers [164]. ...
Article
Full-text available
People spend a significant portion of their time in enclosed spaces, making indoor air quality (IAQ) a critical factor for health and productivity. Artificial intelligence (AI)-driven systems that monitor air quality in real-time and utilize historical data for accurate forecasting have emerged as effective solutions to this challenge. However, these systems often raise privacy concerns, as they may inadvertently expose sensitive information about occupants’ habits and presence. Addressing these privacy challenges is essential. This research comprehensively reviews the existing literature on traditional and AI-based IAQ management, focusing on privacy-preserving techniques. The analysis reveals that while significant progress has been made in IAQ monitoring, most systems prioritize accuracy at the expense of privacy. Existing approaches often fail to adequately address the risks associated with data collection and the implications for occupant privacy. Emerging AI-driven technologies, such as federated learning and edge computing, offer promising solutions by processing data locally and minimizing privacy risks. This research introduces a novel AI-based IAQ management platform incorporating the SITA (Spatial, Identity, Temporal, and Activity) model. By leveraging customizable privacy settings, the platform enables users to safeguard sensitive information while ensuring effective IAQ management. Integrating Internet of Things (IoT) sensor networks, edge computing, and advanced privacy-preserving technologies, the proposed system delivers a robust and scalable solution that protects both privacy and health.
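
The federated learning direction mentioned above keeps raw sensor data on the device and shares only model parameters. A minimal federated-averaging sketch with numpy follows; the linear model, the four hypothetical IAQ nodes, and their synthetic data are illustrative assumptions:

```python
# Minimal federated-averaging (FedAvg) sketch: each IAQ node trains locally and
# only model parameters leave the device, never raw sensor readings.
# The node data and the linear model are illustrative assumptions.
import numpy as np

def local_update(weights: np.ndarray, x: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    w = weights.copy()
    for _ in range(epochs):                      # plain gradient descent on squared error
        grad = 2.0 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
for _ in range(10):                              # 10 communication rounds
    updates, sizes = [], []
    for _ in range(4):                           # 4 edge nodes with private local data
        x = rng.normal(size=(50, 3))             # e.g. CO2, humidity, temperature features
        y = x @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.05, size=50)
        updates.append(local_update(global_w, x, y))
        sizes.append(len(y))
    global_w = fed_avg(updates, sizes)
print(global_w.round(3))                         # approaches the underlying coefficients
```
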
... In recent years, the industrial sector has experienced a substantial digital transformation, leading to the collection of vast amounts of data from various production lines [1]. This digitization, driven by advancements in technology and the integration of the Internet of Things (IoT), has facilitated the development of digital twins that represent industrial processes through extensive datasets. ...
Article
Full-text available
The industrial sector has undergone significant digital transformation, driven by advancements in technology and the Internet of Things (IoT). These developments have facilitated the collection of vast quantities of data, which, in turn, pose significant challenges for real-time data processing. This study seeks to validate the efficacy and accuracy of edge computing models designed to represent subprocesses within industrial environments and to compare their performance with that of traditional cloud computing models. By processing data locally at the point of collection, edge computing models provide substantial benefits in minimizing latency and enhancing processing efficiency, which are crucial for real-time decision-making in industrial operations. This research demonstrates that models derived from distinct subprocesses yield superior accuracy compared to comprehensive models encompassing multiple subprocesses. The findings indicate that an increase in data volume does not necessarily translate to improved model performance, particularly in datasets that capture data from production processes, as combining independent process data can introduce extraneous ‘noise’. By subdividing datasets into smaller, specialized edge models, this study offers a viable approach to mitigating the latency challenges inherent in cloud computing, thereby enhancing real-time data processing capabilities, scalability, and adaptability for modern industrial applications.
... Data Privacy and Protection: Data protection regulations govern the handling of personal information, for instance the EU's General Data Protection Regulation (GDPR) for financial institutions. Where blockchain conflicts with these privacy laws is in the impossibility, once data is recorded on the blockchain, of changing that data in any way [35]. The lack of a data erasure option raises issues about data rights and privacy, such as the GDPR's right to be forgotten. ...
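
The immutability at issue here comes from each block committing to the hash of its predecessor, so altering or erasing an old record invalidates every later block. A tiny hash-chain sketch makes the point:

```python
# Tiny hash-chain sketch showing why recorded data cannot be quietly altered:
# each block stores the hash of the previous block, so any edit breaks the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, record in enumerate(["payment A->B 10", "payment B->C 4"], start=1):
    chain.append({"index": i, "data": record, "prev": block_hash(chain[-1])})

def verify(chain: list[dict]) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(verify(chain))                      # True
chain[1]["data"] = "payment A->B 1000"    # tamper with (or "erase") an old record
print(verify(chain))                      # False: every later block no longer matches
```
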
Article
Full-text available
New and innovative blockchain technology is becoming key to enhancing security, transparency and efficiency in the financial sector. However, as financial applications based on the blockchain expand and improve, so do blockchain threats and safety concerns. This paper discusses aspects of blockchain security with reference to the financial system and its advantages and drawbacks. It covers major risks like 51% attacks, smart contract exploits, phishing, and data privacy and security issues, as well as new risks from quantum computing and decentralized finance (DeFi) platforms. Best practices outlined in the paper include the use of industry-grade cryptographic algorithms, robust multi-signature authentication, auditing of blockchain applications at regular intervals, the adoption of secure, decentralized identity verification and management, and compliance with industry standards such as KYC and AML. It also underlines the need to establish effective access controls, develop scaling solutions, and sustain monitoring. Lastly, the adoption of blockchain-based financial applications entails a combination of measures to address existing and future risks. As these best practices are implemented and threats advance, financial institutions will be better placed to realize the full value of blockchain technology while protecting the privacy and security of people's financial transactions.
... In addition to adopting lightweight detection models, realtime object detection is often performed in IoT and edge computing environments such as moving vehicles or robots where computational resources are limited [24]- [26]. To overcome this limitation, cloud computing is widely adopted to upload the input image or video, perform processing on large cloud nodes, and download the result to the IoT and edge devices. ...
Article
Full-text available
As the performance and accuracy of machine learning and AI algorithms improve, the demand for adopting computer vision techniques to solve various problems, such as autonomous driving and AI robots, increases. To meet such demand, IoT and edge devices, which are small enough to be adopted in various environments while having sufficient computing capabilities, are being widely adopted. However, as devices are utilized in IoT and edge environments, which have harsh restrictions compared to traditional server environments, they are often limited by low computational and memory resources, in addition to the limited electrical power supply. This necessitates a unique approach for small IoT devices that are required to run complex tasks. In this paper, we propose a concurrent multi-frame processing scheme for real-time object detection algorithms. To do this, we first divide the video into individual frames and group the frames according to the number of cores in the device. Then, we allocate a group of frames per core to perform the object detection, resulting in parallel detection of multiple frames. We implement our scheme in YOLO (You Only Look Once), one of the most popular real-time object detection algorithms, on a state-of-the-art, resource-constrained IoT edge device, Nvidia Jetson Orin Nano, using real-world video and image datasets, including MS-COCO, ImageNet, PascalVOC, DOTA, animal videos, and car-traffic videos. Our evaluation results show that our proposed scheme can improve the diverse aspect of edge performance and improve the runtime, memory consumption, and power usage by up to 445%, 69%, and 73%, respectively. Additionally, it demonstrates improvements of 2.10× over state-of-the-art model optimization.
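
The frame-grouping idea can be illustrated with a plain multiprocessing pool: frames are split into one group per core and each worker processes its group independently. In the sketch below the detect() body is only a stand-in for the actual YOLO inference call:

```python
# Sketch of per-core frame-group processing (the detect() body is a stand-in for
# the actual YOLO inference call and just simulates per-frame work).
import multiprocessing as mp
import time

def detect(frame_group: list[int]) -> list[tuple[int, str]]:
    results = []
    for frame_id in frame_group:
        time.sleep(0.01)                      # placeholder for model inference on one frame
        results.append((frame_id, "objects..."))
    return results

def split_into_groups(frames: list[int], n_groups: int) -> list[list[int]]:
    return [frames[i::n_groups] for i in range(n_groups)]

if __name__ == "__main__":
    frames = list(range(240))                 # e.g. 8 seconds of 30 fps video
    cores = mp.cpu_count()
    groups = split_into_groups(frames, cores)
    with mp.Pool(processes=cores) as pool:
        per_group = pool.map(detect, groups)  # one group of frames per core, in parallel
    detections = [d for group in per_group for d in group]
    print(len(detections), "frames processed on", cores, "cores")
```
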
... It uses internet protocols to make object devices connected and visible through the internet. The number of these devices has reached billions in recent years [15]. IoT devices generate data for a specific domain, enriching the data needed for processing. ...
Article
Full-text available
Edge computing systems have emerged to facilitate real-time processing for delay-sensitive tasks in Internet of Things (IoT) Systems. As the volume of generated data and the real-time tasks increase, more pressure on edge servers is created. This eventually reduces the ability of edge servers to meet the processing deadlines for such delay-sensitive tasks, degrading users' satisfaction and revenues. At some point, scaling up the edge servers' processing resources might be needed to maintain user satisfaction. However, enterprises need to know if the cost of that scalability will be feasible in generating the required return on the investment and reducing the forgone revenues. This paper introduces a cost-benefit model that values the cost of edge processing resources scalability and the benefit of maintaining user satisfaction. We simulated our cost-benefit model to show its ability to decide whether the scalability will be feasible using different scenarios.
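
A minimal sketch of the kind of comparison such a cost-benefit model makes is given below; the figures and the simple "scale up if the recovered revenue covers the added server cost" rule are illustrative assumptions, not the paper's model.

```python
# Illustrative cost-benefit check for scaling edge processing resources
# (all figures and the decision rule are assumptions for this sketch).
def scaling_is_feasible(extra_servers: int,
                        server_cost_per_month: float,
                        missed_deadline_rate_now: float,
                        missed_deadline_rate_scaled: float,
                        revenue_per_satisfied_user: float,
                        users: int) -> bool:
    cost = extra_servers * server_cost_per_month
    # Revenue currently forgone because deadline misses drive users away.
    recovered = ((missed_deadline_rate_now - missed_deadline_rate_scaled)
                 * users * revenue_per_satisfied_user)
    return recovered >= cost

print(scaling_is_feasible(extra_servers=2, server_cost_per_month=800.0,
                          missed_deadline_rate_now=0.15,
                          missed_deadline_rate_scaled=0.05,
                          revenue_per_satisfied_user=3.0,
                          users=20_000))   # recovered 6000 vs cost 1600 -> True
```
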
... One such trend is the increasing deployment of edge computing techniques, wherein data processing occurs closer to the source of data generation, allowing a considerable reduction in latency and bandwidth usage [26]. This is particularly beneficial in industries like manufacturing, where real-time responses to sensor data are critical for operational efficiency. ...
Article
Full-text available
The current landscape of business operations is largely dominated by the integration of the Internet of Things (IoT) with Management Information Systems (MIS), which results in higher efficiency and innovative solutions for common and complex organisational challenges [1] [2]. Owing to this, the following research paper aims to explore the multifaceted benefits and challenges associated with this integration. Alongside this, it seeks to provide insights for not just future research within the field, but also for practitioners and scholars alike. The study begins with a comprehensive introduction to the relevance of IoT and MIS, highlighting their evolution and the complexities that may arise in their integration. After this, it moves further, delving into the technological foundations of IoT, and examining its core components, architecture, and emerging trends. Discussing the theoretical frameworks that surround information systems, it then seeks to illustrate both historical and contemporary integration issues and conducts a deep-seated analysis of current research on IoT- MIS Integration, extrapolating on the advantages, risks, and tangible benefits. The qualitative research is backed up by real-world case studies of well-known companies such as GE and Walmart, which present a strong insight into successful IoT-MIS integration and the challenges encountered during implementation. These examples reveal key takeaways, best practices, and strategic recommendations for businesses looking to harness the full potential of this integration. Lastly, strategic recommendations are presented, including best practices for effective integration and mitigation strategies for associated risks. The paper eventually concludes by outlining future research directions, identifying emerging trends in IoT and MIS integration, and formulating relevant research questions. With the aforementioned, the paper aims to contribute to the overall comprehension of IoT-MIS integration and presents a balanced view of its potential usability in modern-day content. Furthermore, the findings thus attained offer a valuable guide tool for firms willing to navigate the complexities of digital transformation, allowing them to emphasize the quintessential importance of strategic planning, investment in technology, and a commitment to robust security measures. Thus, this study serves as a roadmap for organizations which aspire to thrive in an increasingly interconnected and data-centric business environment.
... The heterogeneity of IoT protocols and solutions leads to diversity in traffic characteristics. Because of this, IoT network traffic analysis and modeling have drawn the interest of researchers [5], [6], [7], [8]. These studies are important for designing high performance IoT systems that can deliver the desired Quality of Service (QoS). ...
Article
The presence of diverse traffic types is well-established in IoT networks. Various probability distributions have been found to describe packets inter-arrival time contrasting with the familiar exponential distribution in traditional networks. These findings suggest the need to develop appropriate traffic models for performance analysis of IoT network systems. An essential component in IoT network is gateway as it provides connectivity to the core Internet. The IoT gateway also performs functions such as protocol translation and traffic aggregation. Therefore, efficient design of the IoT gateway is necessary for better network management. The paper presents a new analytical model, N-Gamma/M/1, for analyzing the performance of IoT gateway. The equivalence of N-Gamma/M/1 and Gamma/M/1 models is proved mathematically. Additionally, an in-depth performance evaluation of the IoT gateway under various arrival patterns is conducted through simulation. The numerical analysis of the proposed N-Gamma/M/1 model emphasizes the need for more buffers at the gateway when traffic from input devices has varying values of gamma distribution parameters. It is also noted that the IoT gateway experiences a longer mean queue length resulting in higher mean waiting time and packet loss when inter-arrival time distribution of packets follows generalized Pareto, Weibull and lognormal distributions with different parameter values. This makes the task of IoT network management challenging. Adaptive and intelligent resource allocation policies along with dynamic congestion control algorithms may provide a solution to minimize packet loss and ensure quality of service.
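
The effect of gamma-distributed inter-arrival times on the gateway queue can be explored with a short Monte-Carlo run using Lindley's recursion for successive waiting times; the parameter values below are illustrative, not taken from the paper.

```python
# Monte-Carlo sketch of a single-server gateway queue with gamma-distributed
# inter-arrival times and exponential service (Lindley's recursion).
# Parameter values are illustrative, not taken from the paper.
import numpy as np

def mean_wait(shape: float, scale: float, service_rate: float, n: int = 200_000,
              seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    inter_arrivals = rng.gamma(shape, scale, size=n)          # packet inter-arrival times
    services = rng.exponential(1.0 / service_rate, size=n)    # exponential service times
    w, total = 0.0, 0.0
    for a, s in zip(inter_arrivals, services):
        w = max(0.0, w + s - a)                               # Lindley: wait of the next packet
        total += w
    return total / n

# Same mean inter-arrival time (1.0) but different burstiness via the shape parameter.
for shape in (0.5, 1.0, 4.0):
    print(f"shape={shape}: mean wait {mean_wait(shape, 1.0 / shape, service_rate=1.25):.2f}")
```
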
Article
Full-text available
Edge computing has emerged as a transformative paradigm in the realm of robotic wireless sensor networks (WSNs), particularly in applications requiring real-time data processing and low-latency responses. By facilitating local computation at or near the data source, edge computing addresses critical challenges such as bandwidth limitations, latency issues, and data privacy concerns inherent in traditional cloud-based processing models. This paper explores the integration of edge computing into robotic WSNs, examining its impact on system performance, scalability, and security. Through a comprehensive review of current literature and case studies, we analyze the benefits and challenges associated with this integration, providing insights into future research directions and potential applications in various domains.
Article
In the rapidly evolving networking and communication technology era, the emergence of novel edge computing paradigms helps reduce latency and improve communication efficiency. The advancements of edge computing bring data processing closer to its source, reducing communication distance. Moreover, integrating Software‐Defined Networking (SDN) in edge computing enhances network management by decoupling the control plane from the data plane, enabling more flexible and efficient resource allocation in distributed environments. However, scheduling, resource allocation, and load balancing are significant obstacles to enhancing the edge computing resources' performance. Besides, efficient resource allocation and load balancing help to use all resources and optimize the system's performance effectively. To address these issues, this paper proposed an Average‐Based Resource Allocation and Load Balancing (ABRL) algorithm for task allocation and load balancing, which aims to minimize the task's completion time and enhance the system's resource utilization. A three‐layer SDN‐based edge architecture is designed to implement the algorithm that improves the system's performance. The simulation studies have been conducted using the OpenDaylight (ODL) controller and implemented in Python. Experimental results demonstrate that the proposed strategy optimizes makespan, average resource utilization, and level of load balancing under consideration and exhibits better performance than the existing state‐of‐the‐art techniques.
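
A toy version of an average-based allocation rule in the spirit of this description (not the ABRL implementation) assigns each task to the currently least-loaded node, keeping every node close to the average load:

```python
# Toy average-based allocation sketch (not the ABRL implementation): each task is
# assigned to the node whose current load is lowest, keeping every node near the
# average load and thereby reducing makespan.
def allocate(tasks: dict[str, float], nodes: list[str]) -> dict[str, list[str]]:
    load = {n: 0.0 for n in nodes}
    placement = {n: [] for n in nodes}
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):  # longest tasks first
        target = min(load, key=load.get)        # current least-loaded node
        placement[target].append(task)
        load[target] += cost
    avg = sum(load.values()) / len(nodes)
    print({n: round(load[n], 1) for n in nodes}, "average:", round(avg, 1))
    return placement

print(allocate({"t1": 5, "t2": 3, "t3": 3, "t4": 2, "t5": 1}, ["n1", "n2", "n3"]))
```
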
Article
Full-text available
The Internet of Things (IoT) has enhanced people’s quality of life across various fields, including healthcare, agriculture, automotive, and education. However, ensuring the security of IoT systems is crucial due to their diverse operating methods and the heterogeneous nature of IoT environments. To effectively address security vulnerabilities in IoT applications, developers and companies must engage in discussions about existing and future solutions for IoT security risks. This paper comprehensively analyses the security concerns, limitations, and requirements associated with IoT. It utilizes a taxonomy to define security requirements for each layer of the three-layer IoT architecture. By categorizing IoT security issues and solutions based on a layered approach, readers can implement best practices and prevent existing security threats at each layer. It is essential to recognize that IoT security vulnerabilities can lead to breaches in data privacy. Consequently, blockchain technology has the potential to address these challenges. This review examines the security issues that compromise IoT systems and explores blockchain-based solutions, consensus protocols, and prospective areas for future research.
Article
Full-text available
Edge computing (EC) is a distributed computing approach to processing data at the network edge, either by the device or a local server, instead of centralized data centers or the cloud. EC proximity to the data source can provide faster insights, response time, and bandwidth utilization. However, the distributed architecture of EC makes it vulnerable to data security breaches and diverse attack vectors. The edge paradigm has limited availability of resources like memory and battery power. Also, the heterogeneous nature of the hardware, diverse communication protocols, and difficulty in timely updating security patches exist. A significant number of researchers have presented countermeasures for the detection and mitigation of data security threats in an EC paradigm. However, an approach that differs from traditional data security and privacy-preserving mechanisms already used in cloud computing is required. Artificial Intelligence (AI) greatly improves EC security through advanced threat detection, automated responses, and optimized resource management. When combined with Physical Unclonable Functions (PUFs), AI further strengthens data security by leveraging PUFs’ unique and unclonable attributes alongside AI’s adaptive and efficient management features. This paper investigates various edge security strategies and cutting-edge solutions. It presents a comparison between existing strategies, highlighting their benefits and limitations. Additionally, the paper offers a detailed discussion of EC security threats, including their characteristics and the classification of different attack types. The paper also provides an overview of the security and privacy needs of the EC, detailing the technological methods employed to address threats. Its goal is to assist future researchers in pinpointing potential research opportunities.
Article
Full-text available
Latency-sensitive applications such as autonomous vehicles, augmented reality, and real-time analytics require near-instantaneous data processing and decision-making. Cloud computing, while powerful and scalable, often suffers from high latency due to the physical distance between data centers and end devices. Edge computing addresses this limitation by bringing computation closer to the data source, thereby reducing response times. This paper presents a comparative analysis of edge and cloud computing paradigms, focusing on their performance for latency-sensitive applications. The study explores architectural differences, latency benchmarks, and cost-performance trade-offs, supplemented by a literature review of key studies.
Article
The increasing demand for real-time data processing in applications such as autonomous vehicles, industrial automation, and smart cities has intensified the focus on edge computing. Edge computing, which processes data closer to the source, addresses the limitations of centralized cloud computing by reducing latency and improving bandwidth efficiency. This paper explores the critical challenges associated with implementing real-time data processing at the edge, including resource constraints, network reliability, data privacy, and scalability. We provide an overview of recent technological advancements and architectural frameworks that address these challenges. In addition, this study evaluates edge computing solutions such as lightweight machine learning algorithms, efficient data compression techniques, and decentralized security measures. By analyzing current strategies and their efficacy in real-world applications, this paper contributes to understanding how edge computing can be optimized for diverse, latency-sensitive environments. Finally, we discuss the future potential and open research areas in edge computing for real-time applications.
Article
Full-text available
Modern business operations now draw their computational resources from cloud-based computing technology. Organizations migrating their operations to the cloud are shaping the future of this technology through trends like edge computing, serverless computing, and artificial intelligence-powered cloud services. Progress in cloud technology faces ongoing obstacles related to security weaknesses and regulatory mandates that impede performance effectiveness, and innovations need to be developed to address these issues. The research evaluates contemporary cloud computing developments and the innovative patterns affecting growth, adoption, and upcoming prospects. The study conducts an exhaustive assessment demonstrating how quantum computing, multi-cloud approaches, and cloud technology sustainability relate. The argument is supported by tables and figures illustrating cloud computing market patterns and the evolution of adoption. Cloud solutions receive practical validation through specific enterprises, which shows how these solutions upgrade operational efficiency in current business operations. The future direction of cloud technology development will depend on how well we perceive its current trends and overcome the associated challenges.
Article
Nowadays, federated learning (FL) has been widely adopted to train deep neural networks (DNNs) among massive numbers of devices without revealing their local data in edge computing (EC). To relieve the communication bottleneck of the central server in FL, hierarchical federated learning (HFL), which leverages edge servers as intermediaries to perform model aggregation among devices in proximity, has come into being. Nevertheless, existing HFL systems may not perform training effectively due to bandwidth constraints and non-IID issues on devices. To conquer these challenges, we introduce an HFL system with device-edge assignment and layer selection, namely Heal. Specifically, Heal organizes all the devices into a hierarchical structure (i.e., device-edge assignment) and enables each device to forward only a sub-model with several valuable layers for aggregation (i.e., layer selection). This processing procedure is called layer-wise aggregation. To further save communication resources and improve convergence performance, we then design an iteration-based algorithm to optimize the development of our layer-wise aggregation strategy by considering the data distribution as well as resource constraints among devices. Extensive experiments on both a physical platform and a simulated environment show that Heal accelerates DNN training by about 1.4-12.5×, and reduces network traffic consumption by about 31.9-64.1%, compared with existing HFL systems.
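The layer-wise aggregation idea can be pictured with a short sketch: each device uploads only the layers it selected, and the edge server averages each layer over the devices that actually sent it, keeping the previous value for the rest. This is a simplified stand-in with illustrative function and variable names; Heal's device-edge assignment and layer-selection optimization are not modeled here.

```python
import numpy as np

def layerwise_aggregate(global_model, device_updates):
    """Aggregate only the layers each device chose to upload.

    global_model:   dict layer_name -> np.ndarray (current edge-level model)
    device_updates: list of (selected_layers: dict layer_name -> np.ndarray,
                             num_samples: int), one entry per device
    Layers that no device uploaded keep their previous global values.
    """
    new_model = {}
    for name, weights in global_model.items():
        contrib, total = np.zeros_like(weights), 0
        for layers, n in device_updates:
            if name in layers:                 # device forwarded this layer
                contrib += n * layers[name]
                total += n
        new_model[name] = contrib / total if total > 0 else weights.copy()
    return new_model

# Toy usage: two devices, each uploading a different subset of two layers.
global_model = {"conv1": np.zeros((3, 3)), "fc": np.zeros((4,))}
dev_a = ({"conv1": np.ones((3, 3))}, 100)                        # sends conv1 only
dev_b = ({"conv1": 2 * np.ones((3, 3)), "fc": np.ones((4,))}, 50)
print(layerwise_aggregate(global_model, [dev_a, dev_b])["conv1"][0, 0])  # ~1.33
```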
Article
Full-text available
This study presents the News Accessibility Platform (NAP), an AI-driven solution designed to improve news access for people with disabilities, particularly those with visual and auditory impairments. Recognizing the unique challenges these groups face in obtaining timely, accessible news, NAP utilizes artificial intelligence to convert multimedia content into formats that cater to diverse needs, including text-to-speech for the visually impaired and sign language interpretations for the deaf community. By leveraging adaptive technologies, NAP provides an inclusive news experience, enabling real-time access to global information. This case study explores the platform's design, development, and implementation, assessing its impact on users' engagement and access. Findings indicate that NAP significantly enhances information accessibility, empowering people with disabilities to stay informed and engaged with current events. This study contributes to discussions on inclusive media practices and the role of AI in facilitating equitable information access.
Article
Full-text available
Analog conductance switching characteristics of memristor devices have been studied for use as the constituent elements of the synaptic weight matrix in neural networks, in connection with the system design of hardware-level parallel neuromorphic computing architectures for artificial intelligence applications. It is therefore important to systematically investigate the specific memristor characteristics required to emulate the many synaptic weight elements linking the constituent layers of a neural network. Here, the learning capability of the analog conductance states of a memristor device, for perceptron learning of an unstructured, complex dataset in a multilayer neural network, is analyzed in terms of the number of analog states, nonlinearity, and conductance error. It is found that approximately 50 analog states are required, with a conductance deviation of up to ≈5% of the original value per state and a nonlinearity of ≈0.142 under a constant programming pulse scheme. With memristor characteristics sufficient to mimic synaptic weights for learning and inference on the Fashion-MNIST dataset, a classification accuracy of ≈84.36% is achieved, a loss of ≈16.8% relative to the original level. Owing to this investigation, the applicability of novel memristor devices can be conveniently examined for use as synaptic weights in multilayer neural networks.
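A rough way to reproduce this kind of requirement analysis in software is to quantize ideal weights onto a limited number of analog states and perturb each state by a relative conductance error, as sketched below. The function name, the linear level spacing, and the Gaussian error model are assumptions made for illustration; the paper's nonlinearity model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def memristor_quantize(w, n_states=50, deviation=0.05, w_min=-1.0, w_max=1.0):
    """Map ideal weights onto a limited set of analog conductance states.

    n_states:  number of programmable analog states (~50 in the abstract)
    deviation: relative per-state conductance error (~5% in the abstract)
    """
    levels = np.linspace(w_min, w_max, n_states)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)      # nearest state
    quantized = levels[idx]
    noise = rng.normal(0.0, deviation * np.abs(quantized))   # per-state error
    return quantized + noise

weights = rng.uniform(-1, 1, size=(4, 4))
print(memristor_quantize(weights))
```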
Article
With the development of artificial intelligence, there is an increasing demand for edge computing visual sensors, particularly those integrated with visual object detection capability. However, the deployment of object detection models in edge computing environments faces technical challenges in many aspects, such as model size, inference speed, accuracy, and deployment optimization. This paper systematically summarizes knowledge related to deploying object detection at the edge, including mainstream models, deployment optimization methods, edge computing deployment frameworks, and deployment devices, based on practical technical experience. The strengths and weaknesses of the various models and their applicability are analyzed, and the optimization and deployment of models on edge devices are explored in depth, with a focus on adapting them to the specific characteristics of edge computing environments. This paper is intended to provide a valuable reference and insight for researchers and developers in the field; deployment code is provided for reference at the repository https://github.com/shouxieai/tensorRT Pro.
Article
Meeting the deterministic demands of industrial tasks can be quite challenging due to the diversity of devices and the unclear relationship between tasks and platforms in industrial edge computing scenarios. To tackle this issue, this study introduces an entropy-weighted scheduling method grounded in resource quantification. Firstly, we scrutinized the affinity challenge when tasks operate across different platforms and broadened the scope of scheduling evaluation criteria within existing real-time systems. This expansion was accomplished by examining the alignment between various task attributes and platform characteristics through resource quantification. Subsequently, we employed the entropy weight method to handle the information entropy of all scheduling evaluation criteria and calculated the weighted sums to allocate the optimal scheduling device for each task. Ultimately, the entropy-weighted scheduling algorithm, which relies on resource quantification, was formulated to assess the algorithm’s scheduling performance under various parameter configurations. The experimental analysis indicated that the scheduling method based on resource quantification could effectively optimize the resource demand relationship between tasks and platforms, and the scheduling success rate of the proposed algorithm was 5.1%, 7.7%, and 34.5% higher than those of MTT-SS, D-Quantify, and RRA algorithms, respectively.
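The entropy-weight step described above can be sketched in a few lines: normalize each evaluation criterion across candidate platforms, compute its information entropy, convert that into a weight, and pick the platform with the best weighted sum for the task. The criteria matrix, platform names, and the assumption that larger scores are better are illustrative and not taken from the paper.

```python
import numpy as np

def entropy_weights(criteria):
    """Entropy weights for a (platforms x criteria) score matrix.

    Criteria whose values differ more across platforms (lower entropy)
    receive higher weight, as in the standard entropy weight method.
    """
    p = criteria / criteria.sum(axis=0, keepdims=True)       # normalize per criterion
    k = 1.0 / np.log(criteria.shape[0])
    logp = np.log(np.where(p > 0, p, 1.0))                   # log(1)=0, so zero terms vanish
    entropy = -k * np.sum(p * logp, axis=0)
    diversity = 1.0 - entropy
    return diversity / diversity.sum()

def assign_task(criteria, platform_names):
    """Pick the platform with the highest entropy-weighted score for one task."""
    weights = entropy_weights(criteria)
    scores = criteria @ weights
    return platform_names[int(np.argmax(scores))], scores

# Toy example: 3 candidate platforms scored on CPU headroom, memory, link quality.
criteria = np.array([[0.9, 0.4, 0.7],
                     [0.5, 0.8, 0.6],
                     [0.7, 0.6, 0.9]])
print(assign_task(criteria, ["edge-A", "edge-B", "edge-C"]))
```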
Article
Full-text available
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide-range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also discuss a set of issues, challenges and future research directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
Conference Paper
Full-text available
We investigate the design and implementation of Where's The Bear (WTB), an end-to-end, distributed IoT system for wildlife monitoring. WTB implements a multi-tier (cloud, edge, sensing) system that integrates recent advances in machine learning based image processing to automatically classify animals in images from remote, motion-triggered camera traps. We use non-local, resource-rich, public/private cloud systems to train the machine learning models, and "in-the-field," resource-constrained edge systems to perform classification near the IoT sensing devices (cameras). We deploy WTB at the UCSB Sedgwick Reserve, a 6,000-acre site for environmental research, and use it to aggregate, manage, and analyze over 1.12M images. WTB integrates Google TensorFlow and OpenCV applications to perform automatic image classification and tagging. To avoid transferring large numbers of training images for TensorFlow over the low-bandwidth network linking Sedgwick to public clouds, we devise a technique that uses stock Google Images to construct a synthetic training set using only a small number of empty, background images from Sedgwick. Our system is able to accurately identify bears, deer, coyotes, and empty images and significantly reduces the time and bandwidth requirements for image transfer, as well as end-user analysis time, since WTB automatically filters the images on-site.
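The synthetic-training-set trick can be approximated by compositing stock-image cutouts onto empty background frames, roughly as in the sketch below. The sketch generates stand-in images in memory so it runs on its own; the image sizes, names, and the plain paste operation are assumptions rather than WTB's actual pipeline.

```python
from PIL import Image
import random

def composite(background: Image.Image, cutout: Image.Image) -> Image.Image:
    """Paste an animal cutout onto an empty camera-trap background at a random spot."""
    canvas = background.copy()
    x = random.randint(0, background.width - cutout.width)
    y = random.randint(0, background.height - cutout.height)
    canvas.paste(cutout, (x, y), cutout if cutout.mode == "RGBA" else None)
    return canvas

# Stand-ins for an empty background frame and a stock-image cutout.
background = Image.new("RGB", (640, 480), color=(90, 110, 70))
cutout = Image.new("RGBA", (120, 80), color=(60, 40, 20, 255))
composite(background, cutout).save("synthetic_sample.jpg")
```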
Article
Full-text available
Cyber-physical systems (CPS) help create new services and applications by revolutionising our world in different fields through their tight interactions and automated decisions. This is especially true with the ongoing increase in the number of physical things (sensors, actuators, smartphones, tablets, and so on) along with the explosive increase in the usage of online networking services and applications. Future fifth generation (5G) cellular networks will facilitate the enabling of CPS communications over current network infrastructure through different technologies such as device-to-device (D2D) communications. In this study, the authors discuss the main challenges that cellular providers will face as a massive number of CPS devices attempt to access the cellular spectrum. A case study is presented on how to ease the spectrum access of these devices through D2D spatial spectrum sensing. Furthermore, the authors discuss protecting these D2D links from eavesdropping, since security is becoming a critical aspect in the cyber-physical space, especially with the large amount of traffic that is constantly flowing through the network.
Article
Full-text available
The Internet of Things (IoT) can support collaboration and communication between objects automatically. However, with the increasing number of involved devices, IoT systems may consume substantial amounts of energy. Thus, the relevant energy efficiency issues have recently been attracting much attention from both academia and industry. In this article we adopt an energy-efficient architecture for the Industrial IoT (IIoT), which consists of a sense entities domain, RESTful service hosted networks, a cloud server, and user applications. Under this architecture, we focus on the sense entities domain, where huge amounts of energy are consumed by a tremendous number of nodes. The proposed framework includes three layers: the sense layer, the gateway layer, and the control layer. This hierarchical framework balances the traffic load and enables a longer lifetime of the whole system. Based on this deployment, a sleep scheduling and wake-up protocol is designed, supporting the prediction of sleep intervals. The state transitions allow the entire system's resources to be used in an energy-efficient way. Simulation results demonstrate the significant advantages of the proposed architecture in resource utilization and energy consumption.
Article
Full-text available
Cloud computing has demonstrated itself to be a scalable and cost-efficient solution for many real-world applications. However, its modus operandi is not ideally suited to resource-constrained environments that are characterized by limited network bandwidth and high latencies. With the increasing proliferation and sophistication of edge devices, the idea of fog computing proposes to offload some of the computation to the edge. To this end, micro-clouds---which are modular and portable assemblies of small single-board computers---have started to gain attention as infrastructures to support fog computing by offering isolated resource provisioning at the edge in a cost-effective way. We investigate the feasibility and readiness of micro-clouds for delivering the vision of fog computing. Through a number of experiments, we showcase the potential of micro-clouds formed by collections of Raspberry Pi computers to host a range of fog-related applications, particularly for locations where there is limited network bandwidths and long latencies.
Article
Full-text available
Pokémon Go has received unprecedented media coverage for a location-based game that uses augmented reality techniques. The game has been commonly associated with greater access to public spaces, increasing the number of people out on the streets, and generally improving health, social, and security indices. However, the true impact of Pokémon Go on people's mobility patterns in a city is still largely unknown. In this paper we perform a natural experiment using data from mobile networks to evaluate the effect of Pokémon Go on the pulse of a big city: Santiago de Chile. We found a significant effect of Pokémon Go on the floating population of Santiago: up to 13.8% more people being outside at certain times, even if they do not seem to go out of their usual way. These effects at specific times were found by performing several regressions using count models over snapshots of the cell phone network. The effect is significant after controlling for land use, daily patterns, and points of interest in the city. In particular, we found that, on business days, there are more people on the street at commuting times, meaning that people did not change their daily routines but slightly adapted them to play the game. Conversely, on Saturday and Sunday nights, people indeed went out to play at places near their homes. Even if the statistical effects of the game do not reflect the massive reach portrayed by the media, it still allowed the streets to become a new place for people to spend time in. This is important, because results like these are expected to inform long-term infrastructure investments by city officials, jointly with public policies aimed at, for example, stimulating pedestrian traffic or suggesting alternative routes. Our work supports the notion that location-based games like Pokémon Go have benefits for life in the city.
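The count-model regressions mentioned above can be illustrated with a toy Poisson fit over synthetic snapshots: an indicator for whether the game was active and a commuting-hour indicator predict how many people are observed in an area. The covariates, coefficients, and data here are invented purely to show the modeling step; they are not the paper's dataset or specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
game_active = rng.integers(0, 2, n)        # 1 if the game was available at that snapshot
commuting_hour = rng.integers(0, 2, n)     # 1 during commuting hours
lam = np.exp(3.0 + 0.13 * game_active + 0.4 * commuting_hour)
people = rng.poisson(lam)                  # synthetic floating-population counts

X = sm.add_constant(np.column_stack([game_active, commuting_hour]))
model = sm.GLM(people, X, family=sm.families.Poisson()).fit()
print(model.summary())
```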
Conference Paper
Full-text available
Designing multiplayer virtual reality games is a challenging task, since immersion is easily destroyed by real-world influences. However, providing fun and social virtual reality experiences is essential for establishing virtual reality gaming as a convincing new medium. We propose a design approach to integrate social interactions into the game design while retaining immersion, and present design methods to implement this approach. Furthermore, we describe the game design of a collaborative local multi-player/platform virtual reality game to demonstrate the application and effectiveness of our methods.
Conference Paper
Full-text available
In this paper, we present a novel hybrid approach, where Fog computing features are used to support dynamic cloud cooperation among mobile IoT devices. In particular, we introduce the so-called Mobile-IoT-Federation-as-a-Service (MI-FaaS) paradigm, according to which edge nodes operate as orchestrators to manage federations among public/private IoT clouds that enable integrated IoT applications. To this aim, we propose an algorithm to foster the federation of local IoT clouds based on a utility function that accounts for the number of executed tasks, which is to be maximized. The presented performance evaluation validates the enhancements achievable with the proposed solutions in terms of the number of task requests successfully executed.
Article
Full-text available
Big data strongly demands a network infrastructure with the capability to efficiently collect, process, cache, share, and deliver data, instead of performing simple transmissions. Such network designs must meet requirements of energy efficiency, availability, high performance, and data-aware intelligence. To meet these requirements, we adopt the information-centric networking (ICN) approach, where data are retrieved through names and in-network caching is utilized. However, among the typical existing ICN architectures, content centric networking (CCN) cannot efficiently utilize caches for data sharing because of its on-path caching strategy, while the network of information (NetInf) suffers resolution latency for data retrievals. To design an efficient and effective ICN architecture for big data sharing, we combine the strong points of CCN and NetInf, where information islands (IOIs) and a management plane are utilized for direct data retrieval and global data discovery, respectively. We provide a reference architecture and propose an aggregatable name-based routing (ANBR) scheme, which naturally enables consumers to retrieve the closest copy of information. In this network, each piece of data can be cached at most once per IOI, which greatly improves the efficiency of cache usage. Consumers first try to retrieve the data in the local IOI, and then try to retrieve it globally from the closest IOI holding a copy of the data, if necessary. We investigate the impact of the key factor, IOI size, on the energy consumption of ANBR. It shows that energy consumption first decreases and then increases as the IOI size increases, and an optimized IOI size can be found for deployment. Furthermore, we study the relation between the optimized IOI size and the average number of retrievals of the data. The result shows that the optimized IOI size increases as the average number of retrievals increases.
Article
Full-text available
High-data-rate sensors, such as video cameras, are becoming ubiquitous in the Internet of Things. This article describes GigaSight, an Internet-scale repository of crowd-sourced video content, with strong enforcement of privacy preferences and access controls. The GigaSight architecture is a federated system of VM-based cloudlets that perform video analytics at the edge of the Internet, thus reducing the demand for ingress bandwidth into the cloud. Denaturing, which is an owner-specific reduction in fidelity of video content to preserve privacy, is one form of analytics on cloudlets. Content-based indexing for search is another form of cloudlet-based analytics. This article is part of a special issue on smart spaces.
Conference Paper
Full-text available
Mobile micro-cloud is an emerging technology in distributed computing, which is aimed at providing seamless computing/data access to the edge of the network when a centralized service may suffer from poor connectivity and long latency. Different from the traditional cloud, a mobile micro-cloud is smaller and deployed closer to users, typically attached to a cellular basestation or wireless network access point. Due to the relatively small coverage area of each basestation or access point, when a user moves across areas covered by different basestations or access points which are attached to different micro-clouds, issues of service performance and service migration become important. In this paper, we consider such migration issues. We model the general problem as a Markov decision process (MDP), and show that, in the special case where the mobile user follows a one-dimensional asymmetric random walk mobility model, the optimal policy for service migration is a threshold policy. We obtain the analytical solution for the cost resulting from arbitrary thresholds, and then propose an algorithm for finding the optimal thresholds. The proposed algorithm is more efficient than standard mechanisms for solving MDPs.
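A simulation-based stand-in for the threshold-policy analysis is sketched below: the user performs a 1-D asymmetric random walk, the service is migrated whenever its distance from the user reaches a threshold, and the threshold with the lowest average cost per step is picked numerically. Costs, mobility parameters, and names are illustrative; the paper instead derives the cost analytically and optimizes the thresholds from that expression.

```python
import random

def avg_cost(threshold, p_right=0.6, migration_cost=5.0, unit_tx_cost=1.0,
             steps=200_000, seed=0):
    """Monte-Carlo estimate of long-run cost per step for a distance-threshold
    migration policy under a 1-D asymmetric random-walk mobility model."""
    rng = random.Random(seed)
    distance, total = 0, 0.0          # hops between the user and its micro-cloud service
    for _ in range(steps):
        distance += 1 if rng.random() < p_right else -1
        distance = abs(distance)      # only the separation matters in this sketch
        if distance >= threshold:     # migrate the service next to the user
            total += migration_cost
            distance = 0
        total += unit_tx_cost * distance
    return total / steps

best = min(range(1, 11), key=avg_cost)
print("best threshold:", best, "avg cost/step:", round(avg_cost(best), 3))
```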
Conference Paper
Full-text available
Edge services become increasingly important as the Internet transforms into an Internet of Things (IoT). Edge services require bounded latency, bandwidth reduction between the edge and the core, service resiliency with graceful degradation, and access to resources visible only inside the NATed and secured edge networks. While the data center based cloud excels at providing general purpose computation/storage at scale, it is not suitable for edge services. We present a new model for cloud computing, which we call the Edge Cloud, that addresses edge computing specific issues by augmenting the traditional data center cloud model with service nodes placed at the network edges. We describe the architecture of the Edge Cloud and its implementation as an overlay hybrid cloud using the industry standard OpenStack cloud management framework. We demonstrate the advantages garnered by two new classes of applications enabled by the Edge Cloud - a highly accurate indoor localization that saves on latency, and a scalable and resilient video monitoring that saves on bandwidth.
Conference Paper
Full-text available
This paper introduces a new paradigm for service-oriented networking being developed in the FUSION project. Despite recent proposals in the area of information-centric networking, a similar treatment of services - where networked software functions, rather than content, are dynamically deployed, replicated and invoked - has received little attention from the network research community to date. Our approach provides the mechanisms required to deploy a replicated service instance in the network and to route client requests to the closest instance in an efficient manner. We address the main issues that such a paradigm raises, including load balancing, resource registration, domain monitoring and inter-domain orchestration. We also present preliminary evaluation results of current work.
Article
Full-text available
We describe the architecture and prototype implementation of an assistive system based on Google Glass devices for users in cognitive decline. It combines the first-person image capture and sensing capabilities of Glass with remote processing to perform real-time scene interpretation. The system architecture is multi-tiered. It offers tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as limited battery capacity and limited processing capability of wearable devices. The system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers.
Conference Paper
Full-text available
Fog computing is expected to be an enabler of mobile cloud computing, which extends the cloud computing paradigm to the edge of the network. In the mobile cloud, not only central data centers but also pervasive mobile devices share their heterogeneous resources (e.g., CPUs, bandwidth, content) and support services. The mobile cloud based on such resource sharing is expected to be a powerful platform for mobile cloud applications and services. In this paper, we propose an architecture and mathematical framework for heterogeneous resource sharing based on the key idea of service-oriented utility functions. Since heterogeneous resources are often measured/quantified in disparate scales/units (e.g., power, bandwidth, latency), we present a unified framework where all these quantities are equivalently mapped to "time" resources. We formulate optimization problems for maximizing (i) the sum of the utility functions, and (ii) the product of the utility functions, and solve them via convex optimization approaches. Our numerical results show that service-oriented heterogeneous resource sharing reduces service latencies effectively and achieves high energy efficiency, making it attractive for use in the mobile cloud.
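The sum-of-utilities formulation can be sketched as a small convex program: a shared time budget is split across services so that the sum of concave service utilities is maximized. The log utility form, the gains, and the budget are assumptions made only for illustration; the paper's actual utility functions and the product-of-utilities variant are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Marginal value of the "time" resource granted to each of three services (illustrative).
gains = np.array([2.0, 1.0, 4.0])
T = 10.0                          # total shareable time budget at the helper device

def neg_sum_utility(t):
    """Negative of the sum of concave (log) service utilities."""
    return -np.sum(np.log(1.0 + gains * t))

res = minimize(neg_sum_utility,
               x0=np.full(3, T / 3),
               bounds=[(0.0, T)] * 3,
               constraints=[{"type": "eq", "fun": lambda t: np.sum(t) - T}],
               method="SLSQP")
print("time shares:", np.round(res.x, 3), "total utility:", round(-res.fun, 3))
```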
Article
Full-text available
We present the first open source cloud gaming system, called GamingAnywhere. In addition to its openness, we have designed GamingAnywhere for high extensibility, portability, and reconfigurability. We implemented it on Windows, Linux, OS X, and Android. We conducted extensive experiments to evaluate its performance. Our experimental results indicate that GamingAnywhere is efficient, scalable, adaptable to network conditions, and achieves high responsiveness and streaming quality. GamingAnywhere can be employed by researchers, game developers, service providers, and end users for setting up cloud gaming testbeds, which, we believe, will stimulate more research into innovations for cloud gaming systems and applications.
Chapter
Full-text available
Internet of Things (IoT) brings more than an explosive proliferation of endpoints. It is disruptive in several ways. In this chapter we examine those disruptions, and propose a hierarchical distributed architecture that extends from the edge of the network to the core, nicknamed Fog Computing. In particular, we pay attention to a new dimension that IoT adds to Big Data and Analytics: a massively distributed number of sources at the edge.
Article
Full-text available
Mobile systems have limited resources, such as battery life, network bandwidth, storage capacity, and processor performance. These restrictions may be alleviated by computation offloading: sending heavy computation to resourceful servers and receiving the results from these servers. Many issues related to offloading have been investigated in the past decade. This survey paper provides an overview of the background, techniques, systems, and research areas for offloading computation. We also describe directions for future research.
Article
Full-text available
The information-centric networking (ICN) concept is a significant common approach of several future Internet research activities. The approach leverages in-network caching, multiparty communication through replication, and interaction models decoupling senders and receivers. The goal is to provide a network infrastructure service that is better suited to today's use (in particular, content distribution and mobility) and more resilient to disruptions and failures. The ICN approach is being explored by a number of research projects. We compare and discuss design choices and features of proposed ICN architectures, focusing on the following main components: named data objects, naming and security, API, routing and transport, and caching. We also discuss the advantages of the ICN approach in general.
Article
Full-text available
This paper analyzes a worldwide GPS treasure hunt game that is played in over 200 countries with game pieces that travel the globe and are tracked online. The game players hide geocache containers in public areas, marking them with GPS coordinates. Players use their mobile devices (from GPS receivers to iPhones) to track down the container, sign the log, and leave tradable and trackable items in the cache. This mobile game offers the perfect example of the blending of material and virtual interfaces, notions of presence and absence, visible and invisible, and utilitarian and playful purposes of everyday objects. Embodied subjectivity in Geocaching is gained through a correspondence between the user's location gained through GPS coordinates, the finding of a material object hidden in everyday space, and the signing of the logbook in the container. The act of physically signing the logbook as a way to prove embodied "presence" in material space is highly dependent on the screen space of the GPS receiver. Thus, I argue for a cohesive sense of embodiment gained through a "proprioceptive-semiotic" convening of bodies, technologies, and socially constructed spaces.
Article
Full-text available
Mobile computing continuously evolves through the sustained effort of many researchers. It seamlessly augments users' cognitive abilities via compute-intensive capabilities such as speech recognition, natural language processing, etc. By thus empowering mobile users, we could transform many areas of human activity. This article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them. In this architecture, a mobile user exploits virtual machine (VM) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless LAN; the mobile device typically functions as a thin client with respect to the service. A cloudlet is a trusted, resource-rich computer or cluster of computers that's well-connected to the Internet and available for use by nearby mobile devices. Our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based, resource-rich, mobile computing. Crisp interactive response, which is essential for seamless augmentation of human cognition, is easily achieved in this architecture because of the cloudlet's physical proximity and one-hop network latency. Using a cloudlet also simplifies the challenge of meeting the peak bandwidth demand of multiple users interactively generating and receiving media such as high-definition video and high-resolution images. Rapid customization of infrastructure for diverse applications emerges as a critical requirement, and our results from a proof-of-concept prototype suggest that VM technology can indeed help meet this requirement.
Article
Full-text available
The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.
Conference Paper
Full-text available
Cloud Gaming is a new kind of service, which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment that emulates this new type of service, define tests for users to assess the QoE, and derive Key Influence Factors (KFI) and the influences of content and perception from our results.
Article
Full-text available
We have developed a prototype virtual reality-based balance training system using a single inertial orientation sensor attached to the upper surface of a wobble board. This input device has been interfaced with Neverball, an open source computer game, to create the balance training platform. Users can exercise with the system by standing on the wobble board and tilting it in different directions to control an on-screen environment. We have also developed a customized instruction manual to use when setting up the system. To evaluate the usability of our prototype system, we undertook a user evaluation study with twelve healthy novice participants. Participants were required to assemble the system using an instruction manual and then perform balance exercises with the system. Following this period of exercise, VRUSE, a usability evaluation questionnaire, was completed by participants. Results indicated a high level of usability in all categories evaluated.
Article
Advances in cloud computing and GPU virtualization are allowing the game industry to move into a cloud gaming era. In this paper, we consider multiplayer cloud gaming (MCG), which is the natural integration of multiplayer online gaming and cloud gaming paradigms. With MCG, a game server and a set of rendering servers for the players need to be located and launched in the clouds for each game session. We formulate an MCG server allocation problem with the objective of minimizing the total server rental and bandwidth cost charged by the cloud to support an MCG session. The MCG server allocation problem is hard to solve optimally. We propose several efficient heuristics to address the problem and carry out theoretical analysis for the proposed hill-climbing algorithm. We conduct extensive experiments using real Internet latency and cloud pricing datasets to evaluate the effectiveness of our proposed algorithms as well as several alternatives. Experimental results show that our best algorithm can achieve near-optimal cost under real-time latency constraints.
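The server-allocation problem can be pictured with a deliberately simplified sketch: for each candidate game-server data center, assign every player the cheapest rendering data center that still meets a response-delay budget, and keep the cheapest overall placement. The random costs, latencies, and the enumeration over game-server locations are illustrative stand-ins for the paper's cost model and hill-climbing heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dc, n_players = 6, 8
rent = rng.uniform(1.0, 4.0, n_dc)                    # per-server rental cost in each DC
lat_dc = rng.uniform(5, 40, (n_dc, n_dc))             # DC-to-DC latency (ms)
np.fill_diagonal(lat_dc, 0.0)
lat_player = rng.uniform(5, 60, (n_players, n_dc))    # player-to-DC latency (ms)
DEADLINE = 100.0                                      # end-to-end response budget (ms)

def session_cost(game_dc):
    """Cheapest feasible rendering placement for a fixed game-server location."""
    total = rent[game_dc]
    for p in range(n_players):
        feasible = [d for d in range(n_dc)
                    if lat_player[p, d] + lat_dc[d, game_dc] <= DEADLINE]
        if not feasible:                              # some player cannot be served in time
            return np.inf
        total += min(rent[d] for d in feasible)
    return total

costs = [session_cost(d) for d in range(n_dc)]
best = int(np.argmin(costs))
print("game server DC:", best, "session cost:", round(costs[best], 2))
```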
Article
Multi-access Edge Computing (MEC) is an emerging ecosystem, which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the Radio Access Network (RAN). MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing more efficiently the mobile backhaul and core networks. This paper introduces a survey on MEC and focuses on the fundamental key enabling technologies. It elaborates MEC orchestration considering both individual services and a network of MEC platforms supporting mobility, bringing light into the different orchestration deployment options. In addition, this paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers and third parties. Finally, this paper overviews the current standardization activities and elaborates further on open research challenges.
Article
The technological evolution of mobile user equipments (UEs), such as smartphones or laptops, goes hand-in-hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by their limited battery capacity and energy consumption. A suitable solution for extending the battery lifetime of the UEs is to offload applications that demand heavy processing to a conventional centralized cloud (CC). Nevertheless, this option introduces significant execution delay, consisting of the delivery of the offloaded applications to the cloud and back plus the computation time at the cloud. Such delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UEs while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe the major use cases and reference scenarios where MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into mobile networks and discuss the current advancement in the standardization of MEC. The core of this survey then focuses on the user-oriented use case in MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: i) the decision on computation offloading, ii) the allocation of computing resources within the MEC, and iii) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully enjoy the potential offered by MEC.
Article
In the Information-Centric Internet of Things (ICIoT), IoT data can be cached throughout a network for close data copy retrievals. Such a distributed data caching environment, however, poses a challenge to flexible authorization in the network. To address this challenge, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been identified as a promising approach. However, in the existing CP-ABE scheme, publishers need to retrieve attributes from a centralized server for encrypting data, which leads to high communication overhead. To solve this problem, we incorporate CP-ABE and propose a novel Distributed Publisher-driven secure Data sharing for ICIoT (DPD-ICIoT) to enable only authorized users to retrieve IoT data from distributed caches. In DPD-ICIoT, a newly introduced Attribute Manifest (AM) is cached in the network, through which publishers can retrieve the attributes from nearby copy holders instead of a centralized attribute server. In addition, a key chain mechanism is utilized for efficient cryptographic operations, and an Automatic Attribute Self-update Mechanism (AASM) is proposed to enable fast updates of attributes without querying centralized servers. According to the performance evaluation, DPD-ICIoT achieves lower bandwidth cost compared to the existing CP-ABE scheme.
Article
Data centers (DCs), owing to the exponential growth of Internet services, have emerged as an irreplaceable and crucial infrastructure to power this ever-growing trend. A data center typically houses a large number of computing and storage nodes, interconnected by a specially designed network, namely, the data center network (DCN). The DCN serves as a communication backbone and plays a pivotal role in optimizing data center operations. However, compared to traditional networks, the unique requirements of the DCN, for example, large scale, vast application diversity, high power density, and high reliability, pose significant challenges to its infrastructure and operations. We have observed from premium publication venues (e.g., journals and systems conferences) that increasing research efforts are being devoted to optimizing the design and operations of the DCN. In this paper, we aim to present a systematic taxonomy and survey of recent research efforts on the DCN. Specifically, we propose to classify these research efforts into two areas: i) DCN infrastructure and ii) DCN operations. For the former aspect, we review and compare the transmission technologies and network topologies used or proposed in the DCN infrastructure. For the latter aspect, we summarize the existing traffic control techniques in DCN operations, and survey optimization methods to achieve diverse operational objectives, including high network utilization, fair bandwidth sharing, low service latency, low energy consumption, high resiliency, etc., for efficient data center operations. We finally conclude this survey by envisioning a few open research opportunities in DCN infrastructure and operations.
Article
5G network architecture and its functions are yet to be defined. However, it is generally agreed that cloud computing, network function virtualization (NFV), and software defined networking (SDN) will be key enabling technologies for 5G. Indeed, putting all these technologies together ensures several advantages in terms of network configuration flexibility, scalability, and elasticity, which are highly needed to fulfill the numerous requirements of 5G. Furthermore, 5G network management procedures should be as simple as possible, allowing network operators to orchestrate and manage the lifecycle of their virtual network infrastructures (VNIs) and the corresponding virtual network functions (VNFs) in a cognitive and programmable fashion. To this end, we introduce the concept of "Anything as a Service" (ANYaaS), which allows a network operator to create and orchestrate 5G services on demand and in a dynamic way. ANYaaS relies on the reference ETSI NFV architecture to orchestrate and manage important services such as mobile Content Delivery Network as a Service (CDNaaS), Traffic Offload as a Service (TOFaaS), and Machine Type Communications as a Service (MTCaaS). Ultimately, ANYaaS aims to enable the dynamic creation and management of mobile services through agile approaches that handle 5G network resources and services.
Article
Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses hardware and software, resides at the network edge but also over access networks and among end users, and includes both the data plane and the control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily on the networking context of IoT.
Article
Big data are widely recognized as one of the most powerful drivers to promote productivity, improve efficiency, and support innovation. There are high expectations of exploring the power of big data and turning big data into big value. To answer the interesting question of whether there are inherent correlations between the two tendencies of big data and green challenges, a recent study has investigated the issues of greening the whole life cycle of big data systems. This paper aims to discover the relations between the trend of the big data era and that of the new generation green revolution, through a comprehensive and panoramic literature survey of big data technologies toward various green objectives, and a discussion of relevant challenges and future directions.
Article
Nowadays, there are two significant tendencies: how to process the enormous amount of data (big data), and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper first makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then studies the relevance between big data and green metrics, proposing two new metrics, effective energy efficiency and effective resource efficiency, in order to bring new views and potentials of green metrics for the coming era of big data.
Article
The world is increasingly information-driven. Vast amounts of data are being produced by different sources and in diverse formats. It is becoming critical to endow assessment systems with the ability to process streaming information from sensors in real time in order to better manage physical systems, derive informed decisions, tweak production processes, and optimize logistics choices. This article first surveys the works dealing with building, adapting, and managing networks of classifiers, then describes the challenges and limitations of the current approaches, discusses possible directions to deal with these limitations, and presents some open research questions that need to be investigated.
Article
Cloud gaming, where the game is rendered in the cloud and is streamed to an end-user device through a thin client, is rapidly gaining ground. Latency is still a key challenge to cloud gaming: highly interactive games can become unplayable even with response delays below 100 ms. To overcome this issue, we propose to deploy gaming services on a more distributed cloud infrastructure, and to instantiate gaming servers in close proximity of the user when necessary in order to shorten the response delay. Our prototype distributed cloud gaming platform also allows flexible configuration of gaming controls and video streams, enabling the use of public displays in mobile cloud gaming. We test our prototype with two games in different deployment scenarios, and measure the response delay and power consumption of the mobile devices. Our experiment results confirm that it is feasible to improve the quality of gaming experience through the deployment strategies provided by the proposed system.
Article
Mobile applications are increasingly exploiting cloud computing to overcome the resource limitations of mobile devices. To this end, the most computationally expensive tasks are offloaded to the cloud and the mobile application simply interacts with a remote service through a network connection. One way to establish such a connection is given by remote display access, in which a mobile device just operates as a thin client by relaying the input events to a server and updating the screen based on the content received. In this article, we specifically address remote access as a means for mobile cloud computing, with focus on its power consumption at mobile devices. Different from most of the existing literature, we take an experimental approach based on real user sessions employing different remote access protocols and types of applications, including gaming. Through several experiments, we characterize the impact of the different protocols and their features on the power consumption and the network utilization. We conclude our analysis with considerations on usability and user experience.
Article
Optimizing the cloud gaming experience is no easy task due to the complex tradeoff between gamer quality of experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider's total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. Solving the problem optimally takes exponential time, so we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where profit is not a concern and the overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software to demonstrate the practicality and efficiency of our algorithms. Our experience in realizing the testbed sheds some light on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
Article
The Internet of Things (IoT) paradigm stands for virtually interconnected objects that are identifiable and equipped with sensing, computing, and communication capabilities. The implementation of services and applications over the IoT architecture can benefit from the cloud computing concept. Sensing-as-a-Service (S²aaS) is a cloud-inspired service model which enables access to the IoT. In this paper, we present a framework where the IoT can enhance public safety through crowd management via sensing services that are provided by smartphones equipped with various types of sensors. In order to ensure trustworthiness in the presented framework, we propose a reputation-based S²aaS scheme, namely Trustworthy Sensing for Crowd Management (TSCM), for front-end access to the IoT. TSCM collects sensing data based on a cloud model and an auction procedure, which selects mobile devices for particular sensing tasks and determines the payments to the users of the mobile devices that provide data. Performance evaluation of TSCM shows that the impact of malicious users on the crowdsourced data can be degraded by 75%, while the trustworthiness of a malicious user converges to a value below 40% after a few auctions. Moreover, we show that TSCM can enhance the utility of the public safety authority by up to 85%.
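The auction-plus-reputation loop can be pictured with a toy sketch: devices are selected by a greedy reverse auction that favors low bids from high-reputation devices, and each device's reputation is then updated depending on whether its contribution was judged trustworthy. The specific ranking rule, update rule, and names are illustrative assumptions, not TSCM's exact mechanism.

```python
def select_devices(bids, reputations, budget):
    """Greedy reverse auction: prefer high-reputation, low-bid devices within a budget."""
    ranked = sorted(bids, key=lambda d: bids[d] / max(reputations[d], 1e-6))
    selected, spent = [], 0.0
    for d in ranked:
        if spent + bids[d] <= budget:
            selected.append(d)
            spent += bids[d]
    return selected

def update_reputation(rep, trustworthy, alpha=0.2):
    """Exponential reputation update after verifying a device's contribution."""
    return (1 - alpha) * rep + alpha * (1.0 if trustworthy else 0.0)

bids = {"dev1": 2.0, "dev2": 1.0, "dev3": 1.5}
reps = {"dev1": 0.9, "dev2": 0.3, "dev3": 0.8}
chosen = select_devices(bids, reps, budget=3.0)
print("selected:", chosen)
for d in chosen:
    reps[d] = update_reputation(reps[d], trustworthy=(d != "dev2"))
print("updated reputations:", reps)
```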
Article
As new mobile and gaming technologies become increasingly ubiquitous, they encourage new modes of storytelling and engagement. This article focuses on the Google game Ingress, which combines augmented reality with geomedia to create a robust and complex digital narrative. More importantly, Ingress combines globalism with regionalism in a way that rewrites the regional as global, and vice versa. In turn, the transformative nature of the smaller real-world regionalist narratives help to lend ethos to the overarching globalist (fictional) narrative within the game world. Through narrative analysis of this transmedia game world and community, this article considers ways that information and communications technologies are able to use storytelling to negotiate complex relationships between the regional and the global.
Conference Paper
The ubiquitous deployment of mobile and sensor devices is creating a new environment, namely the Internet of Things (IoT), that enables a wide range of future Internet applications. In this work, we present Mobile Fog, a high-level programming model for future Internet applications that are geospatially distributed, large-scale, and latency-sensitive. We analyze use cases for the programming model with camera network and connected vehicle applications to show the efficacy of Mobile Fog. We also evaluate application performance through simulation.
Conference Paper
We propose a scalable Internet system for continuous collection of crowd-sourced video from devices such as Google Glass. Our hybrid cloud architecture, GigaSight, is effectively a Content Delivery Network (CDN) in reverse. It achieves scalability by decentralizing the collection infrastructure using cloudlets based on virtual machines (VMs). Based on time, location, and content, privacy-sensitive information is automatically removed from the video. This process, which we refer to as denaturing, is executed in a user-specific VM on the cloudlet. Users can perform content-based searches on the total catalog of denatured videos. Our experiments reveal the bottlenecks for video upload, denaturing, indexing, and content-based search. They also provide insight on how parameters such as frame rate and resolution impact scalability.
Article
User perceptual sensitivity to changes of system latency was tested in three simple virtual environments: one with only a foreground object, a second with only a background object, and a third that combined both of these elements. Prior psychophysical measurements of sensitivity (Just Noticeable Difference) and bias (Points of Subjective Equality) from our laboratory are confirmed with measurements in 13 subjects. Our measurements indicate that perceptual stability across a variety of virtual environments will require latencies less than 16 ms. We discount a possible explanation that the differences between our results and those from a study by Allison et al. could be related to a visual capture effect initially reported by L. Matin. Instead, the differences may be due to the type of psychophysical judgment rendered by the subjects and the degree to which subjects were instructed and practiced.
Article
Together with the explosive growth of mobile applications and the emergence of the cloud computing concept, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers gain an overview of MCC, including its definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, future research directions of MCC are discussed. Copyright © 2011 John Wiley & Sons, Ltd.
Article
This paper surveys the current state-of-the-art in Augmented Reality. It describes work performed at many different sites and explains the issues and problems encountered when building Augmented Reality systems. It summarizes the tradeoffs and approaches taken so far to overcome these problems and speculates on future directions that deserve exploration. This paper does not present new research results. The contribution comes from consolidating existing information from many sources and publishing an extensive bibliography of papers in this field. While several other introductory papers have been written on this subject [Barfield95] [Bowskill95] [Caudell94] [Drascic93b] [Feiner94a] [Feiner94b] [Milgram94b] [Rolland94], this survey is more comprehensive and up-to-date. For anyone interested in starting research in this area, this survey should provide a good starting point. Section 1 describes what Augmented Reality is and the motivations for developing this technology. Four classes of potential applications that have been explored are described in Section 2. Then Section 3 discusses the issues involved in building an Augmented Reality system. Currently, two of the biggest problems are in registration and sensing, so those are the subjects of Sections 4 and 5. Finally, Section 6 describes some areas that require further work and research.
Scalable crowd-sourcing of video from mobile devices
  • P Simoens