Source publication
There is no doubt that we are entering the era of big data. The challenge is how to store, search, and analyze the huge amount of data being generated every second. One of the main obstacles facing big data researchers is finding the appropriate big data analysis platform. The basic aim of this work is to present a complete investigati...
Similar publications
Spectral pretreatments, such as background removal from Raman big data, are crucial for a smooth link to advanced spectral analysis. Recently, we developed an automated background removal method in which we considered the shortest length of a spectrum by changing the scaling factor of the background spectrum. Here, we propose a practical way to c...
Citations
... New technologies are needed for real-time data analytics, since some data are continuously gathered, processed, and analyzed, and the information is usable as soon as it is generated [1], [2]. There is no one-size-fits-all approach to creating a Big Data architecture due to the rapid advancement of technology and the highly competitive market in which it operates. ...
This paper provides a simulation-based evaluation that addresses memory management problems throughout Big Data processing. A significant problem occurs with in-memory computing when there is not enough available memory for processing the whole chunk of data, and hence some data must be selected for deletion to make room for new ones. The selected research strategy is to use different cache selection and replacement algorithms, such as Adaptive Replacement Cache (ARC) and Low Inter-Reference Recency Set (LIRS) algorithms, besides the default one, which is Least Recently Used (LRU). A simulator was built by the authors to assess the use of different caching approaches on Big Data platforms. The evaluation showed that the LIRS and the ARC algorithms gave a better hit ratio for different workloads than the LRU algorithm.
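As a hedged illustration of the evaluation setup this abstract describes (not the authors' own simulator), the sketch below measures the hit ratio of a simple LRU cache on a synthetic skewed access trace; ARC and LIRS require heavier bookkeeping and are omitted here. The class name, capacity, and workload are assumptions for illustration only.

```python
from collections import OrderedDict
import random

class LRUCache:
    """Least Recently Used cache: evicts the entry touched longest ago."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.requests = 0

    def access(self, key):
        self.requests += 1
        if key in self.store:
            self.store.move_to_end(key)         # mark as most recently used
            self.hits += 1
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict the LRU entry
            self.store[key] = True

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0

# Hypothetical workload: 80% of accesses go to 20 "hot" blocks.
random.seed(42)
trace = [random.randrange(20) if random.random() < 0.8 else random.randrange(1000)
         for _ in range(100_000)]

cache = LRUCache(capacity=50)
for block in trace:
    cache.access(block)
print(f"LRU hit ratio: {cache.hit_ratio():.3f}")
```

Swapping in a different `access` policy (e.g., an ARC or LIRS implementation) against the same trace is how the paper's kind of hit-ratio comparison can be reproduced in miniature.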
... Horizontal scaling: It is a way to cope with load changes by increasing or decreasing the number of container instances [123], [124] (such as Pods, container groups, etc.). It can respond very quickly to load changes and adjust overall processing power by adding or removing container instances. ...
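The excerpt above describes adjusting overall processing power by adding or removing container instances. As a minimal sketch, the proportional rule used by autoscalers such as the Kubernetes Horizontal Pod Autoscaler can be written as follows; the function name, bounds, and workload numbers are illustrative assumptions, not taken from the cited work.

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float, min_r: int = 1, max_r: int = 100) -> int:
    """Proportional scaling rule: grow or shrink the instance count so that
    the per-instance load moves toward the target value."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_r, min(max_r, desired))

# Example: 4 pods at 90% average CPU with a 60% target -> scale out to 6.
print(desired_replicas(4, 90.0, 60.0))  # 6
```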
Edge Artificial Intelligence (AI) incorporates a network of interconnected systems and devices that receive, cache, process, and analyze data with AI technology close to the location where the data is captured. Recent advancements in AI efficiency, the widespread use of Internet of Things (IoT) devices, and the emergence of edge computing have unlocked the enormous scope of Edge AI. The goal of Edge AI is to optimize data processing efficiency and velocity while ensuring data confidentiality and integrity. Despite being a relatively new field of research, spanning from 2014 to the present, it has shown significant and rapid development over the last five years. In this article, we present a systematic literature review for Edge AI to discuss the existing research, recent advancements, and future research directions. We created a collaborative edge AI learning system for cloud and edge computing analysis, including an in-depth study of the architectures that facilitate this mechanism. The taxonomy for Edge AI facilitates the classification and configuration of Edge AI systems while also examining its potential influence across many fields, encompassing infrastructure, cloud computing, fog computing, services, use cases, ML and deep learning, and resource management. This study highlights the significance of Edge AI in processing real-time data at the edge of the network. Additionally, it emphasizes the research challenges encountered by Edge AI systems, including constraints on resources, vulnerabilities to security threats, and problems with scalability. Finally, this study highlights the potential future research directions that aim to address the current limitations of Edge AI by providing innovative solutions.
... Utilitarianism), BU approaches have no moral framework and instead aim to learn morality from the environment, and hybrid approaches combine aspects of the two [50]. Agents, like standard AI models, can be constructed at different scales, which produce different performance capabilities [51]; we consider standard individual agents, high-capacity individual agents (vertical scaling), and multi-agent systems (horizontal scaling) [52]. For simplicity in horizontally-scaled systems, we assume all agents are cooperative and that there are no unpredictable agent-agent interaction effects [13]. ...
As artificial intelligence (AI) models continue to scale up, they are becoming more capable and integrated into various forms of decision-making systems. For models involved in moral decision-making (MDM), also known as artificial moral agents (AMA), interpretability provides a way to trust and understand the agent’s internal reasoning mechanisms for effective use and error correction. In this paper, we bridge the technical approaches to interpretability with construction of AMAs to establish minimal safety requirements for deployed AMAs. We begin by providing an overview of AI interpretability in the context of MDM, thereby framing different levels of interpretability (or transparency) in relation to the different ways of constructing AMAs. Introducing the concept of the Minimum Level of Interpretability (MLI) and drawing on examples from the field, we explore two overarching questions: whether a lack of model transparency prevents trust and whether model transparency helps us sufficiently understand AMAs. Finally, we conclude by recommending specific MLIs for various types of agent constructions, aiming to facilitate their safe deployment in real-world scenarios.
... Moreover, it is also important to adhere to performance requirements (e.g., latency or throughput), which might, however, also be impacted by external factors [60]. Yet, through proficient use of load balancing, sensible architectural decisions, and the scaling of overburdened parts of the system, these factors can still be influenced to a certain degree [60][61][62]. ...
... and their general nature is outlined. Subsequently, suitable approaches for testing the components are discussed. In doing so, researchers and practitioners alike shall be provided with a starting point for the discussion of big data quality assurance and how it can be implemented in their own endeavors.
... Additionally, if the server fails, it can result in significant downtime, making it a less resilient option. On the other hand, horizontal scaling, or "scaling out," distributes the load across multiple servers [7]. Compared to vertical scaling, horizontal scaling offers several advantages, including load balancing and failover protection. ...
Moodle is a widely used Learning Management System in various educational institutions worldwide. However, frequent reports on internet forums indicate performance degradation when massive numbers of simultaneous users access Moodle. One of the most resource-intensive components supporting Moodle is the database, as all user-accessed data is stored in it. This study aims to optimize Moodle's performance through distributed databases. Distributing the database across multiple database servers allows the database load to be spread over all of them, resulting in an overall improvement in Moodle performance. This study compares the performance of Moodle installed on a single server with that installed on multiple database servers. Various testing parameters are employed to obtain valid results, namely course read, course write, and database performance, using the server performance plugin available in Moodle. This research reveals a performance improvement of 384% in course read, 193% in course write, and 260% in the Moodle database in the multi-server scenario compared to the single-server scenario. This result validates that the database is the most crucial part of Moodle.
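One common pattern behind distributing database load of the kind this study measures is routing writes to a primary server and spreading reads across replicas. The sketch below is a minimal, hypothetical illustration of that idea; the router class, server addresses, and routing heuristic are assumptions, not Moodle's actual mechanism.

```python
import itertools

class ReadWriteRouter:
    """Sends writes to the primary and distributes reads across replicas
    round-robin, so no single database server absorbs the whole load."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)

    def route(self, sql: str) -> str:
        is_read = sql.lstrip().upper().startswith("SELECT")
        return next(self._cycle) if is_read and self.replicas else self.primary

# Hypothetical topology: one primary, two read replicas.
router = ReadWriteRouter("db-primary:3306", ["db-replica1:3306", "db-replica2:3306"])
print(router.route("SELECT * FROM mdl_course"))   # -> db-replica1:3306
print(router.route("UPDATE mdl_user SET ..."))    # -> db-primary:3306
print(router.route("SELECT * FROM mdl_user"))     # -> db-replica2:3306
```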
... Spark, a big data platform for processing and analysis, is an in-memory cluster computing platform designed for processing and analyzing enormous volumes of data in a distributed setting [15,16]. To process large amounts of information quickly and efficiently, it provides a basic programming interface that enables an application developer to efficiently leverage the processing power, memory, and storage resources available across a cluster of machines [17]. ...
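To make the programming interface the excerpt mentions concrete, here is a minimal PySpark sketch of a distributed word count, assuming a local Spark installation; the session name and input lines are placeholders.

```python
from pyspark.sql import SparkSession

# Start (or reuse) a Spark session; "local[*]" uses all local cores,
# while on a cluster the same code runs across many machines.
spark = SparkSession.builder.master("local[*]").appName("word-count").getOrCreate()

lines = spark.sparkContext.parallelize([
    "big data platforms", "big data analysis", "spark in memory computing",
])

counts = (lines.flatMap(lambda line: line.split())  # split lines into words
               .map(lambda word: (word, 1))         # pair each word with 1
               .reduceByKey(lambda a, b: a + b))    # sum counts per word

print(counts.collect())
spark.stop()
```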
Big data and artificial intelligence are game-changing technologies for the underdeveloped healthcare industry because they help optimize the entire supply chain and deliver more exact patient outcome information. Machine learning approaches that have recently grown in popularity include deep learning models, which have revolutionized the healthcare system in recent years as data have become more complex. Machine learning is an essential data analysis procedure for describing efficient and effective methods to extract hidden information from amounts of data too large for conventional analytics to manage. Recent years have seen the expansion and growth of advanced intelligent systems able to learn more about clinical treatments and glean untapped medical information from vast quantities of data in drug discovery and chemistry. The aim of this chapter is, therefore, to assess which big data and artificial intelligence approaches are prevalent in healthcare systems by investigating the most advanced big data structures, applications, and industry trends available today. First and foremost, the purpose is to provide a comprehensive overview of how artificial intelligence and big data models deployed in healthcare solutions can fill the gap between machine learning approaches' lack of human coverage and the complexity of healthcare data. Moreover, current artificial intelligence technologies, including generative models, Bayesian deep learning, reinforcement learning, and self-driving laboratories, are also increasingly being used for drug discovery and chemistry. Finally, the work presents the existing open challenges and future directions in the drug formulation development field. To this end, the review will cover published algorithms and automation tools for artificial intelligence applied to large-scale data in healthcare.
... A GPU is a parallel computing platform on which one or more computations or processes are carried out in parallel, which makes GPUs fast and efficient [46]. A comparison of CPU and GPU is shown in Table 13 below. ...
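As a hedged sketch of the data-parallel style the excerpt refers to, the example below runs the same element-wise computation on the CPU with NumPy and on the GPU with the CuPy library; it assumes a CUDA-capable GPU with cupy installed, and the array size is illustrative.

```python
import numpy as np
import cupy as cp

n = 10_000_000
x_cpu = np.random.rand(n).astype(np.float32)

# CPU version: the computation runs on a handful of cores.
y_cpu = np.sqrt(x_cpu) * 2.0

# GPU version: the same element-wise work is spread across many GPU threads.
x_gpu = cp.asarray(x_cpu)          # copy the array into GPU memory
y_gpu = cp.sqrt(x_gpu) * 2.0       # launches a parallel GPU kernel
cp.cuda.Stream.null.synchronize()  # wait for the kernel to finish

assert np.allclose(y_cpu, cp.asnumpy(y_gpu))
```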
Clustering is a data mining task used to extract information from databases or files. Clustering is used to find unknown groups present in data sources such as files or databases. This paper focuses on how the performance of clustering algorithms depends on the parallel clustering platforms and on the clustering algorithms together with their clustering criteria. The problems with traditional clustering algorithms are throughput and scalability with respect to changes in data source size, so they cannot address big data. Therefore, to handle huge volumes of data, parallel clustering algorithms together with clustering criteria are used. Parallel clustering algorithms for processing big data fall into two types based on the computing platforms used: horizontal scaling platforms and vertical scaling platforms.
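As a small, hedged illustration of scaling clustering to larger data (not the platform comparison the paper performs), the sketch below uses scikit-learn's MiniBatchKMeans, which processes data in fixed-size chunks so memory use stays flat as the data source grows; the synthetic blobs and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
# Synthetic data: three Gaussian blobs standing in for a large data source.
data = np.vstack([rng.normal(loc=c, scale=0.5, size=(10_000, 2)) for c in (0, 5, 10)])

# Mini-batch k-means touches only `batch_size` points per step, one way to
# keep a clustering criterion (here, within-cluster variance) tractable at scale.
model = MiniBatchKMeans(n_clusters=3, batch_size=1024, n_init=3, random_state=0)
labels = model.fit_predict(data)

print(model.cluster_centers_.round(2))
```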
... In particular, the digital world and, within it, the issues concerning digital circuits or digital electronics represent a current area of research that enjoys considerable interest among scholars and practitioners. This is a branch of electronics that deals with digital signals to perform various tasks, such as big data processing and machine learning (Ali, 2019; Huda et al., 2018). The main property of a digital circuit is that it operates on logical values (0 and 1). ...
After clarifying the purposes that entrepreneurs aim to achieve, attention now shifts to the decision-making process followed by entrepreneurs. Particular attention is paid to the identification, selection, and exploitation of entrepreneurial opportunities. How do entrepreneurs proceed through these phases? At present, entrepreneurial studies do not offer widely agreed answers. Among the several fields of knowledge that might be useful, the principles of mathematics may be of support. One might think of imaginary numbers, which stand for new values and could represent new situations, but this does not help. In fact, entrepreneurs seem more inclined to leverage Boolean algebra, classifying options as 1 or 0, yes or no.