Article

Enhancing Security and Efficiency in Decentralized Smart Applications through Blockchain Machine Learning Integration

Abstract

This study investigates the integration of machine learning (ML) into blockchain-based smart applications, aiming to enhance security, efficiency, and scalability. The research contributes a novel framework that combines blockchain's decentralized ledger with privacy-preserving ML techniques, addressing key challenges in data integrity and computational efficiency. The primary objective is to evaluate the performance of this integration in a simulated smart grid environment, focusing on security, processing time, energy consumption, and scalability. Our findings reveal that the integrated system significantly improves security, achieving a 98% success rate in mitigating data breaches and reducing the impact of adversarial attacks by 90%. Computational efficiency is also enhanced, with the optimized blockchain-ML configuration reducing processing time by 33% and energy consumption by 20% compared to standard blockchain setups. However, scalability remains a challenge; the system demonstrates effective scalability up to 100 nodes, beyond which transaction processing time increases by 50%, indicating the need for further optimization. The results suggest that while the integration of ML and blockchain offers substantial improvements in security and efficiency, addressing scalability and environmental impact is critical for broader application. The novelty of this research lies in its dual focus on enhancing both security and efficiency within blockchain-ML systems, providing a foundation for future advancements in decentralized intelligent applications across industries. This work contributes to the field by offering empirical data that supports the viability of blockchain-ML integration and by highlighting the areas where further research is needed to realize its full potential.
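
The paper's implementation is not available from this page, but the core idea in the abstract, recording digests of locally trained model updates on a decentralized ledger and timing the end-to-end step, can be sketched as follows. Everything here (the toy logistic-regression trainer, the single-node ledger, the timing harness) is an illustrative assumption, not the authors' framework.

```python
# Hypothetical sketch: anchoring locally trained model updates to a toy ledger
# and timing the step, loosely mirroring a blockchain-ML pipeline of the kind
# the abstract evaluates. Names and structure are illustrative only.
import hashlib
import json
import time

import numpy as np


def train_local_model(data, labels, lr=0.1, epochs=50):
    """Toy logistic-regression update standing in for the ML component."""
    w = np.zeros(data.shape[1])
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))
        w -= lr * data.T @ (preds - labels) / len(labels)
    return w


def append_block(chain, payload):
    """Append a block whose hash commits to the previous block and the payload."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "payload": payload, "ts": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body


rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)

chain = []
start = time.perf_counter()
weights = train_local_model(X, y)
# Only a digest of the update goes on-chain; the raw training data stays local.
append_block(chain, {"model_digest": hashlib.sha256(weights.tobytes()).hexdigest()})
print(f"blocks: {len(chain)}, step time: {time.perf_counter() - start:.4f}s")
```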

... The Gaussian adaptive bilateral filter adjusts its spatial and range parameters locally, based on the characteristics of the neighborhood. This adaptation can be achieved by computing the local mean and standard deviation, so that the filter parameters preserve the image's information and edges [19]. ...
... (Quality of Processed Image / Quality of Original Image) × 100% (19). PSNR measures the ratio between the maximum possible power of an image and the power of corrupting noise that affects the fidelity of its representation: PSNR = 10 × log10( ...
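
One plausible reading of the excerpt above, sketched in Python: derive the bilateral filter's range and spatial sigmas from local image statistics, then score the result with the standard PSNR definition. cv2.bilateralFilter and the PSNR formula are standard; the specific way the sigmas are derived from the mean and standard deviation here is an assumption for illustration, not the cited method.

```python
# Minimal sketch: adapt the bilateral filter's parameters from image statistics
# and compute PSNR. The sigma heuristics below are illustrative assumptions.
import cv2
import numpy as np


def adaptive_bilateral(img, d=9):
    mean, std = float(img.mean()), float(img.std())
    sigma_color = max(std, 1.0)          # range parameter from intensity spread
    sigma_space = max(mean / 10.0, 1.0)  # spatial parameter from overall brightness
    return cv2.bilateralFilter(img, d, sigma_color, sigma_space)


def psnr(original, processed, max_val=255.0):
    mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)


noisy = np.clip(np.random.normal(128, 25, (256, 256)), 0, 255).astype(np.uint8)
denoised = adaptive_bilateral(noisy)
print(f"PSNR: {psnr(noisy, denoised):.2f} dB")
```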
Article
The detection of geospatial objects in surveillance applications faces significant challenges due to the misclassification of object boundaries in noisy and blurry satellite images, which complicates the detection model's computational complexity, uncertainty, and bias. To address these issues and improve object detection accuracy, this paper introduces the Deep Wiener Deconvolution Denoising Sparse Autoencoder (DWDDSAE) model, a novel hybrid approach that integrates deep learning with Wiener deconvolution and Denoising Sparse Autoencoder (DSAE) techniques. The DWDDSAE model enhances image quality by extracting deep features and mitigating adversarial noise, ultimately leading to improved detection outcomes. Evaluations conducted on the NWPU VHR-10 and DOTA datasets demonstrate the effectiveness of the DWDDSAE model, achieving notable performance metrics: 96.32% accuracy, 86.88 edge similarity, 75.47 BRISQUE, 28.05 IQI, 38.08 PSNR (dB), 0.883 SSIM, 98.25 MSE, and 0.099 RMSE. The proposed model outperforms existing methods, offering superior noise and blur removal capabilities and contributing to Sustainable Development Goals (SDGs) such as SDG 9 (Industry, Innovation, and Infrastructure), SDG 11 (Sustainable Cities and Communities), and SDG 13 (Climate Action). This research highlights the model's potential for inclusive innovation in object detection applications, showcasing its contributions and novel approach to addressing existing limitations.
... The normalized data, comprising eight input components and their corresponding target outputs, are divided into training and testing sets for further analysis. The FDNN model consists of two main components: the forest section, which acts as a feature detector, learning sparse representations from raw inputs under the supervision of the training data, and the DNN section, which predicts outcomes based on these learned feature representations [31]. The forest is formed by constructing independent decision trees, and it operates as an ensemble of these trees. ...
Article
This study evaluates the performance of four neural network models—Artificial Neural Network (ANN), ANN optimized with Artificial Bee Colony (ANN-ABC), Multilayer Feedforward Neural Network (MLFNN), and Forest Deep Neural Network (FDNN)—across different iteration levels to assess their effectiveness in predictive tasks. The evaluation metrics include accuracy, precision, Area Under the Curve (AUC) values, and error rates. Results indicate that FDNN consistently outperforms the other models, achieving the highest accuracy of 99%, precision of 98%, and AUC of 99 after 250 iterations, while maintaining the lowest error rate of 2.8%. MLFNN also shows strong performance, particularly at higher iterations, with notable improvements in accuracy and precision, but does not surpass FDNN. ANN-ABC offers some improvements over the standard ANN, yet falls short compared to FDNN and MLFNN. The standard ANN model, though improving with iterations, ranks lowest in all metrics. These findings highlight FDNN's robustness and reliability, making it the most effective model for high-precision predictive tasks, while MLFNN remains a strong alternative. The study underscores the importance of model selection based on performance metrics to achieve optimal predictive accuracy and reliability.
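
The forest-then-network split described in the excerpt above can be approximated with off-the-shelf components: a random forest supplies sparse leaf-index encodings of the raw inputs, and a small multilayer network predicts from those encodings. This is a simplified stand-in for FDNN using scikit-learn on synthetic data, not the authors' implementation or dataset.

```python
# Hedged FDNN-style sketch: forest as feature detector, neural network as predictor.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Forest section: learn tree-based sparse representations of the raw inputs.
forest = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=0).fit(X_tr, y_tr)
enc = OneHotEncoder(handle_unknown="ignore").fit(forest.apply(X_tr))

# DNN section: predict from the sparse leaf encodings.
dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
dnn.fit(enc.transform(forest.apply(X_tr)), y_tr)
print("test accuracy:", dnn.score(enc.transform(forest.apply(X_te)), y_te))
```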
Article
This study investigates the factors affecting sustainable innovation within Thailand's Provincial Electricity Authority, a nonprofit organization committed to sustainable energy solutions. With a focus on the Sufficiency Economy Philosophy as a developmental framework, the study examines how SEP principles of moderation, prudence, and resilience contribute to reducing greenhouse gas (GHG) emissions and achieving Sustainable Development Goals (SDGs). The research adopts a quantitative approach to analyze how SEP influences SI in PEA's operations alongside internal and external factors like disruptive leadership, digital transformation, and national sustainability initiatives. Through a series of correlation and regression analyses, the study identifies SEP as a critical component in fostering SI, with values of virtue, risk management, and informed decision-making emerging as influential elements. The findings indicate that integrating SEP's balanced approach to production and consumption facilitates organizational resilience, enabling the PEA to navigate internal and external shocks effectively. Furthermore, the results underscore the necessity of a holistic framework where internal initiatives align with broader cultural and ecological goals. The study highlights SEP's applicability beyond the energy sector, as seen in sustainable efforts in regions like Krabi and Koh Samui, which exemplify SEP-driven approaches toward low-carbon transitions. By leveraging SEP’s sufficiency principles, organizations can strengthen sustainable practices contributing to Thailand's environmental and social well-being. The research calls for further exploration into SEP’s role across sectors, positing that SEP could be a foundational pillar alongside economic, social, and environmental dimensions to drive sustainable innovation across diverse contexts.
Article
Full-text available
Introducing blockchain into Federated Learning (FL) to build a trusted edge computing environment for transmission and learning has attracted widespread attention as a new decentralized learning pattern. However, traditional consensus mechanisms and architectures of blockchain systems face significant challenges in handling large-scale FL tasks, especially on Internet of Things (IoT) devices, due to their substantial resource consumption, limited transaction throughput, and complex communication requirements. To address these challenges, this paper proposes ChainFL, a novel two-layer blockchain-driven FL system. It splits the IoT network into multiple shards within the subchain layer, effectively reducing the scale of information exchange, and employs a Directed Acyclic Graph (DAG)-based mainchain as the mainchain layer, enabling parallel and asynchronous cross-shard validation. Furthermore, the FL procedure is customized to integrate deeply with blockchain technology, and a modified DAG consensus mechanism is designed to mitigate distortion caused by abnormal models. To provide a proof-of-concept implementation and evaluation, multiple subchains based on Hyperledger Fabric and a self-developed DAG-based mainchain are deployed. Extensive experiments demonstrate that ChainFL significantly surpasses conventional FL systems, showing up to a 14
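
A minimal sketch of the two-layer aggregation pattern behind ChainFL: clients are grouped into shards, updates are averaged within each shard, and only shard-level aggregates are combined globally, shrinking the information exchanged per round. The DAG-based mainchain consensus and the Hyperledger Fabric subchains are not modelled here; the grouping and numbers are purely illustrative.

```python
# Simplified two-layer (shard -> global) aggregation sketch; not ChainFL itself.
import numpy as np

rng = np.random.default_rng(42)
n_clients, n_shards, dim = 12, 3, 10

# Each client holds a locally computed model update (stand-in for real training).
client_updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Subchain layer: aggregate within each shard.
shards = np.array_split(np.arange(n_clients), n_shards)
shard_models = [np.mean([client_updates[i] for i in shard], axis=0) for shard in shards]

# Mainchain layer: combine shard aggregates into the global model.
global_model = np.mean(shard_models, axis=0)
print("global model (first 3 coords):", np.round(global_model[:3], 3))
```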
Article
Full-text available
Financial fraud cases are on the rise even with the current technological advancements. Due to the lack of inter-organization synergy and because of privacy concerns, authentic financial transaction data is rarely available. On the other hand, data-driven technologies like machine learning need authentic data to perform precisely in real-world systems. This study proposes a blockchain and smart contract-based approach to achieve a robust Machine Learning (ML) algorithm for e-commerce fraud detection by facilitating inter-organizational collaboration. The proposed method uses blockchain to secure the privacy of the data, and a smart contract deployed inside the network fully automates the system. An ML model is incrementally upgraded from collaborative data provided by the organizations connected to the blockchain. To incentivize the organizations, we have introduced an incentive mechanism that is adaptive to the difficulty level in updating a model. The organizations receive incentives based on the difficulty faced in updating the ML model. A mining criterion has been proposed to mine blocks efficiently. Finally, the blockchain network is tested under different difficulty levels and under different volumes of data to test its efficiency. The model achieved 98.93% testing accuracy and a 98.22% F-beta score (recall-biased F-measure) over eight incremental updates. Our experiment shows that both data volume and the difficulty level of the blockchain impact the mining time. For difficulty levels below five, mining time and difficulty level have a positive correlation. For difficulty levels two and three, less than a second is required to mine a block in our system. Difficulty level five makes mining a block considerably harder.
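
The abstract's observation that mining time grows with the difficulty level can be reproduced qualitatively with a toy proof-of-work loop that searches for a hash with a given number of leading zeros. The paper's actual mining criterion is not described here, so this is only an assumed, minimal illustration of the time/difficulty relationship.

```python
# Toy proof-of-work timing: higher difficulty (more leading zeros) means longer mining.
import hashlib
import time


def mine(block_data: str, difficulty: int) -> tuple[int, float]:
    target = "0" * difficulty
    nonce, start = 0, time.perf_counter()
    while not hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce, time.perf_counter() - start


for level in range(1, 6):
    nonce, elapsed = mine("model-update-digest", level)
    print(f"difficulty {level}: nonce={nonce}, time={elapsed:.3f}s")
```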
Article
Full-text available
Privacy and data security have become hot topics for regulators in recent years. As a result, Federated Learning (FL), also called collaborative learning, has emerged as a new training paradigm that allows multiple, geographically distributed nodes to learn a Deep Learning (DL) model together without sharing their data. Blockchain is likewise becoming a new trend, as data protection and privacy are concerns in many sectors. Technology is transforming the world into a global village where everything is accessible and transparent. We present a blockchain-enabled security model using FL that can generate an enhanced DL model without sharing data and improve privacy through higher security and access rights to data. However, existing FL approaches also have unique security vulnerabilities that malicious actors can exploit to compromise the trained model. Compared with the known alternatives, namely providing local but private data to a server that runs ML applications, or performing ML operations on the devices without benefiting from other users' data, FL prevents direct access to raw data and trains ML models locally. FL protects data privacy and reduces data transfer overhead by storing raw data on devices and combining locally computed model updates. We investigate the feasibility of data and model poisoning attacks under a blockchain-enabled FL system built alongside the Ethereum network and under a traditional FL system (without blockchain). This work fills a knowledge gap by proposing a transparent incentive mechanism that can encourage good behavior among participating decentralized nodes and avoid common problems, and it contributes to the FL security literature by investigating current FL systems.
Article
Full-text available
In recent years, blockchain technology (BT) has emerged as a unique, highly disruptive, and trending technology. The decentralized database in BT emphasizes data security and privacy, and its consensus mechanism ensures that data is secure and legitimate. Still, it raises new security issues such as majority attacks and double-spending. To handle these issues, data analytics is required on blockchain-based secure data. Analytics on these data raises the importance of the emerging technology of Machine Learning (ML). ML requires a reasonable amount of data to make precise decisions, and data reliability and sharing are therefore crucial in ML to improve the accuracy of results. The combination of these two technologies (ML and BT) can provide highly precise results. In this paper, we present a detailed study of ML adoption for making BT-based smart applications more resilient against attacks. Various traditional ML techniques, for instance Support Vector Machines (SVM), clustering, and bagging, as well as Deep Learning (DL) algorithms such as Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), can be used to analyse attacks on a blockchain-based network. Further, we describe how both technologies can be applied in several smart applications such as Unmanned Aerial Vehicles (UAV), the Smart Grid (SG), healthcare, and smart cities. Then, future research issues and challenges are explored. Finally, a case study is presented with a conclusion. Index Terms: Blockchain, machine learning, smart grid, data security and privacy, data analytics, smart applications.
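
As a concrete instance of the kind of analysis this survey describes, the sketch below trains an SVM on labelled transaction features to flag attack-like behaviour on a blockchain network. The feature names and the synthetic data are assumptions for illustration only; they do not come from the survey or from any real deployment.

```python
# Illustrative attack classifier on synthetic transaction features (not real data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# Hypothetical features: transaction rate, average value, fan-out, gas used.
normal = rng.normal(loc=[5, 1.0, 3, 21000], scale=[1, 0.3, 1, 5000], size=(500, 4))
attack = rng.normal(loc=[50, 0.1, 40, 90000], scale=[10, 0.05, 10, 20000], size=(50, 4))
X = np.vstack([normal, attack])
y = np.array([0] * len(normal) + [1] * len(attack))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_tr, y_tr)
print("attack-detection accuracy:", clf.score(X_te, y_te))
```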
Article
Full-text available
Recently, with the rapid development of information and communication technologies, the infrastructures, resources, end devices, and applications in communications and networking systems are becoming much more complex and heterogeneous. In addition, the large volume of data and massive numbers of end devices may bring serious security, privacy, service provisioning, and network management challenges. In order to achieve decentralized, secure, intelligent, and efficient network operation and management, the joint consideration of blockchain and machine learning (ML) may bring significant benefits and has attracted great interest from both academia and industry. On one hand, blockchain can significantly facilitate training data and ML model sharing, decentralized intelligence, security, privacy, and trusted decision-making for ML. On the other hand, ML will have significant impacts on the development of blockchain in communications and networking systems, including energy and resource efficiency, scalability, security, privacy, and intelligent smart contracts. However, some essential open issues and challenges remain to be addressed before the widespread deployment of integrated blockchain and ML, including resource management, data processing, scalable operation, and security. In this paper, we present a survey of existing work on blockchain and ML technologies. We identify several important aspects of integrating blockchain and ML, including an overview, benefits, and applications. Then we discuss some open issues, challenges, and broader perspectives that need to be addressed to jointly apply blockchain and ML in communications and networking systems.
Article
Full-text available
Objectives: This pilot study aimed to provide an overview of the potential for blockchain technology in the healthcare system. The review covers technological topics from storing medical records in blockchains through patient ownership of personal data to mobile apps for patient outreach. Methods: We performed a preliminary survey to fill the gap that exists between purely technically focused manuscripts about blockchains, on the one hand, and literature that is mostly concerned with marketing discussions about their expected economic impact, on the other. Results: The findings show that new digital platforms based on blockchains are emerging to enable fast, simple, and seamless interaction between data providers, including patients themselves. Conclusions: We provide a conceptual understanding of the technical foundations of blockchain technology's potential in healthcare, which is necessary to understand specific blockchain applications, evaluate business cases such as blockchain startups, or follow the discussion about its expected economic impacts.
Article
Full-text available
Blockchain is an inchoate technology whose popularity is currently peaking. Some of the most pervasive blockchain use cases exist for supply chains. Sustainable, and especially green, supply chains can benefit from blockchain technology, but there are also caveats. The sustainability and environmental management research and academic literature is only starting to investigate this emergent field. This paper seeks to help advance the discussion and motivate additional practice and research related to green supply chains and blockchain technology. This viewpoint paper provides insight into some of the main dimensions of blockchain technology, an overview of the use cases and issues, and some general research areas for further investigation.
Conference Paper
Full-text available
Blockchain, the foundation of Bitcoin, has received extensive attention recently. Blockchain serves as an immutable ledger which allows transactions to take place in a decentralized manner. Blockchain-based applications are springing up, covering numerous fields including financial services, reputation systems, and the Internet of Things (IoT). However, there are still many challenges of blockchain technology, such as scalability and security problems, waiting to be overcome. This paper presents a comprehensive overview of blockchain technology. We first provide an overview of blockchain architecture and compare some typical consensus algorithms used in different blockchains. Furthermore, technical challenges and recent advances are briefly listed. We also lay out possible future trends for blockchain.
Article
Full-text available
Motivated by the recent explosion of interest around blockchains, we examine whether they make a good fit for the Internet of Things (IoT) sector. Blockchains allow us to have a distributed peer-to-peer network where non-trusting members can interact with each other without a trusted intermediary, in a verifiable manner. We review how this mechanism works and also look into smart contracts, scripts that reside on the blockchain and allow for the automation of multi-step processes. We then move into the IoT domain, and describe how a blockchain-IoT combination: 1) facilitates the sharing of services and resources leading to the creation of a marketplace of services between devices and 2) allows us to automate in a cryptographically verifiable manner several existing, time-consuming workflows. We also point out certain issues that should be considered before the deployment of a blockchain network in an IoT setting: from transactional privacy to the expected value of the digitized assets traded on the network. Wherever applicable, we identify solutions and workarounds. Our conclusion is that the blockchain-IoT combination is powerful and can cause significant transformations across several industries, paving the way for new business models and novel, distributed applications.
Conference Paper
Full-text available
Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.
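
The selective-sharing idea in this abstract, each participant training locally and uploading only a small subset of its parameter updates, can be sketched as follows. The top-k selection rule, the random stand-in "updates", and the merging step are simplifications under assumed settings, not the paper's exact protocol.

```python
# Sketch of selective parameter sharing: only the largest-magnitude fraction of
# each participant's update is published and merged into the shared model.
import numpy as np

rng = np.random.default_rng(1)
dim, n_participants, share_fraction = 1000, 5, 0.1
global_model = np.zeros(dim)

for rnd in range(3):
    merged = np.zeros(dim)
    for _ in range(n_participants):
        local_update = rng.normal(size=dim)          # stand-in for a local SGD step
        k = int(share_fraction * dim)
        top = np.argsort(np.abs(local_update))[-k:]  # share only the top-k entries
        shared = np.zeros(dim)
        shared[top] = local_update[top]
        merged += shared
    global_model += merged / n_participants
    print(f"round {rnd}: {int(share_fraction * 100)}% of parameters shared per participant")
```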
Article
The rapid development of information technology such as the Internet of Things, Big Data, artificial intelligence, and blockchain has changed the transaction mode of the financial industry and greatly improved the convenience of financial transactions, but it has also brought about new hidden frauds, which have caused huge losses to the development of Internet and IoT finance. As the size of financial transaction data continues to grow, traditional machine-learning models are increasingly difficult to use for financial fraud detection. Some graph-learning methods have been widely used for Internet financial fraud detection; however, these methods ignore the stronger structural homogeneity and cannot aggregate features for two structurally similar but distant nodes. To address this problem, in this article, we propose a graph-learning algorithm TA-Struc2Vec for Internet financial fraud detection, which can learn topological features and transaction amount features in a financial transaction network graph and represent them as low-dimensional dense vectors, allowing intelligent and efficient classification and prediction by training classifier models. The proposed method can improve the efficiency of Internet financial fraud detection with better Precision, F1-score, and AUC.
Article
Decentralized learning involves training machine learning models over remote mobile devices, edge servers, or cloud servers while keeping data localized. Although many studies have shown the feasibility of preserving privacy, enhancing training performance, or introducing Byzantine resilience, none of them considers all three simultaneously. We therefore face the following problem: how can we efficiently coordinate the decentralized learning process while simultaneously maintaining learning security and data privacy for the entire system? To address this issue, in this paper we propose SPDL, a blockchain-secured and privacy-preserving decentralized learning system. SPDL integrates blockchain, Byzantine Fault-Tolerant (BFT) consensus, a BFT Gradients Aggregation Rule (GAR), and differential privacy seamlessly into one system, ensuring efficient machine learning while maintaining data privacy, Byzantine fault tolerance, transparency, and traceability. To validate our approach, we provide rigorous analysis of convergence and regret in the presence of Byzantine nodes. We also build an SPDL prototype and conduct extensive experiments to demonstrate that SPDL is effective and efficient with strong security and privacy guarantees.
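
Two of the ingredients SPDL combines, a Byzantine-robust gradient aggregation rule and differential-privacy noise added before sharing, can be illustrated in a few lines. Coordinate-wise median is used here as a simple stand-in for the paper's BFT GAR, and the blockchain and BFT-consensus layers are not modelled.

```python
# Illustrative Byzantine-robust aggregation with locally added Gaussian noise.
import numpy as np

rng = np.random.default_rng(3)
dim, n_honest, n_byzantine, noise_std = 50, 8, 2, 0.1

true_gradient = rng.normal(size=dim)
honest = [true_gradient + rng.normal(scale=0.05, size=dim) for _ in range(n_honest)]
byzantine = [rng.normal(scale=50.0, size=dim) for _ in range(n_byzantine)]  # arbitrary garbage

# Each node perturbs its gradient locally (Gaussian mechanism) before publishing it.
published = [g + rng.normal(scale=noise_std, size=dim) for g in honest + byzantine]

robust = np.median(np.stack(published), axis=0)   # Byzantine-robust aggregation
naive = np.mean(np.stack(published), axis=0)      # plain averaging, for contrast
print(f"median aggregation error: {np.linalg.norm(robust - true_gradient):.3f}")
print(f"plain mean error:         {np.linalg.norm(naive - true_gradient):.3f}")
```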
Article
In recent years, blockchain technology (BT) has emerged as a novel, highly disruptive, and trending technology. The decentralized database in BT emphasizes data security and privacy, and the consensus mechanism ensures that data is secure and legitimate. Still, it raises new security issues such as majority attacks and double-spending. To handle these problems, data analytics is required on blockchain-based secure data. Analytics on these data raises the importance of the emerging technology of Machine Learning (ML). ML requires a reasonable amount of data to make precise decisions, and data reliability and sharing are crucial in ML to improve the accuracy of results. The combination of these two technologies (ML and BT) can provide highly precise results. In this paper, we present a detailed study of ML adoption for making BT-based smart applications more resilient against attacks. Various traditional ML techniques, for example Support Vector Machines (SVM), clustering, and bagging, as well as Deep Learning (DL) algorithms such as Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), can be used to analyze attacks on a blockchain-based network. Further, we describe how both technologies can be applied in several smart applications such as Unmanned Aerial Vehicles (UAV), the Smart Grid (SG), healthcare, and smart cities. Then, future research issues and challenges are explored. At last, a case study is presented with a conclusion. Keywords: Blockchain, machine learning, smart grid, data security and privacy, data analytics, smart applications.
Article
Machine learning and blockchain are two of the most notable technologies of recent years. The first is the foundation of artificial intelligence and big data analysis, and the second has significantly disrupted the financial industry. Both technologies are data-driven, and thus there is rapidly growing interest in integrating both for more secure and efficient data sharing and analysis. In this article, we review existing research on combining machine learning and blockchain technologies and demonstrate that they can collaborate efficiently and effectively. In the end, we point out some future directions and expect more research on deeper integration of these two promising technologies.
Article
Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision. There has been a recent rise of interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that instead efforts should be directed at building inherently interpretable models in the first place, in particular where they are applied in applications that directly affect human lives, such as in healthcare and criminal justice.
Article
Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model with training data distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of utmost importance. In this paper, we propose two ways to reduce the uplink communication costs. The proposed methods are evaluated on the application of training a deep neural network to perform image classification. Our best approach reduces the upload communication required to train a reasonable model by two orders of magnitude.
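
One of the general uplink-reduction strategies this line of work considers is compressing each client's update before transmission. The sketch below uses 1-bit sign quantization with a single scale per update; it mirrors the general idea of structured or quantized updates under assumed settings, not the paper's specific schemes.

```python
# Illustrative 1-bit quantization of client updates to reduce uplink cost.
import numpy as np

rng = np.random.default_rng(5)
dim, n_clients = 10_000, 20
global_model = np.zeros(dim)


def quantize(update):
    scale = np.mean(np.abs(update))            # one float transmitted
    return scale, np.signbit(update)           # one bit per coordinate transmitted


def dequantize(scale, sign_bits):
    return scale * np.where(sign_bits, -1.0, 1.0)


reconstructed = []
for _ in range(n_clients):
    local_update = rng.normal(size=dim)        # stand-in for a locally computed update
    scale, bits = quantize(local_update)
    reconstructed.append(dequantize(scale, bits))

global_model += np.mean(reconstructed, axis=0)
full_bits, sent_bits = dim * 32, dim * 1 + 32  # assuming 32-bit floats uncompressed
print(f"uplink reduced roughly {full_bits / sent_bits:.0f}x per client")
```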
Article
Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.
Article
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
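
The timestamped proof-of-work chain described above can be illustrated with a toy ledger in which each block's hash commits to its predecessor, so altering an earlier block invalidates every later link. The block layout and difficulty below are illustrative assumptions, not Bitcoin's actual block format.

```python
# Toy hash-chained ledger with proof-of-work; tampering breaks the chain.
import hashlib
import json


def proof_of_work(block: dict, difficulty: int = 4) -> dict:
    while True:
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            block["hash"] = digest
            return block
        block["nonce"] += 1


chain = [proof_of_work({"prev": "0" * 64, "tx": ["genesis"], "nonce": 0})]
for txs in (["alice->bob:5"], ["bob->carol:2"]):
    chain.append(proof_of_work({"prev": chain[-1]["hash"], "tx": txs, "nonce": 0}))

# Tampering with an old block breaks every later link in the chain.
chain[1]["tx"] = ["alice->mallory:5"]
recomputed = hashlib.sha256(
    json.dumps({k: v for k, v in chain[1].items() if k != "hash"}, sort_keys=True).encode()
).hexdigest()
print("block 1 still valid?", recomputed == chain[1]["hash"])
```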
O. Ural and K. Yoshigoe, "Survey on Blockchain-Enhanced Machine Learning," IEEE Access, vol. 11, pp. 145331-145362, 2023.
R. Binns, "Fairness in Machine Learning: Lessons from Political Philosophy," in Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency, 2018, pp. 149-159.