Chapter

Blockchain-Based Scalable Network for Bioinformatics and Internet of Medical Things (IoMT)


Abstract

The major problem in today's data creation and monetization is that the data creators (individual people trading, traveling, and interacting on social media) are not the data aggregators (the Googles, Facebooks, and Amazons of the world). As a result, the full potential of personal data in the age of informatics has yet to materialize. This leads to constant conflict within the data ecosystem over who has the right to own and monetize data: the creators or the aggregators. It has also led to a protracted debate on data sovereignty and to expanding data-privacy legislation that we confront every day when we navigate any website. The holy-grail solution to this problem is vertical integration, i.e., integrating the data value chain by ensuring that data creators and aggregators are one and the same in the data value stack. Until recently, this was deemed technologically impossible because individuals cannot be their own bank, their own e-commerce platform, their own search engine, and their own social media. However, advances in device engineering and miniaturization have ushered in a new age of distributed multifunctional sensors, often called the Internet of Things (IoT). In particular, the distributed miniaturized devices that measure the biological attributes of individuals are called the Internet of Medical Things (IoMT). This chapter describes an end-to-end ecosystem that offers a solution to this problem and the commercial pilot model it has implemented using the nascent but promising blockchain technology.

Keywords: Internet of Medical Things (IoMT); Quantified wellness; Proof of identity; Homomorphic algorithms; Data monetization; Bioinformatics


... To avoid disturbing the weights a model has already learned, we can freeze models pre-trained on datasets such as ImageNet and add fully connected layers at the end of the network to obtain a high-precision neural network model [19], [20]. However, because EEG signals vary greatly and differ substantially between subjects, data augmentation and transfer learning methods applied directly to EEG signals cannot achieve the expected results [21], [22]. The cross-domain problem, in which test and training data are not independently and identically distributed, calls for adaptive data augmentation and DL strategies. ...
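The freeze-and-extend recipe described in this excerpt can be sketched briefly. The snippet below is a minimal illustration, assuming PyTorch and a torchvision ResNet-18 (both choices, and the layer sizes, are assumptions rather than anything prescribed by the cited works):

```python
import torch.nn as nn
from torchvision import models

# Illustrative backbone choice: a torchvision ResNet-18 pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pre-trained weight so training cannot disturb what was learned.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head with new fully connected layers; only these train.
num_classes = 4  # hypothetical number of target classes
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(),
    nn.Linear(256, num_classes),
)

# The optimizer is then built over model.fc.parameters() alone.
```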
Article
The integration of healthcare monitoring with Internet of Things (IoT) networks radically transforms the management and monitoring of human well-being. Portable and lightweight electroencephalography (EEG) systems with fewer electrodes have improved convenience and flexibility while retaining adequate accuracy. However, challenges emerge when dealing with real-time EEG data from IoT devices due to the presence of noisy samples, which impedes improvements in brainwave detection accuracy. Moreover, high inter-subject variability and substantial variability within EEG signals present difficulties for conventional data augmentation and subtask learning techniques, leading to poor generalizability. To address these issues, we present a novel framework for enhancing EEG-based recognition through multi-resolution data analysis, capturing features at different scales using wavelet fractals. The original data can be expanded many times over through continuous wavelet transform (CWT) and recombination, alleviating the shortage of training samples. In the transfer stage of deep learning (DL) models, we adopt a subtask learning approach to train the recognition model to generalize efficiently. This incorporates wavelets at various scales instead of exclusively considering average prediction performance across scales and paradigms. Through extensive experiments, we demonstrate that our proposed DL-based method excels at extracting features from small-scale and noisy EEG data. This significantly improves healthcare monitoring performance by mitigating the impact of noise introduced by the external environment.
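As a rough illustration of the multi-resolution expansion described above, the sketch below computes CWT coefficients of one trace over two assumed scale ranges, yielding multiple training views of the same signal. It assumes the PyWavelets package and a Morlet wavelet; the paper's wavelet-fractal recombination and subtask learning pipeline are more involved.

```python
import numpy as np
import pywt  # PyWavelets

def cwt_views(signal: np.ndarray, scale_sets, wavelet: str = "morl"):
    """Expand one EEG trace into several multi-resolution views via the CWT.
    Each view is an (n_scales, n_samples) array of wavelet coefficients."""
    views = []
    for scales in scale_sets:
        coefs, _freqs = pywt.cwt(signal, scales, wavelet)
        views.append(coefs)
    return views

rng = np.random.default_rng(0)
eeg = rng.standard_normal(512)  # stand-in for one noisy EEG channel

# Fine and coarse scale ranges give two differently resolved training views
# of the same trace, multiplying the effective number of samples.
augmented = cwt_views(eeg, scale_sets=[np.arange(1, 17), np.arange(8, 33)])
print([v.shape for v in augmented])  # [(16, 512), (25, 512)]
```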
Chapter
Full-text available
With new technologies related to the development of computers, graphics, and hardware, the virtual world has become a reality. As COVID-19 has spread around the world, the demand for virtual reality has increased, and the industry represented by the Metaverse is developing. In the Metaverse, a virtual world that transcends reality, artificial intelligence and blockchain technology are being combined. This chapter explains how artificial intelligence and blockchain can affect the Metaverse.
Article
Full-text available
Smart cities achieved digital transformation of patients' health records through the use of new technology in the IoT healthcare industry. IoT and remote patient monitoring systems have become fundamental to reducing the movement of patients, and hence the risk of spreading Covid-19 infection. The Ministry of Health in the Kingdom of Bahrain strives to achieve digital transformation in the healthcare industry, where the National Health Information System (I-SEHA) was launched to provide higher-quality health services. The system interconnects the public healthcare institutes, allowing access to patients' data from any location without the hassle of moving files physically. Digitizing patients' medical data and sharing some of it with institutions outside the protected networks may, however, raise major privacy and integrity concerns. This paper introduces the Blockchain-based Zero-Knowledge Proof (BZKP) model, an IoT-based patient-centric model that incorporates a zero-knowledge proof solution for protecting patients' privacy and ensures patients' prior consent to any access to their data, including their health status and account balance. The proposed model provides a robust and scalable architecture for data sharing that protects the privacy of sensitive data while maintaining high availability. It also provides strong trust and integrity of data by using the immutability features of the blockchain. BZKP is based on pre-approved blockchain access tokens to address challenges of accountability and privacy in Bahrain's smart cities. As a result, the model provides a secure and trusted access model for different stakeholders to share patient data while maintaining privacy, trust, and high availability. The zero-knowledge proof can be combined with smart contracts, whose programmable actions can automate the prescription dispensation process for private pharmacies in a decentralized manner with high confidence. Finally, the paper recommends enhancements to the electronic key (eKey) procedures used by the eGovernment of the Kingdom of Bahrain to update the smart card that stores the personal keys, protecting patients' privacy and providing better consent.
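The zero-knowledge primitive underlying such a model can be made concrete with a minimal Schnorr-style proof of knowledge, shown below in Python with toy parameters. This is a generic sketch of the primitive, not the BZKP construction itself, and real deployments use large groups or elliptic curves: a patient proves possession of the secret key behind a public identity without revealing it.

```python
import hashlib
import secrets

# Toy Schnorr group (demo only): p = 2q + 1, and g generates the order-q subgroup.
p, q, g = 23, 11, 4

def challenge(*parts) -> int:
    digest = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return int(digest, 16) % q

# The patient proves knowledge of the secret x behind the public key y = g^x mod p
# without revealing x (made non-interactive via the Fiat-Shamir heuristic).
x = secrets.randbelow(q - 1) + 1   # secret key
y = pow(g, x, p)                   # public key

def prove(x: int) -> tuple[int, int]:
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)               # commitment
    c = challenge(g, y, t)         # hash-derived challenge
    s = (r + c * x) % q            # response binds the challenge to the secret
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # g^s == g^r * (g^x)^c

t, s = prove(x)
print(verify(y, t, s))  # True, yet (t, s) reveals nothing about x
```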
Article
Full-text available
In this article, we examine some of the expectations, frictions and uncertainties involved in the assetization of de-identified NHS patient data by (primary care) research services in the UK. Pledges to Electronic Health Record (EHR) data-driven research attempt to reconfigure public health data as an asset for realizing multiple values across healthcare, research and finance. We introduce the concept of 'asymmetrical divergence' in public health data assetization to study the various practices of configuring and using these data, both as a continuously generated resource to be extracted and as an asset to be circulated in the knowledge economy. As data assetization and exploitation grow bigger and more diverse, the capitalization of these datasets may establish EHR data-driven research in healthcare as an attractive technoscientific activity, but one limited to those actors with the specific sociotechnical resources in place to exploit it fully at the required scale.
Conference Paper
Full-text available
Data is becoming one of the world's most valuable resources, and it is suggested that those who own the data will own the future. However, despite data being an important asset, data owners struggle to assess its value. Some recent pioneering works have led to an increased awareness of the necessity of measuring data value. They have also put forward some simple but engaging survey-based methods to help with first-level data assessment in an organisation. However, these methods are manual and depend on the costly input of domain experts. In this paper, we propose to extend the manual survey-based approaches with additional metrics and dimensions derived from the evolving literature on data value dimensions and tailored specifically to our case study. We also developed an automatic, metric-based data value assessment approach that (i) automatically quantifies the business value of data in Relational Databases (RDB), and (ii) provides a scoring method that facilitates the ranking and extraction of the most valuable RDB tables. We evaluate our proposed approach on a real-world RDB database from a small online retailer (MyVolts) and show in our experimental study that the data value assessments made by our automated system match those expressed by the domain-expert approach.
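A metric-based scoring of this kind reduces, at its core, to a weighted sum over normalized per-table metrics. The sketch below illustrates the idea with hypothetical metrics and weights; the paper's actual dimensions and scoring method differ.

```python
from dataclasses import dataclass

# Hypothetical per-table metrics and weights, purely for illustration;
# the paper derives its own dimensions from the data-value literature.
@dataclass
class TableMetrics:
    name: str
    completeness: float  # fraction of non-null cells, in [0, 1]
    usage: float         # normalized query frequency, in [0, 1]
    uniqueness: float    # fraction of distinct rows, in [0, 1]

WEIGHTS = {"completeness": 0.4, "usage": 0.4, "uniqueness": 0.2}

def score(m: TableMetrics) -> float:
    """A weighted sum over normalized metrics yields a per-table value score."""
    return (WEIGHTS["completeness"] * m.completeness
            + WEIGHTS["usage"] * m.usage
            + WEIGHTS["uniqueness"] * m.uniqueness)

tables = [
    TableMetrics("orders", completeness=0.97, usage=0.90, uniqueness=0.99),
    TableMetrics("audit_log", completeness=0.80, usage=0.10, uniqueness=1.00),
]
# Rank tables so the most valuable ones can be extracted first.
for t in sorted(tables, key=score, reverse=True):
    print(f"{t.name}: {score(t):.2f}")
```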
Article
Full-text available
The protection and processing of sensitive data in big data systems are common problems, as the increase in data size increases the need for high processing power. Protecting sensitive data on a system with multiple connections under different privacy policies also adds the extra work of using proper cryptographic key-exchange methods for each party. Homomorphic encryption methods can perform arithmetic operations on encrypted data in the same way as on the plain form of the data. These methods thus provide data privacy, since the data are processed in the encrypted domain without ever needing the plain form, which allows the computations to be outsourced to cloud systems. This also simplifies key-exchange sessions for all sides. In this paper, we propose novel privacy-preserving clustering methods, alongside homomorphic encryption schemes, that can run on a common high-performance computation platform such as a cloud system. As a result, the parties in this system need not possess high processing power, because the most power-demanding tasks can be done by any cloud provider. Our system offers privacy-preserving distance matrix calculation for several clustering algorithms. Considering both the encrypted and plain forms of the same data for different key and data lengths, we obtain performance results for our privacy-preserving training method across four different data clustering algorithms and six different evaluation metrics.
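To make the encrypted-distance idea concrete: with an additively homomorphic scheme such as Paillier, a squared Euclidean distance can be evaluated on ciphertexts when the data owner also supplies encryptions of the squared features. The sketch below assumes the python-paillier (phe) package and illustrates only the primitive, not the paper's clustering pipeline.

```python
# pip install phe  (python-paillier, an assumed choice; the paper's schemes differ)
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

# The data owner encrypts each feature and its square, then hands the
# ciphertexts to the cloud; the private key never leaves the owner.
a = [3.0, 1.0, 4.0]
enc_a = [pub.encrypt(v) for v in a]
enc_a_sq = [pub.encrypt(v * v) for v in a]

# The cloud holds a plaintext cluster centroid b and evaluates
# ||a - b||^2 = sum(a_i^2 - 2*a_i*b_i + b_i^2) entirely under encryption:
# Paillier supports ciphertext+ciphertext, ciphertext+scalar, ciphertext*scalar.
b = [2.0, 2.0, 2.0]
enc_dist = pub.encrypt(0.0)
for ea, ea2, bi in zip(enc_a, enc_a_sq, b):
    enc_dist = enc_dist + ea2 + ea * (-2.0 * bi) + bi * bi

print(priv.decrypt(enc_dist))  # 6.0 = (3-2)^2 + (1-2)^2 + (4-2)^2
```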
Article
Full-text available
Healthcare systems around the world are facing incredible challenges due to the ageing population and its related disability, the increasing use of technologies, and citizens' expectations. Improving health outcomes while containing costs remains a stumbling block. In this context, Big Data can help healthcare providers meet these goals in unprecedented ways. The potential of Big Data in healthcare relies on the ability to detect patterns and to turn high volumes of data into actionable knowledge for precision medicine and decision makers. In several contexts, the use of Big Data in healthcare is already offering solutions for the improvement of patient care and the generation of value in healthcare organizations. This approach requires, however, that all the relevant stakeholders collaborate and adapt the design and performance of their systems. They must build the technological infrastructure to house and converge the massive volume of healthcare data, and invest in the human capital to guide citizens into this new frontier of human health and well-being. The present work reports an overview of best-practice initiatives in Europe related to Big Data analytics in the public health and oncology sectors, aimed at generating new knowledge, improving clinical care and streamlining public health surveillance.
Chapter
Full-text available
Data has become an essential resource in the new economy. Artificial intelligence is increasing its computational capacity and uses big data techniques to analyse large datasets in real time and extract precious knowledge. As the data-driven transformation reaches into society, ever-increasing amounts of data are generated by autonomous connected machines or by processes based on the Internet of Things (IoT). The value of the "data economy" in the EU was estimated at more than EUR 285 billion in 2015, with 5.03% annual growth. With the right policy and legal framework conditions, its value is expected to increase to EUR 739 billion by 2020. The new data economy, however, raises new challenges and unsolved issues. The enormous diversity of data sources and the variety of opportunities for applying insights from this data are only beginning to emerge. To unleash the potential of these opportunities, players in the data market need access to large and diverse datasets. Access to machine-generated data is therefore a crucial factor for developing a data economy. In big data, IoT and smart technologies, where a multitude of actors interact in the elaboration of data, it is often asked: who owns the data? While organized datasets can be subject to intellectual property rights, and the use of personal data is regulated by data protection laws, this question particularly applies to raw (machine-generated) data, which are increasing in value as a source of precious insights and fall outside the scope of classical ownership/property schemes. As part of its Digital Single Market strategy, the European Commission has started a series of initiatives aimed at addressing the data ownership issue. They culminated in the idea of introducing a novel right in raw machine-generated data. Do we really need new ownership rights in data? This paper briefly summarizes the European Commission's strategy. It recalls the main characteristics of the data value chain. It then elaborates on the existing EU acquis on data ownership, deriving from intellectual property rights (namely copyright and the database right), trade secrets, and factual control situations derived from data protection laws. These ownership mechanisms are powerful, although they extend to raw data only with difficulty. Likewise, raw data appears to be excluded from traditional property rights. Despite this, gaps in the law have been filled through contractual schemes and technological access restrictions that enhance the ability to control data. Thus, many commentators believe this ownership framework does not need further intervention. The paper further explores the position of those who support the idea of a new property right in data and elaborates on the newly proposed right. This analysis concludes that creating new monopolies capable of restricting open access to data may threaten the development of an EU data market. Further economic evidence is needed before discussing the introduction of such a new right. Indeed, we should learn from past lessons, as happened with the Database Directive: new rights, once introduced into the legal system, even in the absence of any evidence of their positive effect, are here to stay. Other suggested approaches seem better able to fit the needs of the data economy. In particular, sector-based access against remuneration is an option worth investigating. Here too, however, this must come together with economic evidence and an in-depth analysis of possible market failures.
Article
Full-text available
The ability to perform laboratory testing near the patient and with smaller blood volumes would benefit patients and physicians alike. We describe our design of a miniaturized clinical laboratory system with three components: a hardware platform (i.e., the miniLab) that performs pre-analytical and analytical processing steps using miniaturized sample manipulation and detection modules, an assay-configurable cartridge that provides consumable materials and assay reagents, and a server that communicates bi-directionally with the miniLab to manage assay-specific protocols and analyze, store, and report results (i.e., the virtual analyzer). The miniLab can detect analytes in blood using multiple methods, including molecular diagnostics, immunoassays, clinical chemistry, and hematology. Analytical performance results show that our qualitative Zika virus assay has a limit of detection of 55 genomic copies/mL. For our anti-herpes simplex virus type 2 immunoglobulin G, lipid panel, and lymphocyte subset panel assays, the miniLab has low imprecision, and method comparison results agree well with those from United States Food and Drug Administration-cleared devices. With its small footprint and versatility, the miniLab has the potential to provide testing of a range of analytes in decentralized locations.
Article
Full-text available
Purpose: The viability of online anonymity is questioned in today's online environment, where many technologies enable the tracking and identification of individuals. The shortcomings of government, industry and consumers in protecting anonymity make clear that a new perspective for ensuring anonymity is needed. Where current stakeholders have failed to protect anonymity, some proponents argue that economic models exist for the valuation of anonymity. By placing monetary value on anonymity through Rawls' concept of primary goods, it is possible to create a marketplace for anonymity, allowing users full control over how their personal data is used. Such a marketplace offers users the possibility to engage with companies and other entities to sell and auction personal data. Importantly, participation in a marketplace does not sacrifice one's anonymity, since there are different levels of anonymity in online systems.
Design/methodology/approach: The paper utilizes a conceptual framework based on the abstractions of anonymity and data valuation.
Findings: The manuscript constructs a conceptual foundation for exploring the development and deployment of a personal data marketplace. It is argued that, by offering individuals features for controlling their personal data and properly establishing the monetary valuation of personal data, individuals will manage their personal data more proactively.
Originality/value: An overview of available services and products offering increased anonymity is explored, illustrating the beginnings of a market response to anonymity as a valuable good. By placing monetary value on individuals' anonymity, it is reasoned that individuals will more consciously protect their anonymity in ways where legislation and other practices (i.e., privacy policies, marketing opt-outs) have failed.
Article
Differential privacy is a formal mathematical framework for quantifying and managing privacy risks. It provides provable privacy protection against a wide range of potential attacks, including those currently unforeseen. Differential privacy is primarily studied in the context of the collection, analysis, and release of aggregate statistics. These range from simple statistical estimations, such as averages, to machine learning. Tools for differentially private analysis are now in early stages of implementation and use across a variety of academic, industry, and government settings. Interest in the concept is growing among potential users of the tools, as well as within legal and policy communities, as it holds promise as a potential approach to satisfying legal requirements for privacy protection when handling personal information. In particular, differential privacy may be seen as a technical solution for analyzing and sharing data while protecting the privacy of individuals in accordance with existing legal or policy requirements for de-identification or disclosure limitation. This primer seeks to introduce the concept of differential privacy and its privacy implications to non-technical audiences. It provides a simplified and informal, but mathematically accurate, description of differential privacy. Using intuitive illustrations and limited mathematical formalism, it discusses the definition of differential privacy, how differential privacy addresses privacy risks, how differentially private analyses are constructed, and how such analyses can be used in practice. A series of illustrations is used to show how practitioners and policymakers can conceptualize the guarantees provided by differential privacy. These illustrations are also used to explain related concepts, such as composition (the accumulation of risk across multiple analyses), privacy loss parameters, and privacy budgets. This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.
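The canonical construction such a primer builds on is the Laplace mechanism: add noise scaled to sensitivity/epsilon to an aggregate statistic before releasing it. A minimal sketch, assuming NumPy:

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count under epsilon-differential privacy (Laplace mechanism)."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# One person joining or leaving the data changes a count by at most 1,
# so sensitivity = 1; a smaller epsilon adds more noise and spends less
# of the privacy budget per release (composition sums the epsilons).
print(laplace_count(true_count=1000, epsilon=0.5))
```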
Article
Algorand is a truly decentralized, new, and secure way to manage a shared ledger. Unlike prior approaches based on proof of work, it requires a negligible amount of computation and generates a transaction history that, with overwhelmingly high probability, does not fork. This approach cryptographically selects, in a way that is provably immune to manipulation, unpredictable until the last minute, yet ultimately universally clear, a set of verifiers in charge of constructing a block of valid transactions. This approach applies to any way of implementing a shared ledger via a tamper-proof sequence of blocks, including traditional blockchains. This paper also presents more efficient alternatives to blockchains, which may be of independent interest. Algorand significantly enhances all applications based on a public ledger: payments, smart contracts, stock settlement, etc. But, for concreteness, we shall describe it only as a money platform.
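The verifier-selection step can be illustrated with a toy stake-weighted sortition. The sketch below substitutes a plain hash for Algorand's verifiable random function purely to show unpredictable, stake-proportional selection; all names and parameters are illustrative assumptions.

```python
import hashlib

def sortition(seed: bytes, user_sk: bytes, stake: int, total_stake: int,
              committee_size: int) -> bool:
    """Toy stake-weighted committee selection.
    Algorand evaluates a verifiable random function (VRF) with the user's
    secret key; the bare SHA-256 below is a stand-in for illustration only
    and, unlike a VRF output, is not publicly verifiable."""
    digest = hashlib.sha256(seed + user_sk).digest()
    draw = int.from_bytes(digest, "big") / 2**256   # uniform in [0, 1)
    p = min(1.0, committee_size * stake / total_stake)
    return draw < p  # selected with probability proportional to stake

# A user holding 5% of the stake targets ~5 of 100 committee seats in expectation.
print(sortition(b"round-42", b"user-secret-key", stake=50,
                total_stake=1000, committee_size=100))
```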
Article
We present the first demonstration of a proprietary non-contact atmospheric electron beam-induced plasma probe technique for current-based electrical characterization of flat panel display backplanes. Accurate I-V curves were measured and single line defect sensitivity was demonstrated. This technology is expected to greatly benefit AMOLED display fabrication yields.
Should you worry about your health data now that Google owns Fitbit? MUO
  • J Frew
FDA clears “world’s first” portable, low-cost MRI following positive clinical research. Health Imaging
  • M O'Connor
Engineering of a miniaturized, robotic clinical laboratory
  • M B Nourse
  • K Engel
  • S G Anekal
  • J A Bailey
  • P Bhatta
  • D P Bhave
  • S Chandrasekaran
  • Y Chen
  • S Chow
  • U Das
  • E Galil
  • X Gong
  • S F Gessert
  • K D Ha
  • R Hu
  • L Hyland
  • A Jammalamadaka
  • K Jayasurya
  • T M Kemp
  • E A Holmes
Maskless plasma patterning of fluidic channels for multiplexing fluid flow on microfluidic devices
  • N Saleh
  • K Sahagian
  • P S Ehrlich
  • E Sterling
  • D Toet
  • L M Levine
  • M Larne
Address geocoding services in geospatial-based epidemiological analysis: a comparative reliability for domestic disease mapping
  • N Monir
  • Abdul Rasam
  • A R Ghazali
  • R Suhandri
  • H F Cahyono
Who owns a photograph in the social media age? JD Supra
  • A Lewis
Self-flowing microfluidic analytical chip (United States Patent Application)
  • N Saleh
  • W Khalid
  • F Saleh
Apparatus and method for programmable spatially selective nanoscale surface functionalization (United States Patent Application)
  • N Saleh
  • W Khalid
  • F Saleh
Portable sequencing is reshaping genetics research
  • C De Rojas
Who really owns your health data? Forbes
  • R Sharma
The network effects bible
  • J Currier
Reportable diseases. MedlinePlus; National Library of Medicine
Wyoming wants to be the crypto capital of the U.S. Slate
  • E Botella
Preserving privacy in mobile health systems using non-interactive zero-knowledge proof and blockchain
  • A E B Tomaz
  • J C D Nascimento
  • A S Hafid
  • De Souza