Abstract
This chapter summarizes four global challenges for AI: health, education, the environment, and science. In each area, AI has enormous potential to enhance human well-being, yet very substantial obstacles remain in both basic research and global deployment. Beyond these four areas, we ask whether reliance on AI to solve our problems is a viable strategy for humanity.
... Technology adoption is one of the driving forces of economic growth [3]. In particular, it can help in tackling global challenges such as health, education, the environment, and science, and it has significant capability to address regional, local, and organizational challenges [4]. However, technology adoption is itself a challenge whose success or failure depends on how it is approached. ...
... The ABS ranking list is a guide to the range and quality of journals in which business and management academics publish their research. Its purpose is to give both emerging and established scholars greater clarity as to which journals to aim for, and where the best work in their field tends to be clustered. CORE provides assessments of major journals in the computing disciplines (https://www.core.edu.au/ ...
Background: Intelligent software is a significant societal change agent. Recent research indicates that organizations must change to reap the full benefits of AI. We refer to this change as AI transformation (AIT). The key challenge is to determine how to change and what the consequences of increased AI use are.
Aim: The aim of this study is to aggregate the body of knowledge on AIT research.
Method: We perform a systematic mapping study (SMS) and follow Kitchenham's procedure. We identify 52 studies from Scopus, IEEE, and Science Direct (2010–2020). We use the Mixed-Methods Appraisal Tool (MMAT) to critically assess empirical work.
Results: Work on AIT is mainly qualitative and originates from various disciplines. We are unable to identify any useful definition of AIT. To our knowledge, this is the first SMS that focuses on empirical AIT research. Only a few empirical studies were found in the sample we identified.
Conclusions: We define AIT and propose a research agenda. Despite the popularity of and attention to AI and its effects on organizations, our study reveals that a significant number of publications on the topic lack proper methodology or empirical data.
Climate change is one of the greatest challenges facing humanity, and we, as machine learning (ML) experts, may wonder how we can help. Here we describe how ML can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by ML, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the ML community to join the global effort against climate change.
The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards. Artificial intelligence (AI) is becoming more and more common in people’s lives. Here, the authors use an expert elicitation method to understand how AI may affect the achievement of the Sustainable Development Goals.
Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified “0D” version of this model, which makes it possible to obtain reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model, with scripts to perform parameter estimation, will be released to the community.
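As a rough illustration of the multifidelity idea, the sketch below uses toy stand-in functions rather than the authors' 3D and 0D cardiac models: one expensive high-fidelity evaluation per outer iteration supplies an additive discrepancy correction, while the inner parameter search uses only cheap low-fidelity evaluations.

```python
# A minimal sketch of the multifidelity idea, with toy stand-in functions for
# the expensive "3D" simulator and the cheap "0D" surrogate (not the authors'
# cardiac models). One expensive call per outer iteration provides an additive
# discrepancy correction; the inner optimization uses only cheap evaluations.
import numpy as np
from scipy.optimize import minimize

def model_3d(theta):
    """Placeholder for the slow high-fidelity simulation."""
    return np.sin(theta[0]) + 0.5 * theta[1] ** 2 + 0.05 * theta[0] * theta[1]

def model_0d(theta):
    """Placeholder for the fast low-fidelity approximation."""
    return np.sin(theta[0]) + 0.5 * theta[1] ** 2

target = model_3d(np.array([0.8, 1.2]))   # stand-in "clinical measurement"
theta = np.array([0.0, 0.0])              # initial parameter guess

for _ in range(5):
    delta = model_3d(theta) - model_0d(theta)   # one expensive call

    def cost(t):
        return (model_0d(t) + delta - target) ** 2

    theta = minimize(cost, theta, method="Nelder-Mead").x   # many cheap calls

# Parameters reproducing the measurement (not necessarily the unique truth).
print("estimated parameters:", theta)
```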
Bayesian Knowledge Tracing (BKT) [1] is a user modeling method extensively used in the area of Intelligent Tutoring Systems. In the standard BKT implementation, there are only skill-specific parameters. However, a large body of research strongly suggests that student-specific variability in the data, when accounted for, could enhance model accuracy [5,6,8]. In this work, we revisit the problem of introducing student-specific parameters into BKT on a larger scale. We show that student-specific parameters lead to a tangible improvement when predicting the data of unseen students, and that parameterizing students’ speed of learning is more beneficial than parameterizing a priori knowledge.
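For readers unfamiliar with BKT, the following minimal sketch shows the standard knowledge-update step; the per-student learning-speed multiplier is a hypothetical stand-in for the student-specific parameterization discussed above, not the exact factorization used in the paper.

```python
# A minimal Bayesian Knowledge Tracing update step. The per-student
# learning-speed multiplier is a hypothetical stand-in for the paper's
# student-specific parameterization, not its exact factorization.
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15, student_speed=1.0):
    """Return P(skill known) after observing one graded response."""
    if correct:
        evidence = p_know * (1 - slip)
        posterior = evidence / (evidence + (1 - p_know) * guess)
    else:
        evidence = p_know * slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - guess))
    p_learn = min(1.0, learn * student_speed)   # student-specific speed (assumption)
    return posterior + (1 - posterior) * p_learn

# Trace one (fast-learning) student's estimated mastery over four responses.
p = 0.3  # prior P(L0)
for obs in [0, 1, 1, 1]:
    p = bkt_update(p, obs, student_speed=1.4)
    print(round(p, 3))
```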
The machine learning program GOLEM from the field of inductive logic programming was applied to the drug design problem of modeling structure-activity relationships. The training data for the program were 44 trimethoprim analogues and their observed inhibition of Escherichia coli dihydrofolate reductase. A further 11 compounds were used as unseen test data. GOLEM obtained rules that were statistically more accurate on the training data, and also better on the test data, than a Hansch linear regression model. Importantly, machine learning yields understandable rules that characterize the chemistry of favored inhibitors in terms of polarity, flexibility, and hydrogen-bonding character. These rules agree with the stereochemistry of the interaction observed crystallographically.
Our research work focuses on computer-aided grouping of students based on questions answered in an assessment, for effective reading intervention in early education. The work can facilitate placement of students with similar reading disabilities in the same intervention group to optimize corrective actions. We collected ELA (English Language Arts) assessment data from two different schools in the USA, involving 365 students. Each student performed three mock assessments. We formulated the problem as a matching problem: an assessment should be matched to other assessments performed by the same student in the feature space. In this paper, we present a study of a number of matching schemes with low-level features gauging the grade-level readability of a piece of writing. The matching criterion for assessments is the consistency measure of matched questions based on the students’ answers to the questions. An assessment is matched to other assessments using K-Nearest-Neighbor. The best result is achieved by the matching scheme that considers the best match for each question, with a success rate of 17.6% on highly imbalanced data in which only about 5% of cases belong to the true class.
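The matching idea can be illustrated with a small synthetic sketch: each assessment is represented by a feature vector, and K-Nearest-Neighbor asks whether an assessment's closest neighbour comes from the same student. The features and data below are placeholders, not the ELA assessment data used in the study.

```python
# Sketch of the nearest-neighbour matching idea: each assessment is a vector of
# low-level readability-style features, and we ask whether its nearest neighbour
# comes from the same student. Features and data are synthetic placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
n_students, tests_per_student, n_features = 50, 3, 8

# Simulate a weak per-student signature plus noise.
student_profiles = rng.normal(size=(n_students, n_features))
X = np.repeat(student_profiles, tests_per_student, axis=0) + rng.normal(
    scale=1.5, size=(n_students * tests_per_student, n_features)
)
student_id = np.repeat(np.arange(n_students), tests_per_student)

nn = NearestNeighbors(n_neighbors=2).fit(X)           # neighbour 0 is the point itself
_, idx = nn.kneighbors(X)
hits = (student_id[idx[:, 1]] == student_id).mean()   # same-student match rate
print(f"nearest-neighbour same-student rate: {hits:.2%}")
```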
Due to the rapid emergence of antibiotic-resistant bacteria, there is a growing need to discover new antibiotics. To address this challenge, we trained a deep neural network capable of predicting molecules with antibacterial activity. We performed predictions on multiple chemical libraries and discovered a molecule from the Drug Repurposing Hub, halicin, that is structurally divergent from conventional antibiotics and displays bactericidal activity against a wide phylogenetic spectrum of pathogens including Mycobacterium tuberculosis and carbapenem-resistant Enterobacteriaceae. Halicin also effectively treated Clostridioides difficile and pan-resistant Acinetobacter baumannii infections in murine models. Additionally, from a discrete set of 23 empirically tested predictions from >107 million molecules curated from the ZINC15 database, our model identified eight antibacterial compounds that are structurally distant from known antibiotics. This work highlights the utility of deep learning approaches to expand our antibiotic arsenal through the discovery of structurally distinct antibacterial molecules.
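The sketch below is a deliberately simplified stand-in for the paper's approach, which used a directed message-passing neural network: Morgan fingerprints plus a random-forest classifier illustrate the general recipe of predicting antibacterial activity from molecular structure. The SMILES strings and labels are illustrative, not the study's training data.

```python
# A much simpler stand-in for the paper's message-passing network: Morgan
# fingerprints plus a random forest, illustrating "predict activity from
# molecular structure". SMILES and labels are illustrative placeholders.
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "CC(=O)O", "c1ccccc1O", "CCN(CC)CC", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
labels = [0, 0, 1, 0, 1]  # hypothetical active/inactive flags

def featurize(smi, n_bits=2048):
    """Map a SMILES string to a Morgan fingerprint bit list."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return [int(fp[i]) for i in range(n_bits)]

X = [featurize(s) for s in smiles]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Rank an unseen molecule (aspirin, as an arbitrary example) by predicted
# probability of activity.
print(clf.predict_proba([featurize("CC(=O)Oc1ccccc1C(=O)O")])[0, 1])
```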
Machine learning (ML) encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, and it has entered most scientific disciplines in recent years. This article reviews, in a selective way, recent research on the interface between machine learning and the physical sciences. This includes conceptual developments in ML motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving a basic notion of machine learning methods and principles, examples are described of how statistical physics is used to understand methods in ML. The review then describes applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and material physics. Research and development into novel computing architectures aimed at accelerating ML are also highlighted. Each section describes recent successes as well as domain-specific methodology and challenges.
Water is one of the key resources available to mankind. All living things consist mostly of water; the human body, for example, is about 67% water. Water is crucial for life and essential for sustenance, and it is also a vital input for raising agricultural productivity, so using water as efficiently as possible is key to improving farming and gardening in the nation. Sensoponics helps farmers and gardeners distribute water to crops and plants by supplying it only when it is needed, which helps prevent water wastage and soil degradation. In this project, we develop an automated smart monitoring and irrigation system that lets farmers and gardeners check the status of their crops and plants from home or from any part of the world. The system helps them irrigate the land in a well-organized manner based on soil humidity, atmospheric temperature and humidity, and the water consumption of the plants. Surplus irrigation reduces plant production, degrades soil fertility, and creates ecological hazards such as water wastage and soil degradation. The smart system not only provides comfort but also improves energy conservation, efficiency, and time-saving. Many farmers cannot afford industry-grade automation and control machinery, which is expensive. In this project, we therefore apply the Internet of Things (IoT): sensor data are read with an Arduino Uno and sent to ThingSpeak, an open-source cloud, to store and analyze the sensor data.
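The cloud-upload step of such a system can be sketched in a few lines. The example below simulates sensor readings and posts them to ThingSpeak's HTTP update endpoint from a Python host rather than from the Arduino Uno itself; the write API key is a placeholder.

```python
# The abstract's pipeline reads sensors on an Arduino Uno and uploads to
# ThingSpeak. This sketch only illustrates the upload step from a Python host,
# with simulated readings; "YOUR_WRITE_API_KEY" must be replaced with a real
# channel key for the request to be accepted.
import random
import time
import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_API_KEY"  # placeholder, not a real key

def read_sensors():
    """Stand-in for soil-moisture / temperature / humidity readings."""
    return {
        "field1": round(random.uniform(20, 80), 1),   # soil moisture (%)
        "field2": round(random.uniform(18, 35), 1),   # air temperature (°C)
        "field3": round(random.uniform(30, 90), 1),   # air humidity (%)
    }

for _ in range(3):
    payload = {"api_key": WRITE_API_KEY, **read_sensors()}
    resp = requests.get(THINGSPEAK_URL, params=payload, timeout=10)
    print("ThingSpeak entry id:", resp.text)   # "0" means the update was rejected
    time.sleep(15)                             # free channels accept roughly 1 update / 15 s
```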
With public and academic attention increasingly focused on the new role of machine learning in the health information economy, an unusual and no-longer-esoteric category of vulnerabilities in machine-learning systems could prove important. These vulnerabilities allow a small, carefully designed change in how inputs are presented to a system to completely alter its output, causing it to confidently arrive at manifestly wrong conclusions. These advanced techniques to subvert otherwise-reliable machine-learning systems—so-called adversarial attacks—have, to date, been of interest primarily to computer science researchers (1). However, the landscape of often-competing interests within health care, and billions of dollars at stake in systems' outputs, implies considerable problems. We outline motivations that various players in the health care system may have to use adversarial attacks and begin a discussion of what to do about them. Far from discouraging continued innovation with medical machine learning, we call for active engagement of medical, technical, legal, and ethical experts in pursuit of efficient, broadly available, and effective health care that machine learning will enable.
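A minimal sketch of the attack class described above is the fast gradient sign method: perturb the input in the direction of the loss gradient. The model and image below are toy stand-ins, not a medical imaging system, and with an untrained model the prediction may or may not actually flip.

```python
# Minimal FGSM-style sketch of the attack class described above: a tiny,
# crafted perturbation of the input is constructed to push the classifier
# toward a wrong answer. Model and input are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # toy "diagnostic" classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in image
y = torch.tensor([0])                              # its (assumed) correct label

# Gradient of the loss w.r.t. the input gives the attack direction.
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1                                       # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# With a trained model and a suitable epsilon, the second prediction typically
# differs from the first even though the images look nearly identical.
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```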
Machine learning approaches are increasingly used to extract patterns and insights from the ever-increasing stream of geospatial data, but current approaches may not be optimal when system behaviour is dominated by spatial or temporal context. Here, rather than amending classical machine learning, we argue that these contextual cues should be used as part of deep learning (an approach that is able to extract spatio-temporal features automatically) to gain further process understanding of Earth system science problems, improving the predictive ability of seasonal forecasting and modelling of long-range spatial connections across multiple timescales, for example. The next step will be a hybrid modelling approach, coupling physical process models with the versatility of data-driven machine learning.
Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images, two orders of magnitude larger than previous datasets, spanning 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13), which could therefore provide low-cost universal access to vital diagnostic care.
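The transfer-learning recipe behind such classifiers can be sketched briefly: start from an ImageNet-pretrained backbone and replace the final layer with one sized to the disease labels. The snippet uses a ResNet-18 stand-in (the paper used Inception v3) and omits the dermatology dataset, which is not publicly available in this form.

```python
# Sketch of the transfer-learning recipe: an ImageNet-pretrained CNN whose
# final layer is replaced and then fine-tuned end-to-end on disease labels.
# ResNet-18 is a stand-in backbone; the paper used Inception v3.
import torch.nn as nn
from torchvision import models

n_classes = 2  # e.g., malignant melanoma vs. benign nevus

model = models.resnet18(weights="IMAGENET1K_V1")       # downloads pretrained weights
model.fc = nn.Linear(model.fc.in_features, n_classes)  # new classification head

# From here: train with CrossEntropyLoss on (image, label) batches as usual;
# all layers stay trainable, so the network is fine-tuned "end-to-end".
print(model.fc)
```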
Importance:
Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.
Objective:
To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs.
Design and setting:
A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency.
Exposure:
Deep learning-trained algorithm.
Main outcomes and measures:
The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity.
Results:
The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%.
Conclusions and relevance:
In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.
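The choice of operating points described in the Results above can be illustrated on synthetic data: sweep the ROC curve of predicted scores and pick one threshold for high specificity and another for high sensitivity. The scores and labels below are simulated, not the EyePACS-1 or Messidor-2 data.

```python
# Sketch of choosing two operating points from a validation set: one threshold
# targeting high specificity, one targeting high sensitivity. Labels and
# scores are synthetic stand-ins for model outputs.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=2000)                              # 1 = referable retinopathy
scores = np.clip(y * 0.6 + rng.normal(0.3, 0.2, 2000), 0, 1)   # synthetic model outputs

fpr, tpr, thresholds = roc_curve(y, scores)
spec = 1 - fpr

i_spec = np.where(spec >= 0.98)[0][-1]   # max sensitivity subject to spec >= 98%
i_sens = np.argmax(tpr >= 0.97)          # max specificity subject to sens >= 97%

print(f"high-spec point: sens={tpr[i_spec]:.3f}, spec={spec[i_spec]:.3f}")
print(f"high-sens point: sens={tpr[i_sens]:.3f}, spec={spec[i_sens]:.3f}")
```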
Over the past 30 years, robots have become standard fixtures in operating rooms. During brain surgery, a NeuroMate robot may guide a neurosurgeon to a target within the pulsing cortex. In orthopedics, a Mako robot sculpts and drills bone during knee and hip replacement surgery. Dominating the general surgery field is the da Vinci robot, a multiarmed device that allows surgeons to conduct precise movements of tools through small incisions that they could not manage with their own hands.
Objectives:
This paper is a systematic literature review intended to gain an understanding of the most original, excellent, state-of-the-art research in the application of eHealth (including mHealth) in the management of chronic diseases, with a focus on cancer, over the past two years.
Method:
This review looks at peer-reviewed papers published between 2013 and 2015 and examines the background and trends in this area. It systematically searched peer-reviewed journals in the PubMed, ProQuest, Cochrane Library, Elsevier, Sage, and Institute of Electrical and Electronics Engineers (IEEE Digital Library) databases using a set of pre-defined keywords. It then employed an iterative process to filter out less relevant publications.
Results:
From an initial search return of 1,519,682 results, twenty-nine peer-reviewed articles were identified as most relevant.
Conclusions:
Based on the results, we conclude that innovative eHealth and, as a subset, mHealth initiatives are rapidly emerging as an important means of managing cancer and other chronic diseases. Adoption is following different paths in the developed and developing worlds. Besides governance and regulatory issues, barriers still exist around information management, interoperability, and integration. These include the online availability of medical records and of information for clinicians and consumers on cancer and other chronic diseases, mobile app bundles that can help manage co-morbidities, and the capacity of supporting communication technologies.
Medical Imaging Informatics has become a fast-evolving discipline at the intersection of Informatics, Computational Sciences, and Medicine that is profoundly changing medical practice, for the patients' benefit.
Asia's farmers are facing a dilemma. On the one hand they are trying to respond to an increasing demand for food and fibre. This demand is being driven by the region's fast-growing population, which is expected to grow by 160% over the next 25 years. Food production will need to grow by 50-75% over this period just to keep pace. On the other hand, this increased production will depend on an already overextended natural resource base. Vast areas of fertile land are being converted to non-agricultural uses, and what remains is threatened by degradation from erosion, nutrient mining, waterlogging and salinization. Water availability per capita in the region fell by 50% between 1950 and 1980, and the rate of decline is now increasing. Given this situation, "increases in yields will be difficult to accomplish. The challenge of increasing agricultural production is even more difficult in Asia where cropping intensities are already highest in the developing world. The potential for yield increases is further limited by poor agricultural resource management practices that result in unsustainable farming systems" (Nath, 1999). As if these challenges weren't enough, the pressures of globalization mean that Asian farmers must compete with farmers the world over for a share of the market and to stay in business. Are there any solutions to these problems? Many argue that widespread adoption of modern technological farming options offers the best way forward. FAO notes that "Millions of poor rural people desperately need access to updated technologies, including machines, improved plant varieties and animal breeds, better crop and post-harvest techniques, and higher investment". Seeing subsistence farming as a "traditional way of life" is part of a "rural nostalgic atavism that is out of step with reality" (FAO, 2000). Technological solutions, however, must contribute to and be compatible with the emerging principles of sustainable agriculture. A great deal of the focus of current agricultural research reflects this urgent priority. Unfortunately, farmers around the world have been slow to adopt sustainable agricultural practices. As Pretty (1995) states, "although there exists successful applications of sustainable agriculture throughout the world, very few farmers have adopted both the technologies or the practices".
The question of whether it is possible to automate the scientific process is of both great theoretical interest (1, 2) and increasing practical importance because, in many scientific areas, data are being generated much faster than they can be effectively analysed. We describe a physically implemented robotic system that applies techniques from artificial intelligence (3-8) to carry out cycles of scientific experimentation. The system automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments using a laboratory robot, interprets the results to falsify hypotheses inconsistent with the data, and then repeats the cycle. Here we apply the system to the determination of gene function using deletion mutants of yeast (Saccharomyces cerevisiae) and auxotrophic growth experiments (9). We built and tested a detailed logical model (involving genes, proteins and metabolites) of the aromatic amino acid synthesis pathway. In biological experiments that automatically reconstruct parts of this model, we show that an intelligent experiment selection strategy is competitive with human performance and significantly outperforms, with a cost decrease of 3-fold and 100-fold (respectively), both cheapest and random-experiment selection.
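The closed experiment-selection loop can be sketched abstractly: keep a set of candidate hypotheses, choose the experiment expected to discriminate among them best per unit cost, run it, and discard inconsistent hypotheses. The genes, experiments, and costs below are toy stand-ins, not the yeast model used in the paper.

```python
# Toy sketch of the closed loop described above: maintain candidate hypotheses,
# pick the experiment that best splits them per unit cost, "run" it, and discard
# hypotheses inconsistent with the outcome. All entities are abstract stand-ins.
import itertools

# A hypothesis = which of three hypothetical genes are required for growth.
hypotheses = [set(s) for s in itertools.chain.from_iterable(
    itertools.combinations("ABC", r) for r in range(1, 4))]
true_hypothesis = {"A", "C"}                      # hidden ground truth

def run_experiment(knocked_out):
    """Growth assay: the strain grows iff no required gene is knocked out."""
    return not (true_hypothesis & knocked_out)

experiments = [set(s) for s in ["A", "B", "C"]]   # single-gene deletion mutants
cost = {frozenset(e): 1.0 for e in experiments}   # uniform cost in this toy

while len(hypotheses) > 1 and experiments:
    # Choose the experiment that splits the hypothesis set most evenly per unit cost.
    def value(e):
        grows = sum(1 for h in hypotheses if not (h & e))
        return min(grows, len(hypotheses) - grows) / cost[frozenset(e)]
    exp = max(experiments, key=value)
    experiments.remove(exp)
    outcome = run_experiment(exp)
    hypotheses = [h for h in hypotheses if (not (h & exp)) == outcome]

print("remaining hypotheses:", hypotheses)
```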
In cardiac surgery, aortic disease represents an important area, encompassing treatments from the aortic valve to the descending aorta. The infra-diaphragmatic abdominal aorta has generally and historically been the domain of vascular surgery. Despite the excellent and well-established results obtained in the treatment of thoracic aortic disease, surgical mortality and morbidity remain significant, partly because of older patients with more extensive and complex aortic disease. In recent decades, better understanding of aortic disease and the availability of new grafts have led to an important evolution in management, both at the level of the aortic valve and of the vessel itself, with the use of transcatheter grafts (transcatheter aortic valve implantation and thoracic endovascular aortic repair). Evidence on the correct indications and the long-term results will determine the real usefulness and effectiveness of these "new" procedures and their role as safe and definitive aortic therapies.
This paper describes Ms. Malaprop, a program (currently being designed) which will answer questions about simple stories dealing with painting, where stories, questions and answers will be expressed in semantic representation rather than English in order ...
A large high-speed general-purpose digital computer (IBM 7090) was programmed to solve elementary symbolic integration problems at approximately the level of a good college freshman. The program is called SAINT, an acronym for "Symbolic Automatic INTegrator." This paper discusses the SAINT program and its performance. SAINT performs indefinite integration. It also performs definite and multiple integration when these are trivial extensions of indefinite integration. It uses many of the methods and heuristics of students attacking the same problems. SAINT took an average of two minutes each to solve 52 of the 54 attempted problems taken from the Massachusetts Institute of Technology freshman calculus final examinations. Based on this and other experiments with SAINT, some conclusions concerning computer solution of such problems are: (1) Pattern recognition is of fundamental importance. (2) Great benefit would have been derived from a larger memory and more convenient symbol manipulating facilities. (3) The solution of a symbolic integration problem by a commercially available computer is far cheaper and faster than by man.
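SAINT's original heuristics are not reproduced here; as a modern point of comparison, a present-day computer algebra system handles the same class of freshman indefinite integrals directly. The integrands below are illustrative examples, not taken from the MIT examinations.

```python
# A modern analogue of SAINT's task: symbolic indefinite integration with a
# computer algebra system. The integrands are illustrative examples only.
import sympy as sp

x = sp.symbols("x")
problems = [x * sp.exp(x**2), sp.sin(x) ** 2, 1 / (x**2 + 1)]

for f in problems:
    print(f, "->", sp.integrate(f, x))
```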