Conference Paper

Multivariate Sequential Analytics for Treatment Trajectory Forecasting


Abstract

Chronic conditions, especially cardiovascular disease, account for a large share of the burden on modern healthcare systems. These conditions by their nature unfold over a long period of time, typically involving many healthcare events, treatments, and changes in patient status. The gold standard for risk assessment in public health informatics is regression-based. While these techniques are effective at identifying factors contributing to risk, they produce reductive scores (e.g. the probability of a specific class of event, such as a heart attack) or binary predictions, and moreover they are sequence-agnostic. In long-term chronic disease management, multivariate sequential modeling offers an opportunity to forecast disease progression and treatment trajectory in a fine-grained manner to aid clinical decision making. This paper investigates the suitability of long short-term memory (LSTM), a type of recurrent neural network, for multivariate sequential modeling in the healthcare domain, specifically for the task of forecasting. The eventual goal is to apply this technique to linked New Zealand health data through the Vascular Informatics using Epidemiology and the Web (VIEW) research project. This paper presents initial experiments and results for modeling patients' treatment trajectories during hospitalization using the Medical Information Mart for Intensive Care (MIMIC-III) data set.
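The forecasting task described in the abstract amounts to predicting the next step of a multivariate event sequence from a window of previous steps. A minimal sketch of that supervised framing (the function name and window size are illustrative, not taken from the paper):

```python
import numpy as np

def make_forecasting_pairs(trajectory, window=4):
    """Slice a (timesteps, features) trajectory into (input window, next step)
    pairs. `trajectory` is a hypothetical per-patient matrix: one row per
    hospital event, one column per variable (e.g. treatments, vital signs)."""
    X, y = [], []
    for t in range(len(trajectory) - window):
        X.append(trajectory[t:t + window])   # the last `window` events
        y.append(trajectory[t + window])     # the event to forecast
    return np.array(X), np.array(y)

# Toy trajectory: 10 timesteps, 3 variables.
traj = np.arange(30, dtype=float).reshape(10, 3)
X, y = make_forecasting_pairs(traj, window=4)
```

Each `X[i]` would then be fed to a sequence model such as an LSTM, with `y[i]` as the training target.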

Conference Paper
With the improvement of medical data capturing, vast amounts of continuous patient monitoring data, e.g., electrocardiogram (ECG), real-time vital signs and medications, have become available for clinical decision support at intensive care units (ICUs). However, it becomes increasingly challenging to model such data, due to the high density of the monitoring data, heterogeneous data types and the requirement for interpretable models. Integration of these high-density monitoring data with discrete clinical events (including diagnoses, medications, labs) is challenging but potentially rewarding, since richness and granularity in such multimodal data increase the possibilities for accurate detection of complex problems and prediction of outcomes (e.g., length of stay and mortality). We propose the Recurrent Attentive and Intensive Model (RAIM) for jointly analyzing continuous monitoring data and discrete clinical events. RAIM introduces an efficient attention mechanism for continuous monitoring data (e.g., ECG), which is guided by discrete clinical events (e.g., medication usage). We apply RAIM to predicting physiological decompensation and length of stay for critically ill patients in the ICU. With evaluations on the MIMIC-III Waveform Database Matched Subset, we obtain an AUC-ROC score of 90.18% for predicting decompensation and an accuracy of 86.82% for forecasting length of stay with our final model, which outperforms our six baseline models.
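The core idea of event-guided attention can be sketched very simply: use an embedding of a discrete event to score each timestep of the continuous signal, then form a weighted summary. This is a simplified illustration of the concept, not RAIM's actual architecture:

```python
import numpy as np

def event_guided_attention(signal_feats, event_emb):
    """Attend over continuous monitoring features, guided by a discrete-event
    embedding. Scores are dot products; weights a softmax over time."""
    scores = signal_feats @ event_emb                 # one score per timestep
    weights = np.exp(scores - scores.max())           # numerically stable softmax
    weights /= weights.sum()
    context = weights @ signal_feats                  # weighted signal summary
    return weights, context

rng = np.random.default_rng(0)
signal = rng.normal(size=(50, 8))   # e.g. 50 ECG-derived feature vectors
event = rng.normal(size=8)          # e.g. a medication-usage embedding
w, ctx = event_guided_attention(signal, event)
```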
Conference Paper
Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain: (1) data insufficiency: often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results; and (2) interpretation: the representations learned by deep learning methods should align with medical knowledge. To address these challenges, we propose the GRaph-based Attention Model (GRAM) that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies. Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism. We compared predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task. Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher level concepts when facing data insufficiency at the lower level concepts.
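GRAM's central mechanism, representing a concept as an attention-weighted combination of itself and its ontology ancestors, can be sketched as follows. This is a minimal illustration (GRAM learns the attention with an MLP; here a simple dot-product query stands in):

```python
import numpy as np

def gram_concept_embedding(concept_emb, ancestor_embs, attn_query):
    """Represent a leaf medical concept as an attention-weighted mix of its
    own embedding and its ontology ancestors' embeddings."""
    stack = np.vstack([concept_emb] + ancestor_embs)   # (k+1, d) node embeddings
    scores = stack @ attn_query                        # compatibility per node
    alpha = np.exp(scores - scores.max())              # softmax over the path
    alpha /= alpha.sum()
    return alpha, alpha @ stack                        # weights, final embedding

rng = np.random.default_rng(1)
leaf = rng.normal(size=16)                            # e.g. a rare ICD-9 code
ancestors = [rng.normal(size=16) for _ in range(3)]   # its coarser parents
query = rng.normal(size=16)
alpha, emb = gram_concept_embedding(leaf, ancestors, query)
```

When the leaf concept is rare in the data, weight can shift toward the ancestors, which is the generalization behavior the abstract describes.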
Article
Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain: (1) data insufficiency: often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results; and (2) interpretation: the representations learned by deep learning models should align with medical knowledge. To address these challenges, we propose a GRaph-based Attention Model (GRAM) that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies. Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism. We compared predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task. Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher level concepts when facing data insufficiency at the lower level concepts.
Article
Regression models are extensively used in many epidemiological studies to understand the linkage between specific outcomes of interest and their risk factors. However, regression models in general examine the average effects of the risk factors and ignore subgroups with different risk profiles. As a result, interventions are often geared towards the average member of the population, without consideration of the special health needs of different subgroups within the population. This paper demonstrates the value of using rule-based analysis methods that can identify subgroups with heterogeneous risk profiles in a population without imposing assumptions on the subgroups or method. The rules define the risk pattern of subsets of individuals by not only considering the interactions between the risk factors but also their ranges. We compared the rule-based analysis results with the results from a logistic regression model in The Environmental Determinants of Diabetes in the Young (TEDDY) study. Both methods detected a similar suite of risk factors, but the rule-based analysis was superior at detecting multiple interactions between the risk factors that characterize the subgroups. A further investigation of the particular characteristics of each subgroup may detect the special health needs of the subgroup and lead to tailored interventions.
Article
Accuracy and interpretation are two goals of any successful predictive model. Most existing works suffer a tradeoff between the two, either picking complex black-box models such as recurrent neural networks (RNNs) or relying on less accurate traditional models with better interpretation such as logistic regression. To address this dilemma, we present the REverse Time AttentIoN model (RETAIN) for analyzing EHR data, which achieves high accuracy while remaining clinically interpretable. RETAIN is a two-level neural attention model that can find influential past visits and significant clinical variables within those visits (e.g., key diagnoses). RETAIN mimics physician practice by attending to the EHR data in reverse time order so that more recent clinical visits will likely receive higher attention. Experiments on a large real EHR dataset of 14 million visits from 263K patients over 8 years confirmed predictive accuracy and computational scalability comparable to state-of-the-art methods such as RNNs. Finally, we demonstrate the clinical interpretation with concrete examples from RETAIN.
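RETAIN's interpretability comes from the fact that the final score decomposes exactly into per-visit, per-variable contributions through the two attention levels. A simplified sketch of that decomposition (uniform attention weights are used for the demo; the real model learns them, in reverse time order):

```python
import numpy as np

def retain_contributions(visits, alpha, beta, w_out):
    """Per-visit, per-variable contribution scores in the spirit of RETAIN:
    visit-level attention `alpha` and variable-level gates `beta` scale each
    input before a final linear score."""
    # context vector: sum_t alpha_t * (beta_t * x_t)
    context = (alpha[:, None] * beta * visits).sum(axis=0)
    score = context @ w_out
    # contribution of variable j at visit t; these sum exactly to `score`
    contrib = alpha[:, None] * beta * visits * w_out[None, :]
    return score, contrib

rng = np.random.default_rng(2)
T, d = 5, 6                      # 5 visits, 6 clinical variables
x = rng.normal(size=(T, d))
a = np.full(T, 1.0 / T)          # uniform visit attention for the demo
b = np.ones((T, d))              # fully open variable gates for the demo
w = rng.normal(size=d)
score, contrib = retain_contributions(x, a, b, w)
```

Inspecting `contrib` tells a clinician which visit and which variable pushed the risk score up or down.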
Article
Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors, which operate in the same way on everyone and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for the development of risk prediction models. Typically presented as black-box approaches, most machine-learning methods are aimed at solving particular challenges in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider predicting mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods, including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction to the diffuse field of machine learning for those working on risk modelling.
Article
MIMIC-III (‘Medical Information Mart for Intensive Care’) is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework.
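A typical first step with MIMIC-III is deriving hospital length of stay from the ADMISSIONS table. The sketch below uses the column names from the public MIMIC-III schema (verify against the release you are using); a tiny in-memory stand-in replaces the real CSV, and the dates reflect MIMIC's deliberate date shifting into the future:

```python
import pandas as pd

# In practice: admissions = pd.read_csv("ADMISSIONS.csv",
#                                       parse_dates=["ADMITTIME", "DISCHTIME"])
admissions = pd.DataFrame({
    "SUBJECT_ID": [1, 1, 2],
    "HADM_ID": [100, 101, 200],
    "ADMITTIME": pd.to_datetime(["2130-01-01 08:00", "2130-03-05 12:00",
                                 "2131-07-10 22:00"]),
    "DISCHTIME": pd.to_datetime(["2130-01-04 08:00", "2130-03-06 00:00",
                                 "2131-07-15 10:00"]),
})
# Length of stay in days, per hospital admission.
admissions["LOS_DAYS"] = (
    (admissions["DISCHTIME"] - admissions["ADMITTIME"]).dt.total_seconds()
    / 86400.0
)
```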
Article
Personalized predictive medicine necessitates modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, a deep dynamic neural network that reads medical records and predicts future medical outcomes. At the data level, DeepCare models patient health state trajectories with explicit memory of illness. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregular timing by moderating the forgetting and consolidation of illness memory. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling and readmission prediction in diabetes, a chronic disease with large economic burden. The results show improved modeling and risk prediction accuracy.
Technical Report
Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors in space and models patient health state trajectories through explicit memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregularly timed events by moderating the forgetting and consolidation of memory cells. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden -- diabetes and mental health -- the results show improved modeling and risk prediction accuracy.
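The time parameterization idea, moderating an LSTM's forgetting by the elapsed time between clinical events, can be illustrated with a simple exponential decay. This is an illustrative form only; DeepCare's exact parameterization differs:

```python
import numpy as np

def time_decayed_forget(f_gate, delta_t, tau=30.0):
    """Moderate an LSTM forget gate by elapsed time between clinical events:
    longer gaps forget more. `delta_t` is days since the previous record;
    `tau` a hypothetical decay scale in days."""
    return f_gate * np.exp(-delta_t / tau)

f = 0.9                                               # usual LSTM gate value
same_week = time_decayed_forget(f, delta_t=3.0)       # recent event: retained
next_year = time_decayed_forget(f, delta_t=365.0)     # distant event: faded
```

With a fixed gate, a record from three days ago and one from a year ago would be retained equally; the decay makes memory sensitive to the irregular spacing of healthcare events.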
Article
Clinical medical data, especially in the intensive care unit (ICU), consist of multivariate time series of observations. For each patient visit (or episode), sensor data and lab test results are recorded in the patient's Electronic Health Record (EHR). While potentially containing a wealth of insights, the data are difficult to mine effectively, owing to varying length, irregular sampling and missing data. Recurrent Neural Networks (RNNs), particularly those using Long Short-Term Memory (LSTM) hidden units, are powerful and increasingly popular models for learning from sequence data. They adeptly model varying-length sequences and capture long-range dependencies. We present the first study to empirically evaluate the ability of LSTMs to recognize patterns in multivariate time series of clinical measurements. Specifically, we consider multilabel classification of diagnoses, training a model to classify 128 diagnoses given 13 frequently but irregularly sampled clinical measurements. First, we establish the effectiveness of a simple LSTM network for modeling clinical data. Then we demonstrate a straightforward and effective deep supervision strategy in which we replicate targets at each sequence step. Trained only on raw time series, our models outperform several strong baselines on a wide variety of metrics, and nearly match the performance of a multilayer perceptron trained on carefully hand-engineered features, establishing the usefulness of LSTMs for modeling medical data. The best LSTM model accurately classifies many diagnoses, including diabetic ketoacidosis (F1 score of .714), scoliosis (.677), and status asthmaticus (.632).
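The target-replication strategy described above can be sketched as a loss that blends per-step predictions (each scored against the replicated sequence label) with the final-step prediction. A minimal single-label sketch, with an assumed blending weight `alpha`:

```python
import numpy as np

def replicated_target_loss(step_logits, target, alpha=0.5):
    """Deep supervision by target replication: the sequence-level label is
    replicated at every step, and the mean per-step binary cross-entropy is
    blended with the final-step loss."""
    def bce(logit, y):
        p = 1.0 / (1.0 + np.exp(-logit))
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))

    per_step = np.array([bce(l, target) for l in step_logits])
    return alpha * per_step.mean() + (1 - alpha) * per_step[-1]

logits = np.array([-1.0, 0.0, 2.0, 3.0])   # model grows more confident over time
loss = replicated_target_loss(logits, target=1.0)
```

Every timestep now receives a gradient signal, rather than only the last one.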
Article
Countless learning tasks require awareness of time. Image captioning, speech synthesis, and video game playing all require that a model generate sequences of outputs. In other domains, such as time series prediction, video analysis, and music information retrieval, a model must learn from sequences of inputs. Significantly more interactive tasks, such as natural language translation, engaging in dialogue, and robotic control, often demand both. Recurrent neural networks (RNNs) are a powerful family of connectionist models that capture time dynamics via cycles in the graph. Unlike feedforward neural networks, recurrent networks can process examples one at a time, retaining a state, or memory, that reflects an arbitrarily long context window. While these networks have long been difficult to train and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled large-scale learning with recurrent nets. Over the past few years, systems based on state of the art long short-term memory (LSTM) and bidirectional recurrent neural network (BRNN) architectures have demonstrated record-setting performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this review of the literature we synthesize the body of research that over the past three decades has yielded and reduced to practice these powerful models. When appropriate, we reconcile conflicting notation and nomenclature. Our goal is to provide a mostly self-contained explication of state of the art systems, together with a historical perspective and ample references to the primary research.
Article
To compare the calibration performance of the original Framingham Heart Study risk prediction score for cardiovascular disease and an adjusted version of the Framingham score used in current New Zealand cardiovascular risk management guidelines for high and low risk ethnic groups. Since 2002 cardiovascular risk assessments have been undertaken as part of routine clinical care in many New Zealand primary care practices using PREDICT, a web-based decision support programme for assessing and managing cardiovascular risk. Individual risk profiles from PREDICT were electronically and anonymously linked to national hospital admissions and death registrations in January 2008. Calibration performance was investigated by comparing the observed 5-year cardiovascular event rates (deaths and hospitalisations) with predicted rates from the Framingham and New Zealand adjusted Framingham scores. Calibration was examined in a combined 'high risk' ethnic group (Maori, Pacific and Indian) and a European 'low risk' ethnic group. There was insufficient person-time follow-up for separate analyses in each ethnic group. The analyses were restricted to PREDICT participants aged 30-74 years with no history of previous cardiovascular disease. Of the 59,344 participants followed for a mean of 2.11 years (125,064 person years of follow-up), 1,374 first cardiovascular events occurred. Among the 35,240 European participants, 759 cardiovascular events occurred during follow-up, giving a mean observed 5-year cumulative incidence of 4.5%. There were 582 events among the 21,026 Maori, Pacific and Indian participants, corresponding to a mean 5-year cumulative incidence rate of 7.4%. For Europeans, the original Framingham score overestimated 5-year risk by 0.7-3.2% at risk levels below 15% and by about 5% at higher risk levels. 
In contrast, for Maori, Pacific, and Indian patients combined, the Framingham score underestimated 5-year cardiovascular risk by 1.1-2.2% in participants who scored below 15% 5-year predicted risk (the recommended threshold for drug treatment in New Zealand), and overestimated by 2.4-4.1% the risk in those who scored above the 15% threshold. For both high risk and low risk ethnic groups, the New Zealand adjusted score systematically overestimated the observed 5-year event rate ranging from 0.6-5.3% at predicted risk levels below 15% to 5.4-9.3% at higher risk levels. The original Framingham Heart Study risk prediction score overestimates risk for the New Zealand European population but underestimates risk for the combined high risk ethnic populations. However the adjusted Framingham score used in New Zealand clinical guidelines overcompensates for this underestimate, resulting in a score that overestimates risk among the European, Maori, Pacific and Indian ethnic populations at all predicted risk levels. When sufficient person years of follow-up are available in the PREDICT cohort, new cardiovascular risk prediction scores should be developed for each of the ethnic groups to allow for more accurate risk prediction and targeting of treatment.
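The calibration comparison described above amounts to grouping people into predicted-risk bands and comparing the band's mean predicted risk with its observed event rate. A sketch of that check on simulated data (not the PREDICT cohort; the risk bands mirror the 15% treatment threshold mentioned above):

```python
import numpy as np

def calibration_table(predicted_risk, observed_event, bins=(0, .05, .10, .15, 1)):
    """Mean predicted 5-year risk vs. observed event rate per risk band."""
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (predicted_risk >= lo) & (predicted_risk < hi)
        if m.any():
            rows.append((lo, hi, predicted_risk[m].mean(), observed_event[m].mean()))
    return rows

rng = np.random.default_rng(3)
pred = rng.uniform(0, 0.3, size=20000)         # simulated predicted risks
events = rng.random(20000) < 0.7 * pred        # score overestimates true risk
table = calibration_table(pred, events)
```

In a well-calibrated score the two columns track each other; a systematic gap in one direction is the over- or underestimation pattern the study reports.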
Article
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
Article
Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive “forget gate” that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
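The gated cell update described in these two references (input, output, and the later forget gate) can be written out in a few lines. A minimal numpy sketch of one step of a modern LSTM cell, with the four gate parameter blocks stacked into single matrices:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b stack the input, forget, output and candidate
    parameters as four blocks; per-step cost is O(1) in sequence length."""
    d = h.size
    z = W @ x + U @ h + b                      # all four pre-activations at once
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sig(z[:d]), sig(z[d:2*d]), sig(z[2*d:3*d])
    g = np.tanh(z[3*d:])                       # candidate cell input
    c_new = f * c + i * g                      # forget gate scales old memory
    h_new = o * np.tanh(c_new)                 # output gate exposes the state
    return h_new, c_new

rng = np.random.default_rng(4)
dx, dh = 3, 5
W = rng.normal(size=(4 * dh, dx))
U = rng.normal(size=(4 * dh, dh))
b = np.zeros(4 * dh)
h, c = np.zeros(dh), np.zeros(dh)
for x in rng.normal(size=(10, dx)):            # run a short sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

With `f` near 1 the cell state persists (the constant error carousel); a small `f` implements exactly the learned reset this abstract describes.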
Article
To describe the cardiovascular disease risk factor status and risk management of Māori compared with non-Māori patients opportunistically assessed in routine practice using PREDICT-CVD, an electronic clinical decision support programme. In August 2002, a primary healthcare organisation, ProCare, implemented PREDICT-CVD as an opportunistic cardiovascular risk assessment and management programme. Between 2002 and February 2006, over 20,000 cardiovascular risk assessments were undertaken on Māori and non-Māori patients. Odds ratios and mean differences in cardiovascular risk factors and risk management for Māori compared to non-Māori (European and other, Pacific, Indian, and other Asian) patients were calculated. Baseline risk assessments were completed for 1450 (7%) Māori patients and 19,164 (93%) non-Māori patients. On average, Māori were risk assessed 3 years younger than non-Māori. Māori patients were three times more likely to be smokers, had higher blood pressure and TC/HDL levels, and twice the prevalence of diabetes and history of cardiovascular disease as non-Māori. Among patients with a personal history of cardiovascular disease, Māori were more likely than non-Māori to receive anticoagulants, blood pressure-lowering and lipid-lowering medications. However, of those patients with a history of ischaemic heart disease, Māori were only half as likely as non-Māori to have had a revascularisation procedure. An electronic decision support programme can be used to systematically generate cardiovascular disease risk burden and risk management data for Māori and non-Māori populations in routine clinical practice in real-time. Moreover, the PREDICT-CVD programme has established one of the largest cohorts of Māori and non-Māori ever assembled in New Zealand. Initial findings suggest that Māori are more likely than non-Māori to receive drug-based cardiovascular risk management if they have a personal history of cardiovascular disease.
In contrast, among the subgroup of patients with a history of ischaemic heart disease, Māori appear to receive significantly fewer revascularisations than non-Māori.
Article
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching onto information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
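The difficulty described here can be shown numerically: backpropagating through T steps multiplies the gradient by the per-step recurrent Jacobian, and when those factors stay below 1 the signal shrinks geometrically. A one-unit RNN sketch:

```python
import numpy as np

def gradient_through_time(w, T, x=0.1):
    """|d h_T / d h_0| for a one-unit RNN h_t = tanh(w*h_{t-1} + x): a product
    of per-step Jacobians, each w * tanh'(pre-activation)."""
    h, jac = 0.0, 1.0
    for _ in range(T):
        pre = w * h + x
        h = np.tanh(pre)
        jac *= w * (1.0 - h * h)   # d h_t / d h_{t-1}
    return abs(jac)

g_short = gradient_through_time(0.9, 5)    # modest shrinkage over 5 steps
g_long = gradient_through_time(0.9, 100)   # vanishingly small over 100 steps
```

With |w| > 1 the same product can instead explode, which is the other side of the trade-off the paper analyzes.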
Article
Background: Most cardiovascular disease risk prediction equations in use today were derived from cohorts established last century and with participants at higher risk but less socioeconomically and ethnically diverse than patients they are now applied to. We recruited a nationally representative cohort in New Zealand to develop equations relevant to patients in contemporary primary care and compared the performance of these new equations to equations that are recommended in the USA. Methods: The PREDICT study automatically recruits participants in routine primary care when general practitioners in New Zealand use PREDICT software to assess their patients' risk profiles for cardiovascular disease, which are prospectively linked to national ICD-coded hospitalisation and mortality databases. The study population included male and female patients in primary care who had no prior cardiovascular disease, renal disease, or congestive heart failure. New equations predicting total cardiovascular disease risk were developed using Cox regression models, which included clinical predictors plus an area-based deprivation index and self-identified ethnicity. Calibration and discrimination performance of the equations were assessed and compared with 2013 American College of Cardiology/American Heart Association Pooled Cohort Equations (PCEs). The additional predictors included in new PREDICT equations were also appended to the PCEs to determine whether they were independent predictors in the equations from the USA. Findings: Outcome events were derived for 401 752 people aged 30-74 years at the time of their first PREDICT risk assessment between Aug 27, 2002, and Oct 12, 2015, representing about 90% of the eligible population. The mean follow-up was 4·2 years, and a third of participants were followed for 5 years or more. 
15 386 (4%) people had cardiovascular disease events (1507 [10%] were fatal, and 8549 [56%] met the PCEs definition of hard atherosclerotic cardiovascular disease) during 1 685 521 person-years follow-up. The median 5-year risk of total cardiovascular disease events predicted by the new equations was 2·3% in women and 3·2% in men. Multivariable adjusted risk increased by about 10% per quintile of socioeconomic deprivation. Māori, Pacific, and Indian patients were at 13-48% higher risk of cardiovascular disease than Europeans, and Chinese or other Asians were at 25-33% lower risk of cardiovascular disease than Europeans. The PCEs overestimated the risk of hard atherosclerotic cardiovascular disease by about 40% in men and by 60% in women, and the additional predictors in the new equations were also independent predictors in the PCEs. The new equations were significantly better than PCEs on all performance metrics. Interpretation: We constructed a large prospective cohort study representing typical patients in primary care in New Zealand who were recommended for cardiovascular disease risk assessment. Most patients are now at low risk of cardiovascular disease, which explains why the PCEs based mainly on old cohorts substantially overestimate risk. Although the PCEs and many other equations will need to be recalibrated to mitigate overtreatment of the healthy majority, they also need new predictors that include measures of socioeconomic deprivation and multiple ethnicities to identify vulnerable high-risk subpopulations that might otherwise be undertreated. Funding: Health Research Council of New Zealand, Heart Foundation of New Zealand, and Healthier Lives National Science Challenge.
Article
The 20th century saw cardiovascular disease ascend as the leading cause of death in the world. In response to the new challenge that heart disease imposed, the cardiovascular community produced groundbreaking innovations: evidence-based medications that have improved survival, imaging modalities that allow for precise diagnosis and guide treatment, and revascularization strategies that have not only reduced morbidity but also improved survival following an acute myocardial infarction. However, the benefits have not been distributed equitably, and as a result disparities have arisen in cardiovascular care. There are extensive data from the United States demonstrating the many phenotypical forms of disparities. This paper takes a global view of disparities and highlights that disparate care is not limited to the United States; it is another challenge that the medical community should rise to and face head on.
Article
We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions. The method is straightforward to implement and is based on adaptive estimates of lower-order moments of the gradients. The method is computationally efficient, has little memory requirement, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The method exhibits invariance to diagonal rescaling of the gradients by adapting to the geometry of the objective function. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. We demonstrate that Adam works well in practice when compared experimentally to other stochastic optimization methods.
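The Adam update itself is short: exponential moving averages of the gradient and the squared gradient, bias-corrected, then a per-parameter scaled step. A numpy sketch using the paper's default hyper-parameters, demonstrated on minimizing f(x) = x²:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (t is 1-based for bias correction)."""
    m = b1 * m + (1 - b1) * grad          # first-moment moving average
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment moving average
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(x) = x^2 (gradient 2x) starting from x = 1.
x, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Because the step is normalized by the second-moment estimate, its size is roughly `lr` regardless of the gradient's scale, which is the diagonal-rescaling invariance mentioned above.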
Article
The treatment of chronic illnesses commonly includes the long-term use of pharmacotherapy. Although these medications are effective in combating disease, their full benefits are often not realized because approximately 50% of patients do not take their medications as prescribed. Factors contributing to poor medication adherence are myriad and include those that are related to patients (eg, suboptimal health literacy and lack of involvement in the treatment decision-making process), those that are related to physicians (eg, prescription of complex drug regimens, communication barriers, ineffective communication of information about adverse effects, and provision of care by multiple physicians), and those that are related to health care systems (eg, office visit time limitations, limited access to care, and lack of health information technology). Because barriers to medication adherence are complex and varied, solutions to improve adherence must be multifactorial. To assess general aspects of medication adherence using cardiovascular disease as an example, a MEDLINE-based literature search (January 1, 1990, through March 31, 2010) was conducted using the following search terms: cardiovascular disease, health literacy, medication adherence, and pharmacotherapy. Manual sorting of the 405 retrieved articles to exclude those that did not address cardiovascular disease, medication adherence, or health literacy in the abstract yielded 127 articles for review. Additional references were obtained from citations within the retrieved articles. This review surveys the findings of the identified articles and presents various strategies and resources for improving medication adherence.
Article
To assess the predictive value of general practice electronic prescribing records with respect to adherence to long-term medications, as compared to claims-based pharmacy dispensing data. A total of 29,772 electronic prescribing records relating to 2,713 patients attending a New Zealand general medical practice were linked by national health identifier to 63,833 dispensing records used for community pharmacy reimbursement. Individual possession ratios (the prescription possession ratio, PPR, for prescribing and the medication possession ratio, MPR, for dispensing) were calculated for the 15-month period from 1 January 2006 to 30 March 2007 based on each data source for the common long-term medications simvastatin, metoprolol succinate, bendrofluazide, felodipine, cilazapril and metformin. Out of 646 patients prescribed at least one of the six medications by the practice during the 15-month period, 50% maintained high adherence (MPR ≥ 80%) to all of the medications they were prescribed over the period, with rates of high adherence to individual medications ranging from 68% (felodipine) to 55% (metformin). In 93% of 4,043 cases where there was a prescription in the general practice data, a subsequent dispensing record for the same patient and drug was present with a time-stamp no more than seven days later. PPR < 80% demonstrated a positive predictive value (PPV) of 81.4% (95% CI 78-85%) and negative predictive value (NPV) of 76.3% (95% CI 73-79%) for MPR < 80%. There is potential for general practices to identify substantial levels of long-term medication adherence problems through their electronic prescribing records. Significant further adherence problems could be detected if an e-pharmacy network allowed practices to match dispensing against prescriptions.
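The possession ratios used in this study reduce to a simple formula: days of medication supplied divided by days in the observation window. A sketch with a hypothetical patient record (the dispensing quantities are invented for illustration; the 455-day window approximates the 15-month study period):

```python
def possession_ratio(days_supplied, period_days):
    """Medication possession ratio (MPR) over an observation window: total
    days of drug supplied divided by days in the period, capped at 100%.
    Applied to prescribing records, the same formula gives the PPR."""
    return min(sum(days_supplied) / period_days, 1.0)

# Hypothetical patient: four 90-day and one 30-day dispensings of a statin
# over a ~455-day (15-month) window.
mpr = possession_ratio([90, 90, 90, 90, 30], period_days=455)
adherent = mpr >= 0.80           # the study's high-adherence threshold
```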
Cardiovascular Disease Risk Assessment and Management for Primary Care. Published by The New Zealand Ministry of Health
  • Ministry of Health
RAIM: Recurrent Attentive and Intensive Model of Multimodal Patient Monitoring Data
  • Yanbo Xu
  • Siddharth Biswal
  • Shriprasad R. Deshpande
  • Kevin O. Maher
  • Jimeng Sun
Table mimic.mimiciii.patients. Published by MIMIC-III
Ministry of Health. (n.d.). Cardiovascular disease. Published by The New Zealand Ministry of Health
  • Ministry of Health