Journal of General Internal Medicine

Published by Springer-Verlag
Online ISSN: 1525-1497
Print ISSN: 0884-8734
Development of anaphylaxis to alpha-gal. Sensitization occurs after exposure to alpha-gal during tick bites. IgE to alpha-gal produced during sensitization binds to high-affinity IgE receptors on mast cells and basophils without causing symptoms. Re-exposure to alpha-gal in mammalian meat cross-links IgE:IgE-receptor complexes on mast cells and basophils, inducing secretion of various mediators that lead to anaphylaxis. (The oligosaccharide structure of alpha-gal is shown in the symbolic depiction suggested by the Consortium for Functional Glycomics.)
In recent years, a previously unrecognized allergic disease has been described, and some seemingly idiopathic cases of anaphylaxis now have an explanation. Individuals bitten by the lone star tick may develop IgE antibodies to the carbohydrate galactose-α-1,3-galactose (alpha-gal). Upon exposure of sensitized subjects to mammalian meat containing alpha-gal on glycoproteins or glycolipids, delayed anaphylaxis may ensue, often three to six hours after ingestion.1 Many of these individuals have negative allergy skin prick tests to meat, further obscuring the diagnosis. With the recent development of IgE alpha-gal tests, the clinical diagnosis can be confirmed in the laboratory.
 
Food insecurity refers to limited or uncertain access to food resulting from inadequate financial resources. There is a clear association between food insecurity and obesity among women, but little is known about the relationship between food insecurity and type 2 diabetes. To evaluate whether there is an independent association between food insecurity and diabetes. Cross-sectional analysis of the nationally representative, population-based National Health and Nutrition Examination Survey (1999-2002 waves). Four thousand four hundred twenty-three adults >20 years of age with household incomes ≤300% of the federal poverty level. We categorized respondents as food secure, mildly food insecure, or severely food insecure using a well-validated food insecurity scale. Diabetes was determined by self-report or a fasting serum glucose ≥126 mg/dl. Diabetes prevalence in the food secure, mildly food insecure, and severely food insecure categories was 11.7%, 10.0%, and 16.1%, respectively. After adjusting for sociodemographic factors and physical activity level, participants with severe food insecurity were more likely to have diabetes than those without food insecurity (adjusted odds ratio [AOR] 2.1, 95% CI 1.1-4.0, p = .02). This association persisted after further adjusting for body mass index (AOR 2.2, 95% CI 1.2-3.9, p = .01). Food insecurity may act as a risk factor for diabetes. Among adults with food insecurity, increased consumption of inexpensive food alternatives, which are often calorically dense and nutritionally poor, may play a role in this relationship. Future work should address how primary care clinicians can most effectively assist patients with food insecurity to make healthy dietary changes.
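As an illustrative check (not part of the study's analysis), the crude odds ratio implied by the reported diabetes prevalences in the severely food insecure and food secure groups can be recomputed; it is lower than the covariate-adjusted AOR of 2.1:

```python
# Crude (unadjusted) odds ratio for diabetes, severely food insecure vs.
# food secure, from the reported prevalences of 16.1% and 11.7%.
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

crude_or = odds(0.161) / odds(0.117)
print(round(crude_or, 2))  # → 1.45
```

The adjusted estimate differs from this crude value because it conditions on sociodemographic factors and physical activity level.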
 
Tobacco Tactics Manuals used by month during the intervention period in the Ann Arbor VA, July 2007-May 2008. 
Nurse Responses after Tobacco Tactics Training (n, Percent)
Patients' 6-Month Post-Discharge Satisfaction with Tobacco Services Received During Hospitalization, Pre- and Post-Intervention, in Experimental and Control Groups
Smoking cessation services in the Department of Veterans Affairs (VA) are currently provided via outpatient groups, while inpatient cessation programs have not been widely implemented. The objective of this paper is to describe the implementation of the Tobacco Tactics program for inpatients in the VA. This is a pre-/post-non-randomized control study initially designed to teach inpatient staff nurses on general medical units in the Ann Arbor and Detroit VAs to deliver the Tobacco Tactics intervention, using Indianapolis as a control group. Physicians are reminded to give patients brief advice to quit, coupled with sign-off on cessation medications. Approximately 96% (210/219) of inpatient nurses in the Ann Arbor, MI site and 57% (159/279) in the Detroit, MI site have been trained, with an additional 282 non-targeted personnel spontaneously attending. Nurses' self-reported administration of cessation services increased from 57% pre-training to 86% post-training (p = 0.0002). Physician advice to quit smoking ranged between 73-85% in both the pre-intervention and post-intervention periods in both the experimental and control groups. Volunteers made follow-up telephone calls to 85% (n = 230) of participants in the Ann Arbor site. Hospitalized smokers (N = 294) in the intervention group reported increased receipt of, and satisfaction with, the selected cessation services following implementation of the program, particularly with regard to medications (p < 0.05). A large proportion of inpatient nursing staff can rapidly be trained to deliver tobacco cessation interventions to inpatients, resulting in increased provision of services.
 
Generic analytic framework for evaluating predictive genetic tests. 
ACCE Model Questions for Reviews of Genetic Tests 6
Generic analytic framework for evaluating predictive genetic tests when the impact on family members is important. 
Questions for Assessing Preanalytic, Analytic, and Postanalytic Factors for Evaluating Predictive Genetic Tests*
Analytic framework for evidence gathering on CYP450 genotype testing for SSRI treatment of depression. Abbreviation: SSRI = selective serotonin reuptake inhibitor. Numbers in this figure represent the research questions addressed in the systematic review:45 1 (overarching question): Does testing for cytochrome P450 (CYP450) polymorphisms in adults entering selective serotonin reuptake inhibitor (SSRI) treatment for non-psychotic depression lead to improvement in outcomes, or are testing results useful in medical, personal, or public health decisionmaking? 2: What is the analytic validity of tests that identify key CYP450 polymorphisms? 3a: How well do particular CYP450 genotypes predict metabolism of particular SSRIs? Do factors such as race/ethnicity, diet, or other medications affect this association? 3b: How well does CYP450 testing predict drug efficacy? Do factors such as race/ethnicity, diet, or other medications affect this association? 3c: How well does CYP450 testing predict adverse drug reactions? Do factors such as race/ethnicity, diet, or other medications affect this association? 4a: Does CYP450 testing influence depression management decisions by patients and providers in ways that could improve or worsen outcomes? 4b: Does the identification of the CYP450 genotypes in adults entering SSRI treatment for non-psychotic depression lead to improved clinical outcomes compared to not testing? 4c: Are the testing results useful in medical, personal, or public health decisionmaking? 5: What are the harms associated with testing for CYP450 polymorphisms and subsequent management options?
In this paper, we discuss common challenges in and principles for conducting systematic reviews of genetic tests. The types of genetic tests discussed are those used to (1) determine risk or susceptibility in asymptomatic individuals; (2) reveal prognostic information to guide clinical management in those with a condition; or (3) predict response to treatments or environmental factors. This paper is not intended to provide comprehensive guidance on evaluating all genetic tests. Rather, it focuses on issues that have been of particular concern to analysts and stakeholders and on areas that are of particular relevance for the evaluation of studies of genetic tests. The key points include: The general principles that apply in evaluating genetic tests are similar to those for other prognostic or predictive tests, but there are differences in how the principles need to be applied or the degree to which certain issues are relevant. A clear definition of the clinical scenario and an analytic framework is important when evaluating any test, including genetic tests. Organizing frameworks and analytic frameworks are useful constructs for approaching the evaluation of genetic tests. In constructing an analytic framework for evaluating a genetic test, analysts should consider preanalytic, analytic, and postanalytic factors; such factors are useful when assessing analytic validity. Predictive genetic tests are generally characterized by a delayed time between testing and clinically important events. Finding published information on the analytic validity of some genetic tests may be difficult. Web sites (FDA or diagnostic companies) and gray literature may be important sources. In situations where clinical factors associated with risk are well characterized, comparative effectiveness reviews should assess the added value of using genetic testing along with known factors compared with using the known factors alone.
For genome-wide association studies, reviewers should determine whether the association has been validated in multiple studies to minimize both potential confounding and publication bias. In addition, reviewers should note whether appropriate adjustments for multiple comparisons were used.
 
Awareness of the need for ambulatory care teaching skills training for clinician-educators is increasing. A recent Health Resources and Services Administration (HRSA)-funded national initiative trained 110 teams from U.S. teaching hospitals to implement local faculty development (FD) in teaching skills. To assess the rate of successful implementation of local FD initiatives by these teams. A prospective observational study followed the 110 teams for up to 24 months. Self-reported implementation, our outcome, was defined as the time from the training conference until the team reported that implementation of their FD project was completely accomplished. Factors associated with success were assessed using Kaplan-Meier analysis. The median follow-up was 18 months. Fifty-nine of the teams (54%) implemented their local FD project and subsequently trained over 1,400 faculty, of whom over 500 were community based. Teams that implemented their FD projects were more likely than those that did not to have the following attributes: met more frequently (P=.001), had less turnover (P=.01), had protected time (P=.01), rated their likelihood of success high (P=.03), had some project or institutional funding for FD (P=.03), and came from institutions with more than 75 department of medicine faculty (P=.03). The cost to the HRSA was $22,033 per successful team and $533 per faculty member trained. This national initiative was able to disseminate teaching skills training to large numbers of faculty at modest cost. Smaller teaching hospitals may have limited success without additional support or targeted funding.
 
To define racial similarities and differences in mobility among community-dwelling older adults and to identify predictors of mobility change. Prospective, observational, cohort study. Nine hundred and five community-dwelling older adults. Baseline in-home assessments were conducted to assess life-space mobility, sociodemographic variables, disease status, geriatric syndromes, neuropsychological factors, and health behaviors. Disease reports were verified by review of medications, physician questionnaires, or hospital discharge summaries. Telephone interviews defined life-space mobility at 18 months of follow-up. African Americans had lower baseline life-space scores (LS-C) than whites (mean 57.0 ± 24.5 [SD] vs. 72.7 ± 22.6; P < .001). This disparity in mobility was accompanied by significant racial differences in socioeconomic and health status. After 18 months of follow-up, African Americans were less likely to show declines in LS-C than whites. Multivariate analyses showed racial differences in the relative importance and strength of the associations between predictors and LS-C change. Age and diabetes were significant predictors of LS-C decline for both African Americans and whites. Transportation difficulty, kidney disease, dementia, and Parkinson's disease were significant for African Americans, while low education, arthritis/gout, stroke, neuropathy, depression, and poor appetite were significant for whites. There are significant disparities in baseline mobility between older African Americans and whites, but declines were more likely in whites. Improving transportation access and diabetes care may be important targets for enhancing mobility and reducing racial disparities in mobility.
 
Learners are given a scenario involving a patient with suspected pulmonary embolism (PE). They independently record their estimate of the probability that the patient actually has PE. The vertical lines to the right of the scale correspond to estimates of individual learners, grouped by nearest decile.
Learners who have provided estimates of probability of pulmonary embolism in a patient based on a hypothetical scenario are now led to consider the impact of possible results of selected tests on their choice of actions. How high or low the horizontal action threshold is drawn will influence clinical action. 
Clinical prediction rules (CPRs) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment in a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians' diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives. In this article, we present 3 teaching tips aimed at helping clinical learners use clinical prediction rules and more accurately assess pretest probability in everyday practice. The first tip is designed to demonstrate variability in physician estimation of pretest probability. The second tip demonstrates how the estimate of pretest probability influences the interpretation of diagnostic tests and patient management. The third tip exposes learners to examples of different types of CPRs and how to apply them in practice. PILOT TESTING: We field tested all 3 tips with 16 learners, a mix of interns and senior residents. Teacher preparatory time was approximately 2 hours. The field test utilized a board and a data projector; 3 handouts were prepared. The tips were felt to be clear and the educational objectives reached. Potential teaching pitfalls were identified. Teaching with these tips will help physicians appreciate the importance of applying evidence to their everyday decisions. In 2 or 3 short teaching sessions, clinicians can also become familiar with the use of CPRs in applying evidence consistently in everyday practice.
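The calculation underlying the second tip can be sketched numerically. A minimal example, using hypothetical numbers rather than any from the article, of how an estimated pretest probability combines with a test's likelihood ratio (via the odds form of Bayes' theorem) to yield a posttest probability:

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Combine a pretest probability with a test's likelihood ratio
    using the odds form of Bayes' theorem."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Hypothetical: a 20% pretest probability of PE and a positive test
# with a likelihood ratio of 1.7
print(round(posttest_probability(0.20, 1.7), 3))  # → 0.298
```

Whether a result like 0.298 crosses a learner's action threshold is exactly the judgment the second tip asks learners to make explicit.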
 
Unadjusted and Adjusted Proportion of Patients Who Received Complete Follow-up Within 2 Years of an Elevated PSA Test 
The occurrence and timing of prostate biopsy following an elevated prostate-specific antigen (PSA) test varied considerably in randomized screening trials. Examine practice patterns in routine clinical care in response to an elevated PSA test (≥4 ng/ml) and determine whether time to biopsy was associated with cancer stage at diagnosis. Retrospective cohort study. All veterans (n=13,591) in the Pacific Northwest VA Network with a PSA ≥4 ng/ml between 1998 and 2006 and no previous elevated PSA tests or prostate biopsy. We assessed follow-up care including additional PSA testing, urology consults, and biopsies. We compared stage at diagnosis for men who were biopsied within 24 months vs. those biopsied and diagnosed >24 months after the elevated PSA test. Two-thirds of patients received follow-up evaluation within 24 months of the elevated PSA test: 32.8% of men underwent a biopsy, 15.5% attended a urology visit but were not biopsied, and 18.8% had a subsequent normal PSA test. Younger age, higher PSA levels, more prior PSA tests, no co-payment requirements, existing urologic conditions, low body mass index, and low comorbidity scores were associated with more complete follow-up. Among men who underwent radical prostatectomy, a delayed diagnosis was not significantly associated with having a pathologically advanced-stage cancer (T3/T4), although we found an increased likelihood of presenting with stage T2C relative to stage T2A or T2B cancer. Follow-up after an elevated PSA test is highly variable, with more than a third of men receiving care that could be considered incomplete. A delayed diagnosis was not associated with poorer prognosis.
 
BACKGROUND: Dietary quality may impact heart failure outcomes. However, the current status of the dietary quality of persons with heart failure has not been previously reported. OBJECTIVE: To describe sodium intake, patient factors associated with sodium intake, and overall dietary quality in a national sample of persons with heart failure. DESIGN: Analysis of repeated cross-sectional probability sample surveys using data from the National Health and Nutrition Examination Surveys (NHANES) of 1999-2000, 2001-2002, 2003-2004, and 2005-2006. PARTICIPANTS: The study sample consisted of 574 persons with self-reported heart failure (mean age = 70 years; 52% women). MEASUREMENTS: Diet of each survey participant was assessed using a single 24-hour recall. Dietary nutrients of interest included sodium, the mainstay of heart failure dietary recommendations, and additionally potassium, calcium, magnesium, fish oils, saturated fat, and fiber. Specific dietary goals were based on established guidelines. RESULTS: Mean sodium intake was 2,719 mg, with 34% consuming less than 2,000 mg per day. Patient factors associated with greater sodium intake included male gender, lower education, lower income, and no reported diagnosis of hypertension. Mean potassium intake was 2,367 mg/day, with no differences by type of diuretic used or renal disease status. Adherence rates to established guidelines for other nutrients were 13% for calcium, 10% for magnesium, 2% for fish oils, 13% for saturated fat, and 4% for fiber. CONCLUSIONS: Dietary quality of persons with self-reported heart failure was poor. Public health approaches and clinical dietary interventions are needed for persons with this increasingly prevalent clinical syndrome.
 
To assess whether physicians would be more likely to override a do-not-resuscitate (DNR) order when a hypothetical cardiac arrest is iatrogenic. Mailed survey of 358 practicing physicians. A university-affiliated community teaching hospital. Of 358 physicians surveyed, 285 (80%) responded. Each survey included three case descriptions in which a patient negotiates a DNR order and then suffers a cardiac arrest. The arrests were caused by the patient's underlying disease, by an unexpected complication of treatment, and by the physician's error. Physicians were asked to rate the likelihood that they would attempt cardiopulmonary resuscitation for each case description. Physicians indicated that they would be unlikely to override a DNR order when the arrest was caused by the patient's underlying disease (mean score 2.55 on a scale from 1 "certainly would not" to 7 "certainly would"). Physicians reported they would be much more likely to resuscitate when the arrest was due to a complication of treatment (5.24 vs. 2.55; difference 95% confidence interval [CI] 2.44, 2.91; p < .001), and that they would be even more likely to resuscitate when the arrest was due to physician error (6.32 vs. 5.24; difference 95% CI 0.88, 1.20; p < .001). Eight percent, 29%, and 69% of physicians, respectively, said that they "certainly would" resuscitate in these three vignettes (p < .001). Physicians may believe that DNR orders do not apply to iatrogenic cardiac arrests and that patients do not consider the possibility of an iatrogenic arrest when they negotiate a DNR order. Physicians may also believe that there is a greater obligation to treat when an illness is iatrogenic, and particularly when an illness results from the physician's error. This response to iatrogenic cardiac arrests, and its possible generalization to other iatrogenic complications, deserves further consideration and discussion.
 
To determine the reliability and validity of an evaluation form for assessing the humanistic behavior of internal medicine (IM) housestaff. The form is for use by nurses. Evaluations were gathered three times during the 1987-88 academic year. Generalizability coefficients (interpreted like traditional reliability coefficients) were generated to establish the form's reliability, while data from attending physicians and from housestaff evaluation committee members were used to help establish its validity. Three hospitals in central Ohio: a large university tertiary care center, a large private hospital, and an urban community hospital. The nurse raters were volunteers solicited by their head nurses. The criteria governing their participation were two years of postgraduate experience in nursing and regular (self-determined) contact with residents. All IM residents who had worked on a medicine inpatient service at least once during the months under study were included. A total of 493 nurses and 116 residents participated. Sixty-four percent of the generalizability coefficients were 0.90 or higher, and 82% were above 0.75, indicating stable, reliable ratings. The nurses' ratings were positively and significantly correlated with attending faculty's and evaluation committee members' ratings (r = 0.38, p < .01; r = 0.49, p < .001). The evaluation form and the nurses provided consistent, reliable information about medical residents' humanistic behavior; data from five to six nurses should provide statistically reliable ratings using this form. Also, the nurses' data yielded information somewhat different from that provided by physicians, suggesting that the form is a useful instrument for assessing this dimension of residents' performance.
 
Existing systems of in-training evaluation (ITE) have been criticized as unreliable and invalid methods for assessing student performance during clinical education. The purpose of this study was to assess the feasibility, reliability, and validity of a clinical work sampling (CWS) approach to ITE. This approach focused on the following: (1) basing performance data on observed behaviors, (2) using multiple observers and occasions, (3) recording data at the time of performance, and (4) allowing for a feasible system to receive feedback. Sixty-two third-year University of Ottawa students were assessed during their 8-week internal medicine inpatient experience. Four performance rating forms (Admission Rating Form, Ward Rating Form, Multidisciplinary Team Rating Form, and Patient's Rating Form) were introduced to document student performance. Voluntary participation rates were variable (12%-64%), and patients were excluded from the analysis because of their low response rate (12%). The mean number of evaluations per student per rotation (19) exceeded the number needed to achieve sufficient reliability. Reliability coefficients were high for the Ward Form (.86) and the Admission Form (.73) but not for the Multidisciplinary Team Form (.22). There was an examiner effect (rater leniency), but this was small relative to real differences between students. The correlation between the Ward Form and the Admission Form was high (.47), while their correlations with the Multidisciplinary Team Form were lower (.37 and .26, respectively). The CWS approach to ITE was considered content valid by expert judges. The collection of ongoing performance data was reasonably feasible, reliable, and valid.
 
To compare the predictive validity of several measures of motivation to quit smoking among inpatients enrolled in a smoking cessation program. Data collected during face-to-face counseling sessions included a standard measure of motivation to quit (stage of readiness [Stage]: precontemplation, contemplation, or preparation) and four items with responses grouped in three categories: "How much do you want to quit smoking" (Want), "How likely is it that you will stay off cigarettes after you leave the hospital" (Likely), "Rate your confidence on a scale from 0 to 100 about successfully quitting in the next month" (Confidence), and a counselor assessment in response to the question, "How motivated is this patient to quit?" (Motivation). Patients were classified as nonsmokers if they reported not smoking at both the 6-month and 12-month interviews. All patients lost to follow-up were considered smokers. At 1 year, the smoking cessation rate was 22.5%. Each measure of motivation to quit was independently associated with cessation (p < .001) when added individually to an adjusted model. Likely was most closely associated with cessation and Stage was least. Likely had a sensitivity, specificity, positive predictive value, negative predictive value, and likelihood ratio of 70.2%, 68.1%, 39.3%, 88.6%, and 2.2, respectively. The motivation of inpatient smokers to quit may be as easily and as accurately predicted with a single question as with the series of questions that are typically used.
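The reported test characteristics for the Likely item are internally consistent, as a quick recomputation shows (a sketch, not part of the original analysis; the 22.5% cessation rate serves as the prevalence):

```python
# Recompute the abstract's test characteristics for the "Likely" item
# from its reported sensitivity, specificity, and 22.5% cessation rate.
sens, spec, prevalence = 0.702, 0.681, 0.225

lr_positive = sens / (1 - spec)  # likelihood ratio of a positive response
ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
npv = spec * (1 - prevalence) / ((1 - sens) * prevalence + spec * (1 - prevalence))

print(round(lr_positive, 1))  # → 2.2, matching the reported value
print(round(ppv, 3))          # ≈ 0.39, close to the reported 39.3%
print(round(npv, 3))          # ≈ 0.887, close to the reported 88.6%
```

Small discrepancies from the published 39.3% and 88.6% reflect rounding in the reported inputs.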
 
On-Call Residents' Reported Reasons for Late Hospital Departure (n=148).
Unadjusted and Adjusted Associations Between 16-Hour Long Call Shift Variables and Extended Shifts
Duty hour restrictions limit shift length to 16 hours during the first post-graduate year. Although many programs utilize a 16-hour "long call" admitting shift on inpatient services, compliance with the 16-hour shift length and the factors responsible for extended shifts have not been well examined. To identify the incidence of and operational factors associated with extended long call shifts, and residents' perceptions of the safety and educational value of the 16-hour long call shift, in a large internal medicine residency program. DESIGN, PARTICIPANTS, AND MAIN MEASURES: Between August and December of 2010, residents were sent an electronic survey immediately following 16-hour long call shifts, assessing departure time and shift characteristics. We used logistic regression to identify independent predictors of extended shifts. In mid-December, all residents received a second survey to assess perceptions of the long call admitting model. Two hundred thirty surveys were completed (95%). Overall, 92 of 230 (40%) shifts included ≥1 team member exceeding the 16-hour limit. Factors independently associated with extended shifts per 3-member team were 3-4 patients (adjusted OR 5.2, 95% CI 1.9-14.3) or >4 patients (adjusted OR 10.6, 95% CI 3.3-34.6) admitted within 6 hours of scheduled departure, and >6 total admissions (adjusted OR 2.9, 95% CI 1.05-8.3). Seventy-nine of 96 (82%) residents completed the perceptions survey. Residents believed, on average, that teams could admit 4.5 patients after 5 pm and 7 patients during long call shifts while remaining compliant. Regarding the long call shift, 73% agreed it allows for safe patient care, 60% disagreed or were neutral about working too many hours, and 53% rated its educational value in the top third of a 9-point scale. Compliance with the 16-hour long call shift is sensitive to total workload and workload timing factors.
Knowledge of such factors should guide systems redesign aimed at achieving compliance while ensuring patient care and educational opportunities.
 
Pre-post-changes in Self-assessed Skills and Enjoyment in Curriculum Development Activities, Cohorts 2-9*
Program Evaluation: Assessment of Program Quality, Educational Methods, and Facilitation, Cohorts 1-16, N=138*, Mean Rating (SD)
Despite increased demand for new curricula in medical education, most academic medical centers have few faculty with training in curriculum development. To describe and evaluate a longitudinal mentored faculty development program in curriculum development. A 10-month curriculum development program operating one half-day per week of each academic year from 1987 through 2003. The program was designed to provide participants with the knowledge, attitudes, skills, and experience to design, implement, evaluate, and disseminate curricula in medical education using a 6-step model. One-hundred thirty-eight faculty and fellows from Johns Hopkins and other institutions and 63 matched nonparticipants. Pre- and post-surveys from participants and nonparticipants assessed skills in curriculum development, implementation, and evaluation, as well as enjoyment in curriculum development and evaluation. Participants rated program quality, educational methods, and facilitation in a post-program survey. Sixty-four curricula were produced addressing gaps in undergraduate, graduate, or postgraduate medical education. At least 54 curricula (84%) were implemented. Participant self-reported skills in curricular development, implementation, and evaluation improved from baseline (p < .0001), whereas no improvement occurred in the comparison group. In multivariable analyses, participants rated their skills and enjoyment at the end of the program significantly higher than nonparticipants (all p < .05). Eighty percent of participants felt that they would use the 6-step model again, and 80% would recommend the program highly to others. This model for training in curriculum development has long-term sustainability and is associated with participant satisfaction, improvement in self-rated skills, and implementation of curricula on important topics.
 
Because shared decision making about screening mammography has been recommended for women under age 50, we studied women's decision-making process regarding the procedure. Qualitative research design using in-depth semi-structured interviews. Sixteen white and African-American women aged 38 to 45 receiving care at a large New England medical practice. We identified the following content areas in women's decision-making process: intentions for screening, motivating factors to undergo screening, attitudes toward screening mammography, attitudes toward breast cancer, and preferences for information and shared decision making. In our sample, all women had or intended to have a screening mammogram before age 50. They were motivated by awareness of the recommendation to begin screening at age 40, knowing others with breast cancer, and a sense of personal responsibility for their health. Participants feared breast cancer and thought the benefits of screening mammography far outweighed its risks. Women's preferences for involvement in decision making varied from wanting full responsibility for screening decisions to deferring to their medical providers. All preferred the primary care provider to be the main source of information, yet the participants stated that their own providers played a limited role in educating them about the risks and benefits of screening and the mammography procedure itself. Most of their information was derived from the media. The women in this study demonstrated little ambivalence in their desire for mammography screening prior to age 50. They reported minimal communication with their medical providers about the risks and benefits of screening. Better information flow regarding mammography screening is necessary. Given how little uncertainty women expressed regarding screening mammography, shared decision making in this area may be difficult to achieve.
 
Trajectories of drug use, combining last 30 days' use of cocaine, amphetamines and opioids as reported over 18 years follow-up from 1987/88 to 2005/06, with individuals self-reporting at up to six in-person examinations (mean 5.25, SD 1.1). Drug use days were summed to generate a score ranging from 0 to 90, and group-based trajectory models included a covariate reflecting age at cohort entry (see Methods).
Characteristics of Participants in Four Drug Use Trajectory Groups (Years 1987/88-2005/06; Ages 20-50) Demographic and Psychosocial Characteristics
Predictors of Membership in Drug Use Trajectories. Adjusted for Demographics and Substance Use Drug Use Trajectories (N=4301) In relation to "Nonuser"
For adults in general population community settings, data regarding the long-term course and outcomes of illicit drug use are sparse, limiting the formulation of evidence-based recommendations for drug use screening of adults in primary care. To describe trajectories of three illicit drugs (cocaine, opioids, amphetamines) among adults in community settings, and to assess their relation to all-cause mortality. Longitudinal cohort, 1987/88-2005/06. Community-based recruitment from four cities (Birmingham, Chicago, Oakland, Minneapolis). Healthy adults, balanced for race (black and white) and gender, were assessed for drug use from 1987/88-2005/06 and for mortality through 12/31/2008 (n = 4301). Use of cocaine, amphetamines, and opioids (last 30 days) was queried in the following years: 1987/88, 1990/91, 1992/93, 1995/96, 2000/01, 2005/06. Survey-based assessment of demographics and psychosocial characteristics. Mortality over 18 years. Trajectory analysis identified four groups: Nonusers (n = 3691, 85.8%), Early Occasional Users (n = 340, 7.9%), Persistent Occasional Users (n = 160, 3.7%), and Early Frequent/Later Occasional Users (n = 110, 2.6%). Trajectories conformed to expected patterns regarding demographics, other substance use, family background, and education. Adjusting for demographics, baseline health status, health behaviors (alcohol, tobacco), and psychosocial characteristics, Early Frequent/Later Occasional Users had greater all-cause mortality (hazard ratio, HR = 4.94, 95% CI = 1.58-15.51, p = 0.006). The study is restricted to three common drugs, and trajectory analyses represent statistical approximations rather than identifiable "types"; causal inferences are tentative. Four trajectories describe illicit drug use from young adulthood to middle age. Two trajectories, representing over one third of adult users, continued use into middle age. These persons were more likely to continue harmful risk behaviors such as smoking, and more likely to die.
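The composite drug-use measure described in the figure legend above (summed last-30-days use of cocaine, amphetamines, and opioids, yielding a score from 0 to 90) reduces to simple arithmetic; a minimal sketch, with illustrative function and variable names not taken from the study:

```python
# Sketch: combine last-30-days use of three drugs into the 0-90 score
# used for the trajectory models; names are illustrative, not from the study.
def drug_use_score(cocaine_days, amphetamine_days, opioid_days):
    """Sum days of use (each 0-30) across the three drugs -> score in [0, 90]."""
    for days in (cocaine_days, amphetamine_days, opioid_days):
        if not 0 <= days <= 30:
            raise ValueError("days of use in the last 30 days must be 0-30")
    return cocaine_days + amphetamine_days + opioid_days

print(drug_use_score(0, 0, 0))     # a nonuser scores 0
print(drug_use_score(30, 30, 30))  # daily use of all three drugs scores the maximum, 90
```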
 
Effect of Guided Care on Patient-Reported Quality of Chronic Illness Care (PACIC) Scores After 18 Months 
Effect of Guided Care on Patient Reported "High-Quality" Health Care After 18 Months 
The quality of health care for older Americans with chronic conditions is suboptimal. To evaluate the effects of "Guided Care" on patient-reported quality of chronic illness care. Cluster-randomized controlled trial of Guided Care in 14 primary care teams. Older patients of these teams were eligible to participate if, based on analysis of their recent insurance claims, they were at risk for incurring high health-care costs during the coming year. Small teams of physicians and their at-risk older patients were randomized to receive either Guided Care (GC) or usual care (UC). "Guided Care" is designed to enhance the quality of health care by integrating a registered nurse, trained in chronic care, into a primary care practice to work with 2-5 physicians in providing comprehensive chronic care to 50-60 multi-morbid older patients. Eighteen months after baseline, interviewers blinded to group assignment administered the Patient Assessment of Chronic Illness Care (PACIC) survey by telephone. Logistic and linear regression were used to evaluate the effect of the intervention on patient-reported quality of chronic illness care. Of the 13,534 older patients screened, 2,391 (17.7%) were eligible to participate in the study, of whom 904 (37.8%) gave informed consent and were cluster-randomized. After 18 months, 95.3% and 92.2% of the GC and UC recipients who remained alive and eligible completed interviews. Compared to UC recipients, GC recipients had more than twice the odds of rating their chronic care highly (aOR = 2.13, 95% CI = 1.30-3.50, p = 0.003). Guided Care improves self-reported quality of chronic health care for multi-morbid older persons.
 
Non-adherence to essential medications represents an important public health problem. Little is known about the frequency with which patients fail to fill prescriptions when new medications are started ("primary non-adherence") or predictors of failure to fill. Evaluate primary non-adherence in community-based practices and identify predictors of non-adherence. 75,589 patients treated by 1,217 prescribers in the first year of a community-based e-prescribing initiative. We compiled all e-prescriptions written over a 12-month period and used filled claims to identify filled prescriptions. We calculated primary adherence and non-adherence rates for all e-prescriptions and for new medication starts and compared the rates across patient and medication characteristics. Using multivariable regression analyses, we examined which characteristics were associated with non-adherence. Primary medication non-adherence. Of 195,930 e-prescriptions, 151,837 (78%) were filled. Of 82,245 e-prescriptions for new medications, 58,984 (72%) were filled. Primary adherence rates were higher for prescriptions written by primary care specialists, especially pediatricians (84%). Patients aged 18 and younger filled prescriptions at the highest rate (87%). In multivariable analyses, medication class was the strongest predictor of adherence, and non-adherence was common for newly prescribed medications treating chronic conditions such as hypertension (28.4%), hyperlipidemia (28.2%), and diabetes (31.4%). Many e-prescriptions were not filled. Previous studies of medication non-adherence failed to capture these prescriptions. Efforts to increase primary adherence could dramatically improve the effectiveness of medication therapy. Interventions that target specific medication classes may be most effective.
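The primary adherence rates quoted above follow directly from the fill counts; a quick back-of-the-envelope check (the function name is illustrative, not from the study):

```python
# Back-of-the-envelope check of the fill rates reported above.
# Counts come from the abstract; the function name is illustrative.
def adherence_rate(filled, written):
    """Primary adherence = filled prescriptions / prescriptions written."""
    return filled / written

print(f"all e-prescriptions:   {adherence_rate(151_837, 195_930):.1%}")  # 77.5% (~78%)
print(f"new medication starts: {adherence_rate(58_984, 82_245):.1%}")    # 71.7% (~72%)
```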
 
To determine effectiveness and costs of different guideline dissemination and implementation strategies. MEDLINE (1966 to 1998), HEALTHSTAR (1975 to 1998), Cochrane Controlled Trial Register (4th edn 1998), EMBASE (1980 to 1998), SIGLE (1980 to 1988), and the specialized register of the Cochrane Effective Practice and Organisation of Care group. Randomized-controlled trials, controlled clinical trials, controlled before and after studies, and interrupted time series evaluating guideline dissemination and implementation strategies targeting medically qualified health care professionals that reported objective measures of provider behavior and/or patient outcome. Two reviewers independently abstracted data on the methodologic quality of the studies, characteristics of study setting, participants, targeted behaviors, and interventions. We derived single estimates of dichotomous process variables (e.g., proportion of patients receiving appropriate treatment) for each study comparison and reported the median and range of effect sizes observed by study group and other quality criteria. We included 309 comparisons derived from 235 studies. The overall quality of the studies was poor. Seventy-three percent of comparisons evaluated multifaceted interventions. Overall, the majority of comparisons (86.6%) observed improvements in care; the median absolute improvement in performance was 14.1% in 14 cluster-randomized comparisons of reminders, 8.1% in 4 cluster-randomized comparisons of dissemination of educational materials, 7.0% in 5 cluster-randomized comparisons of audit and feedback, and 6.0% in 13 cluster-randomized comparisons of multifaceted interventions involving educational outreach. We found no relationship between the number of components and the effects of multifaceted interventions. Only 29.4% of comparisons reported any economic data.
Current guideline dissemination and implementation strategies can lead to improvements in care within the context of rigorous evaluative studies. However, there is an imperfect evidence base to support decisions about which guideline dissemination and implementation strategies are likely to be efficient under different circumstances. Decision makers need to use considerable judgment about how best to use the limited resources they have for quality improvement activities.
 
To examine trends in study design and other characteristics of original research published in JAMA, Lancet, and the New England Journal of Medicine (NEJM) between 1971 and 1991. A retrospective cross-sectional study of original clinical research published in JAMA, Lancet, and NEJM during 1971, 1981, and 1991. Four hundred forty-four articles were independently reviewed by at least two investigators and classified according to study design and other preselected study characteristics. Changes over time were analyzed by chi-square tests for categorical variables and analysis of variance for continuous variables. Clinical trials doubled, from 17% of all articles in 1971 to 35% in 1991 (p < 0.004), while case series decreased from 30% to 4% (p < 0.0001). Of 118 clinical trials, randomized controlled trials increased from 31% to 76% (p < 0.003) and nonrandomized controlled trials decreased from 42% to 8% (p < 0.002). Multicenter studies increased from 10% to 39% (p < 0.0001) and the prevalence of health services research increased from none in 1971 to 12% in 1991 (p < 0.001). The proportion of the studies explicitly excluding women from the subject population decreased from 11% in 1971 to 3% in 1991 (p < 0.03). In 1991, 7% of the studies were composed entirely of male subjects, while only 0.7% of the studies were specific to men's health. Twelve percent of the studies in 1991 were specific to women's health. Between 1971 and 1991 there was no change in the prevalence of women first authors or of studies addressing women's or minorities' health issues. Several important changes in clinical research studies published in JAMA, Lancet, and NEJM took place between 1971 and 1991. Clinical trials have increased in frequency, largely replacing studies containing ten or fewer subjects. Health services research has increased in prevalence, reflecting growing interest in studies addressing the delivery of health care. 
Our data support the hypothesis that exclusion of women from clinical research studies is an important contributor to the paucity of data concerning women's health.
 
A primary care (PC) pathway was initiated within the medical residency program at Boston City Hospital (BCH) in 1974. The authors studied the PC and traditional (TD) track graduates of the program to compare career development, goals, and practice patterns. The 185 graduates of the nine resident cohorts from 1974 through 1983 were surveyed; the overall response rate was 74%. Primary care careers have been chosen by 81% of PC graduates, compared with 38% of TD graduates (p less than 0.001); career satisfaction is equally high in the two groups. Among the PC graduates, 68% are practicing in high-need areas, compared with only 37% of TD graduates (p less than 0.001). PC graduates are more likely to make house calls, provide extended office hours, round in nursing homes or chronic care facilities, and co-practice with nurse practitioners or physician's assistants, and they are more active in women's health care, care of the terminally ill, and treating patients with sexual dysfunction (all p less than 0.05). PC graduates utilize various community agencies more frequently and supplement patient education with outside resources more intensively (p less than 0.001). The career choices and practice locations of PC graduates reflect the training goals of the PC curriculum and differ from the career choices and practices of the TD graduates from the same program.
 
The authors surveyed 297 internists who completed residency or fellowship training at six San Francisco institutions from 1979 through 1984 to assess how the recent expanded supply of physicians has affected their intensity of practice and their decisions about location of practice. The vast majority of internists (93%) settled in metropolitan areas, with 56% remaining in the San Francisco Bay Area, despite that region's already high concentration of physicians. Mean annual income, in 1984 dollars, was slightly more than figures from national surveys of physicians of similar age ($72,560 vs. $71,900), but reported mean work week was shorter (54.8 hours vs. 60.5). Although subspecialists earned significantly more than generalists, this was because they worked more hours. Those who graduated later were significantly less likely to be in private practice in 1985, mainly because they initially selected salaried institutional work more often than earlier graduates (p less than 0.001). Women worked 85% of the men's work week and subspecialized significantly less often (p less than 0.05). These findings suggest that internists trained in already "over-doctored" areas will continue to settle there or in similar communities.
 
To determine whether improvements have occurred since a survey of the 1982 literature assessing diagnostic tests, the authors evaluated all English-language articles that assessed clinical diagnostic tests in abridged Index Medicus journals in 1985, and that had the terms sensitivity and specificity in the title, abstract, or key words. The 89 articles were assessed against seven methodologic criteria, including use of a well-defined "gold standard," clearly defined test interpretation, blinding, clear data presentation, correct use of sensitivity and specificity, calculation of predictive values, and consideration of prevalence. In comparisons of 1985 vs. 1982 articles, there were significant improvements in five of the seven criteria. For example, the proportion of articles using a well-defined "gold standard" rose from 68% to 88%. Overall, the frequency of papers demonstrating five or more of the seven criteria increased from 26% to 47%. However, predictive values were discussed in only 54% of the articles, and not necessarily with consideration of the influence of prevalence. This study raises the concern that while the concepts of sensitivity and specificity are now accepted, predictive values remain less well understood. Although there has been an improvement in the assessment of diagnostic tests in published research, attention to accepted methodologic standards is still needed on the part of researchers, reviewers, and editors.
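The concern raised above reflects standard Bayesian reasoning: positive predictive value depends not only on sensitivity and specificity but also, sharply, on prevalence. A minimal sketch with hypothetical test characteristics:

```python
# Sketch: positive predictive value (PPV) from sensitivity, specificity,
# and prevalence via Bayes' theorem -- the prevalence dependence that the
# abstract above notes was often overlooked. Test characteristics are hypothetical.
def ppv(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same 90%-sensitive, 90%-specific test at two disease prevalences:
print(f"prevalence 50%: PPV = {ppv(0.90, 0.90, 0.50):.2f}")  # 0.90
print(f"prevalence  1%: PPV = {ppv(0.90, 0.90, 0.01):.2f}")  # 0.08
```

The same test that is highly informative at 50% prevalence yields mostly false positives at 1% prevalence, which is why reporting predictive values without stating prevalence can mislead.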
 
To determine the number of physician office visits by adults in which an anxiety disorder diagnosis was recorded and rates of treatment during these visits. We used data from the 1985, 1993, 1994, 1997, and 1998 National Ambulatory Medical Care Surveys, a nationally representative series of surveys of office-based practice employing clustered sampling. Office-based physician practices in the United States. A systematically sampled group of office-based physicians. The number of office visits with a recorded anxiety disorder diagnosis increased from 9.5 million in 1985 to 11.2 million per year in 1993-1994 and 12.3 million per year in 1997-1998, representing 1.9%, 1.6%, and 1.5% of all office visits in 1985, 1993-1994, and 1997-1998, respectively. The majority of recorded anxiety disorder diagnoses were not for specific disorders, with 70% of anxiety disorder visits to primary care physicians coded as "anxiety state, unspecified." Visits to primary care physicians accounted for 48% of all anxiety disorder visits in 1985 and 1997-1998. Treatment for anxiety was offered in over 95% of visits to psychiatrists but in only 60% of visits to primary care physicians. Primary care physicians were less likely to offer treatment for anxiety when specific anxiety disorders were diagnosed than when "anxiety state, unspecified" was diagnosed (54% vs 62% in 1997-1998). Prescriptions for medications to treat anxiety disorders increased between 1985 and 1997-1998 while use of psychotherapy decreased over the same time period in visits to both primary care physicians and psychiatrists. Although there is a large number of office visits with a recorded anxiety disorder diagnosis, under-recognition and under-treatment appear to be a continuing problem, especially in the primary care sector. Medication is being substituted for psychotherapy in visits to both psychiatrists and primary care physicians over time.
 
National data describing the placement of feeding tubes demonstrated a rapid increase in use in the early and mid-1990s. In the past several years, substantial concerns have arisen regarding the appropriateness of the procedure in many chronically ill patients. The purpose of this study is to determine whether the use of feeding tubes has continued to increase through the 1990s despite these widely publicized concerns. Repeated measure cross-sectional study of the North Carolina Discharge Database. Analyses of all nonfederal hospital inpatient admissions in North Carolina. We examined the absolute numbers and rates of feeding tube placements from 1989 to 2000. The rate of feeding tube placement increased from 59/100,000 persons in 1989 to 94/100,000 persons in 2000, an overall 60% increase with slowing in the rate of increase in the late 1990s. However, when outpatient procedures were included, the increase in tube feeding continued throughout the 11-year period of observation. The increase was due to an increase in utilization within all hospitals over the time period. Utilization did not differ between profit and not for profit hospitals. The relative growth rate of inpatient feeding tube placement did not differ by age group but the absolute increase was greatest in those age 75 years and over. Our study demonstrates that the use of feeding tubes has continued to increase through the 1990s. This increase occurred despite ongoing controversy in the medical literature about feeding tube placement in chronically ill patients.
 
To determine whether changes in the demographic/educational mix of those entering internal medicine from 1986 to 1989 were associated with differences among them at the time of certification. Included in the study were all candidates for the 1989 to 1992 American Board of Internal Medicine certifying examinations in internal medicine. Demographic information and medical school, residency training, and examination experience were available for each candidate. Data defining quality, size, and number of subspecialties were available for internal medicine training programs. From 1990 to 1992, the total number of men and women candidates increased as did the numbers of foreign-citizen non-U.S. medical school graduates and osteopathic medical school graduates; the number of U.S. medical school graduates remained nearly constant and the number of U.S.-citizen graduates of non-U.S. medical schools declined. The pass rates for all groups of first-time examination takers decreased, while the ratings of program directors remained relatively constant. Program quality, size, and number of subspecialty programs had modest positive relationships with examination performance. Changes in the characteristics of those entering internal medicine from 1986 to 1989 were associated with declines in performance at the time of certification. These declines occurred in all content areas of the test and were apparent regardless of program quality. These data identify some of the challenges internal medicine faces in the years ahead.
 
To determine the prevalence and duration of postmenopausal hormone replacement therapy (HRT) use and identify correlates of adherence to therapy. Population-based cohort study. Staff-model health maintenance organization. Female members, 40 years and older. Prevalence and duration of use were measured between 1990 and 1995. Duration was assessed by Kaplan-Meier and proportional hazards methods. Hormone replacement therapy use increased from 10.3% in 1990 to 20.7% in 1995. Greatest use (24%) occurred among menopausal women age 50 to 54 years. Less than 5% of women 75 and older used HRT. Among 1,680 first-time recipients of HRT, two thirds of initial prescriptions were written by internists. Thirty-eight percent discontinued HRT within 1 year. For the subset whose indication for therapy was ascertained, prevention of chronic disease was associated with a 33% 1-year discontinuation rate. Factors associated with longer duration of therapy included white race (relative risk [RR], 1.63; 95% confidence interval [95% CI], 1.32 to 2.02), younger age (RR, 1.02 per year; 95% CI 1.01 to 1.03), and changing the preparation or dose of estrogen (RR, 5.62; 95% CI, 4.33 to 7.25). The formulation (esterified estrogens 0.625 mg versus conjugated estrogens 0.625 mg) was also associated with greater duration of use; all other estrogens were, as a group, associated with shorter duration of use. Those who received their initial HRT prescription from an internist were more likely to continue therapy than those who received it from a gynecologist. Despite increased use of HRT, only a minority of women in this population used HRT, and many of those discontinued therapy within 1 year.
 
To estimate the percentage of California smokers who visit physicians each year and thus determine the extent of the opportunity for physicians to advise their smoking patients to quit; to identify sociodemographic and other characteristics related to smokers' reporting that advice was given; and to look for evidence that physician advice influences quitting behavior. Data were collected as part of the 1990 California Tobacco Survey, a large (n = 24,296) population-based telephone survey. 9,796 current smokers, including 5,559 daily smokers who had visited a physician in the preceding year. Two-thirds of all smokers had visited a physician in the year before the interview, but only about 50% of Hispanic and Asian smokers had done so. Multivariate analysis showed that advice at the last visit was independently related to older age, higher cigarette consumption, and poorer perceived health. Compared with smokers never advised to quit by a physician, those advised to quit at the last visit were 1.61 (95% confidence interval, 1.31-1.98) times more likely to report a quit attempt in the preceding year and 1.90 (95% confidence interval, 1.45-2.48) times more likely to be preparing to quit; however, those advised previously but not at the last visit showed no more quitting activity than did smokers never advised to quit. Physicians have considerable opportunity to reach all demographic subgroups of the population, but the nature of the subgroups advised most (those who are older, have high consumption of cigarettes, or have poor health) suggests that physicians tend to treat such advice as a therapeutic rather than a preventive intervention. Physician advice at the most recent visit encourages patients to think about quitting and probably leads to quit attempts. Thus, it is vital that physicians perform the simple intervention of advising every smoker to quit at every visit.
 
Given the changes that are taking place in medical practice, it is important to reexamine traditional teaching methods in internal medicine residencies. One such component is morning report, which usually focuses exclusively on patients recently admitted to the hospital by the housestaff. A new morning report format described here adds several new components, including the review of patients who have been recently discharged from the hospital. The new format has been well received by the residents in the program and is an important step toward preparing them for the medical practice climate of the future.
 
To evaluate the interrater reproducibility of scientific abstract review. Retrospective analysis. Review for the 1991 Society of General Internal Medicine (SGIM) annual meeting. 426 abstracts in seven topic categories evaluated by 55 reviewers. Reviewers rated abstracts from 1 (poor) to 5 (excellent), globally and on three specific dimensions: interest to the SGIM audience, quality of methods, and quality of presentation. Each abstract was reviewed by five to seven reviewers. Each reviewer's ratings of the three dimensions were added to compute that reviewer's summary score for a given abstract. The mean of all reviewers' summary scores for an abstract, the final score, was used by SGIM to select abstracts for the meeting. Final scores ranged from 4.6 to 13.6 (mean = 9.9). Although 222 abstracts (52%) were accepted for publication, the 95% confidence interval around the final score of 300 (70.4%) of the 426 abstracts overlapped with the threshold for acceptance of an abstract. Thus, these abstracts were potentially misclassified. Only 36% of the variance in summary scores was associated with an abstract's identity, 12% with the reviewer's identity, and the remainder with idiosyncratic reviews of abstracts. Global ratings were more reproducible than summary scores. Reviewers disagreed substantially when evaluating the same abstracts. Future meeting organizers may wish to rank abstracts using global ratings, and to experiment with structured review criteria and other ways to improve raters' agreement.
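The scoring scheme described above (three 1-5 dimension ratings summed per reviewer, then averaged across an abstract's reviewers) can be sketched as follows; the example ratings are hypothetical:

```python
# Sketch of the SGIM abstract scoring scheme described above.
# Each reviewer rates three dimensions (interest, methods, presentation) 1-5;
# the summary score is their sum (range 3-15), and the final score is the
# mean of summary scores across the abstract's five to seven reviewers.
def summary_score(interest, methods, presentation):
    ratings = (interest, methods, presentation)
    assert all(1 <= r <= 5 for r in ratings), "ratings are on a 1-5 scale"
    return sum(ratings)

def final_score(reviewer_ratings):
    """reviewer_ratings: list of (interest, methods, presentation) tuples."""
    scores = [summary_score(*r) for r in reviewer_ratings]
    return sum(scores) / len(scores)

# Five hypothetical reviewers of one abstract:
print(final_score([(4, 3, 4), (5, 4, 4), (3, 3, 3), (4, 4, 5), (2, 3, 3)]))  # 10.8
```

The 3-15 range of a summary score is consistent with the observed final scores of 4.6 to 13.6 reported above.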
 
Breast-conserving surgery (BCS) has been the recommended treatment for early-stage breast cancer since 1990 yet many women still do not receive this procedure. To examine the relationship between birthplace and use of BCS in Asian-American and Pacific-Islander (AAPI) women, and to determine whether disparities between white and AAPI women persist over time. Retrospective cohort study. Women with newly diagnosed stage I or II breast cancer from 1992 to 2000 in the Surveillance, Epidemiology, and End Results program. Receipt of breast-conserving surgery for initial treatment of stage I or II breast cancer. Overall, AAPI women had lower rates of BCS than white women (47% vs 59%; P<.01). Foreign-born AAPI women had lower rates of BCS than U.S.-born AAPI and white women (43% vs 56% vs 59%; P<.01). After adjustment for age, marital status, tumor registry, year of diagnosis, stage at diagnosis, tumor size, histology, grade, and hormone receptor status, foreign-born AAPI women (adjusted OR [aOR], 0.49; 95% CI, 0.32 to 0.76) and U.S.-born AAPI women (aOR, 0.77; 95% CI, 0.62 to 0.95) had lower odds of receiving BCS than white women. Use of BCS increased over time for each racial/ethnic group; however, foreign-born AAPI women had persistently lower rates of BCS than non-Hispanic white women. AAPI women, especially those who are foreign born, are less likely to receive BCS than non-Hispanic white women. Of particular concern, differences in BCS use among foreign-born and U.S.-born AAPI women and non-Hispanic white women have persisted over time. These differences may reflect inequities in the treatment of early-stage breast cancer for AAPI women, particularly those born abroad.
 
The impact of national efforts to limit antibiotic prescribing has not been fully evaluated. To analyze trends in outpatient visits associated with antibiotic prescription for U.S. adults. Cross-sectional study of data (1995 to 2002) from the National Ambulatory Medical Care Survey and the National Hospital Ambulatory Medical Care Survey. Adults > or =18 years with an outpatient visit to an office- or hospital-based medical practice or to an emergency department. All visits were classified into 1 of 4 diagnostic categories: (1) acute respiratory infection (ARI)-antibiotics rarely indicated, (2) ARI-antibiotics often indicated, (3) nonrespiratory infection-antibiotics often indicated, and (4) all others. Trends in: (1) Proportion of outpatient visits associated with an antibiotic prescription; (2) proportion of antibiotic prescriptions that were broad spectrum; and (3) number of visits and antibiotic prescriptions per 1,000 U.S. adults > or =18 years of age. From 1995-1996 to 2001-2002, the proportion of all outpatient visits that generated an antibiotic prescription decreased from 17.9% to 15.3% (adjusted odds ratio [OR] 0.84, 95 % confidence interval [CI] 0.76 to 0.92). The entire reduction was because of a decrease in antibiotic prescriptions associated with visits for ARIs where antibiotics are rarely indicated from 59.9% to 49.1% (adjusted OR 0.64 95% CI 0.51 to 0.80). However, the proportion of prescribed antibiotics for these visits that were classified as broad-spectrum antibiotic prescription increased from 41.0% to 76.8%. Overall outpatient visits increased from 1693 to 1986 per 1,000 adults over the 8 years studied, but associated antibiotic prescriptions changed little, from 302 to 304 per 1,000 adults. During the study period, outpatient antibiotic prescribing for respiratory infections where antibiotics are rarely indicated has declined, while the proportion of broad-spectrum antibiotics prescribed for these diagnoses has increased significantly. 
This trend resulted in a 15% decline in the total proportion of outpatient visits in which antibiotics were prescribed. However, because outpatient visits increased 17% over this time period, the population burden of outpatient antibiotic prescriptions changed little.
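The closing arithmetic can be checked directly from the figures quoted above (a back-of-the-envelope verification, not study code):

```python
# Back-of-the-envelope verification of the figures quoted above.
prop_9596, prop_0102 = 0.179, 0.153    # share of visits with an antibiotic Rx
visits_9596, visits_0102 = 1693, 1986  # outpatient visits per 1,000 adults

relative_decline = (prop_9596 - prop_0102) / prop_9596
visit_growth = (visits_0102 - visits_9596) / visits_9596
print(f"decline in prescribing proportion: {relative_decline:.0%}")  # 15%
print(f"growth in visit rate: {visit_growth:.0%}")                   # 17%

# The two effects nearly cancel, so prescriptions per 1,000 adults stay flat
# (close to the reported 302 and 304):
print(round(visits_9596 * prop_9596), round(visits_0102 * prop_0102))  # 303 304
```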
 
Disclosure of medical research results to the public creates tension between lay medical reporters and the medical profession. To explore the early effect of media attention on the risks associated with short-acting calcium channel blockers (CCBs) for treating hypertension, after presentation at a national meeting and following journal publication. Time-series analysis of prescription claims data. National third-party pharmaceutical benefits manager. Employed or retired persons and their families, 18 years of age or older, receiving prescription benefits from 1 of 4 national companies that contracted with the pharmaceutical benefits manager exclusively for prescription drug coverage. Prescription claims for antihypertensive drugs by fill date converted to a percentage of all cardiovascular drug claims. Data were grouped into weekly intervals before and immediately after the national release of negative information about CCBs on March 10, 1995 and following publication of the results on August 23, 1995. The most prevalent antihypertensive drugs were diuretics (21% of cardiovascular prescription claims) and calcium channel blockers (19%). A 10% decline in prescriptions filled for CCBs occurred 4 weeks following the intense media attention. Only prescriptions for long-acting calcium channel blockers declined. Alpha-1-blocker prescriptions increased by approximately the same amount that prescriptions for CCBs declined, suggesting substitution of one drug for the other. Changes in diuretic or beta-blocker prescriptions filled were not statistically significant. No immediate change in other cardiovascular drug classes occurred following journal publication. Intense media publicity regarding a controversial study measurably and unpredictably changed prescription claims.
 
Comparison of Birmingham Homeless Survey Samples in 1995 and 2005 
Percentage of homeless persons in Birmingham unable to obtain care by demographic characteristics and year of survey (n=161 for each survey). 
Reasons Care was not Obtained among Persons Reporting Unmet Need for Care (n=139)* 
Homeless persons depend disproportionately on the health-care safety net for medical services. National reports identify financial strains to this safety net. Whether this has affected homeless persons is unknown. We quantified changes in the proportion of homeless persons reporting unmet need for health care in Birmingham, Alabama, comparing two periods, 1995 and 2005. We assessed whether a period effect was independent of characteristics of persons surveyed. Analysis of two surveys conducted with identical methods among representative samples of homeless persons in 1995 (n = 161) and 2005 (n = 161). Report of unmet need (inability to obtain care when needed) was the dependent variable. Two survey periods (1995 and 2005) were compared, with multivariable adjustment for sociodemographic and health characteristics. Reasons for unmet need were determined among the subset of persons reporting unmet need. Unmet need for health care was more common in 2005 (54%) than in 1995 (32%) (p < 0.0001), especially for non-Blacks (64%) and females (65%). Adjusting for individual characteristics, a survey year of 2005 independently predicted unmet need (odds ratio 2.68, 95% CI 1.49-4.83). Among persons reporting unmet need (87 of 161 in 2005; 52 of 161 in 1995), financial barriers were more commonly cited in 2005 (67% of 87) than in 1995 (42% of 52) (p = 0.01). A rise in unmet health-care needs was reported among Birmingham's homeless from 1995 to 2005. This period effect was independent of population characteristics and may implicate a local safety net inadequacy. Additional data are needed to determine if this represents a national trend.
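For intuition, the unadjusted proportions above (54% vs. 32% reporting unmet need) already imply a crude odds ratio close to the adjusted estimate; a minimal sketch of that calculation:

```python
# Sketch: crude odds ratio for unmet need, 2005 vs. 1995, computed from the
# unadjusted proportions quoted above (54% and 32%; n = 161 in each survey).
def odds(p):
    return p / (1 - p)

crude_or = odds(0.54) / odds(0.32)
print(f"crude OR: {crude_or:.2f}")  # 2.49, close to the adjusted OR of 2.68
```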
 
Use of cardiac devices has been increasing rapidly along with concerns over their safety and effectiveness. This study used hospital administrative data to assess cardiac device implantations in the United States, selected perioperative outcomes, and associated patient and hospital characteristics. We screened hospital discharge abstracts from the 1997-2004 Healthcare Cost and Utilization Project Nationwide Inpatient Samples. Patients who underwent implantation of a pacemaker (PM), automatic cardioverter/defibrillator (AICD), or cardiac resynchronization therapy pacemaker (CRT-P) or defibrillator (CRT-D) were identified using ICD-9-CM procedure codes. Outcomes ascertainable from these data and associated hospital and patient characteristics were analyzed. Approximately 67,000 AICDs and 178,000 PMs were implanted in 2004 in the United States, increases of 60% and 19%, respectively, since 1997. After FDA approval in 2001, CRT-D and CRT-P reached 33,000 and 7,000 units per year in the United States in 2004. About 70% of the patients were aged 65 years or older, and more than 75% of the patients had 1 or more comorbid diseases. There were substantial decreases in length of stay but marked increases in charges; for example, the length of stay for AICD implantations was halved (from 9.9 days in 1997 to 5.2 days in 2004), whereas charges nearly doubled (from $66,000 in 1997 to $117,000 in 2004). Rates of in-hospital mortality and complications fluctuated slightly during the period. Overall, adverse outcomes were associated with advanced age, comorbid conditions, and emergency admissions, and there was no consistent volume-outcome relationship across different outcome measures and patient groups. The numbers of cardiac device implantations in the United States steadily increased from 1997 to 2004, with substantial reductions in length of stay and increases in charges. 
Rates of in-hospital mortality and complications changed slightly over the years and were associated primarily with patient frailty.
 
Age, sex, and race-adjusted prevalence of CVD over time among adults with diabetes (solid lines) and without diabetes (dotted lines), overall and by education. Legends: * Time trend: p<0.05. HS: high school. 
Relative index of inequality † (and 95% confidence intervals) over time among adults with and without diabetes, by gender, age, and race/ethnicity. 
Adjusted † prevalence rate ratios of CVD associated with diabetes (and 95% confidence intervals) over time, by gender and educational attainment. Legends: † Adjusted for age, race/ethnicity, poverty status, comorbidity and survey year. The “ Overall ” model is additionally adjusted for educational attainment. * Time trend: p<0.05. 
Diabetes and its cardiovascular complications are more common in adults of low socioeconomic position (SEP). In the US, the past decade has seen the establishment of many programs to reduce cardiovascular risk in persons with diabetes, but their effect on socioeconomic disparities is uncertain. We sought to investigate recent time trends in socioeconomic disparities in cardiovascular disease (CVD) among persons with and without diabetes. Two hundred fifty-five thousand nine hundred sixty-six individuals aged 25 years or older included in the National Health Interview Survey between 1997 and 2005. Educational attainment was used as a marker for SEP and self-reported history of CVD as the main outcome. Educational disparities were measured using prevalence rate ratios (PRR) and the relative index of inequality (RII). Among adults with diabetes, CVD prevalence was persistently higher in those who did not complete high school (HS) than in college graduates (adjusted PRR [aPRR] 1.20, 95% confidence interval [95%CI] 1.05-1.38 in 1997-1999, and aPRR 1.12, 95% CI 1.00-1.25 in 2003-2005). However, the HS vs. college graduates disparity in CVD declined from 1997-1999 (aPRR 1.20, 95% CI 1.04-1.37) to 2003-2005 (aPRR 1.01, 95% CI 0.90-1.12). Among adults without diabetes, educational disparities in CVD widened markedly over time. Concurrent with improvements in diabetes management, the widening of socioeconomic health disparities has remained limited in the diabetic population during the past decade. This provides evidence for the potential of improvements in health care access and process, such as those experienced among persons with diabetes, to limit socioeconomic health disparities.
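The relative index of inequality used above summarizes a health gradient across ordered SEP groups in a way that accounts for group sizes. A minimal sketch of the standard computation, using hypothetical prevalences and population shares (not the paper's data): each education group is assigned the midpoint of its cumulative population share (a ridit score), prevalence is regressed on that score, and the RII is the ratio of fitted prevalence at the very bottom versus the very top of the hierarchy.

```python
def rii(shares, prevalences):
    """Relative index of inequality via simple least squares.
    Groups must be ordered from lowest to highest education; each group
    gets the midpoint of its cumulative population share (ridit score)."""
    ridits, cum = [], 0.0
    for s in shares:
        ridits.append(cum + s / 2)
        cum += s
    n = len(shares)
    mx = sum(ridits) / n
    my = sum(prevalences) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(ridits, prevalences)) \
        / sum((x - mx) ** 2 for x in ridits)
    intercept = my - slope * mx
    # Fitted prevalence at the bottom (rank 0) vs. the top (rank 1)
    return intercept / (intercept + slope)

# Hypothetical CVD prevalences declining with education
print(round(rii([0.2, 0.3, 0.3, 0.2], [0.16, 0.12, 0.10, 0.08]), 2))  # → 2.43
```

An RII of 1 would indicate no gradient; values above 1 indicate higher prevalence at the lower end of the education distribution, which is the pattern the abstract describes.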
 
Despite reductions in morbidity and mortality and changes in guidelines, little is known regarding changes in asthma treatment patterns. To examine national trends in the office-based treatment of asthma between 1997 and 2009. We used the National Ambulatory Medical Care Survey (NAMCS) and the National Disease and Therapeutic Index™ (NDTI), nationally representative audits of office-based physicians, to examine patients less than 50 years of age diagnosed with asthma. Visits where asthma was diagnosed and use of six therapeutic classes (short-acting β2-agonists [SABA], long-acting β2-agonists [LABA], inhaled steroids, antileukotrienes, anticholinergics, and xanthines). Estimates from NAMCS indicated modest increases in the number of annual asthma visits, from 9.9 million [M] in 1997 to 10.3M in 2008; estimates from the NDTI suggested more gradual, continuous increases, from 8.7M in 1997 to 12.6M in 2009. NAMCS estimates indicated declines in use of SABAs (from 80% of treatment visits in 1997 to 71% in 2008), increased inhaled steroid use (24% in 1997 to 33% in 2008), increased use of fixed-dose LABA/steroid combinations (0% in 1997 to 19% in 2008), and increased leukotriene use (9% in 1997 to 24% in 2008). The ratio of controller to total asthma medication use increased from 0.5 (1997) to a peak of 0.7 (2004). In 2008, anticholinergics, xanthines, and LABA use without concomitant steroids accounted for fewer than 4% of all treatment visits. Estimates from NDTI corroborated these trends. Changes in office-based treatment, including increased inhaled steroid use and increased combined steroid/LABA use, coincide with the reductions in asthma morbidity and mortality demonstrated over the same period. Xanthines, anticholinergics, and, increasingly, LABA without concomitant steroid use account for a very small fraction of all asthma treatments.
 
Receipt of Individual Clinical Preventive Services and Measures of Being Up To Date, Men age 50-64, 1997, 2002, and 2004 BRFSS 
Results of Multiple Logistic Regression Modeling* for Being Up-to-Date †, ‡ for Cancer Screening and Adult Immunization by Age/Sex Group and Demographic Characteristics: 2004 Behavioral Risk Factor Surveillance System, (BRFSS), Adults Aged 50-64 
Population-based rates for the delivery of adult vaccinations or screenings are typically tracked as individual services. The current approach is useful in monitoring progress toward national health goals but does not yield information regarding how many U.S. adults receive a combination of preventive services routinely recommended based on a person's age and gender. A composite measure is important for policymaking, for developing public health interventions, and for monitoring the quality of clinical care. During the period under study, influenza vaccination was newly recommended (2000) to be routinely delivered to adults in this age range. The objective of the study was to compare the delivery of routine clinical preventive services to U.S. adults aged 50-64 years between 1997 and 2004 using a composite measure that includes cancer screenings and vaccinations. Data were collected via telephone surveys in 1997, 2002, and 2004 as part of the Behavioral Risk Factor Surveillance System. The participants were randomly selected adults aged 50-64 years in the 50 states and the District of Columbia in the selected years. Sample sizes ranged from 24,917 to 77,244. The composite measure includes screening for colorectal cancer, cervical cancer, breast cancer, and vaccination against influenza (2002 and 2004 only). The composite measure quantifies the percentage of adults who are up-to-date with the complete set according to recommended schedules. With the inclusion of newly recommended influenza vaccination, the percentage of men and women aged 50-64 who were up-to-date on all selected measures in 2004 was 23.4% and 23.0%, respectively, compared with 37.6% and 30.5% in 1997. Without including influenza vaccination, the percentage of up-to-date adults aged 50-64 would have risen in 2004 to 50.5% (men) and to 44.7% (women). For both sexes, results varied by education, race/ethnicity, marital status, insurance status, health status, and state. 
In 2004, the percentage of adults aged 50-64 years receiving routinely recommended cancer screenings and influenza vaccination was low, with fewer than 1 in 4 up to date.
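The composite logic behind these figures can be sketched simply: a respondent counts as up to date only if they have received every service applicable to their sex, so the composite rate is necessarily lower than any single-service rate. A toy example with invented respondent records (service names and data are hypothetical, for illustration only):

```python
def applicable_services(sex):
    """Services in this composite for adults aged 50-64; cervical and
    breast cancer screening apply to women only."""
    services = {"colorectal", "influenza"}
    if sex == "F":
        services |= {"cervical", "breast"}
    return services

def pct_up_to_date(respondents):
    """Share of respondents current on *all* of their applicable services."""
    current = sum(
        applicable_services(r["sex"]) <= r["received"] for r in respondents
    )
    return 100 * current / len(respondents)

# Hypothetical records: the first and third respondents are fully up to date
sample = [
    {"sex": "M", "received": {"colorectal", "influenza"}},
    {"sex": "M", "received": {"colorectal"}},
    {"sex": "F", "received": {"cervical", "breast", "colorectal", "influenza"}},
    {"sex": "F", "received": {"cervical", "breast"}},
]
print(pct_up_to_date(sample))  # → 50.0
```

Here every respondent in the toy sample meets at least one single-service recommendation, yet only half meet the composite, which mirrors how the study's composite rates fall below its individual-service rates.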
 
Thirty-day mortality rates by patient age. 
Characteristics of Patients, Physicians and Hospitals (N=788,011)
Multilevel Logistic Regression Analysis of 30-Day Mortality (N=788,011): Odds Ratios and 95% CIs
Pneumonia is the most common infectious cause of death worldwide. Over the last decade, patient characteristics and health care factors have changed. However, few studies have systematically and simultaneously explored the effects of these changes on pneumonia outcomes. We used nationwide longitudinal population-based data to examine which patient characteristics and health care factors were associated with changes in 30-day mortality rates for pneumonia patients. Trend analysis using multilevel techniques. General acute care hospitals throughout Taiwan. A total of 788,011 pneumonia admissions. Thirty-day mortality rates. Taiwan's National Health Insurance claims data from 1997 to 2008 were used to identify the effects of patient characteristics and health care factors on 30-day mortality rates. Male sex, older age, severe illness, more comorbidities, weekend admission, larger reimbursement cuts, and lower physician volume were associated with increased 30-day mortality rates. Moreover, patient age interacted with the time trend in mortality. Male, older, or severely ill patients with pneumonia have higher 30-day mortality rates. However, mortality gaps between elderly and young patients narrowed over time; that is, mortality declined faster among elderly patients than among young patients. Pneumonia patients admitted on weekends also have higher mortality rates than those admitted on weekdays. Mortality among pneumonia patients rises under increased financial strain from reimbursement cuts, such as those under the Balanced Budget Act in the United States or under global budgeting. Higher physician volume is associated with lower mortality rates.
 
Top-cited authors
Janet B W Williams
  • Columbia University
David W Baker
  • Northwestern University
David W Bates
  • Brigham and Women's Hospital
Dominick L Frosch
  • University of California, Los Angeles
Dean Schillinger
  • University of California, San Francisco