Jon J Snyder

University of Minnesota Duluth, Duluth, Minnesota, United States


Publications (101) · 609.86 Total Impact Points

  •
    ABSTRACT: Among patients with ESRD, cancer risk is affected by kidney dysfunction and by immunosuppression after transplant. Assessing patterns across periods of dialysis and kidney transplantation may inform cancer etiology. We evaluated 202,195 kidney transplant candidates and recipients from a linkage between the Scientific Registry of Transplant Recipients and cancer registries, and compared incidence in kidney function intervals (time with a transplant) with incidence in nonfunction intervals (waitlist or time after transplant failure), adjusting for demographic factors. Incidence of infection-related and immune-related cancer was higher during kidney function intervals than during nonfunction intervals. Incidence was most elevated for Kaposi sarcoma (hazard ratio [HR], 9.1; 95% confidence interval (95% CI), 4.7 to 18), non-Hodgkin's lymphoma (HR, 3.2; 95% CI, 2.8 to 3.7), Hodgkin's lymphoma (HR, 3.0; 95% CI, 1.7 to 5.3), lip cancer (HR, 3.4; 95% CI, 2.0 to 6.0), and nonepithelial skin cancers (HR, 3.8; 95% CI, 2.5 to 5.8). Conversely, ESRD-related cancer incidence was lower during kidney function intervals (kidney cancer: HR, 0.8; 95% CI, 0.7 to 0.8 and thyroid cancer: HR, 0.7; 95% CI, 0.6 to 0.8). With each successive interval, incidence changed in alternating directions for non-Hodgkin's lymphoma, melanoma, and lung, pancreatic, and nonepithelial skin cancers (higher during function intervals), and kidney and thyroid cancers (higher during nonfunction intervals). For many cancers, incidence remained higher than in the general population across all intervals. These data indicate strong short-term effects of kidney dysfunction and immunosuppression on cancer incidence in patients with ESRD, suggesting a need for persistent cancer screening and prevention.
    Journal of the American Society of Nephrology 11/2015; DOI:10.1681/ASN.2015040373 · 9.34 Impact Factor

  •
    ABSTRACT: Solid organ transplant recipients, who are medically immunosuppressed to prevent graft rejection, have increased melanoma risk, but risk factors and outcomes are incompletely documented. We evaluated melanoma incidence among 139,991 non-Hispanic white transplant recipients using linked U.S. transplant-cancer registry data (1987-2010). We used standardized incidence ratios (SIRs) to compare incidence to the general population, and incidence rate ratios (IRRs) from multivariable Poisson models to assess risk factors. Separately, we compared post-melanoma survival among transplant recipients (N=182) and non-recipients (N=131,358) using multivariable Cox models. Among transplant recipients, risk of invasive melanoma (N=519) was elevated (SIR=2.20, 95%CI 2.01-2.39), especially for regional stage tumors (SIR=4.11, 95%CI 3.27-5.09). Risk of localized tumors was stable over time after transplantation, but higher with azathioprine maintenance therapy (IRR=1.35, 95%CI 1.03-1.77). Risk of regional/distant stage tumors peaked within 4 years following transplantation and increased with polyclonal antibody induction therapy (IRR=1.65, 95%CI 1.02-2.67). Melanoma-specific mortality was higher among transplant recipients than non-recipients (HR 2.98, 95%CI 2.26-3.93). Melanoma exhibits increased incidence and aggressive behavior under transplant-related immunosuppression. Some localized melanomas may result from azathioprine, which acts synergistically with ultraviolet radiation, while T-cell depleting induction therapies may promote late stage tumors. Our findings support sun safety practices and skin screening for transplant recipients.
    Journal of Investigative Dermatology 08/2015; DOI:10.1038/jid.2015.312 · 7.22 Impact Factor
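The SIRs above are observed-to-expected ratios. A minimal sketch of the calculation (the expected count of 236 below is an illustrative assumption, not a figure reported in the paper):

```python
import math

def sir(observed: int, expected: float) -> tuple[float, float, float]:
    """Standardized incidence ratio O/E with an approximate 95% CI.

    Uses the common log-normal approximation exp(ln(O/E) +/- 1.96/sqrt(O)),
    which is reasonable when the observed count is not very small.
    """
    ratio = observed / expected
    half_width = 1.96 / math.sqrt(observed)
    return ratio, ratio * math.exp(-half_width), ratio * math.exp(half_width)

# 519 observed invasive melanomas against a hypothetical 236 expected
# reproduces an SIR near the reported 2.20.
estimate, lower, upper = sir(519, 236.0)
```

With these inputs the point estimate lands near 2.20 and the interval close to the paper's 2.01-2.39, illustrating how the interval narrows as observed counts grow.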

  • Transplantation 08/2015; DOI:10.1097/TP.0000000000000891 · 3.83 Impact Factor
  • N Salkowski · J J Snyder · B L Kasiske

    American Journal of Transplantation 06/2015; 15(8). DOI:10.1111/ajt.13353 · 5.68 Impact Factor
  •
    ABSTRACT: Concerns have been raised that optimized redistricting of liver allocation areas might have the unintended result of shifting livers from better-performing to poorer-performing OPOs. We used the Liver Simulated Allocation Model to simulate a 5-year period of liver sharing within either 4 or 8 optimized districts. We investigated whether each OPO's net liver import under redistricting would be correlated with two OPO performance metrics (observed to expected liver yield and liver donor conversion ratio), along with two other potential correlates (eligible deaths and incident listings above MELD 15). We found no evidence that livers would flow from better-performing OPOs to poorer-performing OPOs in either redistricting scenario. Instead, under these optimized redistricting plans, our simulations suggest that livers would flow from OPOs with more-than-expected eligible deaths toward those with fewer-than-expected eligible deaths, and that livers would flow from OPOs with fewer-than-expected incident listings to those with more-than-expected incident listings, the latter a pattern already established in the current allocation system. Redistricting liver distribution to reduce geographic inequity is expected to align liver allocation across the country with the distribution of supply and demand, rather than transferring livers from better-performing OPOs to poorer-performing OPOs.
    Liver Transplantation 05/2015; 21(8). DOI:10.1002/lt.24171 · 4.24 Impact Factor
  •
    ABSTRACT: Bending the cost curve in medical expenses is a high national priority. The relationship between cost and kidney allograft failure has not been fully investigated in the United States. Using Medicare claims from the United States Renal Data System, we determined costs for all adults with Medicare coverage who underwent kidney transplant January 1, 2007, to June 30, 2009. We compared relative cost (observed/expected payment) for year 1 after transplantation for all transplant centers, adjusting for recipient, donor, and transplant characteristics, region, and local wage index. Using program-specific reports from the Scientific Registry of Transplant Recipients, we correlated relative cost with observed/expected allograft failure between centers, excluding small centers. Among 19,603 transplants at 166 centers, mean observed cost per patient per center was $65,366 (interquartile range, $55,094-$71,624). Mean relative cost was 0.99 (±0.20); mean observed/expected allograft failure was 1.03 (±0.46). Overall, there was no correlation between relative cost and observed/expected allograft failure (r = 0.096, P = 0.22). Comparing centers with higher than expected costs and allograft failure rates (lower performing) and centers with lower than expected costs and failure rates (higher-performing) showed differences in donor and recipient characteristics. As these characteristics were accounted for in the adjusted cost and allograft failure models, they are unlikely to explain the differences between higher- and lower-performing centers. Further investigations are needed to determine specific cost-effective practices of higher- and lower-performing centers to reduce costs and incidence of allograft failure.
    Transplantation 04/2015; Online First(10). DOI:10.1097/TP.0000000000000721 · 3.83 Impact Factor
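The higher-/lower-performing split described above pairs two observed-to-expected ratios. A hedged sketch of that two-axis classification (the function name and the "mixed" label are ours; 1.0 is the natural observed/expected cut point):

```python
def classify_center(relative_cost: float, oe_graft_failure: float) -> str:
    """Classify a transplant center by cost and outcome ratios.

    relative_cost: observed/expected Medicare payment for year 1.
    oe_graft_failure: observed/expected allograft failure.
    Below 1.0 on both axes means cheaper and better than expected.
    """
    if relative_cost < 1.0 and oe_graft_failure < 1.0:
        return "higher-performing"
    if relative_cost > 1.0 and oe_graft_failure > 1.0:
        return "lower-performing"
    return "mixed"
```

Because both ratios are already risk-adjusted, a center can land in the "mixed" cell, which is why the study found essentially no correlation between the two axes.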
  •
    ABSTRACT: Background: There have been few prospective controlled studies of kidney donors. Understanding the pathophysiologic effects of kidney donation is important for judging donor safety and improving our understanding of the consequences of reduced kidney function in chronic kidney disease. Study Design: Prospective, controlled, observational cohort study. Setting & Participants: 3-year follow-up of kidney donors and paired controls suitable for donation at their donor's center. Intervention: Kidney donation. Measurements: Medical history, vital signs, glomerular filtration rate, and other measurements at 6, 12, 24, and 36 months after donation. Results: At 36 months, 182 of 203 (89.7%) original donors and 173 of 201 (86.1%) original controls continue to participate in follow-up visits. The linear slope of the glomerular filtration rate measured by plasma iohexol clearance declined 0.36±7.55mL/min per year in 194 controls, but increased 1.47±5.02mL/min per year in 198 donors (P=0.005) between 6 and 36 months. Blood pressure was not different between donors and controls at any visit, and at 36 months, all 24-hour ambulatory blood pressure parameters were similar in 126 controls and 135 donors (mean systolic blood pressure, 120.0±11.2 [SD] vs 120.7±9.7mmHg [P=0.6]; mean diastolic blood pressure, 73.4±7.0 vs 74.5±6.5mmHg [P=0.2]). Mean arterial pressure nocturnal dipping was manifest in 11.2% ± 6.6% of controls and 11.3% ± 6.1% of donors (P=0.9). Urinary protein-creatinine and albumin-creatinine ratios were not increased in donors compared with controls. From 6 to 36 months postdonation, serum parathyroid hormone, uric acid, homocysteine, and potassium levels were higher, whereas hemoglobin levels were lower, in donors compared with controls. Limitations: Possible bias resulting from an inability to select controls screened to be as healthy as donors, short follow-up duration, and dropouts. Conclusions: Kidney donors manifest several of the findings of mild chronic kidney disease. However, at 36 months after donation, kidney function continues to improve in donors, whereas controls have expected age-related declines in function.
    American Journal of Kidney Diseases 03/2015; 66(1). DOI:10.1053/j.ajkd.2015.01.019 · 5.90 Impact Factor
  •
    ABSTRACT: The current system granting liver transplant candidates with hepatocellular carcinoma (HCC) additional Model for End-Stage Liver Disease (MELD) points is controversial due to geographic disparity and uncertainty regarding optimal prioritization of candidates. The current national policy assigns a MELD exception score of 22 immediately upon listing of eligible patients with HCC. The aim of this study was to evaluate the potential effects of delays in granting these exception points on transplant rates for HCC and non-HCC patients. We used Scientific Registry of Transplant Recipients data and liver simulated allocation modeling software and modeled (1) a 3-month delay before granting a MELD exception score of 25, (2) a 6-month delay before granting a score of 28, and (3) a 9-month delay before granting a score of 29. Of all candidates waitlisted between January 1 and December 31, 2010 (n = 28,053), 2773 (9.9%) had an HCC MELD exception. For HCC candidates, transplant rates would be 108.7, 65.0, 44.2, and 33.6 per 100 person-years for the current policy and for 3-, 6-, and 9-month delays, respectively. Corresponding rates would be 30.1, 32.5, 33.9, and 34.8 for non-HCC candidates. Conclusion: A delay of 6-9 months would eliminate the geographic variability in the discrepancy between HCC and non-HCC transplant rates under current policy and may allow for more equal access to transplant for all candidates.
    Hepatology 01/2015; 61(5). DOI:10.1002/hep.27704 · 11.06 Impact Factor
  •
    ABSTRACT: While the costs to Medicare of solid organ transplant are varied and considerable, the total Medicare expenditure of $4.4 billion for solid organ transplant recipients was less than 1% of total Medicare spending, and transplant remains one of the most cost-effective surgical interventions in medicine. Heart transplant, the most expensive of the major transplants, is likely cost-effective; SRTR has released an Excel-based tool for investigators to use in exploring this question further. It is likely that most solid organ transplants are cost-effective, given the results presented here and the relatively high cost of heart transplant. However, this must be verified with further study.
    American Journal of Transplantation 01/2015; 15(S2). DOI:10.1111/ajt.13201 · 5.68 Impact Factor
  •
    ABSTRACT: Sirolimus has anti-carcinogenic properties and can be included in maintenance immunosuppressive therapy following kidney transplantation. We investigated sirolimus effects on cancer incidence among kidney recipients. The US transplant registry was linked with 15 population-based cancer registries and national pharmacy claims. Recipients contributed sirolimus-exposed time when sirolimus claims were filled, and unexposed time when other immunosuppressant claims were filled without sirolimus. Cox regression was used to estimate associations with overall and specific cancer incidence, excluding nonmelanoma skin cancers (not captured in cancer registries). We included 32 604 kidney transplants (5687 sirolimus-exposed). Overall, cancer incidence was suggestively lower during sirolimus use (hazard ratio [HR] = 0.88, 95% confidence interval [CI] = 0.70–1.11). Prostate cancer incidence was higher during sirolimus use (HR = 1.86, 95% CI = 1.15–3.02). Incidence of other cancers was similar or lower with sirolimus use, with a 26% decrease overall (HR = 0.74, 95% CI = 0.57–0.96, excluding prostate cancer). Results were similar after adjustment for demographic and clinical characteristics. This modest association does not provide strong evidence that sirolimus prevents posttransplant cancer, but it may be advantageous among kidney recipients with high cancer risk. Increased prostate cancer diagnoses may result from sirolimus effects on screen detection.
    American Journal of Transplantation 12/2014; 15(1). DOI:10.1111/ajt.12969 · 5.68 Impact Factor
  •
    ABSTRACT: Whether the liver allocation system shifts organs from better-performing OPOs to poorer-performing OPOs has been debated for many years. SRTR models of OPO performance make it possible to study this question in a data-driven manner. We investigated whether each OPO's net liver import was correlated with two performance metrics (observed to expected liver yield and liver donor conversion ratio) as well as two alternative explanations (eligible deaths and incident listings above MELD 15). We found no evidence to support the hypothesis that the allocation system transfers livers from better-performing OPOs to centers in poorer-performing OPOs. Also, having fewer eligible deaths was not associated with net import. However, having more incident listings was strongly correlated with net import, both before and after Share-35. Most importantly, the magnitude of variation in OPO performance was much lower than the variation in demand: while the poorest-performing OPOs differed from the best by less than 2-fold in observed to expected liver yield, incident listings above MELD 15 varied nearly 14-fold. Although it is imperative that all OPOs achieve the best possible results, the flow of livers is not explained by OPO performance metrics and appears instead to be strongly related to differences in demand.
    Liver Transplantation 12/2014; 21(3). DOI:10.1002/lt.24074 · 4.24 Impact Factor
  • P A Clayton · S P McDonald · J J Snyder · N Salkowski · S J Chadban
    ABSTRACT: The US kidney allocation system adopted in 2013 will allocate the best 20% of deceased donor kidneys (based on the kidney donor risk index [KDRI]) to the 20% of waitlisted patients with the highest estimated posttransplant survival (EPTS). The EPTS has not been externally validated, raising concerns as to its suitability to discriminate between kidney transplant candidates. We examined EPTS using data from the Australia and New Zealand Dialysis and Transplant (ANZDATA) Registry. We included 4983 adult kidney-only deceased donor transplants over 2000-2011. We constructed three Cox models for patient survival: (i) EPTS alone; (ii) EPTS plus donor age, hypertension and HLA-DR mismatch; and (iii) EPTS plus log(KDRI). All models demonstrated moderately good discrimination, with Harrell's C statistics of 0.67, 0.68 and 0.69, respectively. These results are virtually identical to the internal validation that demonstrated a c-statistic of 0.69. These results provide external validation of the EPTS as a moderately good tool for discriminating posttransplant survival of adult kidney-only transplant recipients.
    American Journal of Transplantation 06/2014; 14(8). DOI:10.1111/ajt.12761 · 5.68 Impact Factor
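Harrell's C statistic reported in the validation can be computed as a pairwise concordance count. A minimal O(n²) sketch for right-censored data (tied event times are ignored, and risk-score ties simply count 0.5; a production implementation would handle both more carefully):

```python
def harrells_c(times, events, risk_scores):
    """Concordance index: among comparable pairs (the subject with the
    shorter follow-up time had an event), the fraction where the higher
    risk score went with the shorter survival time."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if i == j or not (times[i] < times[j] and events[i]):
                continue
            comparable += 1
            if risk_scores[i] > risk_scores[j]:
                concordant += 1.0
            elif risk_scores[i] == risk_scores[j]:
                concordant += 0.5
    return concordant / comparable

# A perfectly ranked toy cohort gives C = 1.0; uninformative scores sit
# near 0.5, which is why 0.67-0.69 is read as "moderately good".
```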
  •
    ABSTRACT: In 2013, the Organ Procurement and Transplantation Network in the United States approved a new national deceased donor kidney allocation policy that introduces the kidney donor profile index (KDPI), which gives scores of 0%-100% based on 10 donor factors. Kidneys with lower KDPI scores are associated with better post-transplant survival. Important features of the new policy include first allocating kidneys from donors with a KDPI≤20% to candidates in the top 20th percentile of estimated post-transplant survival, adding waiting time from dialysis initiation, conferring priority points for a calculated panel-reactive antibody (CPRA)>19%, broader sharing of kidneys for candidates with a CPRA≥99%, broader sharing of kidneys from donors with a KDPI>85%, eliminating the payback system, and allocating blood type A2 and A2B kidneys to blood type B candidates. We simulated the distribution of kidneys under the new policy compared with the current allocation policy. The simulation showed increases in projected median allograft years of life with the new policy (9.07 years) compared with the current policy (8.82 years). With the new policy, candidates with a CPRA>20%, with blood type B, and aged 18-49 years were more likely to undergo transplant, but transplants declined in candidates aged 50-64 years (4.1% decline) and ≥65 years (2.7% decline). These simulations demonstrate that the new deceased donor kidney allocation policy may improve overall post-transplant survival and access for highly sensitized candidates, with minimal effects on access to transplant by race/ethnicity and declines in kidney allocation for candidates aged ≥50 years.
    Journal of the American Society of Nephrology 05/2014; 25(8). DOI:10.1681/ASN.2013070784 · 9.34 Impact Factor
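The longevity-matching rule at the heart of the policy (KDPI ≤ 20% kidneys offered first to candidates in the top 20th EPTS percentile) can be sketched as a simple reordering. This is a deliberate simplification: the real policy layers waiting time, CPRA priority points, blood type rules, and broader sharing on top, and the function name is ours:

```python
def prioritize(kdpi: float, candidates: list[tuple[str, float]]) -> list[str]:
    """Order candidates for one kidney offer under the top-20/top-20 rule.

    candidates: (candidate_id, EPTS percentile) in current match order.
    For a KDPI <= 20% kidney, EPTS <= 20% candidates are moved to the
    front; relative order is otherwise preserved.
    """
    if kdpi <= 20.0:
        top = [cid for cid, epts in candidates if epts <= 20.0]
        rest = [cid for cid, epts in candidates if epts > 20.0]
        return top + rest
    return [cid for cid, _ in candidates]
```

For a low-KDPI kidney the low-EPTS candidate jumps the queue; for a higher-KDPI kidney the match order is untouched.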
  •
    ABSTRACT: Based on recommendations from a recent consensus conference and a report commissioned by the Centers for Medicare & Medicaid Services to the Committee of Presidents of Statistical Societies, the Scientific Registry of Transplant Recipients (SRTR) plans to adopt Bayesian methods for assessing transplant program performance. Current methods for calculating program-specific reports (PSRs) often generate implausible point estimates of program performance, wide confidence intervals and underpowered conventional statistical tests. Although technically correct, these methods produce statistical summaries that are prone to misinterpretation. The Bayesian approach assumes that performance of most programs is about average and few programs perform much better or much worse than average; thus, strong evidence is required to conclude that performance is extremely good or poor. In Bayesian statistics, inference is performed via a posterior probability distribution, which reflects both the available data and prior beliefs about what model parameter values are most likely. In the PSRs, the posterior distribution of a program-specific hazard ratio will show whether a program is likely to be performing better or worse than average. Bayesian-derived PSRs will be available for preview by programs on the private SRTR website in mid-2014 and will likely replace current methods for public reporting in early 2015.
    American Journal of Transplantation 04/2014; 14(6). DOI:10.1111/ajt.12707 · 5.68 Impact Factor
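The shrinkage behavior described above falls out of a conjugate normal-normal update on the log hazard ratio. A sketch (the prior SD of 0.3 is an illustrative assumption, not the SRTR's actual prior):

```python
import math

def posterior_log_hr(obs_log_hr: float, se: float,
                     prior_mean: float = 0.0, prior_sd: float = 0.3):
    """Conjugate normal-normal update for a program's log hazard ratio.

    The prior says most programs sit near average (log HR = 0); the
    posterior shrinks noisy estimates toward it in proportion to their
    uncertainty, so strong evidence is needed to call a program extreme.
    """
    prec_prior = 1.0 / prior_sd ** 2
    prec_data = 1.0 / se ** 2
    post_var = 1.0 / (prec_prior + prec_data)
    post_mean = post_var * (prec_prior * prior_mean + prec_data * obs_log_hr)
    return post_mean, math.sqrt(post_var)

# The same observed HR of 2.0 is shrunk hard for a small, noisy program
# but retained almost fully for a large, precisely estimated one.
mean_small, _ = posterior_log_hr(math.log(2.0), se=0.8)
mean_large, _ = posterior_log_hr(math.log(2.0), se=0.1)
```

This is exactly why Bayesian PSRs avoid the implausible point estimates that small-volume programs generate under the current observed-to-expected method.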
  •
    ABSTRACT: Use of grafts from donation after circulatory death (DCD) as a strategy to increase the pool of transplantable livers has been limited due to poorer recipient outcomes compared with donation after brain death (DBD). We examined outcomes of recipients of failed DCD grafts who were selected for relisting with regard to waitlist mortality and patient and graft survival after retransplant. From the Scientific Registry of Transplant Recipients database, we identified 1820 adults who underwent first deceased donor liver transplant January 1, 2004 to June 30, 2011, and were relisted due to graft failure; 12.7% were DCD recipients. Compared with DBD recipients, DCD recipients had better waitlist survival (90-day mortality: 8%, DCD recipients; 14–21%, DBD recipients). Of 950 retransplant patients, 14.5% were prior DCD recipients. Graft survival after second liver transplant was similar for prior DCD (28% graft failure within 1 year) and DBD recipients (30%). Patient survival was slightly better for prior DCD (25% death within 1 year) than DBD recipients (28%). Despite higher overall graft failure and morbidity rates, survival of prior DCD recipients who were selected for relisting and retransplant was not worse than survival of DBD recipients.
    American Journal of Transplantation 04/2014; 14(5). DOI:10.1111/ajt.12700 · 5.68 Impact Factor
  •
    ABSTRACT: In response to recommendations from a recent consensus conference and from the Committee of Presidents of Statistical Societies, the Scientific Registry of Transplant Recipients explored the use of Bayesian hierarchical, mixed-effects models in assessing transplant program performance in the United States. Identification of underperforming centers based on 1-year patient and graft survival using a Bayesian approach was compared with current observed-to-expected methods. Fewer small-volume programs (<10 transplants per 2.5-year period) were identified as underperforming with the Bayesian method than with the current method, and more mid-volume programs (10–249 transplants per 2.5-year period) were identified. Simulation studies identified optimal Bayesian-based flagging thresholds that maximize true positives while holding false positive flagging rates to approximately 5% regardless of program volume. Compared against previous program surveillance actions from the Organ Procurement and Transplantation Network Membership and Professional Standards Committee, the Bayesian method would have reduced the number of false positive program identifications by 50% for kidney, 35% for liver, 43% for heart and 57% for lung programs, while preserving true positives for, respectively, 96%, 71%, 58% and 83% of programs identified by the current method. We conclude that Bayesian methods to identify underperformance improve identification of programs that need review while minimizing false flags.
    American Journal of Transplantation 04/2014; 14(6). DOI:10.1111/ajt.12702 · 5.68 Impact Factor
  •
    ABSTRACT: Transmission of cancer is a life-threatening complication of transplantation. Monitoring transplantation practice requires complete recording of donor cancers. The US Scientific Registry of Transplant Recipients (SRTR) captures cancers in deceased donors (beginning in 1994) and living donors (2004). We linked the SRTR (52 599 donors, 110 762 transplants) with state cancer registries. Cancer registries identified cancers in 519 donors: 373 deceased donors (0.9%) and 146 living donors (1.2%). Among deceased donors, 50.7% of cancers were brain tumors. Among living donors, 54.0% were diagnosed after donation; most were cancers common in the general population (e.g. breast, prostate). There were 1063 deceased donors with cancer diagnosed in the SRTR or cancer registry, and the SRTR lacked a cancer diagnosis for 107 (10.1%) of these. There were 103 living donors with cancer before or at donation, diagnosed in the SRTR or cancer registry, and the SRTR did not have a cancer diagnosis for 43 (41.7%) of these. The SRTR does not record cancers after donation in living donors and so missed 81 cancers documented in cancer registries. In conclusion, donor cancers are uncommon, but lack of documentation of some cases highlights a need for improved ascertainment and reporting by organ procurement organizations and transplant programs.
    American Journal of Transplantation 03/2014; 14(6). DOI:10.1111/ajt.12683 · 5.68 Impact Factor
  •
    ABSTRACT: Although metformin is contraindicated in patients with increased serum creatinine levels (≥1.5 mg/dl in men, ≥1.4 mg/dl in women) in the United States, its use has not been systematically examined in kidney transplant recipients. We aimed to determine the frequency of metformin use and its associations among kidney transplant recipients, and to assess allograft and patient survival associated with metformin use. In this retrospective cohort study, we linked Scientific Registry of Transplant Recipients data for all incident kidney transplants 2001-2012 and national pharmacy claims (n = 46,914). We compared recipients having one or more pharmacy claims for a metformin-containing product (n = 4,609) and recipients having one or more claims for a non-metformin glucose-lowering agent (n = 42,305). On average, metformin claims were filled later after transplant and were associated with higher estimated glomerular filtration rates before the first claim. Median serum creatinine (mg/dl) levels before the first claim were lower in recipients with metformin claims than in those with non-metformin claims (1.3 [interquartile range 1.0-1.7] vs. 1.6 [1.2-2.5], respectively; p < 0.0001). Metformin was associated with lower adjusted hazards for living donor (0.55, 95% confidence interval 0.38-0.80; p = 0.002) and deceased donor (0.55, 0.44-0.70; p < 0.0001) allograft survival at 3 years posttransplant, and with lower mortality. Despite metformin being contraindicated in renal dysfunction, many kidney transplant recipients receive it, and it is not associated with worse patient or allograft survival.
    American Journal of Nephrology 02/2014; 40(6):546-53. DOI:10.1159/000370034 · 2.67 Impact Factor
  •
    ABSTRACT: There is a shortage of kidneys for transplant, and many patients on the deceased donor kidney transplant waiting list would likely benefit from kidneys that are currently being discarded. In the United States, the most common reason given for discarding kidneys retrieved for transplant is procurement biopsy results. This study aimed to compare biopsy results from discarded kidneys with discard attributed to biopsy findings, with biopsy results from comparable kidneys that were successfully transplanted. In this retrospective, observational, case-control study, biopsy reports were examined from 83 kidneys discarded in 2010 due to biopsy findings (cases), 83 contralateral transplanted kidneys from the same donor (contralateral controls), and 83 deceased donors randomly matched to cases by donor risk profile (randomly matched controls). A second procurement biopsy was obtained in 64 of 332 kidneys (19.3%). The quality of biopsy reports was low, with amounts of tubular atrophy, interstitial inflammation, arteriolar hyalinosis, and acute tubular necrosis often not indicated; 69% were wedge biopsies and 94% used frozen tissue. The correlation between first and second procurement biopsies was poor; only 25% of the variability (R²) in glomerulosclerosis was explained by biopsies being from the same kidney. The percentages of glomerulosclerosis overlapped substantially between cases, contralateral controls, and randomly matched controls: 17.1%±15.3%, 9.0%±6.6%, and 5.0%±5.9%, respectively. Of all biopsy findings, only glomerulosclerosis>20% was independently correlated with discard (cases versus contralateral controls; odds ratio, 15.09; 95% confidence interval, 2.47 to 92.41; P=0.003), suggesting that only this biopsy result was used in acceptance decisions. One-year graft survival was 79.5% and 90.7% in contralateral and randomly matched controls, respectively, versus 91.6% among all deceased donor transplants in the Scientific Registry of Transplant Recipients. Routine use of biopsies could lead to unnecessary kidney discards.
    Clinical Journal of the American Society of Nephrology 02/2014; 9(3). DOI:10.2215/CJN.07610713 · 4.61 Impact Factor
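The 25% figure above is a squared Pearson correlation between paired first and second biopsy readings. A self-contained sketch of that statistic:

```python
def r_squared(xs, ys):
    """Squared Pearson correlation: the share of variability in one
    biopsy reading explained by a linear fit on its paired reading."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Perfectly linear paired readings give R² = 1.0; the study's paired
# glomerulosclerosis readings reached only about 0.25.
```

An R² of 0.25 means three quarters of the variation in a repeat reading is unexplained by the first, which is the basis for questioning discard decisions driven by a single biopsy.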

Publication Stats

6k Citations
609.86 Total Impact Points


  • 2008-2015
    • University of Minnesota Duluth
      • Medical School
      Duluth, Minnesota, United States
  • 2003-2015
    • Minneapolis Medical Research Foundation
      Minneapolis, Minnesota, United States
    • University of Pittsburgh
      Pittsburgh, Pennsylvania, United States
  • 2014
    • United Network for Organ Sharing
      Richmond, Virginia, United States
    • Washington University in St. Louis
      St. Louis, Missouri, United States
    • Yale-New Haven Hospital
      New Haven, Connecticut, United States
  • 2002-2014
    • Hennepin County Medical Center
      Minneapolis, Minnesota, United States
  • 2012
    • National Institutes of Health
      • National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)
      Bethesda, Maryland, United States