Questions related to Diagnostic Tests
I'm currently writing my dissertation which evaluates the accuracy of 3 serological diagnostic tests (lateral flow, ELISA, and CLIA). I am collecting sensitivity and specificity values from current research on each of these tests.
Does anyone know which statistical test is best for testing the significance of differences in the sensitivity and specificity values of the 3 diagnostic tests?
I am working on a meta-analysis of the volume measurement of pulmonary nodules by automatic software tools. I used I² to quantify heterogeneity between studies, but I was advised to do funnel plots. The studies are either phantom-based (artificial nodules) or coffee-break in vivo studies where the actual volume of the nodule does not change. This is because there is no gold standard for the in vivo volume, since after surgery the nodule is known to shrink.
To my understanding, funnel plots apply to sensitivity/specificity studies, and I am having a hard time understanding what that means in this particular case, since no nodule changes volume (i.e., there are no false negatives or true positives for growth).
How else could I check for publication bias?
Based on the topics listed in https://www.nature.com/articles/d41586-020-03564-y, the major topic categories under which research publications on COVID are grouped are:
- Modelling epidemic, controlling spread
- Public health
- Diagnostics, testing
- Mental health
- Hospital mortality
It is obvious that lateral flow testing belongs to the third category on the list, diagnostics/testing. The software interface aspect of things is not addressed. One aspect that would belong to the software side is the reporting/registration of results. Please read the latest preprint of my paper on this subject at:
What are your thoughts on the impact on our preparedness for serious pandemics if software engineers develop methods to tackle online cheating at home? Please share your thoughts, and let me know if I can quote you as a reviewer for this paper in a journal. Thanks in advance for your collaboration.
When screening for COVID-19 or any other infection using existing diagnostic tests, is it possible to detect latently infected individuals? If so, how different are the chances of detection for latently infected compared to infectious individuals, and what other factors besides viral load and test sensitivity could influence the chances of detection? Thanks!
I have two questions.
1. When running the translog SFA technique, is it necessary to carry out unit root tests?
2. What are the recommended diagnostic tests for SFA?
The meta-analysis was done for the sensitivity, specificity, LR+, LR-, and DOR of each outcome. But the reviewer asked me to calculate and include the PPV and NPV too; however, I don't think this is possible because the prevalence of the outcome (difficult laryngoscopy) is very variable (between 1.5% and 20% in the literature). What should I do?
Should I calculate them at a median prevalence, or just not calculate them at all?
I will attach a picture of one meta-analysis.
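One common way to satisfy such a request without pooling PPV/NPV directly is to report them as a function of prevalence, computed from the pooled sensitivity and specificity via Bayes' theorem. A minimal sketch (the pooled estimates below are illustrative, not from the question):

```python
# Sketch: PPV/NPV from pooled sensitivity/specificity via Bayes' theorem,
# evaluated over the plausible prevalence range.

def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) for a test with given sensitivity and specificity
    applied at a given prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Illustrative pooled estimates (NOT from the question):
sens, spec = 0.85, 0.90
for prev in (0.015, 0.05, 0.10, 0.20):   # the 1.5%-20% range mentioned
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prev={prev:.3f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

Presenting a table or curve like this over the reported prevalence range arguably answers the reviewer while being honest about the prevalence dependence.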
Do we need to run the panel regression diagnostic tests for panel stochastic frontier analysis (xtfrontier or sfpanel)?
Could someone suggest the best read for this analysis, especially for the time-varying decay model and sfpanel with inefficiency functions explicitly mentioned?
Let us suppose we have a new cheap and simple diagnostic test we want to evaluate against the expensive and complex gold standard for a highly lethal disease.
The gold standard test is dichotomous (positive or negative), but the new test returns two continuous results: let's call them "Result A" and "Result B".
Assuming the disease can be accurately diagnosed with the gold standard test, we want to
1) estimate the posterior probability of disease given the prior and the new test results A and B, i.e. P(D+|A,B)
2) define the best threshold values for both A and B
Given the high lethality, we're more interested in avoiding false negatives.
Let's suppose we have data like the ones in figure 1 (randomly generated data). Big red dots and small grey dots are patients whose gold-standard test result was positive and negative, respectively.
Which is the best model to evaluate such a test?
Logistic regression and ROC curve?
Clustering in machine learning?
For example, if I am comparing the RT-PCR Ct value with the positive or negative RDT result for COVID-19.
My research has the dependent variable that cannot have data prior to 1995 as the existence of the variable commences since that year only. So I now have 22 annual time series data. I am testing for the determinants against 6 other variables which are strongly related to the dependent variable. My test results indicate the following:
Unit root test (DF-GLS) - stationary
ARDL F-bounds test - Cointegration exists at a 1 per cent significance level
ARDL long-run cointegration - All explanatory variables are statistically significant
ECM - Cointegration across all variables shows statistical significance
Diagnostic tests - The results validate the null hypothesis assumptions of no autocorrelation (Serial Correlation LM Test - Breusch-Godfrey), the existence of homoscedasticity (Heteroscedasticity Test: Breusch-Pagan-Godfrey), normal distribution of the residuals (Normality Test: Jarque-Bera), and a correct model specification (Ramsey RESET).
Parameter stability - CUSUM and CUSUMSQ plots are within the critical lines of 5 per cent.
Can I go ahead with this econometric analysis or is there still a possibility of a spurious regression with such data? Please share your thoughts and suggestions. Are there any studies on similar lines, i.e. small-sample data for ARDL estimation, that I can refer to?
My ARDL model with no trend and no intercept passed everything: long-run cointegration, the sign and significance of the coefficients, and all diagnostic tests. What is the interpretation of "no trend and no intercept" in an ARDL model? I need comments or any paper in a similar context. Thanks.
Systematic reviews and meta-analyses of diagnostic test accuracy usually pool sensitivity and specificity estimates. However, for clinical/patient-care purposes, positive and negative predictive values (PPV and NPV) are arguably more useful.
Is including PPV and NPV as meta-analytic pooling outcomes sensible? One potential argument against this that I can think of is that these measures are influenced by disease prevalence in the studies that report them (unlike sensitivity and specificity). However, a potential counter-argument to that is some sort of stratification by pre-specified prevalence ranges can be performed.
I want to prepare a two-tier diagnostic test for misconceptions about cell division and nutrition among grade XI science students. I have done the item analysis of these concepts. I would like information about the preparation of a two-tier diagnostic misconception test, and I welcome your suggestions for my further work. If anyone has authentic information regarding the validation of such a two-tier diagnostic test, please share it with me.
Is it by comparing the values of sensitivity, specificity, positive predictive value and negative predictive value of each test?
I want to compare the performance of two diagnostic tests (binary yes or no) on the same population. First, I want to test whether there is a significant difference in their ability to predict my specific outcome/disease; second, I want to identify whether there is a significant difference in the distribution of different characteristics (both numeric and binary) between the positive diagnostic test results. Can this be done in SPSS, and if so, under which test?
I am aware of a study that has compared the use of citrated and defibrinated sheep's blood in microbiological media (
Thank you for your input.
I am conducting a diagnostic test accuracy meta-analysis.
For publication bias, I want to conduct Deeks' funnel plot asymmetry test.
I am using R with the mada and meta packages.
However, I found that there is no R package for Deeks' funnel plot test.
Deeks' test seems to be available only in Stata.
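For what it's worth, Deeks' test is simple enough to code by hand if no package offers it: it regresses the log diagnostic odds ratio on 1/√ESS, weighted by the effective sample size (ESS), with asymmetry conventionally flagged at p < 0.10. A rough sketch (the function name is ours, and the p-value uses a normal approximation, whereas Stata's implementation uses a t-test):

```python
import math
from statistics import NormalDist

def deeks_test(studies):
    """Deeks' funnel plot asymmetry test: weighted regression of the log
    diagnostic odds ratio (lnDOR) on 1/sqrt(ESS), with weights = ESS.
    studies: list of (tp, fp, fn, tn). Returns (slope, se, p)."""
    xs, ys, ws = [], [], []
    for tp, fp, fn, tn in studies:
        tp, fp, fn, tn = (v + 0.5 for v in (tp, fp, fn, tn))  # continuity correction
        n_dis, n_non = tp + fn, fp + tn
        ess = 4 * n_dis * n_non / (n_dis + n_non)   # effective sample size
        xs.append(1 / math.sqrt(ess))
        ys.append(math.log(tp * tn / (fp * fn)))    # lnDOR
        ws.append(ess)
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    sxx = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    slope = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys)) / sxx
    intercept = ybar - slope * xbar
    # weighted residual sum of squares -> SE of the slope, normal-approx p-value
    rss = sum(w * (y - intercept - slope * x) ** 2 for w, x, y in zip(ws, xs, ys))
    se = math.sqrt(rss / ((len(xs) - 2) * sxx))
    p = 2 * (1 - NormalDist().cdf(abs(slope / se)))
    return slope, se, p
```

A slope significantly different from zero suggests small-study effects; feed in the (TP, FP, FN, TN) tuples from your included studies.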
Trust you are all fine,
I have been struggling for days to estimate some spatial diagnostic tests on my model. The Stata command I am using is spatdiag.
I am working on a panel data with 759 observations. However, I am using a 33x33 matrix. Each time I run the spatdiag command after the regression model, I get this error "Matrix W is 33x33, regression has been carried out on 759 obs. To run -spatdiag- weights matrix dimension must equal N. of obs"
This problem is similar to the one in https://www.statalist.org/forums/forum/general-stata-discussion/general/1531048-how-to-run-lm-tes-to-check-of-spatial-auto-correlation-in-panel-data, but there was no response.
I would be glad to get a comment on how to go about this. I look forward to hearing from you.
In order to establish a new rapid diagnostic test, I urgently need the mentioned anti-COVID-19 antibody. Can anybody assist me in preparing it?
Thanks in anticipation.
Hi. I'm running a multivariate GARCH model. When I introduce an AR(1) term in the mean equation, all the diagnostic tests on the standardized residuals and squared residuals are perfect (Q-statistics, Hosking's multivariate portmanteau test, Li and McLeod's multivariate portmanteau test); however, the AR(1) parameter is not significant. When I remove the AR(1), Hosking's multivariate portmanteau statistics are significant (an autocorrelation problem in both the residuals and the squared residuals). It's a surprising result, especially that Hosking's and McLeod's portmanteau tests give different results!
Do I have to keep the AR(1) even when it's not significant?
Thanks in advance for any advice.
What sample size formula should I use to compare the three areas under the ROC curve (AUC) obtained from three diagnostic methods on the same sample?
MedCalc software gives us the sample size to compare the two AUC. My question is about comparing more than 2 AUC.
We wish to compare a new rapid test to gold-standard PCR for a disease with a population prevalence of 1-2%. Samples will be tested by both methods concurrently (paired analysis). We will be testing symptomatic people (not the general population) and want a sample size that will enable detection of an approximately 5-15% sensitivity difference with 80% power. If you can tell me how to do this in Stata (14) I'd be grateful.
Dear seniors and researchers,
I am currently performing ROC curve analysis in SPSS v22 to assess the diagnostic value of several laboratory tests for predicting parasite density status (hyperparasitemia or not) in patients with malaria. I found that the parameters I examined showed a high AUC while the p-value was not significant. Given this circumstance, what factors might influence the significance of the AUC in ROC curve analysis? Does the number of samples contribute to this finding?
Could you help me out how to compare the sensitivity, specificity, PPV and NPV of the same diagnostic test (vs. a gold standard) on two different samples/populations?
Presumably I should use a chi-square test, but I don't know how, as these are all derived parameters, not percentages. Preferably I would use SPSS.
Answers are much appreciated!
I am planning to conduct a diagnostic accuracy systematic review and meta-analysis. In order to calculate pooled sensitivity and specificity, I need to know the numbers of TP, TN, FP, and FN, which are rarely reported in published articles. So how can I calculate the numbers of TP, TN, FP, and FN from just the sample size, sensitivity, and specificity reported in a study?
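Assuming each study also reports (or lets you derive) the number of diseased and non-diseased participants, the 2x2 cells can be reconstructed by simple back-calculation; the total sample size alone is not enough. A sketch with a hypothetical study:

```python
# Sketch: rebuilding the 2x2 table from reported summary numbers.
# Note: this needs the counts of diseased and non-diseased participants
# (or total sample size plus prevalence), not just the total.

def rebuild_2x2(n_diseased, n_nondiseased, sensitivity, specificity):
    tp = round(sensitivity * n_diseased)    # true positives
    fn = n_diseased - tp                    # false negatives
    tn = round(specificity * n_nondiseased) # true negatives
    fp = n_nondiseased - tn                 # false positives
    return tp, fp, fn, tn

# Hypothetical study: 200 participants, 40 with the disease,
# reported sensitivity 0.90 and specificity 0.95.
print(rebuild_2x2(40, 160, 0.90, 0.95))  # -> (36, 8, 4, 152)
```

Because of rounding in the reported sensitivity/specificity, the reconstructed cells may be off by one or two; it is worth checking them against any reported confidence intervals.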
I am planning to conduct a systematic review comparing 2 diagnostic tests, being one of them the conventional reference standard to diagnose the disease. Since it has been reported that this test might have a fairly low sensitivity in some cases, it could be relevant to compare it with another test.
However, most primary studies comparing these 2 tests are not assessing the sensitivity and specificity, only the agreement between them.
Would it be possible to conduct this review with this kind of data?
I have a mixed order of integration after doing the unit root tests for all my variables, i.e. I(0) and I(1). I can't run panel Johansen cointegration or panel least squares fixed/random effects because of this; it would violate the conditions or assumptions underlying them. The suitable approach is panel ARDL using EViews 11. But I can't find any diagnostic test except for the histogram normality test. I don't know how to carry out the serial correlation LM test or heteroscedasticity tests using the panel PMG/ARDL method in EViews 11. Can I run the diagnostic tests using ordinary regression and still use ARDL? Please help.
I have a short panel of data in my analysis (286 cross-sections over 3 years). I'm wondering what tests I should conduct to determine the validity of my model. Are there any specific assumptions that I should check? I also tried running the unit root test; however, EViews won't let me run it due to the short time period.
How would you perform a heterogeneity assessment (chi-square) for diagnostic test accuracy
if my question is about the performance of a computed test and the outcomes are in percentages?
I want to assess the heterogeneity of different studies in the type of computed test and its accuracy.
Should I use chi-square? If yes, how do I perform it when the outcomes are percentages or proportions?
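One way such a chi-square heterogeneity test is often done on proportions is Cochran's Q: weight each study's proportion by the inverse of its variance and sum the squared deviations from the pooled value. A minimal sketch (the function name is an assumption, each proportion must lie strictly between 0 and 1, and in practice a logit transform of the proportions is often preferred):

```python
# Sketch: Cochran's Q heterogeneity test for study proportions,
# plus the derived I^2 statistic.

def cochran_q(events, totals):
    """Q = sum of w_i * (p_i - p_pooled)^2 with inverse-variance weights
    w_i = n_i / (p_i * (1 - p_i)). Under homogeneity, Q ~ chi-square
    with k-1 degrees of freedom. Proportions must be strictly in (0, 1)."""
    ps = [e / n for e, n in zip(events, totals)]
    ws = [n / (p * (1 - p)) for p, n in zip(ps, totals)]
    p_pooled = sum(w * p for w, p in zip(ws, ps)) / sum(ws)
    q = sum(w * (p - p_pooled) ** 2 for w, p in zip(ws, ps))
    i2 = max(0.0, (q - (len(ps) - 1)) / q) * 100 if q > 0 else 0.0
    return q, i2
```

For example, three studies reporting 80/100, 85/100, and 90/100 give Q ≈ 4.1 on 2 df with I² around 50%, i.e. moderate heterogeneity.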
I want to get the total COVID-19 gene copy number from a 50 ml patient sample. I used the Sansure kit for RT-PCR analysis. In RT-PCR analysis, we only get the Ct value, not the gene copy number of the virus. How can I get the copy number of the virus from the sample?
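The usual route is a standard curve built from a dilution series of a quantified standard (a plasmid or synthetic RNA of known concentration); digital PCR is the main alternative. Once the curve's slope and intercept are known, the conversion is a one-liner. A sketch with purely illustrative curve parameters (not from the Sansure kit):

```python
# Sketch: converting a Ct value to copy number via a standard curve.
# The slope and intercept come from your own dilution series;
# the values below are purely illustrative.

def copies_from_ct(ct, slope, intercept):
    """Standard curve: Ct = slope * log10(copies) + intercept,
    so copies = 10 ** ((ct - intercept) / slope)."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical curve: slope -3.32 (~100% PCR efficiency), intercept 38.0
print(f"{copies_from_ct(25.0, -3.32, 38.0):.0f} copies/reaction")
```

To go from copies per reaction to total copies in the 50 ml sample, multiply by the extraction elution volume over the template volume per reaction, then by the total sample volume over the volume extracted.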
I looked for the daily number of samples being tested in the publicly available databases of "Our World in Data" and the WHO. I got data for a maximum of 80 countries from "Our World in Data". However, WHO has reported COVID-19 from more than 200 countries or territories, and I need COVID-19 test data for all of them. Are there any publicly available sources? Suggestions for sources for individual countries are very welcome.
My team and I have planned to conduct a systematic review and meta-analysis aimed at studying the performance of a diagnostic test. We plan to use the QUADAS-2 tool to assess the studies eligible for our systematic review. We need your help on how to utilize this tool.
I am new to the ARDL estimation model.
I have been asked to run some diagnostic tests for the selected ARDL model.
One of these diagnostics is the adjusted R². I am using EViews 11 for my analysis, but I don't know what R² means, how to find it, whether there is a critical value for it, or what counts as a good or a bad R² for a model.
I appreciate your answers and advice.
Before I ask my question I would like to mention that I am NOT so good in econometrics and would request to please consider this, before answering, I will be very much Obliged.
I have used a panel Data for 6 years with 3 Dependent Variables and 3 Independent Variables. Hence, I have developed 3 Models as follows:
Dep. Var 1 = Alpha + B1 Ind.Var1 + B2 Ind.Var2 + B3 Ind.Var3 + error term
Dep. Var 2 = Alpha + B1 Ind.Var1 + B2 Ind.Var2 + B3 Ind.Var3 + error term
Dep. Var 3 = Alpha + B1 Ind.Var1 + B2 Ind.Var2 + B3 Ind.Var3 + error term
I have 6 years of Panel Data and applied GMM also included results of Fix. Eff and Rand. Eff Models for comparison.
My research has been challenged for using "insufficient tests" (it is not mentioned whether they are referring to data diagnostics or normality; if these terms are different, please help me understand), so I have assumed they are referring to normality testing. As far as I know, data normality is not an issue for panel data. However, just to be sure, I have run the regressions and VIF tests to check for multicollinearity; all values are well within an acceptable range (all correlation coefficients are below 0.05 and the mean VIF value is 1.26).
Since multicollinearity is not an issue, what other tests are required for panel data? I believe there shouldn't be any, since I am using a panel data set. If so, how can I justify my assumption?
P.S. I would sincerely appreciate it if you could explain the difference between a diagnostic test and a normality test. Please help me justify my data set.
I will be very much grateful for the assistance and hope to return the favor someday.
what are some ideas for developing cheaper test kits (it can be PCR-based or others) for COVID-19 diagnostics?
For those experienced in such test kits development, what would be the major challenges in terms of cost and how to overcome it?
Any tips for cutting cost would be much appreciated. Thanks!
I am running a logistic regression analysis where the outcome is a status for disease (positive, negative). The diagnostic test for the disease has a specificity of 100% and a sensitivity of 80%.
Can anyone tell me how to account for these spec. and sens. figures in my regression?
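For the simplest case, the relation between the observed test-positive probability and the true disease probability can be inverted directly (the Rogan-Gladen correction); with specificity 1.0 this reduces to dividing by the sensitivity. Note this only rescales probabilities; properly propagating the misclassification into the regression requires likelihood-based methods for outcome misclassification. A sketch:

```python
# Sketch: Rogan-Gladen correction relating the observed test-positive
# probability to the true disease probability.
# With spec = 1.00 and sens = 0.80 (the figures in the question),
# the true probability is simply the observed one divided by 0.8.

def true_prob(observed_pos_prob, sens=0.80, spec=1.00):
    """Invert P(test+) = sens*p + (1-spec)*(1-p) for the true probability p,
    clamped to [0, 1]."""
    p = (observed_pos_prob - (1 - spec)) / (sens + spec - 1)
    return min(1.0, max(0.0, p))

print(true_prob(0.40))  # observed 40% positive -> true probability 50%
```

This is useful for sanity-checking how much the imperfect sensitivity attenuates your outcome; for the regression itself, consult methods that build the misclassification into the likelihood.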
Hello, for my final-year project in physiotherapy, I was wondering whether it is possible to use the SEBT as a diagnostic test, and what the most relevant articles on this subject would be.
Usually, RT-PCR or qPCR based test assay is performed for the diagnostic tests, which seems a little bit time-consuming. On the other hand, some rapid tests based on IgG-IgM interaction have been mentioned, which probably is not a specific test for coronavirus COVID-19. That is why, finding specific biomarkers (like antibody, protein or others) could be a very useful means to produce effective testing kits especially microdevices like lateral flow immunoassay strip or electrochemical sensor (lab on a chip). Can anybody, please, give me specific information or reference regarding this issue?
To conduct a study with people with severe/profound intellectual (and multiple) disabilities, it is important to ensure that the participating persons actually meet these sample criteria. -> From an international perspective, what are the most common ways/assessments/approaches to achieve this? Based on a little literature review, there are three possibilities:
- Questioning the direct support persons about their estimation within an open interview.
- Questioning the direct support person with a (semi-)structured interview/questionnaire (e.g. Vineland Adaptive Behavior Scales -> Arthur-Kelly et al. 2017; Mechling & Bishop 2011; Lancioni et al. 2015).
- Direct testing (e.g. Bayley Scales or Kent Infant Development -> Nijs et al. 2016).
(In terms of an assessment/questionnaire, it would be great if there were an English and a German version.) I am really interested in your feedback and your experiences.
I've recently completed a receiver operating characteristic (ROC) curve analysis, where the goal is to identify the optimal cutpoint on a diagnostic test. We have two versions of the diagnostic test: the original test, and one scale in which one of the six items was given alternate wording.
For the original scale, the AUC was .88, and the cutoff of 2 looks best (.82 sensitivity, .84 specificity, .66 Youden's J).
For the alternate scale, the AUC was .90, and the optimal cutoff looks like it was 1 (.93 sensitivity, .76 specificity, .68 Youden's J).
So, we are trying to decide if it is ultimately worth changing the wording of one of six items (i.e., selecting alternative scale), which looks like it may change the cutpoint of the overall test if adopted.
So, some questions:
1. Is a 2-point increase in AUC important? Does anyone have any guidelines for what constitutes an important change in AUC?
2. Would you take an 8-point drop in specificity (.84 to .76) for an 11-point increase in sensitivity (.93 vs. .82)?
I look forward to your replies!
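As a quick check on the trade-off: Youden's J = sensitivity + specificity - 1 weights the two equally, so question 2 is really asking whether false negatives and false positives are equally costly in your application. A sketch reproducing the quoted figures:

```python
# Sketch: Youden's J for the two candidate cutoffs quoted in the question.

def youden_j(sens, spec):
    """Youden's index: sensitivity + specificity - 1 (equal weighting)."""
    return sens + spec - 1

original  = youden_j(0.82, 0.84)   # original scale, cutoff of 2
alternate = youden_j(0.93, 0.76)   # alternate scale, cutoff of 1
print(f"original J = {original:.2f}, alternate J = {alternate:.2f}")
```

By this equal-weight criterion the alternate cutoff comes out marginally ahead, but if false positives are costlier in your setting, a cost-weighted version of J would be the fairer comparison.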
I am working on panel data. It is a perfectly balanced panel, T > N, and all variables are I(1);
moreover, they are cointegrated. Now my question is: which diagnostic tests can I use in this situation?
I am trying to build a model and conduct a cost-effectiveness analysis of diagnostic tests. I have sensitivity data for different diagnostic tests such as ultrasonography, MRI, and tomosynthesis. What is the most appropriate way to calculate probabilities from sensitivity data?
The signs and symptoms of malaria are quite nonspecific - one of the few real distinguishing features is its periodic febrile paroxysms. Hence, what should be the basis for administration of pre-referral artesunate? Is just a history of periodic fever enough? Should you just use a Rapid Diagnostic Test (RDT) to confirm the diagnosis?
I am currently designing a trial to evaluate the performance of a diagnostic test by comparing it against an established standard. The outcomes are ordinal in nature: integer values ranging from 0 to 4. I understand that for binary outcomes estimating the sample size is very straightforward; however, I have not found simple explanations of how to calculate the required sample size for ordinal multi-class comparisons.
I have very little experience in statistics, so my question is very basic. I am testing a new assay for detecting B12 deficiency (active B12) compared to the test currently in use (total B12). I am analyzing the two tests on the same patients and have 357 patients in total. I am using SPSS and Excel, so what would be the most appropriate test to use?
What I heard from my neurology colleague is that migraine is usually a diagnosis of exclusion. That means that if your physician suspects you have migraine, other investigations will be done to rule out alternative diagnoses.
Why is it still so, in this time of high-tech and advanced medical diagnostic tests?
Is a skin biopsy, stored in saline, refrigerated one month at +4 °C, still expected to be suitable for a bacterial PCR-DNA analysis?
How fast does the DNA break down, and does it depend on which organism the DNA is from?
The purpose is diagnostic testing, so what is needed is a "yes/no" finding of the pathogenic bacterium Borrelia burgdorferi spp. (causing Lyme disease).
It doesn't matter whether additional bacteria grow (due to contamination or originating from the microflora of the skin).
Since (late) cutaneous Lyme disease, causing acrodermatitis chronica atrophicans, is a slow-growing infection, and Borrelia burgdorferi spp. are very slow-growing bacteria, only a small amount of bacteria can be expected in the sample, if present.
The question is whether any Borrelia burgdorferi DNA is still intact enough, or is expected to be too degraded for analysis.
I'm aware that a few millennia ago, humans have observed that ants were attracted to urine which led to the practice of tasting urine for diagnosis. Would we see ants getting a comeback and work with physicians as an effective and cheap alternative for chemical tests?
Dear research community,
After running several tests (including F-Test, Breusch and Pagan’s (1980) Lagrange Multiplier (LM) Test and Hausman (1978) Test) I came to the conclusion that a fixed effects model is the most appropriate one for my data.
To ensure that the estimates are efficient, I ran a couple of diagnostic tests.
Following a modified Wald statistic, the idiosyncratic errors seem to be heteroskedastic. However, there is no evidence of serial correlation following the test proposed by Wooldridge (2002). Here a sub-question: is it correct to run the command "xtserial" after:
egen countrynum = group(Country)
xtset countrynum Year, yearly
xtreg DV IVs, fe
The main question is whether I can make use of robust (sandwich) estimators to correct for heteroskedasticity even though there seem to be no autocorrelation problems.
Thanks for your help in advance.
We have conducted a meta-analysis of diagnostic test accuracy studies whose methods were similar to method-comparison studies (we had data on the index test and reference test, and the outcome was the limits of agreement between the two methods, used to calculate precision). Simultaneous assessment of the two methods and the setting were the two items important to us. The analysis was done with Stata 12.
For assessing the quality of studies, we modified some of the QUADAS-2 checklist questions, but reviewers said that we must also use the GRADE approach.
We have studied the GRADE guidelines. Unfortunately, we don't understand how to apply it or how to formulate specific questions in GRADE.
Any help would be greatly appreciated
The doctor called it an allergy and I got anti-allergic injections, but to no avail. I am much worried. Kindly suggest some diagnostic tests, etc.
(assume the tests are applied to the same population - if that matters)
I am looking for a method to compare the likelihood ratios of two non-binary diagnostic tests performed on one group of patients.
Specifically, I have the results of two different antibody staining results that have 4 possible scores (e.g. 0, 1+, 2+, 3+). I then compared the results to the gold standard, which is a binary result (e.g. FISH +ve, or FISH -ve). Using this information, I was able to generate a likelihood ratio and 95% CI for each score.
I found a paper on how to calculate interactions between two estimates (e.g. risk ratios or LRs) by analyzing the values on a log scale to generate a z-score [Altman and Bland (2003). Interaction revisited: the difference between two estimates]. Would this be valid on my sample? I think there was a mention that the test only works for independent measurements and should not be used on two estimates from the same patients. The two antibodies are independent tests, but does comparing their results on the same set of patients make the test invalid?
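The Altman-Bland calculation itself is straightforward once each LR's 95% CI is known, since the log-scale standard error can be recovered from the CI limits. A sketch with hypothetical inputs (not the question's data), with the caveat raised above: treating the two standard errors as independent is doubtful when both LRs come from the same patients, so the resulting p-value should be read cautiously:

```python
import math
from statistics import NormalDist

# Sketch of the Altman & Bland (2003) comparison: difference of the two
# log likelihood ratios, tested against zero with a z-statistic.

def compare_lrs(lr1, ci1, lr2, ci2):
    """lr: point estimate; ci: (lower, upper) 95% CI.
    Returns (ratio of LRs, two-sided p-value)."""
    se1 = (math.log(ci1[1]) - math.log(ci1[0])) / (2 * 1.96)  # SE on log scale
    se2 = (math.log(ci2[1]) - math.log(ci2[0])) / (2 * 1.96)
    diff = math.log(lr1) - math.log(lr2)
    z = diff / math.sqrt(se1 ** 2 + se2 ** 2)                 # assumes independence
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return math.exp(diff), p

# Hypothetical LRs with 95% CIs:
ratio, p = compare_lrs(5.0, (3.0, 8.3), 2.5, (1.6, 3.9))
print(f"ratio of LRs = {ratio:.2f}, p = {p:.3f}")
```

For paired data, a bootstrap that resamples patients (recomputing both LRs on each resample) respects the correlation and is a common workaround.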
For disease X there exists no real, modern gold standard. The disease can be diagnosed with 4 diagnostic tests (A,B,C,D), all leading to a yes or no answer (binary).
I have a data set with results of those 4 tests of 100 persons. Not every test has been run in every person. Now I want to calculate the sensitivity of test A.
In order to obtain a relative sensitivity of test A (relative to B, C, D), could I
- sum up the number of all positive test results of B, C and D in those cases in which test A was run (I believe all positives, "BP"),
- and divide this number by all positive results of test A?
Is that legit?
Furthermore, I want to calculate the sensitivity of test B in the same way (sum up the number of all positive test results of A, C and D in those cases in which test B was run, and divide this number by all positive results of test B), and likewise for C and D.
Is that a legitimate way to compare relative sensitivities in my data set? Is there any literature confirming/strengthening this procedure?
Looking forward to your answers!
Thank you very much!
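A minimal sketch of one common alternative when no gold standard exists: a composite reference standard, counted positive here when any of the other tests is positive. This is an assumption, not the procedure described in the question, and composite references carry their own bias since the component tests are themselves imperfect:

```python
# Sketch (an assumption, not the question's procedure): sensitivity of
# test A against a composite reference defined as "positive on at least
# one of B, C, D". None means the test was not run on that person.

def sensitivity_vs_composite(records, index, others):
    tp = ref_pos = 0
    for r in records:
        if r.get(index) is None:
            continue                         # index test not run
        ref = [r.get(t) for t in others if r.get(t) is not None]
        if ref and any(ref):                 # composite reference positive
            ref_pos += 1
            tp += r[index]                   # 1 if index test also positive
    return tp / ref_pos if ref_pos else float("nan")

data = [  # hypothetical records: 1 = positive, 0 = negative, None = not run
    {"A": 1, "B": 1, "C": None, "D": 0},
    {"A": 0, "B": 1, "C": 1,    "D": None},
    {"A": 1, "B": 0, "C": 1,    "D": 1},
    {"A": 0, "B": 0, "C": 0,    "D": 0},
]
print(sensitivity_vs_composite(data, "A", ("B", "C", "D")))  # 2/3
```

Latent class models are another option in this setting, since they avoid designating any single test (or combination) as the reference.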
I wish to calculate pooled sensitivity and pooled specificity for a diagnostic test as part of a systematic review. Is there a journal article, book or crystal ball that provides STEP-BY-STEP instructions on doing this? One that is easy enough for a statistical moron like myself to understand.
Hello everyone! Is there a formal way (such as the Brooks-Gelman-Rubin (BGR) statistic or the Geweke diagnostic statistic) to determine convergence of Markov chain Monte Carlo (MCMC) if one estimates an econometric model using the latest bayes command in Stata 15? Stata 15 seems to rely only on graphical methods to determine convergence of the MCMCs, but I am also interested in formal tests. However, I have not found anything yet on this in Stata 15. Any help will be appreciated. Thank you!
Good Day All,
I have a new molecular diagnostic test which focuses on detecting SNPs in 10 different genes. The DNA diagnostic test is probe-based, so it looks at relatively small sequences (e.g. 16-24 base pairs long).
I would like to assess the test's overall accuracy as well as do an in-depth analysis on a set of well-characterized clinical samples (perhaps n ~ 100-200), and I am curious about the appropriate statistical test. The test is not a true sequencing 'platform', but it is rather similar.
In my opinion, there are several levels at which I can assess accuracy:
- At a macro level, if any nucleotide is wrong the test is wrong. Run 100 tests and calculate test sensitivity/specificity and use McNemar's Test (crudest).
- At a micro level, look at each nucleotide (e.g. truth vs test) for the small sequences (therefore at a minimum 10*16=160 pairs per 'run') and then run the McNemar's on each run (so 160 pairs and 100 runs)
I am sure there are many other permutations, so I am curious as to the appropriate statistical techniques to evaluate the accuracy, sensitivity and specificity of this test.
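At the macro level described above, McNemar's statistic only uses the discordant pairs, so it is easy to compute directly from the paired truth-vs-test calls. A sketch (the counts are hypothetical):

```python
import math
from statistics import NormalDist

# Sketch: McNemar's test on paired calls. Only the discordant cells enter:
# b = reference+/test-, c = reference-/test+.

def mcnemar(b, c, corrected=True):
    """Chi-square statistic with 1 df (continuity-corrected by default)
    and a p-value from the normal approximation (chi2_1 = z^2)."""
    num = (abs(b - c) - 1) ** 2 if corrected else (b - c) ** 2
    stat = num / (b + c)
    p = 2 * (1 - NormalDist().cdf(math.sqrt(stat)))
    return stat, p

stat, p = mcnemar(12, 4)   # hypothetical discordant counts
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

One caution for the micro (per-nucleotide) level: the 160 positions within a run are not independent of each other, so pooling them into a single McNemar table overstates the effective sample size; methods that cluster by run (or a per-run summary) would be safer.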
If I have an option to choose between two diagnostic tests with the following qualities:
A: conf.level of 95% and error of +/- 3.5%
B: conf.level of 99% and error of +/-5%
Both have more or less the same sensitivity and specificity.
Which of the two should I use?
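One way to put the two quotes on a common footing is to convert each margin of error to a standard error using the z-value for its stated confidence level; sketched below (reading the quoted figures as percentage points, which is an assumption):

```python
from statistics import NormalDist

# Sketch: compare the two tests' precision by converting each quoted
# margin of error to a standard error, se = margin / z(conf.level).

def standard_error(margin, conf_level):
    z = NormalDist().inv_cdf(1 - (1 - conf_level) / 2)  # two-sided z-value
    return margin / z

se_a = standard_error(3.5, 0.95)   # test A: +/-3.5% at 95%
se_b = standard_error(5.0, 0.99)   # test B: +/-5.0% at 99%
print(f"SE(A) = {se_a:.2f}%, SE(B) = {se_b:.2f}%")
```

On this reading, test A's estimate is slightly tighter (SE ≈ 1.79 vs ≈ 1.94 percentage points), despite B's more impressive-sounding 99% confidence level.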
Cancer drug hobbled by diagnostic-test confusion
A landmark cancer drug was approved last year to target tumours with specific mutations, no matter where in the body the cancer first took root. Yet physicians are struggling to identify which patients are likely to respond to the treatment, because of limitations in diagnostic tests to pick out molecular markers that indicate susceptible tumours.
I want to perform a meta-analysis on diagnostic tests, so I will be pooling sensitivity and specificity. Some studies do not report the table with TP, TN, FP, and FN. I am wondering whether there is any method (equation) I can use to get these values, or any website/program that can do this quickly.
It is worth mentioning that in these studies the sample size, sensitivity, specificity and sometimes the CIs are reported.
I would really appreciate any help from you
Hi all, I have a few questions here:
1. If using a one-year time frame for household data, is it not considered cross-section data? Someone told me that household data are micro data, and if I use them to analyse an aggregate result, it is not cross-section.
2. I have read many articles that used household data, especially household expenditure survey data, but they did not mention diagnostic tests. What common or essential diagnostic tests should be performed, especially for OLS?
There are different types of diagnostic tests for Alzheimer's disease. As far as I know, one of them is positron emission tomography (PET) scanning. What exactly causes the sign of the disease on a PET image? What percentage of Alzheimer's disease cases can be diagnosed by this procedure? Do prescription drugs affect these signs if we take the test again?
A 30-year-old patient was diagnosed with primary hyperaldosteronism after presenting with severe hypertension and hypokalemia. On the next clinic visit, to discuss further diagnostic tests and therapy, the patient was found to be pregnant. How can this case be approached?
We are testing a new diagnostic tool and comparing it to the actual gold standard for this diagnosis.
Briefly, we examined 25 patients with the new diagnostic tool (test A) and the gold-standard diagnostic tool (test B). Test A gives a positive or a negative result (no variability or range in numbers, just "positive" or "negative" as the outcome). We then performed test B, which also gives a "positive" or "negative" result and is considered the true result, since it is the gold-standard diagnostic tool.
All patients having a positive result on test A (n=18), had a positive result on test B (n=18).
Of all patients having a negative result on test A (n=7), 5 were negative on test B but 2 were positive on test B.
Overall, 23 patients had the same outcome on test A and test B and 2 were different, which means that our new diagnostic test agrees with the gold standard in 92% of cases (23/25); its sensitivity is 90% (18/20), taking test B as the reference.
Can you recommend me any more statistics on this data, to draw conclusions? Any idea to look at this data from another perspective? Any help or insight is appreciated.
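From the counts given, all four cells of the 2x2 table are known, so the standard accuracy measures follow directly; note that 23/25 = 92% is the overall agreement, while sensitivity against test B is 18/20 = 90%. With n = 25, exact (Clopper-Pearson) confidence intervals around each estimate would be a sensible addition:

```python
# The question's 2x2 table, taking test B (gold standard) as truth:
tp, fp = 18, 0      # A+/B+ and A+/B-
fn, tn = 2, 5       # A-/B+ and A-/B-

sensitivity = tp / (tp + fn)                  # 18/20 = 0.90
specificity = tn / (tn + fp)                  # 5/5  = 1.00
ppv = tp / (tp + fp)                          # 18/18 = 1.00
npv = tn / (tn + fn)                          # 5/7  ~ 0.71
accuracy = (tp + tn) / (tp + fp + fn + tn)    # 23/25 = 0.92
print(sensitivity, specificity, ppv, npv, accuracy)
```

The low NPV (5/7) is worth highlighting alongside the sensitivity: with only 7 negative calls, a negative result on test A is considerably less trustworthy than a positive one.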
Hello all, I'm working on a Varian 4000 GC-MS. Since last week I've been getting the message "the trap filament is not being properly controlled". It seems strange because all diagnostic tests were OK, and the filament is new; it was changed last week (the previous one was broken). All other tests (air, water level, auto-tune) passed without any errors, but when it comes to acquisition, the message appears once again. If anyone can help, it would be great.
Large CT-registry-based studies have shown that unrevascularized CTO carries a worse prognosis than revascularized CTO.
On the other hand, due to the fairly evident lack of coronary flow in the CTO, stress tests, particularly those combined with imaging, typically show hypoperfusion (actually, you do not need additional tests beyond coronary angiography for that, since CTO by definition represents a lack of flow).