Questions related to Clinical Epidemiology
We are developing a phase I randomized clinical trial in 18 healthy volunteers, aimed at testing the safety and pharmacokinetics of an i.v. drug. However, we want to test two different doses of the drug (doses A and B), and each dose is to be administered at a specific infusion rate: dose A at X ml/min, and dose B at Y ml/min.
We need to randomize the 18 volunteers with a 2:1 ratio (active drug vs placebo), in blocks of size 6. However, to maintain the blind, we would also need two different infusion rates for the placebo (X and Y).
What do you think is the best way to randomize the volunteers in this study?
One way could be to randomize the volunteers in a 2 x 2 factorial design: one axis to assign drug vs placebo, and the other axis to assign the dose with its infusion rate, maintaining a 2:1 ratio on the first axis and a 1:1 ratio on the second, in blocks of size 6. A second way could be to randomize "three treatments" (dose A with infusion rate X, dose B with infusion rate Y, and placebo) in a 1:1:1 ratio, in blocks of size 6, and then to randomize the patients assigned to placebo, in blocks of size two (or without blocks), to infusion rate X or Y.
What do you think is the best way to randomize in methodological terms? In the case of the first approach, do we need to test the interaction between dose and infusion rate? Do you have another idea for randomizing the patients in this study?
Thank you so much for your suggestions and help.
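The second approach described above can be sketched in code. Below is a minimal, hypothetical illustration (Python; the arm labels and seeds are illustrative, not a validated randomization system) of permuted-block 1:1:1 randomization in blocks of six, followed by a second randomization of the placebo recipients to infusion rate X or Y:

```python
import random

def blocked_randomization(n, arms, block_size, seed=None):
    """Permuted-block randomization: each block contains every arm
    an equal number of times, in shuffled order."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n:
        block = arms * per_arm   # one balanced block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

# Step 1: 18 volunteers, three "treatments", blocks of 6 -> 1:1:1 ratio
arms = ["A+rateX", "B+rateY", "placebo"]
alloc = blocked_randomization(18, arms, block_size=6, seed=1)

# Step 2: randomize the placebo recipients to infusion rate X or Y
placebo_rates = blocked_randomization(alloc.count("placebo"),
                                      ["rateX", "rateY"], block_size=2, seed=2)
```

Note that this yields the required 2:1 active:placebo ratio overall (12:6), with every block of 6 containing two volunteers per treatment.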
I am analysing the relation between productivity and quality in hospitals, using performance indicators. The number of hospitals is not large, below 45 per year. Is it possible to broaden the research over multiple years, using the same hospitals more than once? Certainly, that will violate the assumption of independent observations. However, I am sure that there was no (systematic or planned) intervention intended to change hospital performance.
What do you think about my approach?
As you know, due to the high cost and time demands of laboratory assays such as PCR for suspected cases of COVID-19, this type of investigation is not possible for all referrals to medical centers with COVID-like symptoms. In middle- and low-income countries this issue is even more critical. As a result, the infection may be confirmed by practitioners based on the manifested signs and symptoms, without requesting a laboratory assay, i.e., clinical confirmation of COVID-19.
Accordingly, my question is: if we have COVID-19 data in both forms, "PCR-confirmed and clinically confirmed patients", in our dataset:
1. Should we ignore or analyze those who were not confirmed by PCR?
2. Can doing so bias the reported results?
3. How should these two types be reported together in a paper? Merged or separately?
The general public must be made aware of the mode of transmission, presenting symptoms and the measures that can be undertaken to prevent the spread of infection.
A few options: media, webinars...
I have a study that found an association between exposure to tricyclic antidepressants and the risk of preeclampsia. The number of women who were exposed and had the outcome (i.e., preeclampsia) was small: 210 exposed women, 10 of whom developed late-onset preeclampsia. A generalized linear model with binomial distribution and log link was used to calculate the relative risk (SPSS software). The reviewer asked us to "report the model goodness of fit criteria (to ensure correct specification of the model)".
How should I reply to him? Our study is an exploratory study that suggested an association; we are not building any model or predicting anything. Besides, the number of exposed cases was too small to predict anything. Thank you so much.
I want to analyze the inattentiveness level in ADHD patients and compare the current score with the score from 6 months ago. I do not have any inattentiveness score from 6 months ago. If I ask the patient, or a close relative of the patient, to fill out a questionnaire about the inattentiveness level of 6 months ago (stating their behaviour 6 months earlier), will the results be reliable? Will the comparison be free of bias?
Infection with coronavirus SARS-CoV-2 has affected every aspect of our life including scientific research. It is evident that all aspects of human subject research are potentially affected by the situation since recruitment and inclusion of participants/patients may be disturbed, characteristics of participants in ongoing studies may have changed and overall study protocols may have been flawed.
I would like the scientific community to reflect on these aspects including (but not limited to) the following:
How has the COVID-19 pandemic affected your research?
Has there been a focus shift in general interest, financial or otherwise?
Would it be necessary for studies to report in publications if and how the study was affected by the situation?
Which aspects need to be considered in the different fields of human subject research including (but not limited to) medicine, biology, psychology, and sports sciences?
Which statistical aspects need to be considered and how can we solve potential problems?
How were researchers themselves affected and do we see impact on other sciences?
Hello dear Researchers and Professors,
I am asking myself, and you, about the effects of COVID-19 on publication, the reviewing process, and the reprioritization of certain topics; e.g., with high probability, the top priority will be all research treating this virus and related subjects.
How do you see this new shift?
I performed a multivariate logistic regression to estimate the role of some baseline variables (e.g. age, sex, etiology etc.) on a long-term outcome (good, bad). I only have one patient group and no patient underwent any treatment.
Can I calculate absolute risk reduction (ARR) or number needed to treat (NNT) in this case?
How is coronavirus transmitted?
How dangerous is coronavirus?
What are the symptoms of the coronavirus?
How is coronavirus diagnosed?
How long does the coronavirus live?
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with developmental defects "enamel defects". The authors report this as a "cross-sectional study".
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with developmental defects or dental anomalies. The authors report this as a "retrospective study" (case-control?)
In this article, we have 2 groups, patients with clefts and patients without clefts. Our outcome is teeth with dental anomalies or developmental defects. The authors report this as a "cohort study"
My question is why is the study design different, if in all cases we recruit patients and examine clinically / with radiography to assess the outcome.
Can someone help?
I would like to run a mediation analysis in STATA with a survival model using a category independent variable (3 levels), a continuous mediator (or) a category mediator (3 levels), and a binary dependent variable (Yes/No).
Which command should I use in STATA?
What data are needed, and what are the exclusion criteria, for studies of the prevalence of infectious diseases or the incidence of viral infection in epidemiological research?
How should the study be designed, and what are your suggestions for producing a good questionnaire?
Could falls be the most common etiology of mandibular fractures, especially in countries with rapid architectural urbanization? Are there published articles or studies that support or report this result?
In terms of food interaction and confounding, are associations with health outcomes likely to be direct, or could they be explained by other factors such as decreased consumption of other foods? How would you design a study to address this question?
I'm comparing the effect of different therapeutic methods on the one year survival rate of different groups of patients. I calculated Hazard ratio (HR) using this formula:
HR = ln(proportion of patients event-free on research arm) / ln(proportion of patients event-free on control arm)
Also, I calculated the standard error (SE) of the HR in different studies based on the availability of data such as the confidence interval (CI), p-value, and number of patients at risk.
Unfortunately, in some studies, none of this information is available. So I wonder if you could help me with an alternate method to calculate SE using HR.
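When none of the usual statistics are reported, one approximation used in the meta-analysis methods literature (e.g., Tierney et al., Trials 2007) is that the variance of ln(HR) is roughly 1/O1 + 1/O2, where O1 and O2 are the observed event counts in each arm. A minimal sketch (Python; the proportions and event counts below are hypothetical, and this is an approximation, not a substitute for reported SEs):

```python
import math

def hr_from_proportions(p_free_research, p_free_control):
    # The formula from the question: HR = ln(S_research) / ln(S_control)
    return math.log(p_free_research) / math.log(p_free_control)

def se_log_hr_from_events(events_research, events_control):
    # Rough approximation: var(ln HR) ~ 1/O1 + 1/O2
    return math.sqrt(1.0 / events_research + 1.0 / events_control)

hr = hr_from_proportions(0.80, 0.60)   # hypothetical 1-year event-free proportions
se = se_log_hr_from_events(20, 40)     # hypothetical event counts per arm
ci_low = math.exp(math.log(hr) - 1.96 * se)
ci_high = math.exp(math.log(hr) + 1.96 * se)
```

If even event counts are unavailable, the study probably cannot contribute to the pooled estimate and should be described narratively.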
Our paper published in the Journal of Clinical Epidemiology clearly outlines why doing this is not just inaccurate but actually wrong, because it introduces bias. This is regularly done and advocated by Cochrane. I think this erroneous practice should now stop - see the references below:
Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928.
 Stone J, Gurunathan U, Glass K, Munn Z, Tugwell P, Doi SAR. Stratification by quality induced selection bias in a meta-analysis of clinical trials. J Clin Epidemiol. 2018 Nov 17. pii: S0895-4356(18)30744-3.
Is meta-epidemiology a part of meta-research?
I understand the following facts:
"Meta-epidemiology is not the same as meta-analysis.
Meta-research is not the same as meta-analysis."
It's true that a specific maximum number cannot be given, but I would like to know the possible implications of increasing the number of confounding variables. What factors would limit the number of confounders to be included?
I have a longitudinal retrospective dataset of human medical records. They feature CONDITION and DRUG. There is no way of saying why a drug was prescribed other than observing the conditions/diseases present at the time.
I would like to know whether taking drug X has an effect on a particular disease. The outcome will be the duration between repeated visits to the doctor. I have used a recurrent-event Cox regression to assess whether a particular drug (as a covariate) is associated with a change in risk of the disease outcome.
The predictor/independent variable could be the time to a particular recurring disease record (remember, this is recurrent, a little like migraine, so the patient sees the doctor often), and the dependent/outcome variable would be some measure of the disease outcome. If I take, e.g., 1,250,000 patient records and align them so that the index date is defined by the particular drug of interest, I should be able to get a before-after effect.
I would appreciate any links, papers, tutorials, on an approach similar to what I am trying to do.
A recent “Controversy & Debate” series in the Journal of Clinical Epidemiology suggests that the results and conclusions of nutrition epidemiologic research are both "pseudo-scientific" and “meaningless” (links below). This conclusion was based on the fact that FFQs and other memory-based dietary assessment methods (M-BMs) produce data that are “physiologically implausible” and have non-quantifiable (i.e., non-falsifiable) measurement error.
For example, there are myriad factors that render it impossible to ascertain if reported foods and beverages match the respondent’s actual consumption. These include reactivity, lying, false memories, forgetting, mis-estimation, pseudo-quantification, and invalid nutrient databases. Additionally, the use of M-BMs is based on multiple logical fallacies.
Thus, how can nutrition epidemiologic data be valid?
I am after some suggestions on what statistical analysis I can perform to show a before-and-after effect in a longitudinal electronic healthcare record (EHR). I have N EHRs of varying sizes/time spans. Each record has a history of recurrent disease records (for the one disease). To see whether a particular drug has had an effect on the disease outcome (duration before the next relapse), I have used gap-time recurrent-event Cox regression.
However, I would now like to see whether the disease outcome (a series of remissions and relapses; good = long durations in between, bad = short durations in between) changes immediately from the first prescription of a particular drug. In my head I imagine taking all of the records (of varying time spans -- very important to remember) and aligning them so that every record overlaps at the point where the drug of interest is first prescribed. The y-axis is disease prevalence or risk, and the x-axis is time. Before the initial prescription event, disease prevalence/risk should be high; after crossing the initial prescription time, disease prevalence/risk should drop. This would help demonstrate the efficacy of the drug.
Some points to remember: 1) each medical record may be unique in timespan; 2) the first prescription event of a particular drug will happen at different times across the record set; 3) some records may have no medical events before the drug was prescribed (as all the diseases of interest fell after the drug prescription of interest); 4) the number of medical events either before or after the first prescription of the drug may be sparsely populated (making binning by time very difficult) or richly populated.
Is there a name for this kind of analysis? I am using R. Any suggestions are very welcome.
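The alignment step can be sketched directly. A minimal illustration (Python; the record layout and numbers are hypothetical) that shifts each record so t = 0 is the first prescription, then compares event rates before vs. after, dividing by the person-time each record contributes on each side. Normalizing by person-time is what handles the varying record lengths and the records with no pre-prescription history:

```python
def align_and_rate(records):
    """records: list of dicts with 'start', 'end', 'index_date' (first
    prescription) and 'events' (list of event dates), all in days.
    Returns (rate_before, rate_after) as events per person-day at risk."""
    ev_before = ev_after = 0
    pt_before = pt_after = 0.0
    for r in records:
        shift = r["index_date"]                  # t = 0 at first prescription
        pt_before += max(0, shift - r["start"])  # observed days before index
        pt_after += max(0, r["end"] - shift)     # observed days after index
        for e in r["events"]:
            if e < shift:
                ev_before += 1
            else:
                ev_after += 1
    return ev_before / pt_before, ev_after / pt_after

recs = [
    {"start": 0, "end": 1000, "index_date": 400, "events": [100, 300, 700]},
    {"start": 200, "end": 800, "index_date": 500, "events": [250, 450, 600, 750]},
]
rate_before, rate_after = align_and_rate(recs)
```

In the epidemiology literature this kind of design is usually discussed under "self-controlled" methods (e.g., pre-post event-rate comparisons around an index date), which may be a useful search term.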
My exposure is a continuous variable that has been measured at 9 follow-up examinations in each participant.
My outcome is also a continuous variable that has been measured only once: at the end of the study.
I would like to test whether changes in my exposure over time are related to the outcome.
What are the available statistical tests to evaluate such relationship?
I have used a mixed model previously to test the association between an exposure at time x and change in outcome over time. In that case, the outcome was measured multiple times while the exposure was measured only at the beginning of the study. I wonder if a mixed model would work for modeling changes in exposure over time as well? Or are there better statistical tests to answer my question?
I have 1600 drugs to treat a condition. I firstly test them each by performing a drug screen. I divide the 1600 drugs into successes and failures.
100 drugs were successful
1500 drugs were failures (had no effect).
I then have an in silico predicting model, to which I apply all 1600 drugs. Again, like the experimental drug screens, the predicting model will yield successful drugs and failure drugs.
X drugs were successful
1600-X drugs were failures
I want to know what kind of a statistical test I can perform that will return some measure of significance between the two methods used. So I can say whether both reproduce very similar success/failure outcomes (sets of drugs)?
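Because both methods classify the same 1600 drugs, the natural summary is a paired 2x2 table, and McNemar's test asks whether the two methods disagree more in one direction than the other. A minimal sketch (Python; the agreement counts below are hypothetical, since X is not given in the question):

```python
import math

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on the discordant counts:
    b = screen-success but model-failure, c = screen-failure but
    model-success. Uses the binomial(b+c, 0.5) distribution."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(math.comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, p)

# Hypothetical paired 2x2 table for the 1600 drugs:
#                  model success   model failure
# screen success        70               30       (100 screen successes)
# screen failure        25             1475       (1500 screen failures)
p = mcnemar_exact(30, 25)
sensitivity = 70 / 100       # fraction of screen successes the model recovers
specificity = 1475 / 1500    # fraction of screen failures the model recovers
```

Sensitivity/specificity (or Cohen's kappa) describe how similar the two sets of calls are; McNemar's p-value tests whether the disagreements are systematically one-sided.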
I'm interested in getting in contact with researchers in Montana who have an interest in epidemiological research in general, or in pulmonary/cardiovascular research with an epidemiological approach. I have tried online searches/googling, but with no success that way, so why not try here?
I have a medical longitudinal retrospective dataset, with records spanning the observation period from 2000 to the end of 2016. For many reasons, not every medical record spans that entire time frame; e.g., the patient may have died, or they may have transferred into the study half way through or transferred out at some stage.
A particular event (or exposure) is seen as a clinical event e.g., going to the doctor and saying or being told that you have a particular disease, e.g., a chest infection. That patient will also have a categorical variable to indicate whether they are a smoker or not.
I wish to count the frequency of chest infections per patient and distribute them over whether they smoke or not. I can imagine this would be a box plot with UQ and LQ being defined, frequency of disease on the Y, and a Smoke YES and NO on the X. This would be very easy to do. The problem I have though is that I am not sure how I deal with medical records of varying length. Surely there is bias if a smoker vs. non-smoker both have twenty chest infections, but there is a four year medical record difference?
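The standard fix for records of varying length is to analyze rates per unit person-time rather than raw counts (and, if you move to regression, a Poisson or negative binomial model with log person-time as an offset). A minimal sketch (Python; the patient tuples are hypothetical):

```python
def infections_per_year(n_infections, record_days):
    """Normalize an infection count by the record's observed time,
    so records of different lengths are comparable."""
    return n_infections / (record_days / 365.25)

# hypothetical patients: (infection count, record length in days, smoker?)
patients = [(20, 3652, True), (20, 5113, False), (5, 1826, True)]
rates = [(infections_per_year(k, d), smoker) for k, d, smoker in patients]
```

On this scale, the smoker with 20 infections over ~10 years (2.0/year) no longer looks identical to the non-smoker with 20 infections over ~14 years (~1.4/year), which is exactly the bias described in the question. The box plot would then show rates per person-year on the y-axis instead of raw counts.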
I understand how survival methods can be used to determine the probability of survival given a dataset of 'time-to-events' with almost all examples considering cases of Alive/Dead e.g., cancer. However, how can I factor in cases of multiple remission and relapse events per person in a disease that will not take a life?
For example. Remission is defined by the absence for more than 90 days from medication or disease in a patient record. A relapse is returning to a similar medical/drug state any time after a single day beyond the 90-day disease/remit cut-off. To be considered as having ongoing treatment, there will be a continuous record of either drug prescription or disease code for less than 90 days at intervals (more than 90 days and we assume that the patient is in remission). e.g., visiting a doctor (or repeat prescription using the NHS model) at least once every three months.
Using these definitions, I can take an individual's medical history and table the number of days until time-of-event of diseased and remission. A good drug will mean remission was longer than diseased, or at least diseased is kept as short as possible even if there is then only a very short remission time. For example, Bob gets disease X at t=0 (and a drug prescription) and I start counting the number of days until there has been a 90-day absence of either, at which point I start counting the number of days as remission until the same drug or same disease appears and then I start counting again but for a diseased state.
patid days event
1 200 D (diseased)
1 450 R (remit)
1 340 D
1 500 R
2 ... D
2 ... R
I am using R and providing this data to the Cox regression function as though patid 1 (the first patient) were actually 4 people! Similar to how 4 people would be alive/dead in a cancer model.
I have coded all the logic to break down a group of individuals' records into stages of diseased or remission. However, is it correct in a Cox model (in R) to provide this information as is?
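One caution: treating one patient's four episodes as four independent people gives standard errors that ignore within-patient correlation. In R's survival package, recurrent-event data are usually supplied as one row per episode with a cluster term for the patient, e.g. `coxph(Surv(tstart, tstop, status) ~ covariates + cluster(patid))` for an Andersen-Gill counting-process layout, or gap times with `cluster(patid)` for a gap-time model, so that robust standard errors account for the clustering. A sketch of the restructuring step (Python, purely illustrative of the data layout):

```python
def episodes_to_counting_process(patid, episodes):
    """episodes: list of (duration_days, state) in chronological order.
    Returns rows (patid, tstart, tstop, episode_number, state) in
    counting-process format, with time cumulative from the patient's
    first event."""
    rows, t = [], 0
    for i, (days, state) in enumerate(episodes, start=1):
        rows.append((patid, t, t + days, i, state))
        t += days
    return rows

# patid 1 from the table above: D 200, R 450, D 340, R 500
rows = episodes_to_counting_process(1, [(200, "D"), (450, "R"),
                                        (340, "D"), (500, "R")])
```

The point estimates from the "4 separate people" approach may be similar, but the clustered layout is what makes the variance (and hence p-values) defensible.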
I have developed a bioinformatics tool (link: geltowgs.uofk.edu) that compares PFGE gel image analysis results to mathematical models of band sizes derived from WGS (FASTA files). I have suggested a new algorithm to count DNA fragments that co-migrate across each lane. We are about to publish our work (ResearchGate DOI: 10.13140/RG.2.2.32752.76806). The attached file illustrates our method.
I am currently trying to determine the feasibility of a three-arm parallel-group RCT with a pre-/post-test design on VAS pain intensity in osteoarthritis patients: two intervention arms with different dosages of cannabis and an active control of codeine, with analysis via ANOVA. However, there are currently no studies utilizing smoked/vaporized cannabis for the treatment of osteoarthritis or nociceptive pain, so I don't know how I should calculate a Cohen's d for the determination of a sample size. Any input or suggestions on finding an appropriate sample size, or on the calculations, would be greatly appreciated.
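Without prior cannabis-osteoarthritis data, the usual options are to assume a standardized effect size or to borrow d from related analgesia trials. For a pairwise comparison of means, the normal-approximation formula is n per group = 2((z(1-alpha/2) + z(1-beta)) / d)^2; for the three-arm ANOVA you would instead work with Cohen's f and adjust alpha for multiple comparisons (tools like G*Power do this directly). A minimal sketch (Python; d = 0.5 is an assumption for illustration, not a recommendation):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means with
    standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = z.inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / d) ** 2)

n = n_per_group(0.5)   # assumed "medium" effect, absent pilot data
```

A small pilot or feasibility study to estimate the VAS standard deviation is often the more defensible route when no published effect sizes exist.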
I would like to belong to this project. Which is the first step?
Diana Rodríguez Hurtado M.D FACP
Full Professor Faculty of Medicine Universidad Peruana Cayetano Heredia.
Internal Medicine - Geriatrics
Master in Clinical Epidemiology.
Mobile: 51 999395806
Just for an undergrad research proposal: if you are doing an RCT looking at whether an intervention (exercise) is effective at preventing a disease (hernia), is there any merit to continuing to follow up subjects once they have acquired the disease, if your outcome of interest is only incidence? It consumes more resources for follow-ups if it is not necessarily going to affect the data analysis.
I am new to the latent class analysis technique and I'd like to ask whether it would be possible to first run a latent class analysis on a sample of patients, followed by an agreement analysis between the LCA classification outcomes and the classification outcomes generated by already established (a priori) criteria?
In other words, could a latent class analysis be used to verify the accuracy of established classification criteria for disease severity for example?
Another question, could we use the outcomes of latent class analysis to generate reproducible classification criteria that could be used in future studies?
I would like to compare biochemical and cytological values such as cell count, glucose levels, etc. of test samples between two different tests, PCR and culture, for identification of Streptococcus pneumoniae. I would like to check whether the values differed significantly among the samples positive by each test alone, and in combination. What statistical test(s) should I use for this? Can Fisher's exact test with two-tailed p < 0.05 be used? Kindly help me.
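One point worth separating: for the continuous values themselves (cell count, glucose), the usual choices are a t-test/ANOVA or Mann-Whitney/Kruskal-Wallis, depending on distribution; Fisher's exact test applies to categorical comparisons such as positivity rates in a 2x2 table (e.g., positive by PCR alone vs. by culture alone). For reference, the two-sided Fisher exact test can be computed directly from the hypergeometric distribution. A minimal sketch (Python):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of all tables (with the same
    margins) that are as likely or less likely than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2

    def p_table(k):                      # P(top-left cell = k)
        return comb(r1, k) * comb(r2, c1 - k) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(3, 1, 1, 3)   # Fisher's classic small-table layout
```

Fisher's exact test is particularly appropriate here because per-category sample counts (PCR-only, culture-only, both) are likely to be small.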
I want to have a look at how the area-level effects of disease prevalence vary between ethnic groups.
Does it make sense to configure a random slope model to look at ethnic group (e.g. white or non-white)?
I did come across a similar question here: http://thread.gmane.org/gmane.comp.lang.r.lme4.devel/10096 The author suggests that in using a random slope on a dichotomous variable, it might aid interpretation to not use a random intercept: In my mind, in the context of my question, this would have the result of being able to see how the effect of ethnicity would affect disease prevalence, but no longer being able to see differences in disease prevalence between groups.
I am currently working on a retrospective analysis of a patients database, which includes demographic variables, and other regarding healthcare (especially intra-operative and post-operative outcomes). However, it was not properly designed (6 years ago), therefore variables are not operationalized and some data is missing. Any recommendations to fix this old database and deal with the missing data?
I am gathering information and opinions from the community about the most appropriate mouse strain for studying the development of wound infection. I am aware that which bacterium is studied is highly important. However, I am interested in information/opinions with respect to the physiological responses to wounding and infection. What are your experiences and the pros/cons of particular strains (e.g., C57BL/6 vs. BALB/c)?
Looking forward to good discussions.
I'm planning to do a case-control study of type 2 diabetes and lifestyle. It will be a population-based study. Do you have any suggestions?
Proof of concept studies are usually small sample. I was wondering if there is any guidance regarding what power you should be aiming for.
About The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement:
What is the possible score range of the STROBE checklist? Is there any cutoff score to define good or bad reporting?
McHugh (2012) highlights some guidelines for cohen's kappa (quoted below), but I wanted to pose a question to others. What level of kappa do you usually see in published works? Or if you were a reviewer, what level of kappa would be your cut-off. I have some graduate students coding some data and the kappa statistics are ranging from .63 to .80. If you were reviewing, what would you think? Your thoughts and feedback on this would be appreciated. Thanks!
"Cohen suggested the Kappa result be interpreted as follows: values ≤ 0 as indicating no agreement and 0.01–0.20 as none to slight, 0.21–0.40 as fair, 0.41– 0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1.00 as almost perfect agreement."
I am performing a bibliometric analysis on the field of cancer.
I have found many studies which are derived from human samples and which would fall under the category of "molecular epidemiology" such as Genome-wide association studies -GWAS- or genetic risk prediction studies -GRIPS-, etc.
But how would they be best classified under the traditional concept of epidemiological design? Could they be classified as an "observational" or "interventional" study, or they need be considered as a different study design under the concept of "molecular epidemiological study"?
I am working on a systematic review and meta-analysis of RCTs and I am wondering if there is a different approach for rating the quality of the evidence if the difference is statistically significant versus when the difference is not. Or should the quality-rating be restricted to only the significant outcomes? To me it does not seem logical to downgrade the quality of the evidence for an outcome (with a non-significant result) because there is a high risk of bias in the included studies.
I am looking for a sample questionnaire which measures knowledge, attitudes, and practices to predict risk behavior for cardiovascular diseases.
Recently, we completed a TST survey in a diabetic population in some Malaysian primary care clinics. We used 2 tuberculin units (2 TU of RT23) during the TST survey. I would like to know how the use of different tuberculin units (2 TU, 5 TU, 10 TU) may affect TST results and how the results should be interpreted. Is there any evidence in the literature to substantiate that 2 TU reduces false positives in a high-burden country with wide BCG coverage?
There are only a few studies on the epidemiology of pediatric thyroid cancer, mostly restricted to adolescents, with fewer data on children (<15). Thyroid volume is smaller in children. Thus, the T-staging commonly used in adults may underestimate the impact of tumor size on prognosis in children.
A text that explains the concepts without much math and formulae? Beginners in medical research in clinical trials/epidemiology often need a basic book on medical statistics that is appropriate for self study.
The literature on wax and deafness is sparse, subjective, contradictory and confusing. My clinical experience in a tertiary ENT clinic agrees with that of Politzer (1908), in that hearing loss only occurs when the meatus is completely obstructed by cerumen, or if a small passageway through is blocked off when water gets into the meatus. The literature confounds impacted wax with meatal occlusion. So is there any reliable data on the population attributable fraction of deafness due to wax, or data on hearing tests before and after removal? (There are reasons other than possible deafness for removing wax).
I'm involved in a project in which we want to answer the following: what is the probability that this patient will be successfully free of ventilation support 2 days from now, or 4 days from now, or a week from now, if the ventilation support team starts its removal right now?
I have data from a cohort where many predictors, such as ventilator settings, blood gases, etc., are measured every two days up to successful removal of ventilation support. Therefore, the dataset has a typical time-dependent predictor structure.
The first question is: is it feasible to use such data to develop a prediction model? I looked around and found some material supporting a positive answer.
But I also found some material supporting a negative answer.
So far, I believe that it is possible, and that in such a model the outcome prediction can be made at any time, not only at baseline as in a traditional right-censored survival model. Am I correct? However, I'm not completely sure that the Cox proportional hazards model is adequate for this purpose. If it is not, what are the alternatives?
But then other questions follow.
To do time dependent predictions, I guess I must set the baseline date of every single patient to 1, and the following dates where predictors are repeatedly measured must be set as differences to his/her baseline date, in a way that, similar to the traditional right censored model, all patients will start at day 1. Is this reasonable?
The last question. Suppose I have developed and validated this model, and I have five different risk groups as in this paper
If the patient is classified in the low-risk group, but after a while in this group without the outcome his/her predictor values change and move him/her to another risk group, will the risk estimate simply jump from one survival curve to another at the same time point? Or will his/her time be reset to baseline in the new survival curve, as if he/she were just starting follow-up now?
Another way to ask the same thing: the patient is at baseline and I want to estimate the probability of the outcome at day 2 from now. This is simple. Now suppose that he/she did not have the outcome up to day 2 of follow-up. Now I want to predict the outcome over the next two days with different predictor values. Should I predict for 2 days, or should I predict the outcome at day 4 of follow-up?
I was challenged with this question. For me, so far, the correct way is to predict at day 4 (or simply jump from one survival curve to another at the same time point), as this corresponds to that patient's "calendar" date in the follow-up, as it is in the dataset. However, I heard an argument that made me wonder: survival models deal with conditional probabilities, so if this patient did not have the outcome up to now, the probability for the next two days is conditioned on his survival so far. Therefore, predictions should be made for his next two days, not at day 4.
R-lipoic acid, acetyl-L-carnitine, and N-acetylcysteine, plus other agents, may be important in protecting mitochondria, but making sense of the literature is challenging: any light that might be shed would be greatly appreciated.
Dear researchers, I want to analyse the association between disease (absence or presence: the dependent variable) and SNPs (independent variables) and other parameters using binary logistic regression (SPSS). Please, how can I adjust for age and sex? Thanks.
I'm having trouble with a project. I am using ONS deprivation indices (https://www.gov.uk/government/statistics/english-indices-of-deprivation-2015)
which assign deprivation indices according to postcode/LSOA.
My problem is that I'm unsure what to do regarding homeless people. Since they have no postcode, I can't assign a deprivation score. Homeless people are a 'deprived' population; however, the deprivation index is an area-level index. One option would be to exclude them from the deprivation analysis, but this would then underestimate the impact of 'deprivation'.
Would be interested to hear from anyone with experience in this.
We are performing a systematic review of studies that validated administrative databases for ICD-9-CM or ICD-10 codes for different cancers. There may be cases in which algorithms were developed to identify multiple cancers.
I have been documenting strange step-like changes in deaths in a number of countries and would like others to check and see if these observations can be replicated using small-area death statistics. Attached is a paper documenting the parallel effects of these events on medical admissions to hospital and it gives an idea of the sort of analysis which could be required.
If needed, many of the supporting studies can be accessed at www.hcaf.biz on the 'Emergency Admissions' web page - which also contains the published studies on deaths.
Much appreciated if you can assist.
We are more versed in mercury, manganese, and lead, but have a large dataset (n > 4000) that also has data for cadmium. We found several significant demographic associations and also have further CBC, folate, and other blood data that can be further analyzed.
Please add me and message me about any potential collaboration
I plan to do a systematic review to summarize the evidence regarding the risk factors of a disease. I have 3 main questions.
1) Should I include longitudinal studies only or both cross-sectional and longitudinal studies that investigated the risk factors for a disease?
2) Can you recommend some risk of bias assessment tools for evaluating this type of systematic review?
3) Apart from odds ratio, relative risk and hazard ratio, are there any other statistical key words that I should use for my literature searches so that I can include all the relevant papers?
Thank you very much!
We are seeing a lot of viral thrombocytopenic fevers which test negative for Dengue serology and even PCR, but which have a clinical picture similar to Dengue.
I am developing a tool to predict the age of onset of Alzheimer's disease using Bayesian statistics to estimate genotype. I have come up with a proportional hazards model using simulated data based on existing literature. I am using covariates APOE4 status, sex, history of TBI, history of DM and education level.
Potentially I need two datasets: The first would be the APOE4 genotype for the subject and familial history ie current age (or death) and age of onset for the subject, parents and grandparents.
The second dataset I would need is age of onset and genotype, sex, Hx of TBI, Hx of DM, and education level.
I would prefer an anonymous dataset(s). I would be happy to collaborate with someone on this project.
I am currently looking into studies for epidemiological data of offshore workers outside the US in order to use it for a project looking into the nature of medical cases on offshore installations and their causes.
There is a significant amount of hospital-based research, with loads of biomedical research institutes. I am curious to know whether there is evidence showing the benefits of hospital-based research for the health of populations, the training of health workers, and clinical care (the easy bit).
I want to investigate the impact of bacterial infection, which could occur at any time of disease course, on patient prognosis. How can I choose appropriate statistical methods?
Many studies demonstrate that increasing the ceiling height of naturally ventilated hospital wards can provide better IAQ and allow for reducing the risk of infection transmission. It has been interpreted that increasing ceiling height increases temperature stratification, which concentrates hot air above the occupied space. But does anyone know the explanation of the same phenomenon regarding airborne particles and contaminants?
ESRD (end-stage renal disease) prevalence is increasing in Oman and worldwide. The difference is that the western world has an up-to-date system of documentation and statistics for everything. In Oman and the developing countries, where are we now? How many dialysis-treated end-stage renal disease patients are there at present? If you know the answer, how many are female and how many male? Why do they end up with ESRD? Do we have a guideline in operation now, and is it successful?