Article

Platelet Count as a Marker of Severity of Chronic Obstructive Pulmonary Disease


Abstract

Chronic Obstructive Pulmonary Disease (COPD) is a heterogeneous respiratory disease characterized by a progressive, not fully reversible airflow limitation associated with an abnormal inflammatory response of the lung to noxious stimuli. It presents with pulmonary as well as systemic inflammation. Measurement of inflammatory markers is difficult, but platelet count estimation is easy and inexpensive. This descriptive, cross-sectional study was carried out at the Department of Medicine, Mymensingh Medical College Hospital, Mymensingh, Bangladesh, over a period of twelve months among fifty-nine COPD patients. Data were collected through interviews, physical examination, and laboratory investigations, checked for consistency and completeness, and analyzed using SPSS version 22.0. The mean age of the patients was 56.3±10.9 years, and the 40-49-year age group contained the highest number of patients (19; 32.3%). The majority of respondents (57; 96.6%) were male, and thirty-seven (62.7%) were illiterate. Most patients (56; 94.9%) resided in rural areas, and most of these (38; 64.4%) were farmers. By spirometric measurement, 3 (5.1%) of the 59 COPD patients were in GOLD stage I, 9 (15.3%) in GOLD stage II, 27 (45.8%) in GOLD stage III, and 20 (33.9%) in GOLD stage IV. The mean platelet count (10³/μl) was 241.6±86.5 in the mild group, 315.0±47.7 in the moderate group, 337.2±76.3 in the severe group, and 412.4±67.5 in the very severe group; the increase in platelet count with COPD severity was statistically significant. In conclusion, platelet count is an inexpensive measurement for categorizing COPD severity and may serve as a diagnostic marker.
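The stage-wise comparison above can be illustrated with a one-way ANOVA. Patient-level data are not published, so this sketch simulates the four GOLD-stage groups from the reported means, standard deviations, and group sizes purely to show the computation; it is not the study's analysis.

```python
import numpy as np

# Hedged sketch: SIMULATED groups built from the reported summary statistics
# (mean platelet count in 10^3/ul, SD, group size), not real patient data.
rng = np.random.default_rng(0)
groups = [
    rng.normal(241.6, 86.5, 3),    # GOLD I   (mild),        n = 3
    rng.normal(315.0, 47.7, 9),    # GOLD II  (moderate),    n = 9
    rng.normal(337.2, 76.3, 27),   # GOLD III (severe),      n = 27
    rng.normal(412.4, 67.5, 20),   # GOLD IV  (very severe), n = 20
]

def one_way_anova_f(groups):
    """F statistic: between-group mean square over within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova_f(groups)
# For groups with means this far apart, F typically exceeds the ~2.77
# critical value of F(3, 55) at alpha = 0.05, mirroring the reported result.
```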




... COPD severity.[24] Platelets have been implicated in the development and exacerbation of COPD through multiple mechanisms, including destruction of lung elasticity by secreting platelet factor 4, and induction of a pro-thrombotic state and pulmonary vascular remodeling.[25] Consequently, a significantly elevated platelet count has also been associated with an increased risk of all-cause mortality, and antiplatelet therapy with aspirin may improve COPD symptoms and quality of life. ...
Article
Full-text available
Background The platelet to high-density lipoprotein cholesterol ratio (PHR) is a novel biomarker for inflammation and hypercoagulability. This study aimed to explore the potential association between PHR and prevalence of chronic obstructive pulmonary disease (COPD). Methods Participants aged between 40 and 85 years from the 1999–2018 US National Health and Nutrition Examination Survey with COPD were included. Multivariable logistic regression and restricted cubic spline analysis were applied to evaluate the associations between PHR and COPD. Propensity score matching (PSM) was performed to reduce the impact of potential confounding factors. Results A total of 25751 participants, including 753 with COPD, at a mean age of 57.19 years and 47.83% men, were included. The multivariable-adjusted model showed that the odds ratio (OR) and 95% confidence interval (CI) for PHR to predict COPD was 1.002 (1.001–1.003). Compared with the lowest quartile, the ORs and 95% CIs for the Q2, Q3, and Q4 PHR quartile were 1.162 (0.874–1.546), 1.225 (0.924–1.625), and 1.510 (1.102–2.069), respectively (P for trend = 0.012). Restricted cubic spline analysis demonstrated a linear association between PHR and COPD prevalence both before and after PSM. Significant association between PHR and COPD prevalence was observed only in participants without hypertension. Receiver-operating characteristic curves showed significantly higher area under the curve for distinguishing COPD from non-COPD by PHR than platelet count and high-density lipoprotein cholesterol. Conclusion PHR is significantly associated with COPD prevalence in US adults aged 40 to 85 years without hypertension, supporting the effectiveness of PHR as a potential biomarker for COPD.
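A minimal sketch of the PHR biomarker itself and the quartile grouping behind the Q1-Q4 odds ratios. The helper name `phr` and the four subject values below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hedged sketch: platelet-to-HDL-cholesterol ratio, then quartile assignment
# of the kind used for the Q1-Q4 odds-ratio comparison. Values are made up.
def phr(platelet_10e3_per_ul, hdl_mmol_per_l):
    return platelet_10e3_per_ul / hdl_mmol_per_l

platelets = np.array([250.0, 310.0, 180.0, 420.0])   # 10^3 cells/ul
hdl = np.array([1.3, 1.0, 1.6, 0.9])                 # mmol/l
ratios = phr(platelets, hdl)

# Assign each subject to a PHR quartile (1 = lowest, 4 = highest)
cuts = np.quantile(ratios, [0.25, 0.5, 0.75])
quartile = np.searchsorted(cuts, ratios, side="right") + 1
```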
... A cross-sectional study by Fathy et al. [39] documented that MPV increased significantly with increasing severity of COPD. Another study, by Moniruzzaman et al. [40], detected a positive correlation between PLT count and the severity of COPD. A previous study by Biljak et al. [41] revealed that, despite changes in lung function parameters, there were no significant differences in PLT count or MPV between the COPD stages. ...
Article
Full-text available
Objective: The study aims to elucidate the association of host-related factors on systemic inflammation in COPD patients. Methods: In 295 clinically stable and optimally treated COPD patients from 39 outpatient centers, age, gender, and body composition (body mass index, BMI; fat-free mass index, FFMI; fat mass index, FMI) were related to inflammatory biomarkers: CRP, fibrinogen, TNFα, and its soluble receptors (s)TNFαR1 and sTNFαR2. Furthermore, forced expiratory volume in the first second (FEV1), BMI, FFMI, and FMI were stratified by quartiles to elucidate the influence on inflammatory biomarkers. Monovariate and multivariate regression analyses were performed for associations between inflammatory biomarkers. Results: Positive correlations were found for FFMI with sTNFαR1, FMI with CRP and age with TNFα, sTNFαR1 and sTNFαR2 (p < 0.01). FEV1 was not correlated with body composition and inflammatory markers. Mono- and multivariate analysis showed weak correlations between the acute phase markers and the TNFα system after correcting for multiple co-variants. Conclusions: This study highlights the modest role of age and body composition on levels of systemic inflammatory biomarkers in COPD. Results show the degree of airflow limitation does not affect systemic inflammation. Last, a weak relationship between acute phase markers and markers of the TNFα system is present in COPD.
Article
Full-text available
This paper studies estimation and inference for linear quantile regression models with generated regressors. We suggest a practical two-step estimation procedure, where the generated regressors are computed in the first step. The asymptotic properties of the two-step estimator, namely, consistency and asymptotic normality are established. We show that the asymptotic variance-covariance matrix needs to be adjusted to account for the first-step estimation error. We propose a general estimator for the asymptotic variance-covariance, establish its consistency, and develop testing procedures for linear hypotheses in these models. Monte Carlo simulations to evaluate the finite-sample performance of the estimation and inference procedures are provided. Finally, we apply the proposed methods to study Engel curves for various commodities using data from the UK Family Expenditure Survey. We document strong heterogeneity in the estimated Engel curves along the conditional distribution of the budget share of each commodity. The empirical application also emphasizes that correctly estimating confidence intervals for the estimated Engel curves by the proposed estimator is of importance for inference.
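The criterion underlying any quantile regression, including both steps of a two-step estimator, is the check ("pinball") loss. A generic numpy sketch, not the paper's estimator: it verifies numerically that the τ-th sample quantile minimizes the average check loss.

```python
import numpy as np

# Generic quantile-regression machinery (not the two-step procedure itself):
# check loss rho_tau(u) = u * (tau - 1{u < 0}).
def pinball_loss(u, tau):
    return u * (tau - (u < 0))

rng = np.random.default_rng(1)
y = rng.normal(size=500)
tau = 0.75

# Minimize the average check loss over a grid of candidate locations; the
# minimizer should land at (or immediately next to) the empirical quantile.
candidates = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(y - c, tau).mean() for c in candidates]
best = candidates[int(np.argmin(losses))]
```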
Article
Full-text available
In this paper, we introduce a novel parametric quantile regression model for asymmetric response variables, where the response variable follows a power skew-normal distribution. Under a new, convenient parametrization, this distribution is very useful for modeling different quantiles of a response variable on the real line. The maximum likelihood method is employed to estimate the model parameters. In addition, we present a local influence study under different perturbation settings and illustrate some numerical results for the estimators in finite samples. To illustrate the practical potential of our model, we apply it to a real dataset.
Article
Full-text available
The spatial distribution of soil moisture (SM) was estimated by a multiple quantile regression (MQR) model with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and filtered SM data from 2013 to 2015 in South Korea. For input data, observed precipitation and SM data were collected from the Korea Meteorological Administration and various institutions monitoring SM. To improve the work of a previous study, prior to the estimation of SM, outlier detection using the isolation forest (IF) algorithm was applied to the observed SM data. The original observed SM data resulted in IF_SM data following outlier detection. This study obtained an average data removal rate of 20.1% at 58 stations. For various reasons, such as instrumentation, environment, and random errors, the original observed SM data contained approximately 20% uncertain data. After outlier detection, this study performed a regression analysis by estimating land surface temperature quantiles. The soil characteristics were considered through reclassification into four soil types (clay, loam, silt, and sand), and the five-day antecedent precipitation was considered in order to estimate the regression coefficient of the MQR model. For all soil types, the coefficient of determination (R2) and root mean square error (RMSE) values ranged from 0.25 to 0.77 and 1.86% to 12.21%, respectively. The MQR results showed a much better performance than that of the multiple linear regression (MLR) results, which yielded R2 and RMSE values of 0.20 to 0.66 and 1.08% to 7.23%, respectively. As a further illustration of improvement, the box plots of the MQR SM were closer to those of the observed SM than those of the MLR SM. This result indicates that the cumulative distribution functions (CDF) of MQR SM matched the CDF of the observed SM. Thus, the MQR algorithm with outlier detection can overcome the limitations of the MLR algorithm by reducing both the bias and variance.
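The two scores quoted above can be written out explicitly. A minimal sketch with hypothetical observation and prediction values, not the study's soil-moisture data:

```python
import numpy as np

# Hedged sketch of the two evaluation metrics; values below are made up.
def rmse(obs, pred):
    """Root mean square error, in the units of the observations (% SM)."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

obs = np.array([20.0, 25.0, 30.0, 35.0, 40.0])    # observed SM (%)
pred = np.array([22.0, 24.0, 29.0, 36.0, 39.0])   # model estimates (%)
```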
Article
Full-text available
We consider how to use high-frequency 5-minute data in predicting quantiles of daily Standard & Poor's 500 (S&P 500) returns. We examine methods that incorporate the high-frequency information either indirectly, by combining forecasts (generated from returns sampled at different intraday intervals), or directly, by combining the high-frequency information into one model. We consider subsample averaging, bootstrap averaging, and forecast averaging methods for the indirect case, and factor models with a principal component approach for both the direct and indirect cases. We show that using high-frequency information is beneficial in forecasting the daily S&P 500 return quantile (Value-at-Risk, or VaR, is simply its negative), often substantially so, particularly in forecasting downside risk. Our empirical results show that the averaging methods (subsample averaging, bootstrap averaging, forecast averaging), which serve as different ways of forming the ensemble average from high-frequency intraday information, provide excellent forecasting performance compared to using only low-frequency daily information.
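As the abstract notes, VaR is simply the negative of a return quantile. A historical-simulation sketch of that definition, on simulated (not S&P 500) returns; this does not reproduce the paper's high-frequency combining methods.

```python
import numpy as np

# Hedged sketch: one-day 5% Value-at-Risk as the negative of the 5% quantile
# of a return sample (plain historical simulation; returns are simulated).
rng = np.random.default_rng(9)
returns = rng.normal(0.0005, 0.01, 1000)   # hypothetical daily returns

var_5 = -np.quantile(returns, 0.05)        # a positive loss figure
```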
Article
Full-text available
Temporal gene expression data are of particular interest to researchers as they contain rich information for characterizing gene function and have been widely used in biomedical studies and early cancer detection. However, current temporal gene expression data sets usually have few measurement time points, so extracting information and identifying treatment effects efficiently without full temporal information is still a problem. A dense temporal gene expression data set in bacteria shows that gene expression has various patterns under different biological conditions. Instead of analyzing gene expression levels, in this paper we consider the relative change-rates of genes over the observation period. We propose a non-linear regression model to characterize the relative change-rates of genes, in which each individual expression trajectory is modeled as longitudinal data with a changeable variance and covariance structure. Then, based on the parameter estimates, a chi-square test is proposed to test the equality of gene expression change-rates. Furthermore, the Mahalanobis distance is used for the classification of genes. The proposed methods are applied to a data set of 18 genes in P. aeruginosa expressed in 24 biological conditions. Simulation studies show that our methods perform well for the analysis of temporal gene expressions.
Article
Full-text available
Spotted cDNA microarrays are emerging as a powerful and cost-effective tool for large-scale analysis of gene expression. Microarrays can be used to measure the relative quantities of specific mRNAs in two or more tissue samples for thousands of genes simultaneously. While the power of this technology has been recognized, many open questions remain about appropriate analysis of microarray data. One question is how to make valid estimates of the relative expression for genes that are not biased by ancillary sources of variation. Recognizing that there is inherent "noise" in microarray data, how does one estimate the error variation associated with an estimated change in expression, i.e., how does one construct the error bars? We demonstrate that ANOVA methods can be used to normalize microarray data and provide estimates of changes in gene expression that are corrected for potential confounding effects. This approach establishes a framework for the general analysis and interpretation of microarray data.
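The additive-effects idea can be sketched on a toy log-intensity matrix: fit gene and array main effects and keep the residuals. This is a simplified two-factor version; the models in this literature also carry dye and interaction terms, and the matrix below is invented.

```python
import numpy as np

# Hedged sketch of ANOVA-style normalization on a toy matrix:
# rows = genes, columns = arrays, entries = log intensities.
log_intensity = np.array([
    [8.0, 9.2, 8.5],
    [6.1, 7.0, 6.6],
    [7.4, 8.3, 7.9],
])

grand = log_intensity.mean()
gene_effect = log_intensity.mean(axis=1, keepdims=True) - grand
array_effect = log_intensity.mean(axis=0, keepdims=True) - grand

# Residuals after removing the fitted main effects; they sum to zero along
# both genes and arrays, so array-wide brightness differences are gone.
residual = log_intensity - grand - gene_effect - array_effect
```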
Article
Full-text available
Motivation: There is a great need to develop analytical methodology to analyze and to exploit the information contained in gene expression data. Because of the large number of genes and the complexity of biological networks, clustering is a useful exploratory technique for analysis of gene expression data. Other classical techniques, such as principal component analysis (PCA), have also been applied to analyze gene expression data. Using different data analysis techniques and different clustering algorithms to analyze the same data set can lead to very different conclusions. Our goal is to study the effectiveness of principal components (PCs) in capturing cluster structure. Specifically, using both real and synthetic gene expression data sets, we compared the quality of clusters obtained from the original data to the quality of clusters obtained after projecting onto subsets of the principal component axes. Results: Our empirical study showed that clustering with the PCs instead of the original variables does not necessarily improve, and often degrades, cluster quality. In particular, the first few PCs (which contain most of the variation in the data) do not necessarily capture most of the cluster structure. We also showed that clustering with PCs has different impact on different algorithms and different similarity metrics. Overall, we would not recommend PCA before clustering except in special circumstances.
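The projection being evaluated can be sketched as PCA via the SVD on synthetic data; the study's comparison of cluster quality before and after projection is not reproduced here.

```python
import numpy as np

# Hedged sketch: PCA by singular value decomposition on synthetic data.
rng = np.random.default_rng(5)
X = rng.normal(size=(20, 6))               # 20 samples x 6 "genes"
Xc = X - X.mean(axis=0)                    # center each variable

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance fraction per PC
scores = Xc @ Vt.T                         # sample coordinates on PC axes
```

As the abstract cautions, the leading components capture variance, not necessarily cluster structure, so a large `explained[0]` says nothing about cluster quality.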
Article
Full-text available
Motivation: A crucial step in microarray data analysis is the selection of subsets of interesting genes from the initial set of genes. In many cases, especially when comparing a specific condition to a reference, the genes of interest are those which are differentially expressed. Two common methods for gene selection are: (a) selection by fold difference (at least n-fold variation) and (b) selection by altered ratio (at least n standard deviations away from the mean ratio). Results: The novel method proposed here is based on ANOVA and uses replicate spots to estimate an empirical distribution of the noise. The measured intensity range is divided into a number of intervals, and a noise distribution is constructed for each interval. Bootstrapping is used to map the desired confidence levels from the noise distribution corresponding to a given interval onto the measured log ratios in that interval. If the method is applied to individual arrays having replicate spots, it can calculate an overall width of the noise distribution, which can be used as an indicator of array quality. We compared this method with the fold change and unusual ratio methods, and we also discuss its relationship with an ANOVA model proposed by Churchill et al. In silico experiments were performed while controlling the degree of regulation as well as the amount of noise; such experiments show that the performance of the classical methods can be very unsatisfactory. We also compared the results of the 2-fold method with those of the noise sampling method using pre- and post-immortalization cell lines derived from MDAH041 fibroblasts hybridized on Affymetrix GeneChip arrays. The 2-fold method reported 198 genes as upregulated and 493 genes as downregulated. The noise sampling method reported 98 genes as upregulated and 240 genes as downregulated at the 99.99% confidence level. The methods agreed on 221 downregulated and 66 upregulated genes. Fourteen genes from the subset reported by both methods were all confirmed by Q-RT-PCR. Alternative assays on various subsets of genes on which the two methods disagreed suggested that the noise sampling method is likely to provide fewer false positives.
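The bootstrap step can be loosely sketched as a percentile bootstrap over replicate-spot log ratios. The values below are simulated noise-only spots, not the MDAH041 data, and this is a bare-bones interpretation, not the paper's interval-wise procedure.

```python
import numpy as np

# Hedged sketch: resample replicate-spot log ratios with replacement to map
# a confidence level onto an empirical noise distribution.
rng = np.random.default_rng(7)
replicate_log_ratios = rng.normal(0.0, 0.2, size=30)   # simulated noise

boot_means = np.array([
    rng.choice(replicate_log_ratios, size=replicate_log_ratios.size,
               replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.quantile(boot_means, [0.005, 0.995])   # 99% percentile interval
```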
Article
Temporal gene expression data are important for classifying gene functions and have been used extensively in biomedical studies such as cancer diagnostics. However, since temporal gene expressions vary over time, many genes exhibit some kind of stability after the initial time periods: their expression stays constant or fluctuates only slightly after those time points. This threshold point is therefore key to studying the behaviour of gene expressions; it can be used to decide the measurement time period and to distinguish the gene expressions. In this paper, three methods are presented to detect the threshold points of gene expressions. In particular, the first-order and second-order change rates are used to construct test statistics for detecting the threshold points. A simulation study shows that the proposed methods perform well in detecting threshold points. A real dataset with 21 genes in P. aeruginosa expressed in 24 biological conditions is used to illustrate the proposed methodology.
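A first-order change-rate detector in the spirit described above. This is an invented illustrative rule, not the paper's test statistics; the tolerance and the trajectory are hypothetical.

```python
import numpy as np

# Hedged sketch: find the first time point from which the first-order
# change rates of an expression trajectory stay below `tol`.
def threshold_point(expr, tol=0.05):
    rates = np.abs(np.diff(expr))
    for t in range(len(rates)):
        if np.all(rates[t:] < tol):
            return t          # trajectory is stable from this index onward
    return None

# Hypothetical expression trajectory that rises and then flattens out
trajectory = np.array([0.1, 0.9, 1.6, 1.95, 2.0, 2.01, 2.0])
```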
Article
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable-knot spline, or variable-bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory, which we call the oracle inequality, shows that attained performance differs from ideal performance by at most a factor of 2 log n, where n is the sample size. Moreover, no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log² n of the performance of piecewise-polynomial and variable-knot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise-polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone.
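The shrinkage step can be sketched as soft thresholding at the universal level √(2 log n). This is a VisuShrink-style illustration; RiskShrink's actual threshold is a separate minimax-calibrated constant, so treat this only as a picture of coefficient shrinkage.

```python
import numpy as np

# Hedged sketch: soft-threshold empirical wavelet coefficients at the
# universal threshold sqrt(2 log n) for unit-variance noise.
def soft_threshold(w, t):
    """sign(w) * max(|w| - t, 0): shrink coefficients toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

n = 1024
rng = np.random.default_rng(3)
noisy_coeffs = rng.normal(0.0, 1.0, n)    # pure-noise coefficients
t_universal = np.sqrt(2.0 * np.log(n))

# At this level, pure-noise coefficients are almost all set exactly to zero.
shrunk = soft_threshold(noisy_coeffs, t_universal)
```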
Article
Dependent data arise in many studies. Frequently adopted sampling designs, such as cluster, multilevel, spatial, and repeated measures, may induce this dependence, which the analysis of the data needs to take into due account. In a previous publication (Geraci and Bottai in Biostatistics 8:140–154, 2007), we proposed a conditional quantile regression model for continuous responses where subject-specific random intercepts were included to account for within-subject dependence in the context of longitudinal data analysis. The approach hinged upon the link existing between the minimization of weighted absolute deviations, typically used in quantile regression, and the maximization of a Laplace likelihood. Here, we consider an extension of those models to more complex dependence structures in the data, which are modeled by including multiple random effects in the linear conditional quantile functions. We also discuss estimation strategies to reduce the computational burden and inefficiency associated with the Monte Carlo EM algorithm we have proposed previously. In particular, the estimation of the fixed regression coefficients and of the random effects’ covariance matrix is based on a combination of Gaussian quadrature approximations and non-smooth optimization algorithms. Finally, a simulation study and a number of applications of our models are presented.
Article
Microarrays can measure the expression of thousands of genes to identify changes in expression between different biological states. Methods are needed to determine the significance of these changes while accounting for the enormous number of genes. We describe a method, Significance Analysis of Microarrays (SAM), that assigns a score to each gene on the basis of change in gene expression relative to the standard deviation of repeated measurements. For genes with scores greater than an adjustable threshold, SAM uses permutations of the repeated measurements to estimate the percentage of genes identified by chance, the false discovery rate (FDR). When the transcriptional response of human cells to ionizing radiation was measured by microarrays, SAM identified 34 genes that changed at least 1.5-fold with an estimated FDR of 12%, compared with FDRs of 60 and 84% by using conventional methods of analysis. Of the 34 genes, 19 were involved in cell cycle regulation and 3 in apoptosis. Surprisingly, four nucleotide excision repair genes were induced, suggesting that this repair pathway for UV-damaged DNA might play a previously unrecognized role in repairing DNA damaged by ionizing radiation.
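The score SAM assigns can be sketched as a moderated t-like statistic with a small fudge constant s0 in the denominator, so genes with tiny variances cannot dominate the ranking. This is a simplified pooled form on simulated data; the published statistic and its permutation-based FDR estimate are more involved.

```python
import numpy as np

# Hedged sketch of a SAM-like relative difference on simulated expression.
def sam_d(group_a, group_b, s0=0.1):
    diff = group_a.mean(axis=1) - group_b.mean(axis=1)
    n_a, n_b = group_a.shape[1], group_b.shape[1]
    pooled = np.sqrt(group_a.var(axis=1, ddof=1) / n_a
                     + group_b.var(axis=1, ddof=1) / n_b)
    return diff / (pooled + s0)

rng = np.random.default_rng(11)
genes, reps = 100, 4
control = rng.normal(0.0, 1.0, (genes, reps))
treated = rng.normal(0.0, 1.0, (genes, reps))
treated[0] += 8.0                          # one strongly induced gene

d = sam_d(treated, control)                # gene 0 should score far highest
```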
Article
In this chapter we discuss the problem of identifying differentially expressed genes from a set of microarray experiments. Statistically speaking, this task falls under the heading of “multiple hypothesis testing.” In other words, we must perform hypothesis tests on all genes simultaneously to determine whether each one is differentially expressed. Recall that in statistical hypothesis testing, we test a null hypothesis vs an alternative hypothesis. In this example, the null hypothesis is that there is no change in expression levels between experimental conditions. The alternative hypothesis is that there is some change. We reject the null hypothesis if there is enough evidence in favor of the alternative. This amounts to rejecting the null hypothesis if its corresponding statistic falls into some predetermined rejection region. Hypothesis testing is also concerned with measuring the probability of rejecting the null hypothesis when it is really true (called a false positive), and the probability of rejecting the null hypothesis when the alternative hypothesis is really true (called power).
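The simultaneous-testing setup can be sketched with z statistics, two-sided p-values, and a Bonferroni-adjusted rejection region. The numbers are illustrative, and Bonferroni is just one simple way to limit family-wise false positives when testing all genes at once.

```python
import math

# Hedged sketch: per-test vs. family-wise rejection with made-up statistics.
def two_sided_p(z):
    """Two-sided normal p-value via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2.0))

z_stats = [0.5, 1.2, 2.1, 3.4, 4.8]       # hypothetical per-gene statistics
p_values = [two_sided_p(z) for z in z_stats]

alpha = 0.05
naive_rejections = [p < alpha for p in p_values]
# Bonferroni: divide alpha by the number of tests to bound the probability
# of any false positive across the whole family of hypotheses.
bonferroni_rejections = [p < alpha / len(p_values) for p in p_values]
```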
Article
The change in gene expression patterns in response to host environments is a prerequisite for bacterial infection. Bacterial diseases often occur as an outcome of the complex interactions between pathogens and the host. The indigenous, usually non-pathogenic microflora is a ubiquitous constituent of the host. In order to understand the interactions between pathogens and the resident microflora and how they affect the gene expression patterns of the pathogens and contribute to bacterial diseases, the interactions between pathogenic Pseudomonas aeruginosa and avirulent oropharyngeal flora (OF) strains isolated from sputum samples of cystic fibrosis (CF) patients were investigated. Animal experiments using a rat lung infection model indicate that the presence of OF bacteria enhanced lung damage caused by P. aeruginosa. Genome-wide transcriptional analysis with a lux reporter-based promoter library demonstrated that approximately 4% of genes in the genome responded to the presence of OF strains using an in vitro system. Characterization of a subset of the regulated genes indicates that they fall into seven functional classes, and large portions of the upregulated genes are genes important for P. aeruginosa pathogenesis. Autoinducer-2 (AI-2)-mediated quorum sensing, a proposed interspecies signalling system, accounted for some, but not all, of the gene regulation. A substantial amount of AI-2 was detected directly in sputum samples from CF patients and in cultures of most non-pseudomonad bacteria isolated from the sputa. Transcriptional profiling of a set of defined P. aeruginosa virulence factor promoters revealed that OF and exogenous AI-2 could upregulate overlapping subsets of these genes. These results suggest important contributions of the host microflora to P. aeruginosa infection by modulating gene expression via interspecies communications.
Eisen, M.B.; Spellman, P.T.; Brown, P.O.; Botstein, D. Cluster analysis and display of genome-wide expression patterns. Proc. Natl. Acad. Sci. USA 1998, 95, 14863-14868.
Deng, D.; Jahromi, K.R.; Zhou, Z. Influence of biological conditions to temporal gene expression based on variance analysis. In JSM Proceedings; American Statistical Association: Alexandria, VA, USA, 2017; pp. 786-800.
Zhang, J.T. Order-dependent Thresholding with Applications to Regression Splines. In Contemporary Multivariate Analysis and Design of Experiments; World Scientific Publishing Co. Pte. Ltd.: Singapore, 2005; pp. 397-425.