Yi Pan

The Second Xiangya Hospital of Central South University, Changsha, Hunan, China


Publications (371) · 319.69 Total Impact

  •
    ABSTRACT: Cluster analysis of biological networks is one of the most important approaches for identifying functional modules and predicting protein functions. Furthermore, visualization of clustering results is crucial for uncovering the structure of biological networks. In this paper, ClusterViz, an app for Cytoscape 3 for cluster analysis and visualization, has been developed. To reduce complexity and enable extendibility, we designed the architecture of ClusterViz on the framework of the Open Services Gateway Initiative. Under this architecture, the implementation of ClusterViz is partitioned into three modules: the ClusterViz interface, the clustering algorithms, and visualization and export. ClusterViz facilitates the comparison of the results of different algorithms for further analysis. Three commonly used clustering algorithms, FAG-EC, EAGLE and MCODE, are included in the current version. Because the clustering-algorithm module adopts an abstract algorithm interface, more clustering algorithms can be added in the future. To illustrate the usability of ClusterViz, we provide three examples with detailed steps drawn from important scientific articles, which show that our tool has helped several research teams in their work on the mechanisms of biological networks.
    IEEE/ACM transactions on computational biology and bioinformatics / IEEE, ACM 09/2015; 12(4):815-22. DOI:10.1109/TCBB.2014.2361348 · 1.44 Impact Factor
  • Junbo Zhang · Yun Zhu · Yi Pan · Tianrui Li
    ABSTRACT: In genome assembly, as sequencing coverage and genome size grow, most current software requires a large amount of memory to handle the huge volume of sequence data. However, most researchers cannot meet these computing-resource requirements, which keeps many current tools from practical application. In this paper, we present an updated algorithm called EPGA2, which incorporates several new modules and produces improved assembly results within a small memory footprint. To reduce peak memory in genome assembly, EPGA2 adopts the memory-efficient DSK to count k-mers and a revised BCALM to construct the De Bruijn graph. Moreover, EPGA2 parallelizes the contig-merging step and adds error correction to its pipeline. Our experiments demonstrate that all these changes make EPGA2 more useful for genome assembly. EPGA2 is publicly available for download at https://github.com/bioinfomaticsCSU/EPGA2. Contact: jxwang@csu.edu.cn.
    Bioinformatics 08/2015; DOI:10.1093/bioinformatics/btv487 · 4.98 Impact Factor
  •
    ABSTRACT: Essential proteins are indispensable for living organisms to maintain life activities and play important roles in studies of pathology, synthetic biology, and drug design. Therefore, besides experimental methods, many computational methods have been proposed to identify essential proteins. Based on the centrality-lethality rule, various centrality methods are employed to predict essential proteins in a protein-protein interaction network (PIN). However, because they neglect the temporal and spatial features of protein-protein interactions, the centrality scores calculated by these methods are not effective enough for measuring the essentiality of proteins in a PIN. Moreover, many methods that overfit the features of essential proteins of one species may perform poorly for other species. In this paper, we demonstrate that the centrality-lethality rule also holds in protein subcellular localization interaction networks (PSLINs). To do this, we propose a method based on Localization Specificity for Essential protein Detection (LSED), which can be combined with any centrality method to calculate improved centrality scores by taking into consideration the PSLINs in which proteins play their roles. In this study, LSED was combined with eight centrality methods separately to calculate localization-specific centrality scores (LCSs) for proteins based on the PSLINs of four species (Saccharomyces cerevisiae, Homo sapiens, Mus musculus and Drosophila melanogaster). Compared with proteins that have high centrality scores measured from global PINs, more of the proteins with high LCSs measured from PSLINs are essential. This indicates that proteins with high LCSs measured from PSLINs are more likely to be essential, and that the performance of centrality methods can be improved by LSED. Furthermore, LSED provides a widely applicable prediction model for identifying essential proteins in different species.
    PLoS ONE 06/2015; 10(6):e0130743. DOI:10.1371/journal.pone.0130743 · 3.23 Impact Factor
  •
    ABSTRACT: Single nucleotide polymorphisms (SNPs), a dominant type of genetic variant, have been used successfully to identify defective genes causing human single-gene diseases. However, most common human diseases are complex diseases caused by gene-gene and gene-environment interactions. Many SNP-SNP interaction analysis methods have been introduced, but they are not powerful enough to discover interactions involving more than three SNPs. This paper proposes a novel method that analyzes all SNPs simultaneously. Unlike existing methods, it regards an individual's genotype data on a list of SNPs as a point with a unit of energy in a multi-dimensional space, and tries to find a new coordinate system in which the difference in energy distribution between cases and controls reaches its maximum. The method then finds multi-SNP combinatorial patterns that differ between cases and controls based on the new coordinate system. Experiments on simulated data show that the method is efficient, and tests on real data from age-related macular degeneration (AMD) disease show that it can find more significant multi-SNP combinatorial patterns than existing methods.
    IEEE/ACM Transactions on Computational Biology and Bioinformatics 05/2015; 12(3):695-704. DOI:10.1109/TCBB.2014.2363459 · 1.44 Impact Factor
  •
    ABSTRACT: Prediction of essential proteins, which are crucial to an organism's survival, is important for disease analysis and drug design, as well as for understanding cellular life. The majority of prediction methods infer the possibility that a protein is essential from network topology. However, these methods are limited by the completeness of the available protein-protein interaction (PPI) data and depend on the accuracy of the network. To overcome these limitations, some computational methods have been proposed, but few of them take protein domains into consideration. In this work, we first analyze the correlation between the essentiality of proteins and their domain features based on data from 13 species. We find that proteins containing more protein domain types that rarely occur in other proteins tend to be essential. Accordingly, we propose a new prediction method, named UDoNC, that combines the domain features of proteins with their topological properties in the PPI network. In UDoNC, the essentiality of a protein is decided by the number and frequency of its protein domain types, as well as by the essentiality of its adjacent edges as measured by the edge clustering coefficient. Experimental results on S. cerevisiae data show that UDoNC outperforms other existing methods in terms of area under the curve (AUC). Additionally, UDoNC also performs well in predicting essential proteins on E. coli data.
    IEEE/ACM Transactions on Computational Biology and Bioinformatics 04/2015; 12(2):276-288. DOI:10.1109/TCBB.2014.2338317 · 1.44 Impact Factor
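The edge clustering coefficient mentioned above measures how strongly an interaction is embedded in triangles. A minimal sketch of one standard formulation, ECC(u, v) = |N(u) ∩ N(v)| / min(deg(u) − 1, deg(v) − 1), follows; the toy network and function name are illustrative, not UDoNC's actual implementation:

```python
# Sketch of the edge clustering coefficient (ECC) on an undirected PPI network:
# ECC(u, v) = |N(u) & N(v)| / min(deg(u)-1, deg(v)-1),
# i.e. the number of triangles the edge (u, v) closes, relative to the maximum possible.
def edge_clustering_coefficient(adj, u, v):
    """adj maps each node to the set of its neighbors."""
    common = adj[u] & adj[v]                      # shared neighbors = triangles through (u, v)
    denom = min(len(adj[u]) - 1, len(adj[v]) - 1)
    if denom <= 0:                                # a degree-1 endpoint can close no triangle
        return 0.0
    return len(common) / denom

# Toy network: a triangle a-b-c plus a pendant node d attached to c.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(edge_clustering_coefficient(adj, "a", "b"))  # edge inside a triangle -> 1.0
print(edge_clustering_coefficient(adj, "c", "d"))  # pendant edge -> 0.0
```

Edges with high ECC sit inside densely connected modules, which is why the score is used as a proxy for the essentiality of a protein's interactions.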
  • Min Li · Yu Lu · Jianxin Wang · Fang-Xiang Wu · Yi Pan
    ABSTRACT: Essential proteins are indispensable for cellular life. Identifying them is of great significance: it can help us understand the minimal requirements for cellular life and is also very important for drug design. However, identifying essential proteins experimentally is typically time-consuming and expensive. With the development of high-throughput technology in the post-genomic era, more and more protein-protein interaction data have become available, making it possible to study essential proteins at the network level. A series of computational approaches have been proposed for predicting essential proteins based on network topology, and most of them use network centralities. In this paper, we investigate the topological characteristics of essential proteins from a completely new perspective. To our knowledge, this is the first time that topology potential has been used to identify essential proteins from a protein-protein interaction (PPI) network. The basic idea is that each protein in the network can be viewed as a material particle which creates a potential field around itself, and the interaction of all proteins forms a topological field over the network. By defining and computing each protein's topology potential, we can obtain a more precise ranking that reflects the importance of proteins in the PPI network. The experimental results show that the topology potential-based methods TP and TP-NC outperform the traditional topology measures degree centrality (DC), betweenness centrality (BC), closeness centrality (CC), subgraph centrality (SC), eigenvector centrality (EC), information centrality (IC), and network centrality (NC) for predicting essential proteins. In addition, the performance of these centrality measures for identifying essential proteins in biological networks improves when they are controlled by topology potential.
    IEEE/ACM Transactions on Computational Biology and Bioinformatics 04/2015; 12(2):372-383. DOI:10.1109/TCBB.2014.2361350 · 1.44 Impact Factor
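The field analogy above can be made concrete. A common formulation of topology potential scores each node as phi(v) = sum over u of exp(−(d(u, v)/sigma)²), treating every protein as a unit-mass particle whose influence decays with shortest-path distance; the exact field function, masses, and sigma used by TP/TP-NC may differ, so this is only an assumed sketch:

```python
from collections import deque
from math import exp

# Hedged sketch of a topology-potential score: every node is a unit-mass
# particle, and its influence on node v decays as a Gaussian of the
# shortest-path distance d(u, v). (TP/TP-NC's exact field function may differ.)
def bfs_distances(adj, src):
    """Shortest-path (hop) distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        node = q.popleft()
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                q.append(nb)
    return dist

def topology_potential(adj, v, sigma=1.0):
    dist = bfs_distances(adj, v)
    return sum(exp(-(d / sigma) ** 2) for u, d in dist.items() if u != v)

# Toy PPI network: a star centered on "hub".
adj = {"hub": {"p1", "p2", "p3"}, "p1": {"hub"}, "p2": {"hub"}, "p3": {"hub"}}
scores = {v: topology_potential(adj, v) for v in adj}
assert max(scores, key=scores.get) == "hub"  # the hub accumulates the most potential
```

Ranking proteins by this score favors nodes that are close to many others, which matches the intuition that the potential field concentrates around structurally central proteins.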
  •
    ABSTRACT: Apolipoprotein M (ApoM) is predominantly located in high-density lipoprotein in human plasma. It has been demonstrated that ApoM expression can be regulated by several crucial nuclear receptors involved in bile acid metabolism. In the present study, by combining gene-silencing experiments, overexpression studies, and chromatin immunoprecipitation assays, we showed that ApoM positively regulates liver receptor homolog-1 (LRH-1) gene expression via direct binding to an LRH-1 promoter region (nucleotides -406/-197). In addition, we investigated the effects of the farnesoid X receptor agonist GW4064 on hepatic ApoM expression in vitro. In HepG2 cell cultures, both mRNA and protein levels of ApoM and LRH-1 decreased in a time-dependent manner in the presence of 1 μM GW4064, and the inhibitory effect was gradually attenuated after 24 hours. In conclusion, our findings provide supportive evidence that ApoM is a regulator of human LRH-1 transcription and further reveal the importance of ApoM as a critical regulator of bile acid metabolism.
    Drug Design, Development and Therapy 04/2015; 9:2375-82. DOI:10.2147/DDDT.S78496 · 3.03 Impact Factor
  •
    ABSTRACT: A variety of biological applications have been built on next-generation genome sequencing technologies, and alignment is the first step once the sequencing reads are obtained. In recent years, many software tools have been developed to align short reads to the reference genome efficiently and accurately. However, there are still many reads that cannot be mapped to the reference genome because they exceed the allowable number of mismatches. Moreover, besides the unmapped reads, reads with low mapping quality are also excluded from downstream analyses such as variant calling. If we can take advantage of the confident segments of these reads, not only can the alignment rate be improved, but more information is also provided for downstream analysis. This paper proposes a method, called RAUR (Re-align the Unmapped Reads), to re-align reads that cannot be mapped by alignment tools. First, it uses the base quality scores (reported by the sequencer) to identify the most confident and informative segment of each unmapped read while controlling the number of possible mismatches in the alignment. Then, combined with an alignment tool, RAUR re-aligns these segments. We ran RAUR on both simulated and real data with different read lengths. The results show that many reads which fail to be aligned by the most popular alignment tools (BWA and Bowtie2) can be correctly re-aligned by RAUR with similar precision. Even compared with BWA-MEM and the local mode of Bowtie2, which perform local alignment of long reads to improve the alignment rate, RAUR shows advantages in alignment rate and precision in some cases. Therefore, the trimming strategy used in RAUR is useful for improving the alignment rate of alignment tools for next-generation genome sequencing. All source code is available at http://netlab.csu.edu.cn/bioinformatics/RAUR.html.
    BMC Bioinformatics 03/2015; 16(Suppl 5):S8. DOI:10.1186/1471-2105-16-S5-S8 · 2.58 Impact Factor
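The quality-based trimming idea behind RAUR can be illustrated with a toy sketch: keep the longest contiguous run of bases whose Phred quality meets a threshold, and re-align only that segment. RAUR's actual segment selection also bounds the number of allowed mismatches; the function name, threshold, and data below are purely illustrative:

```python
# Illustrative quality trimming: return the longest contiguous stretch of a read
# whose per-base Phred quality is at least min_q. (A simplification of RAUR's
# confident-segment selection, which additionally controls mismatch counts.)
def longest_confident_segment(read, quals, min_q=20):
    best = (0, 0)                                # half-open (start, end) of the best run
    start = None
    for i, q in enumerate(list(quals) + [-1]):   # sentinel quality closes the final run
        if q >= min_q and start is None:
            start = i                            # a confident run begins
        elif q < min_q and start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)                # longest run so far
            start = None
    return read[best[0]:best[1]]

read  = "ACGTACGTAC"
quals = [30, 30, 5, 30, 30, 30, 30, 5, 30, 30]
print(longest_confident_segment(read, quals))  # -> "TACG" (positions 3..6)
```

Re-aligning only such segments trades read length for confidence, which is why the abstract reports improved alignment rates at similar precision.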
  •
    ABSTRACT: Essential proteins are vitally important for cellular survival and development, and identifying them is a meaningful line of research in the post-genome era. The rapid increase in available protein-protein interaction (PPI) data has made it possible to detect protein essentiality at the network level, and a series of centrality measures have been proposed to discover essential proteins from PPI networks. However, PPI data obtained from large-scale, high-throughput experiments generally contain false positives, so the original PPI data alone are insufficient for identifying essential proteins; improving accuracy has become the central problem. In this paper, we propose a framework for identifying essential proteins from active PPI networks constructed with dynamic gene expression. First, we process the dynamic gene expression profiles using a time-dependent model and a time-independent model. Second, we construct an active PPI network based on co-expressed genes. Last, we apply six classical centrality measures to the active PPI network. For comparison, other prediction methods are also run on the active PPI network. The experimental results on the yeast network show that identifying essential proteins based on the active PPI network considerably improves the performance of centrality measures, both in the number of essential proteins identified and in identification accuracy. The results also indicate that most essential proteins are active.
    BMC Genomics 02/2015; 16 Suppl 3(Suppl 3):S1. DOI:10.1186/1471-2164-16-S3-S1 · 3.99 Impact Factor
  •
    ABSTRACT: Recently developed next-generation sequencing platforms not only decrease the cost of metagenomics data analysis but also greatly enlarge the size of metagenomic sequence datasets. A common bottleneck of available assemblers is that the trade-off between the noise of the resulting contigs and the gain in sequence length for better annotation has not received enough attention in large-scale sequencing projects, especially for datasets with low coverage and a large number of non-overlapping contigs. To address this limitation and improve both accuracy and efficiency, we develop a novel metagenomic sequence assembly framework, DIME, that takes a DIvide, conquer, and MErge strategy. In addition, we give two MapReduce implementations of DIME, DIME-cap3 and DIME-genovo, on the Apache Hadoop platform. For a systematic comparison of assembly performance, we tested DIME and five other popular short-read assembly programs, Cap3, Genovo, MetaVelvet, SOAPdenovo, and SPAdes, on four synthetic and three real metagenomic sequence datasets ranging from fifty thousand to a couple of million reads. The experimental results demonstrate that our method not only partitions the sequence reads with extremely high accuracy, but also reconstructs more bases, generates higher-quality assembled consensus, and yields higher assembly scores, including corrected N50 and BLAST score per base, than the other tools, with a nearly theoretical speed-up. The results indicate that DIME offers great improvement in assembly across a range of sequence abundances and is thus robust to decreasing coverage.
    Journal of computational biology: a journal of computational molecular cell biology 02/2015; 22(2):159-77. DOI:10.1089/cmb.2014.0251 · 1.74 Impact Factor
  •
    ABSTRACT: As the volume of data grows at an unprecedented rate, large-scale data mining and knowledge discovery present a tremendous challenge. Rough set theory, which has been used successfully in solving problems in pattern recognition, machine learning, and data mining, centers on the idea that a set of distinct objects may be approximated via a lower and an upper bound. In order to obtain the benefits that rough sets can provide for data mining and related tasks, efficient computation of these approximations is vital. The recently introduced cloud computing model MapReduce has gained a lot of attention from the scientific community for its applicability to large-scale data analysis. In previous research, we proposed a MapReduce-based method for computing approximations in parallel, which can efficiently process complete data but fails in the case of missing (incomplete) data. To address this shortcoming, three different parallel matrix-based methods are introduced to process large-scale, incomplete data. All of them are built on MapReduce and implemented on Twister, a lightweight MapReduce runtime system. The proposed parallel methods are then experimentally shown to be efficient for processing large-scale data.
    IEEE Transactions on Knowledge and Data Engineering 02/2015; 27(2):326-339. DOI:10.1109/TKDE.2014.2330821 · 2.07 Impact Factor
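The lower and upper approximations at the heart of rough set theory are simple to state: objects are grouped into indiscernibility classes by their attribute values; a class belongs to the lower approximation of a target set X if it lies entirely inside X, and to the upper approximation if it intersects X at all. A minimal sequential sketch (not the paper's MapReduce/matrix-based version) could look like:

```python
from collections import defaultdict

# Sequential sketch of rough-set approximations: objects with identical
# attribute tuples are indiscernible; their classes bound the target set X
# from below (certain membership) and above (possible membership).
def approximations(objects, attrs, X):
    """objects: iterable of ids; attrs: id -> attribute tuple; X: target set of ids."""
    classes = defaultdict(set)
    for o in objects:
        classes[attrs[o]].add(o)          # build indiscernibility classes
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= X:
            lower |= cls                  # class entirely inside X: certainly in X
        if cls & X:
            upper |= cls                  # class touches X: possibly in X
    return lower, upper

attrs = {1: ("a",), 2: ("a",), 3: ("b",), 4: ("c",)}
lower, upper = approximations({1, 2, 3, 4}, attrs, X={1, 3})
print(sorted(lower), sorted(upper))  # [3] [1, 2, 3]
```

Object 1 falls only in the upper approximation because it is indiscernible from object 2, which is outside X; handling missing attribute values (incomplete data) is exactly what the parallel matrix-based methods above generalize.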
  •
    ABSTRACT: This paper considers the performance problem of VoIP over 802.11e WLANs caused by the unfairness between uplink and downlink as well as by the inefficiency of EDCA. A novel medium access control scheme named BEDCA (Balanced EDCA) is presented, which provides service differentiation between the access point (AP) and the mobile stations (STAs) to enhance VoIP capacity. In BEDCA, an expression for the AP's contention window is derived that is a relatively constant value independent of the number of participating STAs, and the minimum contention window of the STAs is made traffic-aware by the proposed algorithm. The performance improvement of BEDCA is verified through intensive simulations, and the results show a capacity improvement of 82.1% compared to EDCA.
    International Journal of Distributed Sensor Networks 01/2015; 2015:1-11. DOI:10.1155/2015/235648 · 0.67 Impact Factor
  •
    ABSTRACT: Brain tumor segmentation aims to separate different tumor tissues, such as active cells, necrotic core, and edema, from the normal brain tissues of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). MRI-based brain tumor segmentation studies have attracted more and more attention in recent years due to the non-invasive imaging and good soft-tissue contrast of magnetic resonance imaging (MRI). After almost two decades of development, innovative approaches applying computer-aided techniques to brain tumor segmentation are becoming more and more mature and are coming closer to routine clinical application. The purpose of this paper is to provide a comprehensive overview of MRI-based brain tumor segmentation methods. First, a brief introduction to brain tumors and their imaging modalities is given. Then, the preprocessing operations and the state-of-the-art methods of MRI-based brain tumor segmentation are introduced. Moreover, the evaluation and validation of segmentation results are discussed. Finally, an objective assessment is presented, and future developments and trends for MRI-based brain tumor segmentation methods are addressed.
    Tsinghua Science & Technology 12/2014; 19(6):578-595. DOI:10.1109/TST.2014.6961028
  •
    ABSTRACT: Genome-wide association studies (GWASs) aim to identify genetic variants that are associated with disease by assaying and analyzing hundreds of thousands of single nucleotide polymorphisms (SNPs). Although traditional single-locus statistical approaches have been standardized and have led to many interesting findings, a substantial number of recent GWASs indicate that, for most disorders, individual SNPs explain only a small fraction of the genetic causes. Consequently, exploring multi-SNP interactions in the hope of discovering more significant associations has attracted more attention. Because of the huge search space for complicated multilocus interactions, many fast and effective methods have recently been proposed for detecting disease-associated epistatic interactions from GWAS data. In this paper, we provide a critical review and comparison of eight popular methods, i.e., BOOST, TEAM, epiForest, EDCF, SNPHarvester, epiMODE, MECPM, and MIC, which are used for detecting gene-gene interactions among genetic loci. In view of their assumptions about the data and their search strategies, we divide the methods into seven categories. Moreover, the evaluation methodologies, including detection power, disease models for simulation, sources of real GWAS data, and control of the false discovery rate, are elaborated as references for developers of new approaches. At the end of the paper, we summarize the methods and discuss future directions in genome-wide association studies for detecting epistatic interactions.
    Tsinghua Science & Technology 12/2014; 19(6):596-616. DOI:10.1109/TST.2014.6961029
  •
    ABSTRACT: In genome assembly, the primary issue is how to determine the upstream and downstream sequence regions of sequence seeds for constructing long contigs or scaffolds. When one sequence seed is extended, repetitive regions in the genome always cause multiple feasible extension candidates, which increases the difficulty of genome assembly. The universally accepted solution is to choose one candidate based on read overlaps and paired-end (mate-pair) reads. However, this solution faces difficulties with some complex repetitive regions. In addition, sequencing errors may produce false repetitive regions, and uneven sequencing depth leads some sequence regions to have too few or too many reads. All of these problems prevent existing assemblers from producing satisfactory assembly results. In this article, we develop an algorithm, called EPGA, which extracts paths from the De Bruijn graph for genome assembly. EPGA uses a new score function to evaluate extension candidates based on the distributions of reads and insert size. The distribution of reads can solve problems caused by sequencing errors and short repetitive regions, and by assessing the variation of the insert-size distribution, EPGA can solve problems introduced by some complex repetitive regions. To handle uneven sequencing depth, EPGA uses relative mapping to evaluate extension candidates. On real datasets, we compare the performance of EPGA and other popular assemblers. The experimental results demonstrate that EPGA can effectively obtain longer and more accurate contigs and scaffolds. EPGA is publicly available for download at https://github.com/bioinfomaticsCSU/EPGA. Contact: jxwang@csu.edu.cn.
    Bioinformatics 11/2014; 31(6). DOI:10.1093/bioinformatics/btu762 · 4.98 Impact Factor
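The De Bruijn graph from which EPGA extracts its paths has a simple construction: each read is cut into k-mers, and every (k−1)-mer prefix gets a directed edge to the corresponding (k−1)-mer suffix. The toy sketch below shows only this construction; EPGA's path extraction and its read/insert-size scoring are far beyond it:

```python
from collections import defaultdict

# Minimal De Bruijn graph construction: nodes are (k-1)-mers, and each k-mer
# in a read contributes one edge from its prefix to its suffix.
def de_bruijn(reads, k):
    graph = defaultdict(list)             # (k-1)-mer -> list of successor (k-1)-mers
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

g = de_bruijn(["ACGTAC"], k=3)
print(dict(g))  # {'AC': ['CG'], 'CG': ['GT'], 'GT': ['TA'], 'TA': ['AC']}
```

Repeats in the genome show up as nodes with multiple outgoing edges, which is exactly the branching that forces an assembler to score extension candidates rather than walk a unique path.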
  •
    ABSTRACT: Centrality analysis has become a principal method for identifying essential proteins in biological networks. Here we present CytoNCA, a Cytoscape plugin integrating calculation, evaluation, and visualization analysis for multiple centrality measures. (i) CytoNCA supports eight different centrality measures, each of which can be applied to both weighted and unweighted biological networks. (ii) It allows users to upload biological information on both nodes and edges in the network, integrating biological data with topological data to detect specific nodes. (iii) CytoNCA offers multiple potent visualization analysis modules, which generate various forms of output such as graphs, tables, and charts, and analyze associations among all measures. (iv) It can be used to quantitatively assess the calculation results and to evaluate their accuracy with statistical measures. (v) Besides the current eight centrality measures, biological characteristics from other sources can also be analyzed and assessed by CytoNCA. This makes CytoNCA an excellent tool for calculating centrality and for evaluating and visualizing biological networks. CytoNCA is available at http://apps.cytoscape.org/apps/cytonca.
    Bio Systems 11/2014; 127C:67-72. DOI:10.1016/j.biosystems.2014.11.005 · 1.55 Impact Factor
  •
    ABSTRACT: Identification of disease-causing genes among a large number of candidates is a fundamental challenge in human disease studies. However, it is still time-consuming and laborious to determine the real disease-causing genes by biological experiments. With advances in high-throughput techniques, a large number of protein-protein interactions have been produced. Therefore, to address this issue, several methods based on protein interaction networks have been proposed. In this paper, we propose a shortest-path-based algorithm, named SPranker, to prioritize disease-causing genes in protein interaction networks. Considering the fact that diseases with similar phenotypes are generally caused by functionally related genes, we further propose an improved algorithm, SPGOranker, that integrates the semantic similarity of GO annotations. SPGOranker considers not only the topological similarity between protein pairs in a protein interaction network but also their functional similarity. The proposed algorithms SPranker and SPGOranker were applied to 1598 known orphan disease-causing genes from 172 orphan diseases and compared with three state-of-the-art approaches, ICN, VS, and RWR. The experimental results show that SPranker and SPGOranker outperform ICN, VS, and RWR for the prioritization of orphan disease-causing genes. Importantly, in a case study of severe combined immunodeficiency, SPranker and SPGOranker predicted several novel causal genes.
    Science China. Life sciences 10/2014; 57(11). DOI:10.1007/s11427-014-4747-6 · 1.69 Impact Factor
  •
    ABSTRACT: Many computational methods have been proposed to identify essential proteins using the topological features of interactome networks. However, the precision of essential protein discovery still needs to be improved. Research shows that the majority of hubs (essential proteins) in the yeast interactome network are essential because of their involvement in essential complex biological modules, and that hubs can be classified into two categories: date hubs and party hubs. In this study, combining gene expression profiles, we propose a new method to predict essential proteins based on overlapping essential modules, named POEM. In POEM, the original protein interactome network is partitioned into many overlapping essential modules, and the frequencies and weighted degrees of proteins in these modules are used to decide which category a protein belongs to. The comparative results show that POEM outperforms the classical centrality measures degree centrality (DC), information centrality (IC), eigenvector centrality (EC), subgraph centrality (SC), betweenness centrality (BC), closeness centrality (CC), and edge clustering coefficient centrality (NC), as well as two newly proposed essential protein prediction methods, PeC and CoEWC. The experimental results indicate that the precision of predicting essential proteins can be improved by considering the modularity of proteins and integrating gene expression profiles with network topological features.
    IEEE Transactions on NanoBioscience 08/2014; 13(4). DOI:10.1109/TNB.2014.2337912 · 2.31 Impact Factor
  •
    ABSTRACT: Accurate annotation of protein functions is still a big challenge for understanding life in the post-genomic era. Recently, methods have been developed that incorporate the functional similarity of GO terms into the protein-protein interaction (PPI) network, based on the observations that a protein tends to share some functions with the proteins it interacts with in the PPI network, and that two similar GO terms in the functional interrelationship network usually co-annotate some common proteins. However, these methods annotate protein functions by considering neighbors of proteins and of GO terms at the same level, and few attempts have been made to investigate the difference between the two. Given the topological and structural differences between the PPI network and the functional interrelationship network, we first investigate at which level neighbors of proteins tend to have functional associations and at which level neighbors of GO terms usually co-annotate common proteins. Then, an unbalanced bi-random walk (UBiRW) algorithm, which iteratively walks different numbers of steps in the two networks, is adopted to find protein-GO term associations from known associations. Experiments are carried out on S. cerevisiae data. The results show that our method achieves better prediction performance not only than methods that use only PPI network data, but also than methods that consider neighbors of proteins and of GO terms at the same level.
    Current Protein and Peptide Science 07/2014; 15(6). DOI:10.2174/1389203715666140724085224 · 3.15 Impact Factor

Publication Stats

3k Citations
319.69 Total Impact Points


  • 2015
    • The Second Xiangya Hospital of Central South University
      Changsha, Hunan, China
  • 1970–2015
    • Georgia State University
      • • Department of Biology
      • • Department of Computer Science
      Atlanta, Georgia, United States
  • 2013
    • University of Connecticut
      • Department of Computer Science and Engineering
      Storrs, CT, United States
  • 2010–2013
    • Central South University
      • • School of Biological Science and Technology
      • • School of Information Science and Engineering
      Changsha, Hunan, China
  • 2009
    • University of Central Arkansas
      • Department of Computer Science
      Arkansas, United States
    • Southeast University (China)
      • School of Computer Science and Engineering
      Nanjing, Jiangsu, China
  • 2003–2008
    • Southwest Jiaotong University
      • School of Information Science and Technology
      Chengdu, Sichuan, China
  • 2006–2007
    • Jiangsu University of Science and Technology
      Zhenjiang, Jiangsu, China
    • University of Georgia
      Athens, Georgia, United States
    • Nanjing University
      Nanjing, Jiangsu, China
  • 2005
    • Nanyang Technological University
      Singapore
  • 2004–2005
    • The University of Memphis
      • Department of Computer Science
      Memphis, TN, United States
    • Georgia Institute of Technology
      • College of Computing
      Atlanta, Georgia, United States
  • 2002
    • University of Tsukuba
      • Centre for Computational Sciences
      Tsukuba, Ibaraki, Japan
    • The University of Aizu
      • School of Computer Science and Engineering
      Aizuwakamatsu, Fukushima, Japan
  • 1970–2001
    • University of Dayton
      • Department of Computer Science
      Dayton, Ohio, United States
  • 1999
    • Griffith University
      Southport, Queensland, Australia
  • 1997–1999
    • Louisiana State University
      • Department of Computer Science
      Baton Rouge, Louisiana, United States
    • State University of New York at New Paltz
      New Paltz, New York, United States