Yi Pan

Central South University, Changsha, Hunan, China


Publications (329) · 220.9 Total Impact

  •
    ABSTRACT: In genome assembly, the primary issue is how to determine upstream and downstream sequence regions of sequence seeds for constructing long contigs or scaffolds. When extending one sequence seed, repetitive regions in the genome always cause multiple feasible extension candidates, which increases the difficulty of genome assembly. The universally accepted solution is choosing one based on read overlaps and paired-end (mate-pair) reads. However, this solution faces difficulties with some complex repetitive regions. In addition, sequencing errors may produce false repetitive regions, and uneven sequencing depth leads some sequence regions to have too few or too many reads. All the aforementioned problems prevent existing assemblers from obtaining satisfactory assembly results. In this article, we develop an algorithm, called EPGA, which extracts paths from the De Bruijn graph for genome assembly. EPGA uses a new score function to evaluate extension candidates based on the distributions of reads and insert size. The distribution of reads can solve problems caused by sequencing errors and short repetitive regions. Through assessing the variation of the distribution of insert size, EPGA can solve problems introduced by some complex repetitive regions. To handle uneven sequencing depth, EPGA uses relative mapping to evaluate extension candidates. On real datasets, we compare the performance of EPGA and other popular assemblers. The experimental results demonstrate that EPGA can effectively obtain longer and more accurate contigs and scaffolds. EPGA is publicly available for download at https://github.com/bioinfomaticsCSU/EPGA.
    Bioinformatics (Oxford, England). 11/2014;
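The EPGA implementation itself is available at the GitHub link above. As a minimal illustrative sketch (not EPGA's algorithm), the De Bruijn graph that such assemblers extract paths from can be built by linking overlapping k-mers:

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Nodes are (k-1)-mers; each k-mer in a read adds a prefix -> suffix edge."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return dict(graph)

# A contig corresponds to an unambiguous path through this graph;
# repeats show up as nodes with multiple outgoing edges.
g = de_bruijn_graph(["ACGTAC", "CGTACG"], 3)
# g["AC"] == ["CG", "CG"]  (the k-mer ACG occurs in both reads)
```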
  •
    ABSTRACT: Nowadays, centrality analysis has become a principal method for identifying essential proteins in biological networks. Here we present CytoNCA, a Cytoscape plugin integrating calculation, evaluation and visualization analysis for multiple centrality measures. (i) CytoNCA supports eight different centrality measures, each of which can be applied to both weighted and unweighted biological networks. (ii) It allows users to upload biological information about both nodes and edges in the network, and to integrate biological data with topological data to detect specific nodes. (iii) CytoNCA offers multiple potent visualization analysis modules, which generate various forms of output such as graphs, tables, and charts, and analyze associations among all measures. (iv) It can be utilized to quantitatively assess the calculation results and to evaluate their accuracy by statistical measures. (v) Besides the current eight centrality measures, biological characters from other sources can also be analyzed and assessed by CytoNCA. This makes CytoNCA an excellent tool for calculating centrality and for evaluating and visualizing biological networks. http://apps.cytoscape.org/apps/cytonca.
    Bio Systems 11/2014; 127C:67-72. · 1.27 Impact Factor
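CytoNCA itself is a Cytoscape plugin; as a rough illustration of what two of its eight measures compute (a hypothetical toy graph, not CytoNCA code), degree and closeness centrality on an unweighted network can be sketched as:

```python
from collections import deque

def degree_centrality(adj):
    """Degree centrality: fraction of the other nodes a node interacts with."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Closeness centrality: (n-1) divided by the sum of BFS distances."""
    scores = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                       # BFS from src
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total = sum(dist.values())
        scores[src] = (len(dist) - 1) / total if total else 0.0
    return scores

# Toy PPI network: B is the hub connecting A and C.
ppi = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
# degree_centrality(ppi)["B"] == 1.0
```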
  •
    ABSTRACT: Identification of disease-causing genes among a large number of candidates is a fundamental challenge in human disease studies. However, it is still time-consuming and laborious to determine the real disease-causing genes by biological experiments. With the advances of the high-throughput techniques, a large number of protein-protein interactions have been produced. Therefore, to address this issue, several methods based on protein interaction network have been proposed. In this paper, we propose a shortest path-based algorithm, named SPranker, to prioritize disease-causing genes in protein interaction networks. Considering the fact that diseases with similar phenotypes are generally caused by functionally related genes, we further propose an improved algorithm SPGOranker by integrating the semantic similarity of GO annotations. SPGOranker not only considers the topological similarity between protein pairs in a protein interaction network but also takes their functional similarity into account. The proposed algorithms SPranker and SPGOranker were applied to 1598 known orphan disease-causing genes from 172 orphan diseases and compared with three state-of-the-art approaches, ICN, VS and RWR. The experimental results show that SPranker and SPGOranker outperform ICN, VS, and RWR for the prioritization of orphan disease-causing genes. Importantly, for the case study of severe combined immunodeficiency, SPranker and SPGOranker predict several novel causal genes.
    Science China. Life sciences. 10/2014;
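As a simplified sketch of the shortest-path idea (not the published SPranker/SPGOranker algorithms, which also integrate GO semantic similarity), candidate genes can be ranked by their hop distance to the nearest known disease gene using a multi-source BFS:

```python
from collections import deque

def rank_candidates(adj, known_genes, candidates):
    """Rank candidate genes by BFS hop distance to the nearest known disease gene."""
    dist = {g: 0 for g in known_genes}      # multi-source BFS from all seeds
    queue = deque(known_genes)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    # Closer candidates rank higher; unreachable ones sink to the bottom.
    return sorted(candidates, key=lambda g: dist.get(g, float("inf")))

net = {"d1": ["x", "y"], "x": ["d1", "z"], "y": ["d1"], "z": ["x"]}
ranking = rank_candidates(net, ["d1"], ["z", "x"])
# ranking == ["x", "z"]: x is one hop from the known gene d1, z is two.
```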
  •
    ABSTRACT: Many computational methods have been proposed to identify essential proteins by using the topological features of interactome networks. However, the precision of essential protein discovery still needs to be improved. Research shows that the majority of hubs (essential proteins) in the yeast interactome network are essential due to their involvement in essential complex biological modules, and that hubs can be classified into two categories: date hubs and party hubs. In this study, combining gene expression profiles, we propose a new method to predict essential proteins based on overlapping essential modules, named POEM. In POEM, the original protein interactome network is partitioned into many overlapping essential modules. The frequencies and weighted degrees of proteins in these modules are employed to decide which category a protein belongs to. The comparative results show that POEM outperforms the classical centrality measures: Degree Centrality (DC), Information Centrality (IC), Eigenvector Centrality (EC), Subgraph Centrality (SC), Betweenness Centrality (BC), Closeness Centrality (CC), Edge Clustering Coefficient Centrality (NC), and two newly proposed essential protein prediction methods: PeC and CoEWC. Experimental results indicate that the precision of predicting essential proteins can be improved by considering the modularity of proteins and integrating gene expression profiles with network topological features.
    IEEE transactions on nanobioscience. 08/2014;
  •
    ABSTRACT: Accurate annotation of protein functions is still a big challenge for understanding life in the post-genomic era. Recently, some methods have been developed to solve the problem by incorporating functional similarity of GO terms into protein-protein interaction (PPI) network, which are based on the observation that a protein tends to share some common functions with proteins that interact with it in PPI network, and two similar GO terms in functional interrelationship network usually co-annotate some common proteins. However, these methods annotate functions of proteins by considering at the same level neighbors of proteins and GO terms respectively, and few attempts have been made to investigate their difference. Given the topological and structural difference between PPI network and functional interrelationship network, we firstly investigate at which level neighbors of proteins tend to have functional associations and at which level neighbors of GO terms usually co-annotate some common proteins. Then, an unbalanced Bi-random walk (UBiRW) algorithm which iteratively walks different number of steps in the two networks is adopted to find protein-GO term associations according to some known associations. Experiments are carried out on S. cerevisiae data. The results show that our method achieves better prediction performance not only than methods that only use PPI network data, but also than methods that consider at the same level neighbors of proteins and of GO terms.
    Current Protein and Peptide Science 07/2014; · 2.33 Impact Factor
  •
    ABSTRACT: Identification of essential proteins is very important for understanding the minimal requirements for cellular life and is also necessary for a series of practical applications, such as drug design. With the advances in high-throughput technologies, a large number of protein-protein interactions are available, which makes it possible to detect proteins' essentialities at the network level. Considering that most species already have a number of known essential proteins, we proposed a new prior knowledge-based scheme to discover new essential proteins from protein interaction networks. Based on the new scheme, two essential protein discovery algorithms, CPPK and CEPPK, were developed. CPPK predicts new essential proteins based on network topology, and CEPPK detects new essential proteins by integrating network topology and gene expressions. The performances of CPPK and CEPPK were validated based on the protein interaction network of Saccharomyces cerevisiae. The experimental results showed that the prior knowledge of known essential proteins was effective for improving the prediction precision. The precisions of CPPK and CEPPK clearly exceeded those of ten previously proposed essential protein discovery methods: Degree Centrality (DC), Betweenness Centrality (BC), Closeness Centrality (CC), Subgraph Centrality (SC), Eigenvector Centrality (EC), Information Centrality (IC), Bottle Neck (BN), Density of Maximum Neighborhood Component (DMNC), Local Average Connectivity-based method (LAC), and Network Centrality (NC). In particular, CPPK achieved 40% improvement in precision over BC, CC, SC, EC, and BN, and CEPPK performed even better. CEPPK was also compared to four other methods (EPC, ORFL, PeC, and CoEWC) which are not node centralities, and CEPPK was shown to achieve the best results.
    Methods 01/2014; · 3.64 Impact Factor
  • Source
    ABSTRACT: Identification of protein complexes from protein-protein interaction networks has become a key problem for understanding cellular life in the postgenomic era. Many computational methods have been proposed for identifying protein complexes. Up to now, the existing computational methods have mostly been applied to static PPI networks. However, proteins and their interactions are dynamic in reality. Identifying dynamic protein complexes is more meaningful and challenging. In this paper, a novel algorithm, named DPC, is proposed to identify dynamic protein complexes by integrating PPI data and gene expression profiles. According to the Core-Attachment assumption, proteins which are always active in the molecular cycle are regarded as core proteins. The protein-complex cores are identified from these always-active proteins by detecting dense subgraphs. Final protein complexes are extended from the protein-complex cores by adding attachments based on a topological character of "closeness" and dynamic meaning. The protein complexes produced by our algorithm DPC contain two parts: a static core expressed throughout the molecular cycle and short-lived dynamic attachments. The proposed algorithm DPC was applied to the data of Saccharomyces cerevisiae, and the experimental results show that DPC outperforms CMC, MCL, SPICi, HC-PIN, COACH, and Core-Attachment based on the validation of matching with known complexes and hF-measures.
    BioMed Research International 01/2014; 2014:375262. · 2.71 Impact Factor
  •
    ABSTRACT: Most biological processes are carried out by protein complexes. A substantial number of false positives in protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. These methods encode the reliabilities (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on the weighted density, ClusterBFS detects protein complexes in the weighted network by the breadth-first search algorithm, which originates from a given seed protein used as the starting point. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes.
    BioMed research international. 01/2014; 2014:354539.
  •
    ABSTRACT: Local protein structure prediction is one of the important tasks in bioinformatics research. In order to further enhance the performance of local protein structure prediction, we propose the Multi-level Clustering Support Vector Machine Trees (MLSVMTs). Building on the multi-cluster tree structure, the MLSVMTs model uses multiple SVMs, each of which is customized to learn the unique sequence-to-structure relationship for one cluster. Both the combined 5 x 2 CV F test and the independent test show that the local structure prediction accuracy of MLSVMTs is significantly better than that of one-level K-means clustering, Multi-level clustering and Clustering Support Vector Machines.
    International Journal of Data Mining and Bioinformatics 01/2014; 9(2):172-98. · 0.39 Impact Factor
  •
    ABSTRACT: The role of exosomes shed from Mycobacterium avium sp. paratuberculosis-infected macrophages in intercellular communication processes was examined. We compared the responses of resting macrophages infected with Mycobacterium avium sp. paratuberculosis with those of resting macrophages treated with exosomes previously released from macrophages infected with Mycobacterium avium sp. paratuberculosis. Some protein components of exosomes released from resting macrophages infected with Mycobacterium avium sp. paratuberculosis showed significantly differential expression compared with exosomes from uninfected macrophages. Both Mycobacterium avium sp. paratuberculosis and exosomes from infected cells enhanced the expression of CD80 and CD86 and the secretion of TNF-α and IFN-γ by macrophages. This suggests that exosomes from infected macrophages may be carriers of molecules, e.g. bacterial antigens and/or components from infected macrophages, that can elicit responses in resting cells. Two-dimensional analysis of the proteins present in exosomes from Mycobacterium avium sp. paratuberculosis-infected macrophages compared with those from resting cells resulted in the identification by MALDI TOF/TOF mass spectrometry of the following differentially expressed proteins: two actin isoforms, guanine nucleotide-binding protein β-1, cofilin-1 and peptidyl-prolyl cis-trans isomerase A. The possible relevance of the changes observed and the biological functions of the proteins differentially present are discussed.
    Microbes and Infection 12/2013; · 2.92 Impact Factor
  • Source
    ABSTRACT: Reliable inference of transcription regulatory networks is a challenging task in computational biology. Network component analysis (NCA) has become a powerful scheme to uncover regulatory networks behind complex biological processes. However, the performance ...
    IEEE/ACM transactions on computational biology and bioinformatics / IEEE, ACM 11/2013; 5(3):321-2. · 2.25 Impact Factor
  •
    ABSTRACT: Protein complexes are a cornerstone of many biological processes. Protein-protein interaction (PPI) data enable a number of computational methods for predicting protein complexes. However, the insufficiency of the PPI data significantly lowers the accuracy of computational methods. In the current work, the authors develop a novel method named clustering based on multiple biological information (CMBI) to discover protein complexes via the integration of multiple biological resources, including gene expression profiles, essential protein information and PPI data. First, CMBI defines the functional similarity of each pair of interacting proteins based on the edge-clustering coefficient and the Pearson correlation coefficient. Second, CMBI selects essential proteins as seeds to build the protein complexes. A redundancy-filtering procedure is performed to eliminate redundant complexes. In addition to the essential proteins, CMBI also uses other proteins as seeds to expand protein complexes. To check the performance of CMBI, the authors compare the complexes discovered by CMBI with the ones found by other techniques by matching the predicted complexes against the reference complexes. The authors subsequently use GO::TermFinder to analyse the complexes predicted by the various methods. Finally, the effect of parameters T and R is investigated. The results from GO functional enrichment and matching analyses show that CMBI performs significantly better than the state-of-the-art methods.
    IET Systems Biology 10/2013; 7(5):223-30. · 1.54 Impact Factor
  •
    ABSTRACT: Identifying essential proteins is very important for understanding the minimal requirements of cellular survival and development. Fast growth in the amount of available protein-protein interactions has produced unprecedented opportunities for detecting protein essentiality at the network level. A series of centrality measures have been proposed to discover essential proteins based on network topology. Unfortunately, the protein-protein interactions produced by high-throughput experiments generally have high false positive rates. Moreover, most centrality measures based on network topology are sensitive to false positives. We therefore propose a new method for evaluating the confidence of each interaction based on the combination of a logistic regression-based model and function similarity. Nine standard centrality measures in the weighted network were redefined in this paper. The experimental results on a yeast protein interaction network show that the weighting method improved the performance of centrality measures considerably. More essential proteins were discovered by the weighted centrality measures than by the original centrality measures used in the unweighted network. Improvements of about 20% were even obtained for closeness centrality and subgraph centrality.
    Journal of Bioinformatics and Computational Biology 06/2013; 11(3):1341002. · 0.93 Impact Factor
  •
    ABSTRACT: Due to the existence of many probabilistic lossy links in Wireless Sensor Networks (WSNs) (Liu et al., 2010) [25], it is not practical to study the network capacity issue under the Deterministic Network Model (DNM). A more realistic one is actually the Probabilistic Network Model (PNM). Therefore, we study the Snapshot Data Aggregation (SDA) problem, the Continuous Data Aggregation (CDA) problem, and their achievable capacities for probabilistic WSNs under both the independent and identically distributed (i.i.d.) node distribution model and the Poisson point distribution model in this paper. First, we partition a network into cells and use two vectors to further partition these cells into equivalent color classes. Subsequently, based on the partitioned cells and equivalent color classes, we propose a Cell-based Aggregation Scheduling (CAS) algorithm for the SDA problem in probabilistic WSNs. Theoretical analysis of CAS and the upper bound capacity of the SDA problem show that the achievable capacities of CAS are all order optimal in the worst case, the average case, and the best case. For the CDA problem in probabilistic WSNs, we propose a Level-based Aggregation Scheduling (LAS) algorithm. LAS gathers the aggregation values of continuous snapshots by forming a data aggregation/transmission pipeline on the segments and scheduling all the cell-levels in a cell-level class concurrently. By theoretical analysis of LAS and the upper bound capacity of the CDA problem, we prove that LAS also successfully achieves order optimal capacities in all the cases. The extensive simulation results further validate the effectiveness of CAS and LAS.
    Journal of Parallel and Distributed Computing 06/2013; 73(6):729–745. · 1.12 Impact Factor
  • Source
    ABSTRACT: BACKGROUND: Identifying protein complexes from a protein-protein interaction network is fundamental for understanding the mechanisms of cellular components and protein functions. At present, many methods to identify protein complexes are mainly based on topological characteristics or functional similarity features, neglecting the fact that proteins must be in their active forms to interact with others and that the formation of a protein complex follows a just-in-time mechanism. RESULTS: This paper first presents a protein complex formation model based on the just-in-time mechanism. By investigating known protein complexes combined with gene expression data, we find that most protein complexes can be formed in continuous time points, and the average overlapping rate of the known complexes during formation is large. A method is proposed to refine the protein complexes predicted by clustering algorithms based on the protein complex formation model and the properties of known protein complexes. After refinement, the number of known complexes that are matched by predicted complexes, Sensitivity, Specificity, and f-measure are significantly improved, compared with those of the original predicted complexes. CONCLUSION: The refining method can discard spurious proteins by protein activity and generate new complexes by the just-in-time assembly mechanism, which enhances the ability to predict complexes.
    BMC Systems Biology 03/2013; 7(1):28. · 2.98 Impact Factor
  •
    ABSTRACT: Objective To explore the mechanism underlying the molecular immune response of macrophages stimulated with exosomes [(+)exosome] from macrophages after Mycobacterium avium (M. avium) infection and to analyze the differential protein components of the exosomes. Methods The culture supernatants of M. avium-infected macrophages and uninfected ones were collected, and exosomes were harvested from the supernatants by frozen ultracentrifugation. The concentrations of IFN-γ and TNF-α in supernatants were detected by enzyme-linked immunosorbent assay (ELISA), and CD80 and CD86 expressions on macrophages were analyzed by flow cytometry after the macrophages were stimulated with exosomes. Meanwhile, 2-DE MALDI TOF/TOF MS was used to identify differentially expressed exosome proteins between the M. avium-infected group and the uninfected group. Results IFN-γ and TNF-α concentrations were increased in the supernatant after stimulation with (+)exosome, and CD80 and CD86 were raised on the macrophage surface by stimulation with (+)exosome. With 2-DE MALDI TOF/TOF MS analysis, we obtained 18 differentially expressed proteins, and 12 proteins were identified successfully. Conclusion (+)exosome induces TNF-α and IFN-γ secretion from macrophages and results in the promotion of the inflammatory response. In addition, (+)exosome enhances CD80 and CD86 protein expression. The functions of the differentially expressed proteins we identified are closely related to the cytoskeleton, protein synthesis and processing, and the inflammatory response.
    Xi bao yu fen zi mian yi xue za zhi = Chinese journal of cellular and molecular immunology 02/2013; 29(2):123-6.
  • Source
    ABSTRACT: Nowadays, with the volume of data growing at an unprecedented rate, large-scale data mining and knowledge discovery have become a new challenge. Rough set theory for knowledge acquisition has been successfully applied in data mining. The recently introduced MapReduce technique has received much attention from both the scientific community and industry for its applicability in big data analysis. To mine knowledge from big data, we present parallel large-scale rough set based methods for knowledge acquisition using MapReduce in this paper. We implemented them on several representative MapReduce runtime systems: Hadoop, Phoenix and Twister. Performance comparisons on these runtime systems are reported in this paper. The experimental results show that (1) the computational time is mostly minimum on Twister while employing the same cores; (2) Hadoop has the best speedup for larger data sets; (3) Phoenix has the best speedup for smaller data sets. The excellent speedups also demonstrate that the proposed parallel methods can effectively process very large data on different runtime systems. Pitfalls and advantages of these runtime systems are also illustrated through our experiments, which are helpful for users to decide which runtime system should be used in their applications.
    International Journal of Approximate Reasoning 01/2013; · 1.73 Impact Factor
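The MapReduce model these methods build on can be sketched serially in a few lines (this toy word count stands in for the rough set computations; real runtimes such as Hadoop, Phoenix and Twister distribute the same two phases across workers):

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Minimal serial MapReduce: map each record, shuffle by key, then reduce."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)                  # the shuffle/group phase
    return {k: reducer(k, vals) for k, vals in groups.items()}

counts = map_reduce(
    ["big data", "big rough sets"],
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: sum(ones),
)
# counts == {"big": 2, "data": 1, "rough": 1, "sets": 1}
```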
  •
    ABSTRACT: Security and privacy preservation issues are prerequisites for vehicular ad hoc networks. Recently, a secure and privacy-enhancing communication scheme (SPECS) was proposed, focusing on intervehicle communications. SPECS provided a software-based solution to satisfy the privacy requirement and gave lower message overhead and a higher success rate than previous solutions in the message verification phase. SPECS also presented the first group communication protocol to allow vehicles to authenticate and securely communicate with others in a group of known vehicles. Unfortunately, we find that SPECS is vulnerable to impersonation attacks. SPECS has a flaw such that a malicious vehicle can force arbitrary vehicles to broadcast fake messages to other vehicles, or a malicious vehicle in the group can even impersonate another group member to send fake messages securely among themselves. In this paper, we provide a secure scheme that achieves the security and privacy requirements and overcomes the weaknesses of SPECS. Moreover, we show the efficiency merits of our scheme through performance evaluations in terms of verification delay and transmission overhead.
    IEEE Transactions on Information Forensics and Security 01/2013; 8(11):1860-1875. · 1.90 Impact Factor
  •
    ABSTRACT: MapReduce has become one of the most popular programming models for big data analysis in cloud systems due to its simplicity for implementing data-parallel applications. There are several platforms for users to develop their applications based on the MapReduce framework, such as Hadoop and Twister. Hadoop is one of the most popular runtime systems for MapReduce applications and is supported by various organizations; however, the original design of Hadoop did not efficiently support the iterative execution required by many scientific applications. Twister, another system for iterative MapReduce, was introduced and designed to facilitate iterative applications based on the MapReduce framework. It has been shown that Twister performs better than Hadoop on some applications such as Pairwise Distance Calculation (Smith-Waterman-Gotoh distance). Automatic translation between two programming languages in cloud platforms can help developers move their applications from one cloud to another without changing code. In this paper, we propose a simple Hadoop-to-Twister translator named H2T, which is designed for converting simple Hadoop applications into Twister applications. The experimental results show that the translated Twister applications are much faster than the original Hadoop applications.
    Biometrics and Security Technologies (ISBAST), 2013 International Symposium on; 01/2013
  • Ken D Nguyen, Yi Pan
    ABSTRACT: A common and cost-effective mechanism to identify the functionalities, structures, or relationships between species is multiple sequence alignment, in which DNA/RNA/protein sequences are arranged and aligned so that similarities between sequences are clustered together. Correctly identifying and aligning these biological sequence similarities helps in tasks ranging from unraveling the mystery of species evolution to drug design. We present our knowledge-based multiple sequence alignment (KB-MSA) technique that utilizes existing knowledge databases such as SWISSPROT, GENBANK, or HOMSTRAD to provide a more realistic and reliable sequence alignment. We also provide a modified version of this algorithm (CB-MSA) that utilizes sequence consistency information when sequence knowledge databases are not available. Our benchmark tests on the BAliBASE, PREFAB, HOMSTRAD, and SABMARK references show accuracy improvements of up to 10 percent on twilight data sets against many leading alignment tools such as ISPALIGN, PADT, CLUSTALW, MAFFT, PROBCONS, and T-COFFEE.
    IEEE/ACM transactions on computational biology and bioinformatics / IEEE, ACM 01/2013; 10(4):884-896. · 2.25 Impact Factor
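Multiple sequence aligners such as those benchmarked above build on pairwise dynamic-programming alignment. As a minimal sketch (hypothetical scoring parameters, not KB-MSA itself), the Needleman-Wunsch global alignment score can be computed with one DP row at a time:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score, keeping one DP row at a time."""
    prev = [j * gap for j in range(len(b) + 1)]    # aligning a prefix of b to ""
    for i, ca in enumerate(a, 1):
        curr = [i * gap]                           # aligning a prefix of a to ""
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

# Identical sequences score one match per residue:
# nw_score("GATTACA", "GATTACA") == 7
```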

Publication Stats

2k Citations
220.90 Total Impact Points

Institutions

  • 2010–2014
    • Central South University
      • • School of Information Science and Engineering
      • • School of Biological Science and Technology
      Changsha, Hunan, China
  • 1970–2014
    • Georgia State University
      • Department of Computer Science
      Atlanta, Georgia, United States
  • 2011–2013
    • University of Connecticut
      • Department of Computer Science and Engineering
      Storrs, CT, United States
    • Clayton State University
      Georgia, United States
  • 2008–2010
    • University of Central Arkansas
      • Department of Computer Science
      Arkansas, United States
    • University of Waterloo
      • Department of Electrical & Computer Engineering
      Waterloo, Ontario, Canada
    • Huazhong (Central China) Normal University
      • Department of Computer Science
      Wuhan, Hubei, China
    • National Taiwan University of Science and Technology
      • Department of Computer Science and Information Engineering
      Taipei, Taipei, Taiwan
    • University of Wisconsin–Madison
      Madison, Wisconsin, United States
  • 2003–2010
    • Southwest Jiaotong University
      • Institute of Mobile Communications
      Hua-yang, Sichuan, China
  • 2005–2009
    • Southeast University (China)
      • School of Computer Science and Engineering
      Nanjing, Jiangsu, China
    • The University of Memphis
      • Department of Computer Science
      Memphis, TN, United States
  • 2007
    • Nanyang Technological University
      • School of Computer Engineering
      Singapore, Singapore
    • University of South Carolina
      Columbia, South Carolina, United States
    • Sun Yat-Sen University
      Guangzhou, Guangdong, China
    • Drexel University
      • iSchool at Drexel, College of Information Science and Technology
      Philadelphia, PA, United States
  • 2006–2007
    • Jiangsu University of Science and Technology
      Zhenjiang, Jiangsu, China
    • University of Georgia
      Athens, Georgia, United States
    • Nanjing University
      Nanjing, Jiangsu, China
  • 2002–2005
    • The University of Aizu
      • School of Computer Science and Engineering
      Aizuwakamatsu, Fukushima, Japan
    • University of Tsukuba
      • Centre for Computational Sciences
      Tsukuba, Ibaraki, Japan
  • 2001
    • University of Missouri - Kansas City
      Kansas City, Missouri, United States
  • 1970–2001
    • University of Dayton
      • Department of Computer Science
      Dayton, Ohio, United States
  • 1999
    • Griffith University
      Southport, Queensland, Australia
  • 1998
    • State University of New York at New Paltz
      New Paltz, New York, United States
    • University of Vermont
      • Department of Computer Science
      Burlington, VT, United States