Chong-Yung Chi

Virginia Polytechnic Institute and State University, Blacksburg, VA, United States

Publications (161) · 238.1 Total Impact

  •
    ABSTRACT: In blind hyperspectral unmixing (HU), the pure-pixel assumption is well-known to be powerful in enabling simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied in an exact sense, especially for scenarios where pixels are all intimately mixed. In the no pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience has suggested that MVES algorithms can perform well without pure pixels, although it was not totally clear why this is true from a theoretical viewpoint. This paper aims to address the latter issue. We develop an analysis framework wherein the perfect identifiability of MVES is studied under the noiseless case. We prove that MVES is indeed robust against lack of pure pixels, as long as the pixels do not get too heavily mixed and too asymmetrically spread. Also, our analysis reveals a surprising and counter-intuitive result, namely, that MVES becomes more robust against lack of pure pixels as the number of endmembers increases. The theoretical results are verified by numerical simulations.
    06/2014;
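The minimum-volume objective at the heart of MVES can be illustrated with a small numerical sketch (a toy simplex-volume computation only, not the MVES algorithm itself; the endmember matrix `E` below is an assumed example):

```python
import numpy as np
from math import factorial

def simplex_volume(E):
    """Volume of the simplex whose vertices are the columns of E
    (N vertices in (N-1)-dimensional space)."""
    N = E.shape[1]
    D = E[:, :-1] - E[:, [-1]]          # edge vectors from the last vertex
    return abs(np.linalg.det(D)) / factorial(N - 1)

# Toy check: the unit triangle in 2-D has area 1/2.
E = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
vol = simplex_volume(E)   # MVES would minimize this over simplices enclosing the data
```

MVES searches over simplices that enclose the whole pixel cloud and picks the one minimizing this volume; the paper's identifiability analysis asks when that minimizer is the true endmember simplex.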
  • Wei-Chiang Li, Tsung-Hui Chang, Chong-Yung Chi
    ABSTRACT: This paper studies the coordinated beamforming (CoBF) design for the multiple-input single-output interference channel, provided that only channel distribution information is known to the transmitters. The problem under consideration is a probabilistically constrained optimization problem which maximizes a predefined system utility subject to constraints on the rate outage probability and power budget of each transmitter. Our recent analysis has shown that the outage-constrained CoBF problem is intrinsically difficult, i.e., NP-hard. Therefore, the focus of this paper is on suboptimal but computationally efficient algorithms. Specifically, by leveraging the block successive upper bound minimization (BSUM) method in optimization, we propose a Gauss-Seidel type algorithm, called the distributed BSUM algorithm, which can handle differentiable, monotone and concave system utilities. By exploiting a weighted minimum mean-square error (WMMSE) reformulation, we further propose a Jacobi-type algorithm, called the distributed WMMSE algorithm, which can optimize the weighted sum rate utility in a fully parallel manner. To provide a performance benchmark, a relaxed approximation method based on polyblock outer approximation is also proposed. Simulation results show that the proposed algorithms are significantly superior to the existing successive convex approximation method in both performance and computational efficiency, and can yield promising approximation performance.
    05/2014;
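The Gauss-Seidel style of block updating that BSUM formalizes can be sketched on a toy problem (the quadratic objective below is my own illustration, not from the paper; exact per-block minimization is the simplest valid upper bound):

```python
# Minimal Gauss-Seidel block-minimization sketch in the spirit of BSUM:
# each block is updated by minimizing the objective with the other block
# held fixed. Toy objective (assumed): f(x, y) = (x - 1)^2 + (y + 2)^2 + x*y
x, y = 0.0, 0.0
for _ in range(200):
    x = 1.0 - y / 2.0      # argmin over x with y fixed: 2(x - 1) + y = 0
    y = -2.0 - x / 2.0     # argmin over y with x fixed: 2(y + 2) + x = 0

# At the limit point both partial derivatives vanish (stationarity).
gx = 2 * (x - 1) + y
gy = 2 * (y + 2) + x
```

The paper's distributed BSUM algorithm follows this cyclic pattern across transmitters, while the WMMSE reformulation enables the Jacobi (fully parallel) variant.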
  • Wei-Chiang Li, Tsung-Hui Chang, Chong-Yung Chi
    ABSTRACT: This paper studies the coordinated beamforming (CoBF) design in the multiple-input single-output interference channel, assuming that only channel distribution information is given a priori at the transmitters. The CoBF design is formulated as an optimization problem that maximizes a predefined system utility, e.g., the weighted sum rate or the weighted max-min-fairness (MMF) rate, subject to constraints on the individual probability of rate outage and the power budget. While the problem is non-convex and appears difficult to handle due to the intricate outage probability constraints, it has so far been unknown whether this outage constrained problem is computationally tractable. To answer this, we conduct a computational complexity analysis of the outage constrained CoBF problem. Specifically, we show that the outage constrained CoBF problem with the weighted sum rate utility is intrinsically difficult, i.e., NP-hard. Moreover, the outage constrained CoBF problem with the weighted MMF rate utility is also NP-hard, except in the case where all the transmitters are equipped with a single antenna. The presented analysis results confirm that efficient approximation methods are indispensable to the outage constrained CoBF problem.
    05/2014;
  •
    ABSTRACT: Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing (SP) for hyperspectral remote sensing [1], [2]. Blind HU aims at identifying materials present in a captured scene, as well as their compositions, by using high spectral resolution of hyperspectral images. It is a blind source separation (BSS) problem from a SP viewpoint. Research on this topic started in the 1990s in geoscience and remote sensing [3]-[7], enabled by technological advances in hyperspectral sensing at the time. In recent years, blind HU has attracted much interest from other fields such as SP, machine learning, and optimization, and the subsequent cross-disciplinary research activities have made blind HU a vibrant topic. The resulting impact is not just on remote sensing - blind HU has provided a unique problem scenario that inspired researchers from different fields to devise novel blind SP methods. In fact, one may say that blind HU has established a new branch of BSS approaches not seen in classical BSS studies. In particular, the convex geometry concepts - discovered by early remote sensing researchers through empirical observations [3]-[7] and refined by later research - are elegant and very different from statistical independence-based BSS approaches established in the SP field. Moreover, the latest research on blind HU is rapidly adopting advanced techniques, such as those in sparse SP and optimization. The present development of blind HU seems to be converging to a point where the lines between remote sensing-originated ideas and advanced SP and optimization concepts are no longer clear, and insights from both sides would be used to establish better methods.
    IEEE Signal Processing Magazine 01/2014; 31(1):67-81. · 3.37 Impact Factor
  •
    ABSTRACT: Hyperspectral endmember extraction estimates endmember signatures (or material spectra) from the hyperspectral data of an area for analyzing the materials and their composition therein. The presence of noise and outliers in the data poses a serious problem for endmember extraction. In this paper, we handle noise- and outlier-contaminated data by a two-step approach. We first propose a robust-affine-set-fitting algorithm for joint dimension reduction and outlier removal. The idea is to find a contamination-free, data-representative affine set from the corrupted data, while keeping the effects of outliers to a minimum in the least-squares-error sense. Then, we devise two computationally efficient algorithms for extracting endmembers from the outlier-removed data. The two algorithms are established from a simplex volume max-min formulation recently proposed to cope with noisy scenarios. A robust algorithm, called worst-case alternating volume maximization (WAVMAX), has been previously developed for the simplex volume max-min formulation but is computationally expensive to use. The two new algorithms employ decoupled max-min partial optimizations, wherein the design emphasis is on low-complexity implementations. Computer simulations and real-data experiments demonstrate the efficacy, computational efficiency, and applicability of the proposed algorithms, in comparison with the WAVMAX algorithm and some benchmark endmember extraction algorithms.
    IEEE Transactions on Geoscience and Remote Sensing 07/2013; 51(7):3982-3997. · 3.47 Impact Factor
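The plain (non-robust) version of affine-set fitting for dimension reduction is a centroid-plus-SVD construction, sketched below on assumed synthetic data; the paper's robust variant additionally suppresses outliers, which this baseline does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data (assumed): 50 points in 5-D lying near a 2-D affine set.
C = rng.standard_normal((5, 2))
d = rng.standard_normal(5)
X = C @ rng.standard_normal((2, 50)) + d[:, None] \
    + 1e-3 * rng.standard_normal((5, 50))

# Least-squares affine-set fitting: centroid plus top-k left singular vectors.
mean = X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(X - mean, full_matrices=False)
B = U[:, :2]                              # fitted basis of the 2-D affine set
proj = B @ (B.T @ (X - mean)) + mean      # projection of the data onto the set
rel_err = np.linalg.norm(X - proj) / np.linalg.norm(X)
```

With clean data the relative fitting error is at the noise level; an outlier would drag the centroid and basis away, which motivates the paper's robust formulation.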
  •
    ABSTRACT: This paper considers beamforming designs for weighted sum rate maximization (WSRM) in a multiple-input single-output interference channel subject to probability constraints on the rate outage. We claim that the outage probability constrained WSRM problem is NP-hard, and therefore focus on devising efficient approximation methods. In particular, inspired by an insightful problem reformulation, a pricing-based sequential optimization (PSO) algorithm is proposed for efficiently handling the considered outage constrained WSRM problem. We show that the proposed PSO algorithm has semi-analytical beamforming solutions in each iteration, and hence can be efficiently implemented. Moreover, upon convergence the PSO algorithm attains a point satisfying the Karush-Kuhn-Tucker (KKT) conditions of the original outage constrained problem. Simulation results demonstrate that the proposed PSO algorithm not only yields competitive weighted sum rate performance, but is also computationally more efficient than the existing method [1].
    Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on; 05/2013
  • Kun-Yu Wang, Haining Wang, Zhi Ding, Chong-Yung Chi
    ABSTRACT: This work considers the worst-case utility maximization (WCUM) problem for a downlink wireless system where a multiantenna base station communicates with multiple single-antenna users. Specifically, we jointly design the transmit covariance matrices for the users to robustly maximize the worst-case (i.e., minimum) system utility under channel estimation errors bounded within a spherical region. This problem has been shown to be NP-hard, so any algorithm for finding the optimal solution may suffer from prohibitively high complexity. In view of this, we seek an efficient and accurate suboptimal solution for the WCUM problem. A low-complexity iterative WCUM algorithm is proposed for this nonconvex problem by alternately solving two convex problems. We also show the convergence of the proposed algorithm, and prove its Pareto optimality for the WCUM problem. Simulation results are presented to demonstrate its substantial performance gain and higher computational efficiency over existing algorithms.
    Vehicular Technology Conference (VTC), IEEE; 02/2013
  •
    ABSTRACT: Optimal power allocation for orthogonal frequency division multiplexing (OFDM) wiretap channels with Gaussian channel inputs has already been studied in some previous works from an information-theoretic viewpoint. However, these results are not sufficient for practical system design. One reason is that discrete channel inputs, such as quadrature amplitude modulation (QAM) signals, instead of Gaussian channel inputs, are deployed in current practical wireless systems to maintain moderate peak transmission power and receiver complexity. In this paper, we investigate the power allocation and artificial noise design for OFDM wiretap channels with discrete channel inputs. We first prove that the secrecy rate function for discrete channel inputs is nonconcave with respect to the transmission power. To resolve the corresponding nonconvex secrecy rate maximization problem, we develop a low-complexity power allocation algorithm, which yields a duality gap diminishing in the order of O(1/√N), where N is the number of subcarriers of OFDM. We then show that independent frequency-domain artificial noise cannot improve the secrecy rate of single-antenna wiretap channels. To address this, we propose a novel time-domain artificial noise design which exploits the temporal degrees of freedom provided by the cyclic prefix of OFDM systems to jam the eavesdropper and boost the secrecy rate even with a single antenna at the transmitter. Numerical results are provided to illustrate the performance of the proposed design schemes.
    IEEE Transactions on Wireless Communications 02/2013; 12(6). · 2.42 Impact Factor
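For context, duality-based per-subcarrier power allocation with concave (Gaussian-input) rate functions reduces to classical water-filling, sketched below via bisection on the water level. This is a generic background illustration, not the paper's algorithm (whose discrete-input secrecy objective is nonconcave); the gains and power budget are assumed values.

```python
import numpy as np

g = np.array([1.0, 0.5, 0.2, 0.05])    # per-subcarrier channel gains (assumed)
P_total = 4.0

lo, hi = 0.0, 1e6                      # bracket for the water level mu
for _ in range(200):
    mu = 0.5 * (lo + hi)
    p = np.maximum(mu - 1.0 / g, 0.0)  # KKT solution for a given water level
    if p.sum() > P_total:
        hi = mu                        # too much power: lower the water level
    else:
        lo = mu
p = np.maximum(0.5 * (lo + hi) - 1.0 / g, 0.0)   # final allocation
```

Weak subcarriers (here the one with gain 0.05) receive zero power once the water level falls below their inverse gain, which is the qualitative behavior the dual decomposition in the paper also exhibits.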
  •
    ABSTRACT: In this paper, we consider two-way orthogonal frequency division multiplexing (OFDM) relay channels, where the direct link between the two terminal nodes is too weak to be used for data transmission. The widely known per-subcarrier decode-and-forward (DF) relay strategy treats each subcarrier as a separate channel and performs independent channel coding over each subcarrier. We show that this per-subcarrier DF relay strategy is only a suboptimal DF relay strategy, and present a multi-subcarrier DF relay strategy which utilizes cross-subcarrier channel coding to achieve a larger rate region. We then propose an optimal resource allocation algorithm to characterize the achievable rate region of the multi-subcarrier DF relay strategy. The computational complexity of this algorithm is much smaller than that of standard Lagrangian duality optimization algorithms. We further analyze the asymptotic performance of two-way relay strategies, including the above two DF relay strategies and an amplify-and-forward (AF) relay strategy. The analysis shows that the multi-subcarrier DF relay strategy tends to achieve the capacity region of the two-way OFDM relay channels in the low signal-to-noise ratio (SNR) regime, while the AF relay strategy tends to achieve the multiplexing gain region of the two-way OFDM relay channels in the high SNR regime. Numerical results are provided to justify all the analytical results and the efficacy of the proposed optimal resource allocation algorithm.
    IEEE Transactions on Wireless Communications 01/2013; · 2.42 Impact Factor
  • Kun-Yu Wang, N. Jacklin, Zhi Ding, Chong-Yung Chi
    ABSTRACT: To improve wireless heterogeneous network service via macrocell and femtocells that share certain spectral resources, this paper studies the transmit beamforming design for femtocell base station (FBS), equipped with multiple antennas, under an outage-based quality-of-service (QoS) constraint at the single-antenna femtocell user equipment characterized by its signal-to-interference-plus-noise ratio. Specifically, we focus on the practical case of imperfect downlink multiple-input single-output (MISO) channel state information (CSI) at the FBS due to limited CSI feedback or CSI estimation errors. By characterizing the CSI uncertainty probabilistically, we formulate an outage-based robust beamforming design. This nonconvex optimization problem can be relaxed into a convex semidefinite programming problem, which reduces to a power control problem when all CSI vectors are independent and identically distributed. We also investigate the performance gap between the optimal transmission strategy (that allows maximum transmission degrees of freedom (DoF) equal to the number of transmit antennas) and the proposed optimal beamforming design (with the DoF equal to one) and provide some feasibility conditions, followed by their performance evaluation and trade-off through simulation results.
    IEEE Transactions on Wireless Communications 01/2013; 12(4):1883-1897. · 2.42 Impact Factor
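An outage-based QoS constraint of the kind used above can be checked empirically by Monte Carlo: fix a beamformer, draw Gaussian CSI errors, and estimate the non-outage probability. The sketch below is a validation tool only, not the paper's semidefinite-programming design, and every number in it (channel mean, error variance, SINR target) is an assumed example:

```python
import numpy as np

rng = np.random.default_rng(2)
h_bar = np.array([1 + 1j, 1 - 1j, 0.5, 0.5j])     # presumed channel mean
w = np.sqrt(2.0) * h_bar / np.linalg.norm(h_bar)  # matched-filter beamformer
gamma, noise_pow, sig_e = 1.0, 1.0, 0.1           # SINR target, noise, error std
trials = 100000

# Complex Gaussian CSI errors around the mean channel.
e = (rng.standard_normal((trials, 4)) + 1j * rng.standard_normal((trials, 4))) \
    * sig_e / np.sqrt(2)
h = h_bar + e
sinr = np.abs(h @ w.conj()) ** 2 / noise_pow
coverage = (sinr >= gamma).mean()                 # empirical Pr{SINR >= gamma}
```

A design like the paper's would instead choose `w` (or a covariance matrix) so that this probability is guaranteed to exceed a target, rather than merely measuring it after the fact.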
  •
    ABSTRACT: Hyperspectral unmixing (HU) is a process to extract the underlying endmember signatures (or simply endmembers) and the corresponding proportions (abundances) from the observed hyperspectral data cloud. Craig's criterion (minimum-volume simplex enclosing the data cloud) and Winter's criterion (maximum-volume simplex inside the data cloud) are widely used for HU. For perfect identifiability of the endmembers, we have recently shown in [1] that the presence of pure pixels (pixels fully contributed by a single endmember) for all endmembers is a necessary and sufficient condition for Winter's criterion, and a sufficient condition for Craig's criterion. A necessary condition for endmember identifiability (EI) when using Craig's criterion remains unsolved even for the three-endmember case. In this work, considering a three-endmember scenario, we undertake a statistical analysis to identify a necessary and statistically sufficient condition on the purity level (a measure of the mixing levels of the endmembers) of the data, so that Craig's criterion can guarantee perfect identification of endmembers. Precisely, we prove that a purity level strictly greater than 1/√2 is necessary for EI, while the same is sufficient for EI with probability 1. Since the presence of pure pixels is a very strong requirement that is seldom satisfied in practice, the results of this analysis foster the practical applicability of Craig's criterion over Winter's criterion to real-world problems.
    Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on; 01/2013
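The purity-level condition can be made concrete with a small numerical check. Here purity is taken as the largest Euclidean norm among the abundance vectors (an assumed working definition for this sketch): a pure pixel attains 1, and the uniform three-way mixture attains 1/√3.

```python
import numpy as np

# Columns are abundance vectors on the unit simplex (assumed toy data).
S = np.array([[0.80, 0.10, 1/3],
              [0.15, 0.70, 1/3],
              [0.05, 0.20, 1/3]])
purity = np.linalg.norm(S, axis=0).max()   # best (least mixed) pixel
threshold = 1 / np.sqrt(2)                 # critical level from the analysis
craig_ok = purity > threshold              # purity condition is satisfied
```

No pixel here is pure (no column norm reaches 1), yet the purity level still clears the 1/√2 threshold, which is exactly the regime where Craig's criterion retains identifiability while Winter's criterion does not.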
  • Kun-Yu Wang, N. Jacklin, Zhi Ding, Chong-Yung Chi
    ABSTRACT: In this paper, we consider a two-tier heterogeneous network that locally consists of a multi-antenna macrocell base station and a multi-antenna femtocell base station (FBS) each serving separate single-antenna users. We investigate an optimal transmission strategy with maximum degree of freedom for transmit power minimization of the FBS under outage-based quality-of-service (QoS) constraints for the femtocell user equipment (FUE) and macrocell user equipment (MUE). Specifically, we examine the scenario that the FBS receives no instantaneous channel estimate from the FUE, and relies on only statistical information of downlink multiple-input single-output (MISO) channels. Although the outage constrained problem has no closed-form probabilistic constraints and may not be convex in general, we propose a transmission strategy and prove its optimality under a given condition. Our simulation results also demonstrate that the proposed transmission strategy can significantly save power compared to beamforming strategies.
    Communications (ICC), 2013 IEEE International Conference on; 01/2013
  •
    ABSTRACT: Multiplexed illumination has proven valuable for noise reduction in a wide range of computer vision and graphics applications, provided that the limitations of photon noise and saturation are properly tackled. Existing optimal multiplexing codes, in the sense of maximum signal-to-noise ratio (SNR), are primarily designed for time multiplexing, but they apply only to multiplexing systems in which the number of measurements (M) equals the number of illumination sources (N). In this paper, we formulate a general code design problem, where M≥N, for time and color multiplexing, and develop a sequential semidefinite programming method to deal with the formulated optimization problem. The proposed formulation and method can be readily specialized to time multiplexing, giving such optimized codes a much broader application. Computer simulations reveal the main merit of the method: a significant boost in SNR as M increases. Experiments also demonstrate the effectiveness and superiority of the method in object illumination.
    Proceedings of the 12th European conference on Computer Vision - Volume Part VI; 10/2012
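The SNR advantage of multiplexed capture can be demonstrated with the textbook M = N = 3 Hadamard-type S-matrix code (a classical construction for illustration only, not the paper's optimized M ≥ N codes; intensities and noise level are assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])     # S-matrix code: two sources on per shot
x = np.array([2.0, 5.0, 3.0])      # true source intensities (assumed)
sigma, trials = 0.1, 5000

err_mux = err_single = 0.0
for _ in range(trials):
    y = W @ x + sigma * rng.standard_normal(3)       # multiplexed shots
    err_mux += np.sum((np.linalg.solve(W, y) - x) ** 2)
    x_single = x + sigma * rng.standard_normal(3)    # one source per shot
    err_single += np.sum((x_single - x) ** 2)

gain = err_single / err_mux    # theory for this code: (N+1)^2 / (4N) = 4/3
```

Demultiplexing with `solve(W, y)` amplifies the noise less than the signal it gains by lighting two sources per shot, so the mean squared error drops by roughly 4/3 here; the paper's point is that optimizing the code with M > N pushes this gain further.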
  •
    ABSTRACT: This paper considers the context of orthogonal space-time block coded OFDM (OSTBC-OFDM) without channel state information at the receiver. Assuming noncoherent maximum-likelihood detection, the interest herein lies in detection within one OSTBC-OFDM block, motivated by its capability of accommodating relatively fast block fading channels. Our investigation focuses on analysis aspects, where we seek to establish practical noncoherent BPSK/QPSK OSTBC-OFDM schemes that have provably good channel identifiability and diversity properties. We consider perfect channel identifiability (PCI), a strong condition guaranteeing unique noncoherent channel identification for any (nonzero) channel. Through a judicious design involving special OSTBCs and pilot placement, we propose an OSTBC-OFDM scheme that is PCI-achieving and consumes fewer pilots compared to conventional pilot-aided channel estimation methods. We further our analysis by showing that a PCI-achieving scheme also achieves maximal noncoherent spatial diversity for the Kronecker Gaussian spatial-temporal channel fading model, which covers the popular i.i.d. Rayleigh fading channel and a variety of correlated and sparse multipath channels. All these results are developed in parallel for the centralized point-to-point MIMO scenario and a distributed relay communication scenario. For the latter scenario, our diversity analysis shows that the PCI-achieving scheme can also achieve maximal noncoherent cooperative diversity. The performance merits of the proposed PCI-achieving scheme are demonstrated by simulations.
    IEEE Transactions on Signal Processing 09/2012; 60(9):4849-4863. · 2.81 Impact Factor
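The code-matrix orthogonality that OSTBC-based schemes build on is easiest to see in the Alamouti code, the simplest OSTBC (a background sketch only, not the paper's noncoherent detector or pilot design):

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti code matrix: rows are time slots, columns are antennas."""
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # two QPSK symbols
C = alamouti(s1, s2)
G = C.conj().T @ C     # orthogonality: C^H C = (|s1|^2 + |s2|^2) * I
```

This orthogonality is what makes OSTBC detection decouple symbol by symbol; the paper exploits the same structure, per subcarrier, in the noncoherent OSTBC-OFDM setting.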
  • Tsung-Hui Chang, Wing-Kin Ma, Chong-Yung Chi
    ABSTRACT: This paper studies a downlink multiuser transmit beamforming design under spherical channel uncertainties, using a worst-case robust formulation. This robust design problem is nonconvex. Recently, a convex approximation formulation based on semidefinite relaxation (SDR) has been proposed to handle the problem. Curiously, simulation results have consistently indicated that SDR can attain the global optimum of the robust design problem. This paper intends to provide some theoretical insights into this important empirical finding. Our main result is a dual representation of the SDR formulation, which reveals an interesting linkage to a different robust design problem, and the possibility of SDR optimality.
    Asilomar Conference on Signals, Systems and Computers; 04/2012
  •
    ABSTRACT: Uncertainty in arterial input function (AIF) estimation is one of the major sources of error in the quantification of dynamic contrast-enhanced MRI. A blind source separation algorithm was proposed to determine the AIF by selecting the voxel time course with maximum purity, which represents minimal contamination from partial volume effects. Simulations were performed to assess the partial volume effect on the purity of the AIF, the estimation accuracy of the AIF, and the influence of purity on the derived kinetic parameters. In vivo data were acquired from six patients with hypopharyngeal cancer and eight rats with brain tumors. Results showed that, in simulation, the AIF with the highest purity is closest to the true AIF. In patients, manual selection had reduced purity, which could lead to underestimations of K(trans) and V(e) and an overestimation of V(p) when compared with those obtained by the proposed blind source separation algorithm. The derived kinetic parameters in the tumor were more susceptible to changes in purity than those in the muscle. The animal experiment demonstrated good reproducibility of the parameters derived from the blind source separation AIF. In conclusion, the blind source separation method is feasible and reproducible for identifying the voxel with the tracer concentration time course closest to the true AIF. Magn Reson Med, 2012. © 2012 Wiley Periodicals, Inc.
    Magnetic Resonance in Medicine 03/2012; 68(5):1439-49. · 3.27 Impact Factor
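The idea of picking the least partial-volume-contaminated voxel admits a toy geometric stand-in (this is an illustration of the simplex geometry only, not the paper's purity measure; all curves and mixing weights are assumed): voxel curves are convex mixtures of a sharp arterial curve and a slow tissue curve, so after area normalization they lie on a line segment whose extremes are the least-mixed voxels.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 100)
aif = t * np.exp(-t)                        # assumed arterial input shape
tissue = 0.3 * (1 - np.exp(-0.3 * t))       # assumed tissue enhancement
weights = [0.95, 0.6, 0.3, 0.1]             # arterial fraction per voxel
X = np.array([w * aif + (1 - w) * tissue for w in weights])

Xn = X / X.sum(axis=1, keepdims=True)       # normalize each curve to unit area
d = np.linalg.norm(Xn - Xn.mean(axis=0), axis=1)
extremes = np.argsort(d)[-2:]               # the two segment endpoints
best = max(extremes, key=lambda i: X[i].max())  # arterial curve peaks higher
```

The selected voxel is the one with the highest arterial fraction (0.95), matching the intuition that the highest-purity time course is closest to the true AIF.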
  •
    ABSTRACT: The combination of bit-interleaved coded modulation (BICM), orthogonal space-time block coding (OSTBC) and orthogonal frequency division multiplexing (OFDM) has been shown recently to be able to achieve maximum spatial-frequency diversity in frequency selective multi-path fading channels, provided that perfect channel state information (CSI) is available to the receiver. In view of the fact that perfect CSI can be obtained only if a sufficient amount of resource is allocated for training or pilot data, this paper investigates pilot-efficient noncoherent decoding methods for the BICM-OSTBC-OFDM system. In particular, we propose a noncoherent maximum-likelihood (ML) decoder that uses only one OSTBC-OFDM block. This block-wise decoder is suitable for relatively fast fading channels whose coherence time may be as short as one OSTBC-OFDM block. Our focus is mainly on noncoherent diversity analysis. We study a class of carefully designed transmission schemes, called perfect channel identifiability (PCI) achieving schemes, and show that they can exhibit good diversity performance. Specifically, we present a worst-case diversity analysis framework to show that PCI-achieving schemes can achieve the maximum noncoherent spatial-frequency diversity of BICM-OSTBC-OFDM. The developments are further extended to a distributed BICM-OSTBC-OFDM scenario in cooperative relay networks. Simulation results are presented to confirm our theoretical claims and show that the proposed noncoherent schemes can exhibit near-coherent performance.
    IEEE Transactions on Wireless Communications 01/2012; 11(9):3335-3347. · 2.42 Impact Factor
  •
    ABSTRACT: This paper considers robust multi-cell coordinated beamforming (MCBF) design for downlink wireless systems, in the presence of channel state information (CSI) errors. By assuming that the CSI errors are complex Gaussian distributed, we formulate a chance-constrained robust MCBF design problem which guarantees that the mobile stations can achieve the desired signal-to-interference-plus-noise ratio (SINR) requirements with a high probability. A convex approximation method, based on semidefinite relaxation and tractable probability approximation formulations, is proposed. The goal is to solve the convex approximation formulation in a distributed manner, with only a small amount of information exchange between base stations. To this end, we develop a distributed implementation by applying a convex optimization method, called weighted variable-penalty alternating direction method of multipliers (WVP-ADMM), which is numerically more stable and can converge faster than the standard ADMM method. Simulation results are presented to examine the chance-constrained robust MCBF design and the proposed distributed implementation algorithm.
    Global Communications Conference (GLOBECOM), 2012 IEEE; 01/2012
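The ADMM machinery behind such distributed designs can be sketched with standard scaled-form consensus ADMM on a toy problem where agents agree on the average of local values (my own illustration; the paper's WVP-ADMM variant additionally adapts the penalty weights for stability and speed):

```python
import numpy as np

a = np.array([1.0, 4.0, 7.0])   # local data at three "base stations" (assumed)
x = np.zeros(3)                 # local variables
z = 0.0                         # shared consensus variable
u = np.zeros(3)                 # scaled dual variables
rho = 1.0                       # fixed penalty (WVP-ADMM would adapt this)

for _ in range(100):
    # Local step: argmin over x_i of (x_i - a_i)^2/2 + (rho/2)(x_i - z + u_i)^2.
    x = (a + rho * (z - u)) / (1 + rho)
    z = np.mean(x + u)          # consensus (averaging) step
    u = u + x - z               # dual update
```

Each agent only exchanges its current `x_i + u_i` with the coordinator, mirroring the small per-iteration information exchange between base stations that the paper targets.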

Publication Stats

843 Citations
238.10 Total Impact Points

Institutions

  • 2008–2012
    • Virginia Polytechnic Institute and State University
      • Department of Electrical and Computer Engineering
      Blacksburg, VA, United States
    • Rutgers, The State University of New Jersey
      • Department of Electrical and Computer Engineering
      New Brunswick, NJ, United States
    • National Hsinchu University of Education
      Hsinchu, Taiwan
  • 1990–2012
    • National Tsing Hua University
      • Department of Electrical Engineering
      Hsinchu, Taiwan
  • 2010
    • The Chinese University of Hong Kong
      • Department of Electronic Engineering
      Hong Kong, Hong Kong
  • 2001–2009
    • Tsinghua University
      • Institute of Systems Engineering
      • Department of Electronic Engineering
      Beijing, China
  • 2007
    • Shandong University
      • School of Information Science and Engineering
      Jinan, Shandong, China
  • 2004
    • Xidian University
      Xi'an, Shaanxi, China