Xiaoqing Yu

Shanghai University, Shanghai, Shanghai Shi, China

Publications (85) · 4.78 Total Impact

  • ABSTRACT: In this paper we present a robust reconstruction framework for noisy and large point cloud data. Although Poisson reconstruction performs well in recovering a surface from noisy point cloud data, it is problematic to reconstruct the underlying surface from a large point cloud, especially on a general-purpose processor. Inaccurate estimation of point normals for a noisy and large dataset results in local distortion of the reconstructed mesh. We adopt a systematic combination of Poisson-disk sampling, normal estimation and Poisson reconstruction to avoid the inaccuracy of normals calculated from k-nearest neighbors. With the smaller dataset obtained by sampling the original points, the estimated normals are more reliable for the subsequent Poisson reconstruction, and the time spent on normal estimation and reconstruction is much lower. We demonstrate the effectiveness of the framework in recovering topology and geometry information from real-world point cloud data. The experimental results indicate that the framework is superior to applying Poisson reconstruction directly to the raw point dataset in terms of time consumption and visual fidelity.
    2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid); 05/2014
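    A minimal sketch of a comparable sample-then-reconstruct pipeline, assuming the Open3D library rather than the authors' implementation; the file name, voxel size, neighbor count and octree depth are illustrative, and voxel downsampling stands in for the paper's Poisson-disk sampling:
    ```python
    # Sketch only: downsample first, then estimate normals, then run Poisson reconstruction.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.ply")            # noisy, large raw point cloud (assumed file)

    # Downsample so normal estimation sees a smaller, cleaner neighborhood.
    pcd_small = pcd.voxel_down_sample(voxel_size=0.01)

    # Estimate and orient normals on the reduced set using k-nearest neighbors.
    pcd_small.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
    pcd_small.orient_normals_consistent_tangent_plane(30)

    # Screened Poisson reconstruction on the sampled, normal-equipped points.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd_small, depth=9)
    o3d.io.write_triangle_mesh("surface.ply", mesh)
    ```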
  • ABSTRACT: 3D visualization and real-time rendering of large-scale scenes is an important task in virtual reality. In this paper, unlike local loading and roaming of scenes, we focus on dynamic loading and real-time roaming between remote servers and clients, in order to facilitate efficient transmission and reduce transmission time. More specifically, a novel dynamic scheduling algorithm on the client side is used to optimize loading and real-time rendering performance: the method dynamically loads and unloads partitioned data blocks from the server side according to the roaming viewpoint, so that infinite roaming of a large-scale scene can be realized. To accommodate different networking scenarios, we also design a multi-resolution scheme for the server-stored scene blocks according to the communication channel conditions, so that the most suitable resolution can be chosen automatically. As shown in the experiments, our method requires only a small amount of memory overhead while efficiently realizing infinite roaming of large city scenes.
    Journal of Signal Processing Systems 04/2014; 75(1):15-21. DOI:10.1007/s11265-013-0860-1 · 0.56 Impact Factor
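    A minimal sketch of viewpoint-driven block scheduling of the kind described above; the block size, load radius and `fetch_block` callback are illustrative assumptions, not the paper's parameters:
    ```python
    # Sketch only: keep the blocks around the viewpoint resident, drop the rest.
    BLOCK_SIZE = 100.0      # world units per scene block (assumed)
    LOAD_RADIUS = 2         # load blocks within this many blocks of the viewpoint (assumed)

    loaded = {}             # (i, j) -> block data fetched from the server

    def wanted_blocks(viewpoint):
        """Grid indices of blocks that should be resident for this viewpoint."""
        ci, cj = int(viewpoint[0] // BLOCK_SIZE), int(viewpoint[1] // BLOCK_SIZE)
        return {(ci + di, cj + dj)
                for di in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
                for dj in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}

    def update(viewpoint, fetch_block):
        """Load newly needed blocks and unload blocks the viewer has left behind."""
        needed = wanted_blocks(viewpoint)
        for key in list(loaded):
            if key not in needed:
                del loaded[key]                  # unload: free client memory
        for key in needed:
            if key not in loaded:
                loaded[key] = fetch_block(key)   # load: request the block from the server
    ```
    In a multi-resolution setup, `fetch_block` would also choose a resolution level based on the measured channel bandwidth.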
  • ABSTRACT: The greatest advantage of particle systems is that they are well suited to modeling complex fuzzy phenomena in 3D graphics, such as explosions, fountains, tornadoes and fireworks. With increasing requirements on the number of particles and on particle-particle interactions, the computational complexity of particle-system simulation has grown rapidly. Particle systems are traditionally implemented on a general-purpose CPU, and this computational complexity limits the number of particles that can be simulated at interactive rates. This paper focuses on real-time simulation of large-scale particle systems. We discuss alternative integration algorithms based on CUDA (Compute Unified Device Architecture) for both graphics and scientific simulation. The speed of particle systems is greatly improved, with parallel-core GPUs working in tandem with multi-core CPUs. To provide a scalable and portable API library, an object-oriented programming approach is adopted to encapsulate the functions of the parallel particle system. Results show that the proposed APIs are user-friendly and the parallel implementations are significantly more efficient.
    2013 IEEE International Conference on Dependable, Autonomic and Secure Computing (DASC); 12/2013
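    A minimal sketch of the one-thread-per-particle integration step such a system parallelizes; Numba's CUDA JIT stands in here for the paper's native CUDA code (a CUDA-capable GPU is required), and the particle count, time step and gravity value are illustrative:
    ```python
    # Sketch only: per-particle explicit Euler update, one GPU thread per particle.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def integrate(pos, vel, dt, gravity):
        i = cuda.grid(1)                 # global thread index = particle index
        if i < pos.shape[0]:
            vel[i, 1] += gravity * dt    # accumulate gravity on the y component
            for k in range(3):
                pos[i, k] += vel[i, k] * dt

    n = 1_000_000
    pos = cuda.to_device(np.zeros((n, 3), dtype=np.float32))
    vel = cuda.to_device(np.random.randn(n, 3).astype(np.float32))

    threads = 256
    blocks = (n + threads - 1) // threads
    integrate[blocks, threads](pos, vel, np.float32(1 / 60), np.float32(-9.8))
    pos_host = pos.copy_to_host()        # copy back for rendering or inspection
    ```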
  • ABSTRACT: With the dramatic increase in audio resources, building a high-performance multi-class classifier for audio has become a research focus. In this paper, we propose a novel classifier using an SVM decision tree, named Dual Parallel SVM (DP-SVM). The key idea of this method is that the audio categories are split into two local sub-classes at each node until all nodes become leaf nodes: one branch represents a single class, while the other branch contains all of the remaining categories, so that a new decision tree is constructed. Based on this SVM decision tree, a genetic algorithm is adopted to optimize the parameters: better parameters are obtained through iterative selection, crossover and mutation operations on templates. Experimental results show that the new DP-SVM constructed with the genetic algorithm can effectively accelerate training and classification, and it also improves classification accuracy.
    Journal of Computational and Theoretical Nanoscience 03/2013; 19(3):746-752. DOI:10.1166/asl.2013.4853 · 1.25 Impact Factor
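    A minimal sketch of the one-class-versus-rest SVM chain that such a decision tree amounts to, using scikit-learn; the genetic-algorithm parameter search of the paper is replaced here by fixed `C` and `gamma`, and the class ordering is an assumption:
    ```python
    # Sketch only: each node separates one class from all remaining classes.
    import numpy as np
    from sklearn.svm import SVC

    def train_dp_svm(X, y, C=1.0, gamma="scale"):
        """Train a chain of binary SVMs; returns the node list and the final leaf class."""
        nodes, classes = [], list(np.unique(y))
        while len(classes) > 1:
            c = classes.pop(0)                       # this node's "leaf" class
            mask = np.isin(y, classes + [c])
            clf = SVC(C=C, gamma=gamma)
            clf.fit(X[mask], (y[mask] == c).astype(int))
            nodes.append((c, clf))
        return nodes, classes[0]

    def predict_dp_svm(nodes, last_class, x):
        for c, clf in nodes:
            if clf.predict(x.reshape(1, -1))[0] == 1:
                return c
        return last_class
    ```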
  • Xiaoqing Yu, Chao Yang, Xuannan Ye
    ABSTRACT: A cluster-based, non-uniform 3D mesh simplification algorithm built on edge-collapse simplification is proposed to solve several practical problems. First, we use the K-means algorithm to cluster the mesh triangles and divide them into several model blocks. Second, we simplify each model block; in order to preserve the feature points of the mesh triangles in each block, we introduce a non-uniform algorithm combined with Gaussian curvature. We obtain the detail information of the different groups by simplifying each block independently of the others. If some details are lost during transmission, the other details can still be decoded independently, and we only need to ask the server to resend the lost part of that group's data. At the same time, regional details can be decoded according to the viewpoint when complex models need to be loaded: since our vision can only focus on part of a model, regions of no interest can be replaced by the coarse model.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
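    A minimal sketch of the clustering step only, assuming scikit-learn; triangles are grouped by K-means on their centroids, and the curvature-aware edge-collapse simplification applied inside each block is not reproduced here:
    ```python
    # Sketch only: assign each mesh triangle to a block via K-means on its centroid.
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_triangles(vertices, faces, n_blocks=8):
        """vertices: (V, 3) array; faces: (F, 3) vertex indices. Returns a block id per face."""
        centroids = vertices[faces].mean(axis=1)           # (F, 3) triangle centroids
        labels = KMeans(n_clusters=n_blocks, n_init=10).fit_predict(centroids)
        return labels                                       # each block is then simplified independently
    ```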
  • Liang Liu, Libing Lu, Xiaoqing Yu, Ran Liu
    ABSTRACT: In this paper we first introduce the principle and method of point cloud registration. We then study a registration method based on target balls, which mainly involves locating the centers of the target balls and registering point clouds from multiple stations. We propose a new method that scans the same ball several times and averages the results to locate its center. Finally, we obtain multi-station point clouds using a Faro laser scanner and register them using the target-ball method.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
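    A minimal sketch of one standard way to locate a target-ball center, a linear least-squares sphere fit repeated over several scans of the same ball and averaged; this is an illustration of the averaging idea, not the paper's exact procedure:
    ```python
    # Sketch only: fit a sphere to each scan's returns, then average the fitted centers.
    import numpy as np

    def fit_sphere_center(points):
        """points: (N, 3) laser returns on the ball surface -> estimated center (3,)."""
        A = np.c_[2 * points, np.ones(len(points))]
        b = (points ** 2).sum(axis=1)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w[:3]                       # w[3] encodes the radius term, unused here

    def averaged_center(scans):
        """scans: list of (N_i, 3) arrays from repeated scans of the same ball."""
        return np.mean([fit_sphere_center(s) for s in scans], axis=0)
    ```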
  • ABSTRACT: VoIP is widely used in modern communication. Based on VoIP principles, we design a conference mixer system that combines a communication protocol, voice compression, a mixing processor and network transmission. This paper shows how to realize the whole conference mixer system and tests its performance.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
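    A minimal sketch of the mixing step only (sum the participants' PCM frames and clip); protocol handling, codecs and transport are omitted, and 16-bit PCM is an assumption:
    ```python
    # Sketch only: mix equal-length int16 frames from several participants.
    import numpy as np

    def mix_frames(frames):
        """frames: list of equal-length int16 arrays, one per participant -> mixed int16 frame."""
        mixed = np.sum([f.astype(np.int32) for f in frames], axis=0)   # widen to avoid overflow
        return np.clip(mixed, -32768, 32767).astype(np.int16)
    ```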
  • ABSTRACT: This paper introduces the calculation of degree centrality, closeness centrality and betweenness centrality based on Sina Weibo user data from Fudan University collected in December 2012. The cumulative probability distribution of each centrality indicator is measured and analyzed, and the relationships among the three centrality metrics are discussed. It is concluded that combining the three centrality measures may effectively discover important nodes in the microblog network.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
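    A minimal sketch of computing the three indicators with NetworkX; the built-in karate-club graph stands in for the Sina Weibo follower graph, which is not available here:
    ```python
    # Sketch only: the three centralities on a toy graph, then a top-k ranking per indicator.
    import networkx as nx

    G = nx.karate_club_graph()                 # stand-in for the microblog graph
    deg = nx.degree_centrality(G)
    clo = nx.closeness_centrality(G)
    bet = nx.betweenness_centrality(G)

    top = lambda d, k=5: sorted(d, key=d.get, reverse=True)[:k]
    print(top(deg), top(clo), top(bet))        # nodes ranking high on all three are "important"
    ```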
  • Xiaoqing Yu, Wei Xiong, Jianhua Shi
    ABSTRACT: Chroma-based features are among the most useful sources of information in music retrieval. In this paper, a novel music retrieval system based on chroma features is proposed. The core of the system is a music fingerprint extracted from the chroma feature. In addition, a note-detection technique is implemented in our system to assist the retrieval process. Preliminary experimental results show that the proposed system achieves both speed and accuracy.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
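    A minimal sketch of the chroma front end, assuming the librosa library; the file name and sample rate are illustrative, and the paper's actual fingerprint construction and note detection are not reproduced:
    ```python
    # Sketch only: compute a chromagram and derive a trivial per-frame code from it.
    import librosa

    y, sr = librosa.load("query.wav", sr=22050)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)   # (12, n_frames) pitch-class energy
    fingerprint = chroma.argmax(axis=0)                # e.g. dominant pitch class per frame
    ```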
  • ABSTRACT: With the rapid development of digital media technology, audio has become an important source of information, but users can be overwhelmed because they do not know how to manage these resources. In this paper, we propose a new audio fingerprinting system based on Shazam's algorithm to help people manage and find the audio resources they want. Salient points, not only the local maxima but also the points with the largest increases in energy, are extracted from the chromagram to construct the audio fingerprints. An idea similar to the Constant Q Transform is adopted to optimize the salient points before fingerprint construction. The experimental results show that our system can recognize the desired audio clips with an accuracy of about 90%.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
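    A minimal sketch of the salient-point and landmark-pairing idea behind Shazam-style fingerprints; the neighborhood size, threshold and hash layout are assumptions, and only local maxima are used (the paper additionally keeps points with the largest energy increases):
    ```python
    # Sketch only: pick local peaks in a time-frequency representation, pair them into hashes.
    import numpy as np
    from scipy.ndimage import maximum_filter

    def salient_points(spec, size=(3, 9)):
        """Local maxima of a (bins x frames) chromagram or spectrogram."""
        peaks = (spec == maximum_filter(spec, size=size)) & (spec > spec.mean())
        return np.argwhere(peaks)                        # rows of (bin, frame)

    def hashes(points, fan_out=5):
        """Pair each anchor with a few later points to form (f1, f2, dt) hashes + anchor time."""
        points = points[np.argsort(points[:, 1])]        # sort by frame index
        for i, (f1, t1) in enumerate(points):
            for f2, t2 in points[i + 1:i + 1 + fan_out]:
                yield (int(f1), int(f2), int(t2 - t1)), int(t1)
    ```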
  • Wei Xiong, Xiaoqing Yu, Jianhua Shi
    ABSTRACT: An audio fingerprint is a compact content-based feature that summarizes an audio clip. Just as human fingerprints are used to identify people, audio fingerprints can be used to retrieve unknown audio clips. In this paper, we propose an algorithm that extracts audio fingerprints based on spectral bark-band energy and PCA (Principal Component Analysis). This algorithm makes it possible to search a huge audio database efficiently and with high robustness in the presence of noise; in particular, it raises efficiency to a higher level. Preliminary experimental results show prominent performance.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
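    A minimal sketch of the band-energy-plus-PCA idea, assuming scikit-learn; the band layout here is a crude equal-width stand-in for Bark bands, and the frame size, component count and binarization rule are assumptions, not the paper's values:
    ```python
    # Sketch only: per-frame subband energies -> PCA -> binarized compact code.
    import numpy as np
    from sklearn.decomposition import PCA

    def band_energies(frames, n_bands=24):
        """frames: (n_frames, frame_len) windowed audio -> (n_frames, n_bands) energies."""
        spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
        edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)   # equal-width bands as a stand-in
        return np.stack([spec[:, a:b].sum(axis=1)
                         for a, b in zip(edges[:-1], edges[1:])], axis=1)

    def fingerprint(frames, n_components=8):
        E = np.log1p(band_energies(frames))
        Z = PCA(n_components=n_components).fit_transform(E)
        return (Z > 0).astype(np.uint8)       # binarize: compact, noise-tolerant code
    ```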
  • Xiaoqing Yu, Jing Lu, Huanhuan Liu
    ABSTRACT: Opinion leaders have an important influence on message propagation, so finding opinion leaders efficiently is significant for further study. In this paper, we propose synthesized centrality (SC) as a new method for finding opinion leaders. PageRank and HITS are used to rank influence, and experimental results show a large overlap between SC and PageRank/HITS: if a user is an opinion leader according to SC, it is likely to be an opinion leader according to PageRank and HITS as well. Moreover, SC has higher accuracy in finding opinion leaders than PageRank and HITS.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
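    A minimal sketch of comparing a combined centrality against PageRank and HITS with NetworkX; a synthetic scale-free graph stands in for the microblog graph, and the simple average used as the "synthesized" score is an assumption, not the paper's exact SC definition:
    ```python
    # Sketch only: rank nodes by PageRank, HITS and a combined score, then measure the overlap.
    import networkx as nx

    G = nx.DiGraph(nx.scale_free_graph(500))       # stand-in graph, parallel edges collapsed

    pr = nx.pagerank(G)
    hubs, auth = nx.hits(G, max_iter=1000)
    deg = nx.in_degree_centrality(G)

    sc = {v: (pr[v] + auth[v] + deg[v]) / 3 for v in G}   # one possible combined score

    top = lambda d, k=20: set(sorted(d, key=d.get, reverse=True)[:k])
    print("overlap with PageRank:", len(top(sc) & top(pr)))
    print("overlap with HITS authorities:", len(top(sc) & top(auth)))
    ```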
  • Xiaoqing Yu, Jing Lu, Yupu Ding
    ABSTRACT: Opinion leaders play an important role in social networks and have a significant impact on the people around them. Although much work has been done to identify opinion leaders, effective methods still need to be developed, especially for services such as Sina Weibo, the largest and most popular online social network in China, which has a big influence on people's lives. In this paper, the k-means clustering algorithm, a machine learning method, is used to find opinion leaders in the Sina microblog social network. Preliminary test results show that this method is effective. Moreover, the paper analyzes the effect of opinion leaders on the Sina microblog network structure.
    IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013); 01/2013
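    A minimal sketch of the clustering step, assuming scikit-learn; the per-user features (followers, reposts, comments), the toy data and the rule of treating the cluster with the largest centroid as the opinion-leader cluster are all illustrative assumptions:
    ```python
    # Sketch only: cluster users by influence features, pick the high-influence cluster.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # rows: users; columns: followers, reposts received, comments received (toy data)
    X = np.array([[120, 3, 5], [50000, 900, 400], [300, 10, 8], [80000, 1500, 700]])

    Xs = StandardScaler().fit_transform(np.log1p(X))
    km = KMeans(n_clusters=2, n_init=10).fit(Xs)

    leader_cluster = km.cluster_centers_.sum(axis=1).argmax()
    leaders = np.where(km.labels_ == leader_cluster)[0]
    print("opinion-leader candidates (row indices):", leaders)
    ```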
  • ABSTRACT: This paper proposes a novel algorithm for high-dimensional unsupervised reduction based on an intrinsic Bayesian model. The algorithm assumes that pixel reflectance results from nonlinear combinations of pure component spectra contaminated by additive noise. The constraints are naturally expressed in the intrinsic Bayesian framework by using appropriate abundance prior distributions, and the posterior distributions of the unknown model parameters are then derived. The proposed algorithm consists of an intrinsic Bayesian inductive cognition part and a hierarchical reduction model part, and it has several advantages over traditional distance-based Bayesian reduction algorithms. The reduction algorithm derived from the intrinsic Bayesian inductive cognitive model is used to decide which dimensions are advantageous and to output the recommended dimensions of the hyperspectral image. It can be interpreted as a novel fast reduction inference method for the intrinsic Bayesian inductive cognitive model. We describe procedures for learning the model hyperparameters, computing the dimension distribution, and extending the intrinsic Bayesian inductive cognition model. Experimental results on hyperspectral data demonstrate robust and useful properties of the proposed reduction algorithm.
    Neurocomputing 12/2012; 98:143–150. DOI:10.1016/j.neucom.2011.03.060 · 2.01 Impact Factor
  • ABSTRACT: This paper proposes an improved objective audio quality estimation system compatible with PEAQ (Perceptual Evaluation of Audio Quality). Based on a computational auditory model, we use a novel psychoacoustic model to assess the quality of highly impaired audio. We also apply the robust linear MOA (Least-squares Weight Vector algorithm) and MinmaxMOA (Minmax-Optimized MOV Selection algorithm) to the cognitive model of the estimation system. Compared to the PEAQ advanced version, the proposed estimation system shows a considerable improvement in performance in terms of both correlation and MSE (Mean Square Error). By combining the computational auditory model and PEAQ, our estimation system can be applied to the quality assessment of highly impaired audio.
    Proceedings of the 19th international conference on Neural Information Processing - Volume Part IV; 11/2012
  • ABSTRACT: To dynamically load a large-scale 3D city model based on OSG (OpenSceneGraph), the model is first divided into small blocks so as to improve loading speed and save memory; here we directly partition the scene along the x-axis and y-axis. We then design an algorithm to control the number and positions of the loaded blocks, and finally we optimize the algorithm. Experimental results demonstrate the advantages of our method.
    2012 International Conference on Audio, Language and Image Processing (ICALIP); 07/2012
  • ABSTRACT: With the promotion and rapid development of the smart-city concept, predicting and mitigating natural disasters has become increasingly feasible. This paper outlines how to use OpenGL to realize a flood-simulation system. First, OSGDEM is used to build a realistic terrain scene with texture and coordinates. Second, the water and its movement are simulated on top of the OpenGL graphics library. Third, MFC is used to realize the visualization of the flood.
    Audio, Language and Image Processing (ICALIP), 2012 International Conference on; 01/2012
  • ABSTRACT: With the rapid expansion of modern multimedia data, a number of audio fingerprinting algorithms have been proposed. An audio fingerprint is a compact, unique, content-based digital signature of an audio signal that can be used to identify unknown audio clips. Due to interference from different kinds of noise, audio fingerprinting is still a challenging task. In this paper, Nearest Neighbor Estimation (NNE) is used to reduce the interference of noise. First, audio feature points are extracted from audio clips; then NNE is used to reduce the impact of noise on the feature points. Experimental results show that NNE effectively reduces the influence of noise in a white-noise environment.
    Audio, Language and Image Processing (ICALIP), 2012 International Conference on; 01/2012
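    A minimal sketch of one way to realize nearest-neighbor estimation on feature points, assuming scikit-learn: each point is replaced by the mean of its k nearest neighbors. The estimator and the value of k are assumptions, not the paper's exact method:
    ```python
    # Sketch only: k-nearest-neighbor smoothing of noisy feature points.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def nne_smooth(points, k=5):
        """points: (N, D) noisy audio feature points -> (N, D) smoothed points."""
        nn = NearestNeighbors(n_neighbors=k).fit(points)
        _, idx = nn.kneighbors(points)          # idx: (N, k) neighbor indices (incl. self)
        return points[idx].mean(axis=1)
    ```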
  • ABSTRACT: An audio fingerprint is a compact content-based feature that summarizes an audio clip. Just as human fingerprints are used to identify people, audio fingerprints can be used to retrieve unknown audio clips. In this paper, we propose an algorithm that extracts audio fingerprints based on dynamic subband locating and the normalized SSC (Spectral Subband Centroid). The SSC has been found to be related to the human sensation of the brightness of an audio signal. This algorithm makes it possible to recognize an unknown audio clip efficiently and with high robustness in the presence of noise. Preliminary experimental results show prominent performance.
    Audio, Language and Image Processing (ICALIP), 2012 International Conference on; 01/2012
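    A minimal sketch of computing normalized spectral subband centroids for one frame; the subband layout and normalization here are assumptions, and the paper's dynamic subband locating is not reproduced:
    ```python
    # Sketch only: energy-weighted mean frequency per subband, normalized within the band.
    import numpy as np

    def subband_centroids(frame, sr, n_bands=16):
        """frame: 1-D windowed audio -> one normalized centroid per subband."""
        spec = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
        cents = []
        for a, b in zip(edges[:-1], edges[1:]):
            p, f = spec[a:b], freqs[a:b]
            c = (f * p).sum() / (p.sum() + 1e-12)               # energy-weighted mean frequency
            cents.append((c - f[0]) / (f[-1] - f[0] + 1e-12))   # normalize within the band
        return np.array(cents)
    ```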
  • ABSTRACT: With the rapid expansion of modern multimedia data, accurate and efficient retrieval of audio information has become one of the most important problems to be solved, especially in real noisy environments. As an important part of multimedia information, audio data provides an indispensable element of human auditory perception; however, during transmission it is inevitably subject to various kinds of noise interference. There are generally two kinds of noise: fully distributed noise and burst mixed noise. In this paper, we mainly focus on removing the latter kind. Combined with the commonly used Philips fingerprinting algorithm, the noise-removed clean audio clips are used to extract fingerprints, which are then fed into the retrieval process. The results show that our method can better locate the noise and obtains relatively high similarity and matching accuracy.
    Audio, Language and Image Processing (ICALIP), 2012 International Conference on; 01/2012
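    A minimal sketch of locating burst noise by short-time energy, one common approach; the frame length and threshold rule are assumptions, and the Philips fingerprinting itself is not reproduced here:
    ```python
    # Sketch only: flag frames whose energy far exceeds the clip's median frame energy.
    import numpy as np

    def burst_frames(x, frame_len=1024, k=4.0):
        """x: 1-D audio samples -> indices of frames likely containing burst noise."""
        n = len(x) // frame_len
        frames = x[:n * frame_len].reshape(n, frame_len)
        energy = (frames.astype(np.float64) ** 2).sum(axis=1)
        return np.where(energy > k * np.median(energy))[0]
    ```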

Publication Stats

44 Citations
4.78 Total Impact Points

Institutions

  • 1998–2014
    • Shanghai University
      • School of Communication and Information Engineering (SCIE)
      Shanghai, Shanghai Shi, China
  • 2012
    • South China University of Technology
      • School of Software Engineering
      Guangzhou, Guangdong, China
  • 2011
    • Communication University of China
      Beijing, Beijing, China
  • 2001
    • The Hong Kong Polytechnic University
      • Department of Electronic and Information Engineering
      Hong Kong, Hong Kong