Support Vector Machine (SVM) is a state-of-the-art learning machine that has been fruitful not only in pattern recognition but also in data mining areas such as feature selection on microarray data, novelty detection, and the scalability of algorithms. SVM has been extensively and successfully applied to feature selection for genetic diagnosis. In this paper, we do the contrary, i.e., we use the fruits achieved in applying SVM to feature selection to improve SVM itself. By removing redundant and non-discriminative features, the computational cost of SVM is greatly reduced and evaluation speeds up. We propose combining Principal Component Analysis (PCA) and Recursive Feature Elimination (RFE) with multi-class SVM. We found that SVM is invariant under the PCA transform, which qualifies PCA as a desirable dimension-reduction method for SVM. On the other hand, RFE is a suitable feature selection method for binary SVM. However, RFE requires many iterations, and each iteration needs to train an SVM; this makes RFE infeasible for multi-class SVM without PCA dimension reduction, especially when the training set is large. Therefore, combining PCA with RFE is necessary. Our experiments on the benchmark database MNIST and other commonly used datasets show that PCA and RFE can speed up the evaluation of SVM by roughly an order of magnitude while maintaining comparable accuracy.
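The pipeline described above (PCA for dimension reduction, then RFE driven by a linear SVM, then a multi-class SVM classifier) can be sketched as follows. This is a minimal illustration using scikit-learn, which the paper does not use; the dataset (the small bundled digits set as a stand-in for MNIST), the number of principal components, and the number of retained features are all illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of the PCA + RFE + multi-class SVM pipeline,
# using scikit-learn as an assumed implementation vehicle.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC, LinearSVC

# Small 8x8 digits dataset as a stand-in for MNIST (illustrative only).
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    # PCA first: the SVM decision is unchanged by this orthogonal
    # transform, so it serves purely as cheap dimension reduction
    # that makes the subsequent RFE iterations affordable.
    ("pca", PCA(n_components=32)),
    # RFE then prunes the PCA features; a linear SVM supplies the
    # per-feature weights that RFE ranks and recursively eliminates.
    ("rfe", RFE(LinearSVC(dual=False), n_features_to_select=16)),
    # Final multi-class SVM trained on the reduced feature set.
    ("svm", SVC(kernel="rbf", gamma="scale")),
])
pipe.fit(X_tr, y_tr)
acc = pipe.score(X_te, y_te)
print(f"test accuracy on reduced features: {acc:.3f}")
```

Note the ordering: running RFE directly on the raw input would require retraining the SVM once per eliminated feature over the full dimensionality, which is exactly the cost the abstract says PCA is needed to avoid.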