Ann Math Artif Intell (2017) 81:3–19
DOI 10.1007/s10472-017-9538-x
Knowledge transfer in SVM and neural networks
Vladimir Vapnik1,2 · Rauf Izmailov3
Published online: 20 February 2017
© Springer International Publishing Switzerland 2017
Abstract The paper considers general machine learning models in which knowledge transfer
is positioned as the main method for improving their convergence properties. Previous research
focused on mechanisms of knowledge transfer in the context of the SVM framework; this
paper shows that the mechanism is applicable to the neural network framework as well. The
paper describes several general approaches to knowledge transfer in both the SVM and ANN
frameworks and illustrates algorithmic implementations and the performance of one of these
approaches on several synthetic examples.
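To make the abstract's notion of knowledge transfer concrete, the following is a minimal, hypothetical sketch of one way such transfer from a privileged space to the decision space can be organized: "frames" are selected in the privileged space, their similarity functions are approximated by regressions from the decision space, and the final classifier is trained on the augmented representation. The use of scikit-learn, the function names, and the parameter choices below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of privileged-information knowledge transfer:
# privileged features X_star are available only at training time.
import numpy as np
from sklearn.svm import SVC
from sklearn.kernel_ridge import KernelRidge

def transfer_and_train(X, X_star, y, n_frames=10, gamma=1.0):
    """X: decision-space features; X_star: privileged features (training only)."""
    # 1. Train an SVM in the privileged space; use its support vectors as frames.
    svm_star = SVC(kernel="rbf", gamma=gamma).fit(X_star, y)
    frames = svm_star.support_vectors_[:n_frames]

    # 2. Each frame defines a privileged-space similarity k*(x*, frame);
    #    approximate these similarities by regressions from the decision space.
    def frame_similarities(Xs):
        d = ((Xs[:, None, :] - frames[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    targets = frame_similarities(X_star)  # computable only during training
    regressors = [KernelRidge(kernel="rbf", gamma=gamma).fit(X, targets[:, j])
                  for j in range(targets.shape[1])]

    def transferred(Xd):  # usable at test time, no privileged data required
        return np.column_stack([r.predict(Xd) for r in regressors])

    # 3. Train the final decision-space SVM on X augmented with the
    #    transferred (approximated) privileged similarities.
    clf = SVC(kernel="rbf", gamma=gamma).fit(np.hstack([X, transferred(X)]), y)
    return clf, transferred
```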
Keywords Intelligent teacher · Privileged information · Similarity control · Knowledge
transfer · Knowledge representation · Frames · Support vector machine · Neural network ·
Classification · Learning theory · Regression
Mathematics Subject Classification (2010) 68Q32 · 68T05 · 68T30 · 83C32
This material is based upon work partially supported by AFRL and DARPA under contract
FA8750-14-C-0008 and upon work partially supported by AFRL under contract FA9550-15-1-0502. Any
opinions, findings, and/or conclusions in this material are those of the authors and do not necessarily
reflect the views of AFRL and DARPA.
Rauf Izmailov
rizmailov@vencorelabs.com
Vladimir Vapnik
vladimir.vapnik@gmail.com
1 Columbia University, New York, NY, USA
2 AI Research Lab, Facebook, New York, NY, USA
3 Vencore Labs, Basking Ridge, NJ, USA