In this article, we compare the quality of various cross-lingual embeddings on a cross-lingual text classification problem and explore the possibility of transferring knowledge between languages. We consider Multilingual Unsupervised and Supervised Embeddings (MUSE), multilingual BERT embeddings, XLM-RoBERTa (XLM-R) model embeddings, and Language-Agnostic Sentence Representations (LASER). These embeddings serve as inputs to various classification algorithms for the task of patent categorization. This is a zero-shot cross-lingual classification task, since the training and validation sets contain English texts, while the test set consists of documents in Russian.
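To illustrate the zero-shot protocol, the following is a minimal sketch, not the authors' actual pipeline: it embeds English training texts and Russian test texts into the shared LASER space via the `laserembeddings` package (an assumption about tooling) and fits a logistic regression on the English side only. The texts and labels are hypothetical placeholders.

```python
# Minimal sketch of the zero-shot cross-lingual setup: fit a classifier
# on English LASER embeddings, evaluate it on Russian ones.
# Texts and labels below are placeholders, not the study's patent data.
from laserembeddings import Laser          # pip install laserembeddings
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

laser = Laser()

# English training data (placeholder examples; labels are IPC-style codes).
train_texts = ["A method for encoding video streams ...",
               "A pharmaceutical composition comprising ..."]
train_labels = ["H04N", "A61K"]

# Russian test data (placeholder examples with the same label set).
test_texts = ["Способ кодирования видеопотоков ...",
              "Фармацевтическая композиция, содержащая ..."]
test_labels = ["H04N", "A61K"]

# LASER maps both languages into one shared embedding space, so a
# classifier fitted on English vectors can score Russian vectors directly.
X_train = laser.embed_sentences(train_texts, lang="en")
X_test = laser.embed_sentences(test_texts, lang="ru")

clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
print("zero-shot accuracy:", accuracy_score(test_labels, clf.predict(X_test)))
```

The same pattern applies to the other embedding models: only the embedding step changes (e.g., mean-pooled multilingual BERT or XLM-R token representations instead of LASER sentence vectors), while the classifier is always trained on English data alone.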