Nikolai K. Vereshchagin

Moscow State Forest University, Mytishchi, Moscow Oblast, Russia

Publications (67) · 11.72 Total Impact

  •
    ABSTRACT: Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a little increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that only uses public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct sum theorems through the compression of interactive communication in the bounded-round setting. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result.
    2013 IEEE Conference on Computational Complexity (CCC); 01/2013
  • Source
    ABSTRACT: The main goal of this article is to put some known results in a common perspective and to simplify their proofs. We start with a simple proof of a result of Vereshchagin saying that $\limsup_n C(x|n)$ equals $C^{0'}(x)$ (the notation is recalled after this entry). Then we use the same argument to prove similar results for prefix complexity and a priori probability on the binary tree, to prove Conidis' theorem about limits of effectively open sets, and to improve Muchnik's results about limit frequencies. As a by-product, we get a criterion of 2-randomness proved by Miller: a sequence $X$ is 2-random if and only if there exists $c$ such that any prefix $x$ of $X$ is a prefix of some string $y$ such that $C(y)\ge |y|-c$. (In the 1960s this property was suggested by Kolmogorov as one of the possible definitions of randomness.) We also get another 2-randomness criterion by Miller and Nies: $X$ is 2-random if and only if $C(x)\ge |x|-c$ for some $c$ and infinitely many prefixes $x$ of $X$. This is a modified version of our old paper that contained a weaker (and cumbersome) version of Conidis' result, and the proof used the low basis theorem (in quite a strange way). The full version was formulated there as a conjecture. This conjecture was later proved by Conidis. Bruno Bauwens (personal communication) noted that the proof can also be obtained by a simple modification of our original argument, and we reproduce Bauwens' argument with his permission.
    04/2012;
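    For readers less used to the notation in the abstract above (these are standard definitions, not results of the paper): $C(x|n)$ denotes the plain Kolmogorov complexity of the string $x$ given the number $n$ as auxiliary input, and $C^{0'}(x)$ denotes plain complexity relative to an oracle for the halting problem $0'$. The first result discussed above can then be written, with the usual additive-constant precision, as
    $$\limsup_{n\to\infty} C(x \mid n) \;=\; C^{0'}(x) + O(1).$$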
  • Source
    ABSTRACT: Consider an American option that pays $G(X^*_t)$ when exercised at time $t$, where $G$ is a positive increasing function, $X^*_t := \sup_{s\le t}X_s$, and $X_s$ is the price of the underlying security at time $s$. Assuming zero interest rates, we show that the seller of this option can hedge his position by trading in the underlying security if he begins with initial capital $X_0\int_{X_0}^{\infty}G(x)x^{-2}\,dx$ (and this is the smallest initial capital that allows him to hedge his position). This leads to strategies for trading that are always competitive both with a given strategy's current performance and, to a somewhat lesser degree, with its best performance so far. It also leads to methods of statistical testing that avoid sacrificing too much of the maximum statistical significance that they achieve in the course of accumulating data. (A worked example of the initial-capital formula for a specific $G$ follows this entry.)
    08/2011;
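    As a worked example of the initial-capital formula above (the particular choice $G(x) = \sqrt{x}$ is ours, for illustration only): with $G(x) = \sqrt{x}$ the required initial capital is
    $$X_0\int_{X_0}^{\infty}\sqrt{x}\,x^{-2}\,dx \;=\; X_0\int_{X_0}^{\infty}x^{-3/2}\,dx \;=\; X_0\cdot\frac{2}{\sqrt{X_0}} \;=\; 2\sqrt{X_0},$$
    i.e. twice the square root of the initial price; for $G$ growing linearly or faster the integral diverges, so no finite initial capital suffices.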
  • Source
    N.K. Vereshchagin, P.M.B. Vitányi
    ABSTRACT: We examine the structure of families of distortion balls from the perspective of Kolmogorov complexity. Special attention is paid to the canonical rate-distortion function of a source word which returns the minimal Kolmogorov complexity of all distortion balls containing that word subject to a bound on their cardinality. This canonical rate-distortion function is related to the more standard algorithmic rate-distortion function for the given distortion measure. Examples are given of list distortion, Hamming distortion, and Euclidean distortion. The algorithmic rate-distortion function can behave differently from Shannon's rate-distortion function. To this end, we show that the canonical rate-distortion function can and does assume a wide class of shapes (unlike Shannon's); we relate low algorithmic mutual information to low Kolmogorov complexity (and consequently suggest that certain aspects of the mutual information formulation of Shannon's rate-distortion function behave differently than would an analogous formulation using algorithmic mutual information); we explore the notion that low Kolmogorov complexity distortion balls containing a given word capture the interesting properties of that word (which is hard to formalize in Shannon's theory) and this suggests an approach to denoising.
    IEEE Transactions on Information Theory 08/2010; · 2.62 Impact Factor
  • Source
    ABSTRACT: We consider the game-theoretic scenario of testing the performance of Forecaster by Sceptic, who gambles against the forecasts. Sceptic's current capital is interpreted as the amount of evidence he has found against Forecaster. Reporting the maximum of Sceptic's capital so far exaggerates the evidence. We characterize the set of all increasing functions that remove the exaggeration. This result can be used for insuring against loss of evidence. A more up-to-date version, including an application to financial markets (where the result can be used for insuring against loss of accumulated capital), is available at http://www.probabilityandfinance.com/ (Working Paper 34).
    05/2010;
  • Source
    Harry Buhrman, Leen Torenvliet, Falk Unger, Nikolai K. Vereshchagin
    Electronic Colloquium on Computational Complexity (ECCC). 01/2010; 17:163.
  • Source
    Glenn Shafer, Alexander Shen, Nikolai Vereshchagin, Vladimir Vovk
    ABSTRACT: A nonnegative martingale with initial value equal to one measures evidence against a probabilistic hypothesis. The inverse of its value at some stopping time can be interpreted as a Bayes factor. If we exaggerate the evidence by considering the largest value attained so far by such a martingale, the exaggeration will be limited, and there are systematic ways to eliminate it. The inverse of the exaggerated value at some stopping time can be interpreted as a $p$-value. We give a simple characterization of all increasing functions that eliminate the exaggeration. (The classical maximal inequality that bounds this exaggeration is recalled after this entry.)
    Statistical Science 12/2009; 26(2011). · 2.24 Impact Factor
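    The classical fact behind the limited exaggeration mentioned above is Ville's maximal inequality (standard background, not a result of the paper): if $(M_t)$ is a nonnegative martingale with $M_0 = 1$, then for every $c \ge 1$
    $$\Pr\Bigl[\sup_t M_t \ge c\Bigr] \;\le\; \frac{1}{c},$$
    so the inverse of the running maximum can still be read as a $p$-value.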
  • Source
    Alexey V. Chernov, Alexander Shen, Nikolai K. Vereshchagin, Vladimir Vovk
    ABSTRACT: Classical probability theory considers probability distributions that assign probabilities to all events (at least in the finite case). However, there are natural situations where only part of the process is controlled by some probability distribution while for the other part we know only the set of possibilities without any probabilities assigned. We adapt the notions of algorithmic information theory (complexity, algorithmic randomness, martingales, a priori probability) to this framework and show that many classical results are still valid.
    Algorithmic Learning Theory, 19th International Conference, ALT 2008, Budapest, Hungary, October 13-16, 2008. Proceedings; 01/2008
  • Source
    Harry Buhrman, Michal Koucký, Nikolai K. Vereshchagin
    ABSTRACT: In this paper we study the individual communication complexity of the following problem. Alice receives an input string x and Bob an input string y, and Alice has to output y. For deterministic protocols it has been shown in Buhrman et al. (2004) that C(y) many bits need to be exchanged even if the actual amount of information C(y|x) is much smaller than C(y). It turns out that for randomised protocols the situation is very different. We establish randomised protocols whose communication complexity is close to the information-theoretical lower bound. We furthermore initiate and obtain results about the randomised round complexity of this problem and show trade-offs between the amount of communication and the number of rounds. In order to do this we establish a general framework for studying these types of questions. (A toy fingerprinting sketch illustrating the flavour of such randomised protocols follows this entry.)
    Proceedings of the 23rd Annual IEEE Conference on Computational Complexity, CCC 2008, 23-26 June 2008, College Park, Maryland, USA; 01/2008
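    The following toy sketch in Python is ours, not the protocol from the paper: it illustrates the fingerprinting idea behind randomised protocols of this kind. If Alice's input effectively narrows y down to a small candidate set (used here as a stand-in for the strings that are simple given x), a short random hash sent by Bob lets Alice recover y, so the communication scales with the remaining uncertainty about y rather than with its length.

      import random

      def random_linear_hash(nbits, outbits, seed):
          # A random GF(2)-linear hash, given by a random 0/1 matrix.
          rng = random.Random(seed)
          return [[rng.randrange(2) for _ in range(nbits)] for _ in range(outbits)]

      def apply_hash(matrix, bits):
          # Multiply the matrix by the bit vector over GF(2).
          return tuple(sum(r * b for r, b in zip(row, bits)) % 2 for row in matrix)

      n = 32
      rng = random.Random(0)
      y = [rng.randrange(2) for _ in range(n)]                        # Bob's input
      candidates = [[rng.randrange(2) for _ in range(n)] for _ in range(15)] + [y]
      # 'candidates' stands in for the strings that are plausible for Alice given x.

      outbits = 8          # about log2(#candidates) plus slack for the error probability
      H = random_linear_hash(n, outbits, seed=42)                     # shared randomness

      fingerprint = apply_hash(H, y)                                  # Bob -> Alice: 8 bits
      recovered = [c for c in candidates if apply_hash(H, c) == fingerprint]
      print(recovered == [y])  # True with high probability: 8 bits sufficed, not 32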
  • N K Vereshchagin
    ABSTRACT: In the first part of the paper we prove that, relative to a random oracle, the class NP contains infinite sets having no infinite Co-NP-subsets (Co-NP-immune sets). In the second part we prove that perceptrons separating Boolean matrices in which each row contains at least one 1 from matrices in which many rows (say 99% of them) have no 1's must have either large size or large order. This result partially strengthens the "one-in-a-box" theorem of Minsky and Papert [16], which states that perceptrons of small order cannot decide whether every row of a given Boolean matrix has a 1. As a corollary, we prove that AM ∩ Co-AM ⊄ PP under some oracles.
    Izvestiya Mathematics 10/2007; 59(6):1103. · 0.64 Impact Factor
  • Source
    ABSTRACT: Assume that a program p on input a outputs b. We are looking for a shorter program q having the same property (q(a)=b). In addition, we want q to be simple conditional on p (this means that the conditional Kolmogorov complexity K(q|p) is negligible). In the present paper, we prove that sometimes there is no such program q, even in the case when the complexity of p is much bigger than K(b|a). We give three different constructions that use the game approach, probabilistic arguments and algebraic arguments, respectively.
    Theoretical Computer Science. 09/2007;
  • Source
    Nikolai K. Vereshchagin, Harry Buhrman, Matthias Christandl, Michal Koucký, Zvi Lotker, Boaz Patt-Shamir
    ABSTRACT: We study the two-party problem of randomly selecting a string among all the strings of length n. We want the protocol to have the property that the output distribution has high entropy, even when one of the two parties is dishonest and deviates from the protocol. We develop protocols that achieve high, close to n, entropy. In the literature the randomness guarantee is usually expressed as being close to the uniform distribution or in terms of resiliency. The notion of entropy is not directly comparable to that of resiliency, but we establish a connection between the two that allows us to compare our protocols with the existing ones. We construct an explicit protocol that yields entropy n - O(1) and has 4 log* n rounds, improving over the protocol of Goldwasser et al. that also achieves this entropy but needs O(n) rounds. Both these protocols need O(n^2) bits of communication. Next we reduce the communication in our protocols. We show the existence, non-explicitly, of a protocol that has 6 rounds, 2n + 8 log n bits of communication and yields entropy n - O(log n) and min-entropy n/2 - O(log n). Our protocol achieves the same entropy bound as the recent, also non-explicit, protocol of Gradwohl et al., but achieves much higher min-entropy: n/2 - O(log n) versus O(log n). Finally we exhibit very simple explicit protocols. We connect the security parameter of these geometric protocols with the well-studied Kakeya problem motivated by harmonic analysis and analytic number theory. We are only able to prove that these protocols have entropy 3n/4, but still n/2 - O(log n) min-entropy. They therefore do not perform as well, entropy-wise, as the explicit constructions of Gradwohl et al., but still have much better min-entropy. We conjecture that these simple protocols achieve n - o(n) entropy. Our geometric construction and its relation to the Kakeya problem take a new approach to the random selection problem, different from any of the previously known protocols. (A trivial baseline selection protocol, for comparison, is sketched after this entry.)
    Algebraic Methods in Computational Complexity, 07.10. - 12.10.2007; 01/2007
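    For comparison, here is the trivial baseline (our illustration, not a construction from the paper): Alice chooses the first n/2 bits and Bob the last n/2 bits. Whatever a dishonest party does, the honest party's half stays uniformly random, so the selected string has min-entropy at least n/2; the protocols above aim to push the Shannon entropy close to n while keeping comparable min-entropy.

      import random

      def select_string(n, alice_bits=None, bob_bits=None):
          # Passing None means the party is honest and plays uniformly at random.
          half = n // 2
          a = alice_bits if alice_bits is not None else [random.randrange(2) for _ in range(half)]
          b = bob_bits if bob_bits is not None else [random.randrange(2) for _ in range(n - half)]
          return a + b

      print(select_string(8))                          # both parties honest
      print(select_string(8, bob_bits=[0, 0, 0, 0]))   # dishonest Bob: first half still uniform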
  • Source
    Computer Science - Theory and Applications, Second International Symposium on Computer Science in Russia, CSR 2007, Ekaterinburg, Russia, September 3-7, 2007, Proceedings; 01/2007
  • Source
    Theor. Comput. Sci. 01/2007; 384:77-86.
  • Source
    Harry Buhrman, Nikolai K. Vereshchagin, Ronald de Wolf
    ABSTRACT: We present two results for computational models that allow error probabilities close to 1/2. First, most computational complexity classes have an analogous class in communication complexity. The class PP in fact has two: a version with weakly restricted bias called PP^cc, and a version with unrestricted bias called UPP^cc. Ever since their introduction by Babai, Frankl, and Simon in 1986, it has been open whether these classes are the same. We show that PP^cc ⊊ UPP^cc. Our proof combines a query complexity separation due to Beigel with a technique of Razborov that translates the acceptance probability of quantum protocols to polynomials. Second, we study how small the bias of minimal-degree polynomials that sign-represent Boolean functions needs to be. We show that the worst-case bias is at worst double-exponentially small in the sign-degree (which was very recently shown to be optimal by Podolskii), while the average-case bias can be made single-exponentially small in the sign-degree (which we show to be close to optimal). (The cost measures defining PP^cc and UPP^cc are recalled after this entry.)
    22nd Annual IEEE Conference on Computational Complexity (CCC 2007), 13-16 June 2007, San Diego, California, USA; 01/2007
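    For context, the cost measures behind these two classes, in the conventions standardly attributed to Babai, Frankl, and Simon (this is our recap of the usual definitions, not a quotation from the paper): a private-coin randomised protocol $\Pi$ computes $f$ with bias $\beta_\Pi$ if it outputs $f(x,y)$ with probability at least $1/2 + \beta_\Pi$ on every input, and
    $$\mathrm{UPP}^{cc}(f) \;=\; \min_{\Pi}\,\mathrm{comm}(\Pi), \qquad \mathrm{PP}^{cc}(f) \;=\; \min_{\Pi}\,\bigl(\mathrm{comm}(\Pi) + \log(1/\beta_\Pi)\bigr),$$
    so a protocol whose bias is tiny is cheap under $\mathrm{UPP}^{cc}$ but expensive under $\mathrm{PP}^{cc}$.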
  • Source
    Noga Alon, Ilan Newman, Alexander Shen, Gábor Tardos, Nikolai K. Vereshchagin
    ABSTRACT: Our main result implies the following easily formulated statement. The set of edges E of every finite bipartite graph can be split into poly(log|E|) subsets so that all the resulting bipartite graphs are almost regular. The latter means that the ratio between the maximal and minimal non-zero degree of the left nodes is bounded by a constant, and the same condition holds for the right nodes. Stated differently, every finite 2-dimensional set $S \subset \mathbb{N}^2$ can be partitioned into poly(log|S|) parts so that in every part the ratio between the maximal size and the minimal size of the non-empty horizontal sections is bounded by a constant, and the same condition holds for vertical sections. We prove a similar statement for n-dimensional sets for any n and show how it can be used to relate information inequalities for the Shannon entropy of random variables to inequalities between the sizes of sections and projections of multi-dimensional finite sets.
    Eur. J. Comb. 01/2007; 28:134-144.
  • Sylvain Porrot, Max Dauchet, Bruno Durand, Nikolai K. Vereshchagin
    ABSTRACT: This paper presents some results about transformations of infinite random sequences by letter-to-letter rational transducers. We show that it is possible, by observing initial segments of a given random sequence, to decide whether two given letter-to-letter rational transducers have the same output on that sequence. We use the characterization of random sequences by Kolmogorov complexity. We also prove that the image of a random sequence is either random, or non-random and non-recursive, or periodic, depending on certain structural properties of the transducer, which we describe. (A minimal letter-to-letter transducer, illustrating the model, is sketched after this entry.)
    06/2006: pages 258-272;
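    As a concrete illustration of the model studied above (the specific machine below is a made-up example, not one from the paper), a letter-to-letter transducer reads one input letter per step, emits exactly one output letter, and updates its state, so it maps infinite sequences to infinite sequences:

      from itertools import islice

      # Transitions and outputs of a toy letter-to-letter transducer over {0, 1}:
      # in state q, on input letter a, it emits OUT[q][a] and moves to NEXT[q][a].
      NEXT = {"even": {"0": "even", "1": "odd"}, "odd": {"0": "odd", "1": "even"}}
      OUT  = {"even": {"0": "0",    "1": "1"},   "odd": {"0": "1",   "1": "0"}}

      def transduce(transitions, outputs, start, letters):
          # Lazily transform an (in)finite sequence of letters, one letter per step.
          state = start
          for letter in letters:
              yield outputs[state][letter]
              state = transitions[state][letter]

      def alternating_bits():
          # An infinite input stream 0 1 0 1 0 1 ...
          i = 0
          while True:
              yield str(i % 2)
              i += 1

      print("".join(islice(transduce(NEXT, OUT, "even", alternating_bits()), 16)))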
  • Source
    Lane A. Hemachandra, Sanjay Jain, Nikolai K. Vereshchagin
    ABSTRACT: This paper shows that promise classes are so fragilely structured that they do not robustly possess Turing-hard sets even in classes far larger than themselves. We show that FewP does not robustly possess Turing-hard sets for UP ∩ coUP, and that IP ∩ coIP does not robustly possess Turing-hard sets for ZPP. It follows that ZPP, R, coR, UP ∩ coUP, UP, FewP ∩ coFewP, FewP, and IP ∩ coIP do not robustly possess Turing-complete sets. This both resolves open questions of whether promise classes lacking robust downward closure under Turing reductions (e.g., R, UP, FewP) might robustly have Turing-complete sets, and extends the range of classes known not to robustly contain many-one complete sets.
    04/2006: pages 186-197;
  • Source
    Nikolai Vereshchagin, Paul Vitányi
    ABSTRACT: We propose and develop rate-distortion theory in the Kolmogorov complexity setting. This gives the ultimate limits of lossy compression of individual data objects, taking all effective regularities of the data into account.
    IEEE Transactions on Information Theory. 01/2006;
  • Andrei A. Muchnik, Nikolai K. Vereshchagin
    ABSTRACT: Most assertions involving Shannon entropy have their Kolmogorov complexity counterparts. A general theorem of Romashchenko [4] states that every information inequality that is valid in Shannon's theory is also valid in Kolmogorov's theory, and vice versa. In this paper we prove that this is no longer true for ∀∃-assertions, exhibiting the first example where the formal analogy between Shannon entropy and Kolmogorov complexity fails.
    Computer Science - Theory and Applications, First International Computer Science Symposium in Russia, CSR 2006, St. Petersburg, Russia, June 8-12, 2006, Proceedings; 01/2006

Publication Stats

484 Citations
11.72 Total Impact Points

Institutions

  • 1996–2010
    • Moscow State Forest University
      Mytishchi, Moscow Oblast, Russia
  • 2008
    • Aix-Marseille Université
      Marseille, Provence-Alpes-Côte d'Azur, France
    • University of Amsterdam
      Amsterdam, North Holland, Netherlands
  • 2000–2008
    • Moscow State Textile University
      Moscow, Russia
  • 1997–2007
    • Lomonosov Moscow State University
      • Faculty of Mechanics and Mathematics
      • Department of Mathematical Logic and Theory of Algorithms
      Moscow, Russia
    • Technische Universität Berlin
      Berlin, Berlin, Germany
    • Weizmann Institute of Science
      • Department of Mathematics
      Israel
  • 1998
    • Hungarian Academy of Sciences
      Budapest, Hungary