Content uploaded by Joris van der Hoeven

Author content

All content in this area was uploaded by Joris van der Hoeven on Apr 22, 2018

Content may be subject to copyright.

Assume that we wish to expand the product h = fg of two formal power series f and g. Classically, there are two types of algorithms for doing this: zealous algorithms first expand f and g up to order n, multiply the results, and truncate at order n. Lazy algorithms, on the contrary, compute the coefficients of f, g and h gradually, performing no more computations than strictly necessary at each stage. In particular, at the moment we compute the coefficient h_i of z^i in h, only f_0, …, f_i and g_0, …, g_i are known.
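A minimal sketch of the lazy scheme just described (our own illustrative Python, not from the paper): each coefficient of h is produced on demand by the Cauchy product formula, touching only the prefixes of f and g available at that moment.

```python
# Lazy product of two formal power series: h_n is computed only when
# requested, and only f_0..f_n and g_0..g_n are consulted.

class LazySeries:
    """A power series whose n-th coefficient is produced by `coeff(n)`."""
    def __init__(self, coeff):
        self._coeff = coeff
        self._cache = []

    def __getitem__(self, n):
        while len(self._cache) <= n:          # memoize the prefix f_0..f_n
            self._cache.append(self._coeff(len(self._cache)))
        return self._cache[n]

def lazy_product(f, g):
    # h_n = sum_{i=0}^{n} f_i * g_{n-i}: the Cauchy product, evaluated
    # coefficient by coefficient, on demand.
    return LazySeries(lambda n: sum(f[i] * g[n - i] for i in range(n + 1)))

# Example: f = 1/(1-z) has all coefficients 1, so (f*f)_n = n + 1.
one_over_1mz = LazySeries(lambda n: 1)
h = lazy_product(one_over_1mz, one_over_1mz)
print([h[n] for n in range(5)])   # [1, 2, 3, 4, 5]
```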


... For instance, when the modulus is x^n, this amounts to power series composition. Power series reversion can then be reduced to composition, with a small overhead [15], as can further operations such as solving families of functional equations [28]. ...

... Power series. For the special case f = x^n of power series, Brent and Kung's second algorithm, relying on Taylor expansion, performs composition in only Õ(n^{3/2}) operations, provided g′(0) and ⌈√(n log n)⌉! are invertible in A; the assumption on g′(0) can be weakened [28, Sec. 3.4.3]. ...

... Faster composition, in only Õ(n) operations for a(g) rem f, is possible for many special cases of the outer series a: when a is a polynomial of degree O(1), but also when it is a power series solution of a polynomial equation of degree O(1), via Newton's iteration, or when it is a solution of a differential equation (e.g., exp), by first forming a differential equation for a(g) and then solving it by Newton's iteration or other divide-and-conquer algorithms, generally in characteristic 0 or large enough [9, 15, 28, 54; 8, §13.4]. Similarly, still in the case f = x^n, if furthermore the inner series g has specific properties, then composition of power series can be performed in Õ(n) operations. This is the case when g is a polynomial [15] of moderate degree (this is part of Brent and Kung's fast composition algorithm), or an algebraic power series [28], but also for a class of truncated power series that can be obtained via shifts, reversals, scalings, multiplications by polynomials, exponentials and logarithms [12]. ...

A new Las Vegas algorithm is presented for the composition of two polynomials modulo a third one, over an arbitrary field. When the degrees of these polynomials are bounded by $n$, the algorithm uses $O(n^{1.43})$ field operations, breaking through the $3/2$ barrier in the exponent for the first time. The previous fastest algebraic algorithms, due to Brent and Kung in 1978, require $O(n^{1.63})$ field operations in general, and ${n^{3/2+o(1)}}$ field operations in the particular case of power series over a field of large enough characteristic. If using cubic-time matrix multiplication, the new algorithm runs in ${n^{5/3+o(1)}}$ operations, while previous ones run in $O(n^2)$ operations. Our approach relies on the computation of a matrix of algebraic relations that is typically of small size. Randomization is used to reduce arbitrary input to this favorable situation.
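For orientation, the naive baseline for modular composition is Horner's rule with a reduction after every step: n modular multiplications, i.e. Õ(n²) field operations with fast multiplication. The sketch below (our own helper names, dense integer coefficient lists, low degree first, quadratic school-book arithmetic) is this baseline, not the paper's algorithm.

```python
def poly_mul(p, q):
    # school-book product of dense polynomials (low degree first)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_rem(p, f):
    # remainder of p modulo a monic polynomial f
    p = p[:]
    while len(p) >= len(f):
        c = p[-1]
        if c:
            for i, a in enumerate(f):
                p[len(p) - len(f) + i] -= c * a
        p.pop()                      # leading term is now zero
    return p if p else [0]

def modular_composition(a, g, f):
    """Compute a(g) rem f by Horner's rule: r <- (r*g + a_i) rem f."""
    r = [0]
    for c in reversed(a):
        r = poly_rem(poly_mul(r, g), f)
        r[0] += c
    return r

# a = x^2 + 1, g = x + 1, f = x^2:  a(g) = x^2 + 2x + 2 ≡ 2x + 2 (mod x^2)
print(modular_composition([1, 0, 1], [1, 1], [0, 0, 1]))  # [2, 2]
```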

... Other ways to express these coefficients are possible, for example using Bell polynomials. See also [13, 43] for considerations on the efficiency of numerical implementations. ...

We give an asymptotic development of the maximum likelihood estimator (MLE), or any other estimator defined implicitly, in a way which involves the limiting behavior of the score and its higher-order derivatives. This development, which is explicitly computable, gives some insights about the non-asymptotic behavior of the renormalized MLE and its departure from its limit. We highlight that the results hold whenever the score and its derivative converge, including to non-Gaussian limits.

... We assume that the reader is familiar with the technique of relaxed power series evaluations [7], which is an efficient way to solve so-called recursive power series equations. ...
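A recursive power series equation in miniature (our own example, not from [7]): in f = 1 + z·f², the coefficient of z^n on the right-hand side involves only f_0, …, f_{n−1}, so the coefficients can be produced one at a time. The naive on-demand scheme below costs O(n²) coefficient operations; relaxed evaluation performs the same task in softly linear time.

```python
def catalan_series(n):
    """First n coefficients of the solution of f = 1 + z*f^2."""
    f = [1]                         # f_0 = 1
    for m in range(1, n):
        # [z^m] (1 + z*f^2) = [z^(m-1)] f^2 = sum_{i=0}^{m-1} f_i * f_{m-1-i},
        # which only uses already-known coefficients: the equation is recursive.
        f.append(sum(f[i] * f[m - 1 - i] for i in range(m)))
    return f

print(catalan_series(6))  # [1, 1, 2, 5, 14, 42] -- the Catalan numbers
```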

The evaluation of a polynomial at several points is called the problem of multi-point evaluation. Sometimes, the set of evaluation points is fixed and several polynomials need to be evaluated at this set of points. Several efficient algorithms for this kind of “amortized” multi-point evaluation have been developed recently for the special cases of bivariate polynomials or when the set of evaluation points is generic. In this paper, we extend these results to the evaluation of polynomials in an arbitrary number of variables at an arbitrary set of points. We prove a softly linear complexity bound when the number of variables is fixed. Our method relies on a novel quasi-reduction algorithm for multivariate polynomials, that operates simultaneously with respect to several orderings on the monomials.
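The classical univariate case gives the flavor of fast multi-point evaluation: a subproduct/remainder tree reduces evaluation at n points to remainders modulo the subproducts over each half. The sketch below uses naive quadratic polynomial arithmetic and recomputes subproducts on the fly; it illustrates the pattern only and is not the paper's amortized multivariate algorithm (a real implementation would precompute the tree and use fast multiplication).

```python
def mul(p, q):
    # school-book product of dense polynomials (low degree first)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def rem(p, f):
    # remainder of p modulo a monic polynomial f
    p = p[:]
    while len(p) >= len(f):
        c = p[-1]
        if c:
            for i, a in enumerate(f):
                p[len(p) - len(f) + i] -= c * a
        p.pop()
    return p if p else [0]

def subprod(pts):
    # prod_{x0 in pts} (x - x0)
    f = [1]
    for x0 in pts:
        f = mul(f, [-x0, 1])
    return f

def eval_at_points(p, xs):
    """Values [p(x) for x in xs] via a remainder tree."""
    if len(xs) == 1:
        return [rem(p, [-xs[0], 1])[0]]   # p mod (x - x0) = p(x0)
    mid = len(xs) // 2
    left, right = xs[:mid], xs[mid:]
    return (eval_at_points(rem(p, subprod(left)), left)
            + eval_at_points(rem(p, subprod(right)), right))

# p(x) = x^3 - 2x + 1 at the points 0, 1, 2, 3
print(eval_at_points([1, -2, 0, 1], [0, 1, 2, 3]))  # [1, 0, 5, 22]
```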


... 1. Thanks to efficient algorithms [5, 59, 60] for computing power series solutions of the original system (1), we have an efficient randomized membership test for the ideal I. This membership test allows us to remove extraneous factors after each resultant computation and to avoid the accumulation of these factors during repeated resultant computations. ...

Elimination of unknowns in a system of differential equations is often required when analysing (possibly nonlinear) dynamical systems models, where only a subset of variables are observable. One such analysis, identifiability, often relies on computing input-output relations via differential algebraic elimination. Determining identifiability, a natural prerequisite for meaningful parameter estimation, is often prohibitively expensive for medium to large systems due to the computationally expensive task of elimination. We propose an algorithm that computes a description of the set of differential-algebraic relations between the input and output variables of a dynamical system model. The resulting algorithm outperforms general-purpose software for differential elimination on a set of benchmark models from the literature. We use the designed elimination algorithm to build a new randomized algorithm for assessing structural identifiability of a parameter in a parametric model. A parameter is said to be identifiable if its value can be uniquely determined from input-output data, assuming the absence of noise and sufficiently exciting inputs. Our new algorithm allows the identification of models that could not be tackled before. Our implementation is publicly available as a Julia package at https://github.com/SciML/StructuralIdentifiability.jl.

... for the computation of the coefficients; the solution is unique. However, equation (1.1) is not necessarily "recursive", so it is not always possible to solve it directly using the techniques from [20]. Nevertheless, it can always be rewritten as a recursive equation using the algorithms from [22]. ...

Many sequences that arise in combinatorics and the analysis of algorithms turn out to be holonomic (note that some authors prefer the terminology D-finite). In this paper, we study various basic algorithmic problems for such sequences (f_n)_{n∈ℕ} : how to compute their asymptotics for large n? How to evaluate f_n efficiently for large n and/or large precisions p? How to decide whether f_n > 0 for all n? We restrict our study to the case when the generating function f = ∑_{n∈ℕ} f_n z^n satisfies a Fuchsian differential equation (often it suffices that the dominant singularities of f be Fuchsian). Even in this special case, some of the above questions are related to long-standing problems in number theory. We will present algorithms that work in many cases and we carefully analyze what kind of oracles or conjectures are needed to tackle the more difficult cases.
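As a baseline for the evaluation question raised above: any holonomic sequence can be evaluated at index n by unrolling its recurrence, in O(n) arithmetic operations. A sketch of ours for the central binomial numbers c_n = C(2n, n), which are holonomic via (n+1)·c_{n+1} = (4n+2)·c_n; binary splitting and the asymptotic methods discussed in the paper do better for large n and large precisions.

```python
def central_binomial(n):
    """c_n = C(2n, n), by unrolling (k+1)*c_{k+1} = (4k+2)*c_k from c_0 = 1."""
    c = 1
    for k in range(n):
        # exact integer division: (k+1) divides (4k+2)*c_k
        c = c * (4 * k + 2) // (k + 1)
    return c

print([central_binomial(n) for n in range(6)])  # [1, 2, 6, 20, 70, 252]
```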

Assuming a widely believed hypothesis concerning the least prime in an arithmetic progression, we show that polynomials of degree less than $n$ over a finite field $\mathbb{F}_q$ with $q$ elements can be multiplied in time $O(n \log q \log(n \log q))$, uniformly in $q$. Under the same hypothesis, we show how to multiply two $n$-bit integers in time $O(n \log n)$; this algorithm is somewhat simpler than the unconditional algorithm from the companion paper [22]. Our results hold in the Turing machine model with a finite number of tapes.

Hensel’s lemma, combined with repeated applications of the Weierstrass preparation theorem, allows for the factorization of polynomials with multivariate power series coefficients. We present a complexity analysis for this method and leverage those results to guide the load-balancing of a parallel implementation to concurrently update all factors. In particular, the factorization creates a pipeline where the terms of degree $k$ of the first factor are computed simultaneously with the terms of degree $k-1$ of the second factor, etc. An implementation challenge is the inherent irregularity of computational work between factors, as our complexity analysis reveals. Additional resource utilization and load-balancing is achieved through the parallelization of Weierstrass preparation. Experimental results show the efficacy of this mixed parallel scheme, achieving up to 9× parallel speedup on a 12-core machine.
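The Hensel principle driving this factorization can be shown in its simplest form, lifting a simple root rather than a whole factor (our own sketch, not the paper's parallel scheme): a root of f modulo p is refined by Newton steps, with the working precision squared at each step. The paper lifts factors with power series coefficients; this integer-root version only illustrates the underlying lemma.

```python
def hensel_lift_root(f_coeffs, df_coeffs, root, p, k):
    """Lift `root` with f(root) = 0 mod p to a root mod p^(2^k),
    assuming f'(root) is invertible mod p (i.e. the root is simple)."""
    def horner(cs, x, m):
        r = 0
        for c in reversed(cs):       # evaluate the polynomial mod m
            r = (r * x + c) % m
        return r
    m = p
    x = root % p
    for _ in range(k):
        m = m * m                            # square the precision
        fx = horner(f_coeffs, x, m)
        dfx = horner(df_coeffs, x, m)
        x = (x - fx * pow(dfx, -1, m)) % m   # Newton/Hensel step
    return x, m

# f(x) = x^2 - 2 has the simple root 3 mod 7; lift it to a
# square root of 2 modulo 7^8.
x, m = hensel_lift_root([-2, 0, 1], [0, 2], 3, 7, 3)
print(x * x % m)   # 2
```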

Let P(s) = p_1 s + p_2 s^2 + … and Q(t) = q_0 + q_1 t + … be formal power series.
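Since P has no constant term, the composition Q(P(s)) is a well-defined formal power series, and its coefficient of s^m depends only on q_0, …, q_m. Truncating at order n and applying Horner's rule gives the naive O(n) series-multiplication baseline that fast composition algorithms improve on; a sketch of ours, with dense coefficient lists (low degree first).

```python
def mul_trunc(a, b, n):
    # product of two truncated series, keeping only terms of order < n
    r = [0] * n
    for i, x in enumerate(a[:n]):
        for j, y in enumerate(b[:n - i]):
            r[i + j] += x * y
    return r

def compose(q, p, n):
    """First n coefficients of Q(P(s)); requires p[0] == 0."""
    assert p[0] == 0
    r = [0] * n
    for c in reversed(q[:n]):       # Horner: r <- r*P + q_i, truncated
        r = mul_trunc(r, p, n)
        r[0] += c
    return r

# Q(t) = 1/(1-t) = 1 + t + t^2 + ...,  P(s) = s + s^2:
# Q(P(s)) = 1/(1 - s - s^2), whose coefficients are the Fibonacci numbers.
n = 6
print(compose([1] * n, [0, 1, 1], n))  # [1, 1, 2, 3, 5, 8]
```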

We describe the GFUN package which contains functions for manipulating sequences, linear recurrences or differential equations and generating functions of various types. This document is intended both as an elementary introduction to the subject and as a reference manual for the package.

Thesis, Harvard University. Bibliography: p. [97]–[98].

Computer algebra systems are now ubiquitous in all areas of science and engineering. This highly successful textbook, widely regarded as the 'bible of computer algebra', gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems. Designed to accompany one- or two-semester courses for advanced undergraduate or graduate students in computer science or mathematics, its comprehensiveness and reliability have also made it an essential reference for professionals in the area. Special features include: detailed study of algorithms including time analysis; implementation reports on several topics; complete proofs of the mathematical underpinnings; and a wide variety of applications (among others, in chemistry, coding theory, cryptography, computational logic, and the design of calendars and musical scales). A great deal of historical information and illustration enlivens the text. In this third edition, errors have been corrected and much of the Fast Euclidean Algorithm chapter has been renovated.