Rigorous continuation of periodic solutions for
impulsive delay differential equations
Kevin E. M. Church and Gabriel W. Duchesne
Department of Mathematics and Statistics, McGill University
October 1, 2020
We develop a rigorous numerical method for periodic solutions of impulsive
delay differential equations, as well as parameterized branches of periodic solutions.
We are able to compute approximate periodic solutions to high precision and with
computer-assisted proof, verify that these approximate solutions are close to true
solutions with explicitly computable error bounds. As an application, we prove the
existence of a global branch of periodic solutions in the pulse-harvested Hutchinson
equation, connecting the state at carrying capacity in the absence of harvesting to
the transcritical bifurcation at the extinction steady state.
1 Introduction
Numerical continuation is an old and ever-important topic in computational mathematics. Suppose we have a nonlinear equation
\[
f(x, \alpha) = 0,
\]
and given some solution $x \in X$ for $X$ a Banach space, we want to continue the solution with respect to the parameter $\alpha$. Unless $X$ is finite-dimensional, some finite-dimensional projection must be made. Once this is done, continuation can proceed using typical predictor-corrector methods, but there then remain questions about how fine the projection must be. Such questions can be answered using so-called rigorous continuation approaches. These ideas have been applied successfully to the continuation of steady states of partial differential equations [8, 9, 10], invariant manifolds [12], and periodic solutions of ordinary [3, 22] and delay [13] differential equations, among others.
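The predictor-corrector loop mentioned above can be sketched in a few lines. The following is a minimal, hypothetical illustration for a scalar equation $f(x,\alpha)=0$ with natural-parameter stepping and a Newton corrector; it is not the infinite-dimensional setting of the paper, and all names are our own.

```python
def continue_branch(f, dfdx, x0, alphas, tol=1e-12, maxit=50):
    """Natural-parameter predictor-corrector continuation for f(x, alpha) = 0.
    The solution at the previous parameter value predicts the next one, and
    Newton's method corrects it. Returns a list of (alpha, x) pairs."""
    branch, x = [], x0
    for alpha in alphas:
        for _ in range(maxit):          # Newton corrector
            step = f(x, alpha) / dfdx(x, alpha)
            x -= step
            if abs(step) < tol:
                break
        branch.append((alpha, x))
    return branch

# Example: the branch of positive zeroes of f(x, alpha) = x^2 - alpha.
branch = continue_branch(lambda x, a: x * x - a, lambda x, a: 2.0 * x,
                         x0=1.0, alphas=[1.0, 1.25, 1.5, 1.75, 2.0])
```

The final point of `branch` approximates $(\alpha, x) = (2, \sqrt{2})$; rigorous continuation additionally bounds the distance between such numerical branches and true ones.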
Impulsive dynamical systems are characterized by a combination of continuous-
time dynamics and moments of discontinuity in state triggered by a spatiotemporal
relationship. The simplest such spatiotemporal relationship is one in which the
discontinuities – referred to as impulses – occur at fixed times. The result is that
such systems are nonautonomous, and so steady states are relatively rare (in a
geometric sense) while the simplest invariant sets one can use as organizing centers
for global dynamics are the bounded trajectories. When the times at which impulses
occur are periodic, one can consider periodic solutions as these simple objects.
Bifurcation theory for periodic solutions of impulsive functional differential equa-
tions has undergone some new developments in recent years [5, 7], but such results
are only useful if one has computed a periodic solution to begin with. This is a
main motivation for considering here the problem of computation and continuation
of periodic solutions for impulsive delay differential equations. Our main contribu-
tion is a numerical method to compute such periodic solutions, continue them, and
rigorously prove their existence using the assistance of the computer.
Computer-assisted proofs of stability for linear impulsive delay differential equations have recently been accomplished [7] using Chebyshev spectral collocation techniques, and the approach we take here shares some similarities. The idea is as follows. If the period of impulse effect and the discrete delay are commensurate, we show that any periodic solution will be piecewise $C^\infty$. If the vector field is analytic, it will be piecewise-analytic. We can then show that computing a periodic solution is equivalent to computing the solution of a higher-dimensional boundary-value problem, where the number of extra dimensions is related to the ratio between the delay and the period of impulse effect. Crucially, this boundary-value problem can be expressed as a system of ordinary differential equations without delays or impulses. To rigorously compute solutions, we then exploit the
analyticity of any such solution and expand it in uniformly convergent Chebyshev
series. This allows conversion between the problem of computing a solution of a
boundary-value problem into one of computing a zero of an infinite-dimensional
nonlinear map in a sequence space. By truncating the number of modes in the
Chebyshev series, we obtain a finite-dimensional zero-finding problem. We then
apply numerical methods for the computation and continuation of such zeroes. Rig-
orous numerics can then be used to prove that such numerical zeroes (respectively,
branches of zeroes) are proximal to true zeroes (respectively, branches). The error
between the numerical and true solution can be rigorously computed.
Converting problems in nonlinear dynamics into zero-finding problems in se-
quence spaces is not a new idea. In the context of sequence spaces representing
Chebyshev series coefficients, see [2, 7, 15, 20] for a few recent applications. One soli-
tary application of Chebyshev expansions in nonlinear impulsive dynamical systems
we could find appears in [24], where it was used to generate a simplified approxima-
tion of an optimal control problem involving impulsive integrodifferential equations.
The idea was that the approximate problems could be solved using existing soft-
ware. However, sequence space concepts are not used there, so we are comfortable
in asserting that the present paper is the first such application of Chebyshev series
that is truly instrumental to solving a problem in nonlinear impulsive dynamical systems.
We should remark especially that periodic solutions of delay equations have been
computed using a Chebyshev integrator in [15], with the period being an integer
multiple of the delay. There, the idea is based on an implicit formulation of the
integral form of the step map. The advantage of that formulation is that one
can rigorously integrate from arbitrary initial data. To contrast, here we do not
explicitly integrate the delay differential equation; rather, we carefully determine
how periodic solutions of the impulsive delay differential equations are generated by
ordinary differential equations with unfixed boundary conditions. The result is that
the zero-finding problem we get for periodic solutions is comparatively simpler, it is
easier to derive bounds for the computer-assisted proofs, and we can find periodic solutions whose period is a non-integer multiple of the delay. However, we cannot integrate from arbitrary initial data and can only find periodic solutions.
It is perhaps initially surprising that we do not expand into Fourier series, since our stated objective is to compute and continue periodic solutions. Fourier expansions work very well for wholly continuous problems, and there are numerous such applications in rigorous numerics [3, 13, 17, 20, 22]. However, the Fourier basis is ill-suited for approximating functions with discontinuities. Indeed, the Fourier series associated to a discontinuous function does not even converge pointwise, and the coefficients decay at best linearly in the frequency. In contrast, with our setup using piecewise Chebyshev expansions, we can represent solutions with respect to a Schauder basis where we have geometric decay of the coefficients and uniform convergence of the series. The cost is that we need to increase the dimension of the problem.
As a test case, we apply our methods to periodic orbits in the pulse-harvested Hutchinson equation
\[
\dot{x}(t) = r x(t)\left(1 - \frac{x(t-\tau)}{K}\right), \quad t \notin T\mathbb{Z},
\]
\[
\Delta x = -h x(t^-), \quad t \in T\mathbb{Z}.
\]
Several authors have studied the Hutchinson equation with impulse effect [19, 25, 23], but very strong assumptions were needed to ensure existence of positive periodic (or almost periodic) solutions. For example, with an impulse effect of the form $\Delta x = b_k x$ at times $t = t_k$, it was assumed in the previously cited publications that the function $t \mapsto \prod_{0 < t_k < t}(1 + b_k)$ is periodic or almost periodic. This is a strong assumption, since it is then strictly necessary for the sequence $b_k$ to switch signs infinitely many times. The displayed equation above does not satisfy this condition since we have $b_k = -h$ for $h \in (0,1)$, which means $b_k < 0$ for all $k \in \mathbb{Z}$.
The proofs of existence of periodic/almost periodic solutions in [19, 25, 23] make use of fixed-point theory. The results are mathematically elegant because very little is assumed apart from the sign-switching condition on the sequence $b_k$. However, the results are not explicit, as the solutions are not explicitly constructed, and they are not general because of this sign-switching requirement. To contrast, we here use our rigorous numerical method to prove there is a global branch of positive periodic solutions that exists for $h \in [0, h^*]$, where $h^* = 1 - e^{-rT}$ corresponds to a transcritical bifurcation with the steady state solution $x = 0$. The codes necessary to complete the computer-assisted proofs can be found at [4].
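For reference, the bifurcation threshold quoted above is elementary to evaluate. The snippet below (a throwaway helper with our own naming, not code from [4]) computes $h^* = 1 - e^{-rT}$ for a given growth rate $r$ and impulse period $T$.

```python
import math

def critical_harvest(r, T):
    """Transcritical bifurcation threshold h* = 1 - exp(-r*T) at which the
    positive periodic branch of the pulse-harvested Hutchinson equation
    meets the extinction steady state x = 0."""
    return 1.0 - math.exp(-r * T)
```

For instance, with $r = 1$ and $T = \ln 2$, the threshold is $h^* = 1/2$: harvesting more than half the population at each pulse drives the state to extinction.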
The outline of the paper is as follows. In Section 2 we formulate the equivalent
boundary-value problem that can be used to compute periodic solutions. We de-
velop the rigorous numerical method in Section 3. Our application to the impulsive
Hutchinson equation appears in Section 4. We conclude with Section 5.
2 From periodic solutions to boundary-value problems
The class of dynamical systems we consider in this paper are impulsive delay differential equations with a single discrete delay:
\[
\dot{x} = f_0(x(t), x(t-\tau), \beta), \quad t \notin T\mathbb{Z},
\]
\[
\Delta x = g(x(t^-), x(t-\tau), \beta), \quad t \in T\mathbb{Z}.
\]
Here, $\beta \in \mathbb{R}$ is a parameter. We will often require $f_0$ and $g$ to be real-analytic and that the delay $\tau > 0$ and the period of impulse effect $T > 0$ are commensurate – that is, $p := T/\tau$ is rational. Under this condition, we can perform a change of independent variable so that the delay becomes unity and the period of impulse effect is $p$. Specifically, we set $t = \tau\tilde{t}$ for a new time variable $\tilde{t}$. Completing the change of variables and dropping the tilde, we get
\[
\dot{x} = f(x(t), x(t-1), \beta), \quad t \notin p\mathbb{Z}, \tag{1}
\]
\[
\Delta x = g(x(t^-), x(t-1), \beta), \quad t \in p\mathbb{Z}, \tag{2}
\]
with $f = \tau f_0$. We will from this point work with (1)–(2). The previous hypotheses are then equivalent to:
H.1 $f : \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^d$ and $g : \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^d$ are analytic.
H.2 $p = \frac{p_1}{p_2}$ is rational, with $p_1$ and $p_2$ coprime and positive.
We will make it clear when these hypotheses are assumed. Note that it is not strictly necessary to have $p_1$ and $p_2$ coprime, but our numerical method is more efficient when $p_1$ is small. It is therefore beneficial to have the rational $p$ expressed in lowest terms.
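In practice the rescaled period and its lowest-terms representation $p = p_1/p_2$ can be obtained with exact rational arithmetic. The following small helper (our own naming, purely illustrative) does this with Python's `fractions` module, which reduces fractions automatically.

```python
from fractions import Fraction

def rescaled_period(T, tau):
    """After the time rescaling t = tau * t~, the delay becomes 1 and the
    period of impulse effect is p = T / tau. Hypothesis H.2 asks for
    p = p1/p2 in lowest terms with p1, p2 coprime and positive."""
    p = Fraction(T) / Fraction(tau)
    if p <= 0:
        raise ValueError("T and tau must be positive")
    return p.numerator, p.denominator

# Hypothetical example: T = 2/5 and tau = 7/5 give p = 2/7, i.e. p1 = 2, p2 = 7.
```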
2.1 Periodic solutions
The impulsive delay differential equation (1)–(2) is periodically forced, since every $p$ time units there is a forced discontinuity in the state variable from the impulse effect. The first result we prove is that except in very special (i.e. degenerate) situations, periods of periodic solutions are constrained to be integer multiples of the forcing period.
Proposition 1. If $\psi$ is a periodic solution of (1)–(2) with period $P$, then exactly one of the following must occur:
• $P = mp$ for some $m \in \mathbb{N}$;
• $P \notin p\mathbb{N}$ and $\psi$ is continuous; in particular, $g(\psi(kp), \psi(kp-1)) = 0$ for all $k \in \mathbb{Z}$.
Proof. Suppose $P \notin p\mathbb{N}$. Since $\psi$ is a solution it satisfies (2) at time $t = kp$ for $k \in \mathbb{Z}$, so
\[
\psi(kp) = \psi(kp^-) + g(\psi(kp^-), \psi(kp-1)).
\]
By periodicity, $\psi(kp) = \psi(kp+P)$ and $\psi(kp^-) = \psi((kp+P)^-)$. Since $P \notin p\mathbb{N}$, we have $kp+P \notin p\mathbb{Z}$, so $\psi$ is continuous at $kp+P$ and therefore $\psi(kp+P) = \psi((kp+P)^-)$. Combining the previous two observations on periodicity and continuity, it follows that $\psi(kp) = \psi(kp^-)$, which means that $\psi$ is continuous at $t = kp$. As $k \in \mathbb{Z}$ was arbitrary and the discontinuities of $\psi$ are a subset of $p\mathbb{Z}$, we conclude that $\psi$ is continuous. This also implies $g(\psi(kp), \psi(kp-1)) = 0$.
Proposition 1 indicates that if we are interested in discontinuous periodic solu-
tions – that is, solutions of (1)–(2) that explicitly do not solve the delay differential
equation (1) in isolation – we need only concern ourselves with solutions whose
period is an integer multiple of p.
2.2 Splitting into solution segments
To make the process of finding such periodic solutions more concrete (although perhaps a bit indirect), we make an argument that can be considered a twist on the classical method of steps for delay differential equations. Let $\psi$ be a periodic solution of (1)–(2) with period $mp$. For $t \in [0, mp)$ we can write
\[
\psi(t) = \sum_{k=0}^{m-1} \sum_{q=0}^{p_1-1} \mathbb{1}_{\left[kp + \frac{q}{p_2},\, kp + \frac{q+1}{p_2}\right)}(t)\, z_{k,q}\!\left(t - kp - \frac{q}{p_2}\right), \tag{3}
\]
where we define $z_{k,q} : \left[0, \frac{1}{p_2}\right] \to \mathbb{R}^d$ by $z_{k,q}(t) = \psi\!\left(t + kp + \frac{q}{p_2}\right)$ for $t < \frac{1}{p_2}$, and extend to the closure by continuity. In this way, a periodic solution $\psi$ is uniquely associated to the family $\{z_{k,q} : k = 0, \ldots, m-1,\ q = 0, \ldots, p_1-1\}$ of continuous functions. We will call these functions the solution segments. See Figure 1 for a visualization.
Figure 1: To visualize the segments, we can instead define $\tilde{z}_{k,q}(t) = \psi(t)$ for $kp + \frac{q}{p_2} \le t < kp + \frac{q+1}{p_2}$. Then, the restriction of $\psi$ to such intervals of $t$ corresponds precisely to the $\tilde{z}_{k,q}$. Above is a visual depiction of these shifted segments; the ones in blue correspond to $k = 0$ and the ones in red to $k = 1$. The vertical lines are used to delineate the boundaries of the domains. Since the boundary between $\tilde{z}_{0,p_1-1}$ and $\tilde{z}_{1,0}$ corresponds to an impulse time, namely $t = p$, we have made the curve discontinuous there. We obtain the segments by pulling the domains of the $\tilde{z}_{k,q}$ to the interval $[0, \frac{1}{p_2})$ by translation and taking a continuous closure at the endpoint.
2.3 Ordinary differential equations for the solution segments
The solution segments zk,q themselves satisfy differential equations. To establish
this correspondence, we will need the following construction. See Figure 2 for a
more intuitive diagram.
Definition 1. For $(k,q) \in \{0,\ldots,m-1\} \times \{0,\ldots,p_1-1\}$, the delay shift is the unique pair $\eta(k,q) = (\eta_1, \eta_2) \in \mathbb{N} \times \mathbb{N}$ with $0 \le \eta_1 < m$ and $0 \le \eta_2 < p_1$ such that
\[
kp - 1 + \frac{q}{p_2} \in \eta_1 p + \frac{\eta_2}{p_2} + mp\mathbb{Z}.
\]
The delay shift is indeed well-defined. Also, if $t \in \left[kp + \frac{q}{p_2},\, kp + \frac{q+1}{p_2}\right)$, then
\[
t - 1 \in \left[kp - 1 + \frac{q}{p_2},\, kp - 1 + \frac{q+1}{p_2}\right) \subseteq \left[\eta_1 p + \frac{\eta_2}{p_2},\, \eta_1 p + \frac{\eta_2+1}{p_2}\right) + mp\mathbb{Z}. \tag{4}
\]
Since $\psi$ is periodic with period $mp$, this means that if we denote $z_{\eta(k,q)} = z_{\eta_1,\eta_2}$ for $\eta(k,q) = (\eta_1, \eta_2)$, then
\[
\dot{\psi}(t) = f(\psi(t), \psi(t-1), \beta) = f\!\left(\psi(t),\, \psi\!\left(\eta_1 p + \frac{\eta_2}{p_2} + s\right), \beta\right),
\]
where $s = t - kp - \frac{q}{p_2} \in \left[0, \frac{1}{p_2}\right)$. But $\psi\!\left(\eta_1 p + \frac{\eta_2}{p_2} + s\right) = z_{\eta(k,q)}(s)$ by (3), while $\psi(t) = z_{k,q}(s)$. It follows that the functions $z_{k,q}$ satisfy the ordinary differential equations
\[
\dot{z}_{k,q} = f(z_{k,q}, z_{\eta(k,q)}, \beta). \tag{5}
\]
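Since the segments each have length $1/p_2$ and there are $mp_1$ of them per period $mp$, shifting back by one time unit (which equals $p_2$ segment lengths) is just a cyclic index shift. A minimal implementation of the delay shift (our own indexing helper, not code from the paper) is:

```python
def delay_shift(k, q, m, p1, p2):
    """Delay shift eta(k, q) of Definition 1. Segment (k, q) has linear index
    k*p1 + q among the m*p1 segments of length 1/p2 that cover one period mp;
    the delay 1 = p2 * (1/p2) wraps backwards by p2 segments, cyclically."""
    j = (k * p1 + q - p2) % (m * p1)
    return divmod(j, p1)  # = (eta1, eta2)
```

With $p = 2/7$ and $m = 2$ as in Figure 2 below, `delay_shift(0, 1, 2, 2, 7)` returns `(1, 0)`, matching $\eta(0,1) = (1,0)$; one can also check directly that the map is a bijection on the set of segment indices.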
Remark 1. We previously said that what we do here is a “twist” on the method of steps for delay differential equations. Indeed, the method of steps exploits the fact that in solving an initial-value problem
\[
\dot{y} = F(y(t), y(t-1)), \qquad y_0(\theta) = h(\theta)
\]
Figure 2: The delay shift defines a bijection from the set of solution segments to itself. The segments $z_{k,q}$ are identified by their indices $(k,q)$, and the delay shift $\eta(k,q)$ corresponds to the segment that would need to be evaluated if we wanted to compute $\psi(t-1)$ for $(k+mN)p + \frac{q}{p_2} \le t < (k+mN)p + \frac{q+1}{p_2}$ and arbitrary $N \in \mathbb{Z}$, in the sense of the representation (3) for $\psi$. In the figure above, we have $p = \frac{2}{7}$ and $m = 2$, so the periodic solution has period $mp = \frac{4}{7}$. The segments are identified with the indices $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$. Since $1 = \frac{7}{7}$, shifting back by one time unit results in “wrapping backwards” by seven solution segments, each of length $\frac{1}{7}$. The result is that $\eta(0,1) = (1,0)$. This “wrapping backwards” is illustrated by the arrow diagram above the figure. The vertical lines delineate boundaries between solution segments, and the colours and line styles are formally analogous to what we have in Figure 1.
for some initial function $h : [-1, 0] \to \mathbb{R}^d$, one can compute $y(t)$ for $t \in [0,1]$ by solving the nonautonomous ordinary differential equation
\[
\dot{y} = F(y(t), h(t-1))
\]
subject to the initial condition $y(0) = h(0)$. The idea is that the solution history (in this case the function $h$) informs the future evolution and allows one to temporarily forget that one is solving a delay differential equation. We do a very similar thing here, this time exploiting that our history can be identified with some solution segment after applying the correct delay shift.
2.4 Boundary conditions for the solution segments
The solution segments $z_{k,q}$ also satisfy some boundary conditions. First, since $\psi(t)$ is continuous whenever $t \notin p\mathbb{Z}$, the segments must satisfy
\[
z_{k,q+1}(0) = z_{k,q}\!\left(\frac{1}{p_2}\right) \tag{6}
\]
whenever $0 \le q < p_1 - 1$. Indeed, one can verify from (3) that in this case,
\[
\psi\!\left(\left(kp + \frac{q+1}{p_2}\right)^-\right) = z_{k,q}\!\left(\frac{1}{p_2}\right), \qquad \psi\!\left(kp + \frac{q+1}{p_2}\right) = z_{k,q+1}(0),
\]
and requiring continuity forces (6). On the other hand, at impulse times, (2) implies
\[
\begin{aligned}
\psi(kp) &= \psi((kp)^-) + g(\psi((kp)^-), \psi(kp-1), \beta) \\
&= \psi((kp)^-) + g(\psi((kp)^-), \psi(\eta_1 p + \eta_2/p_2), \beta) \\
&= \psi((kp)^-) + g(\psi((kp)^-), z_{\eta(k,0)}(0), \beta),
\end{aligned}
\]
where $\eta = \eta(k,0)$. Since $\psi(kp) = z_{k,0}(0)$, we can write
\[
z_{k,0}(0) = \psi((kp)^-) + g(\psi((kp)^-), z_{\eta(k,0)}(0), \beta).
\]
To proceed further, we need to express $\psi((kp)^-)$ in terms of the solution segments. This can be done as follows:
\[
\psi((kp)^-) = \begin{cases} z_{k-1,p_1-1}\!\left(\frac{1}{p_2}\right), & k > 0 \\[2pt] z_{m-1,p_1-1}\!\left(\frac{1}{p_2}\right), & k = 0. \end{cases}
\]
We can, however, simplify this formula if we introduce the convention $z_{-1,q} = z_{m-1,q}$. Doing this, we can fully express the impulse condition (2) at the level of the solution segments by
\[
z_{k,0}(0) = z_{k-1,p_1-1}\!\left(\frac{1}{p_2}\right) + g\!\left(z_{k-1,p_1-1}\!\left(\frac{1}{p_2}\right), z_{\eta(k,0)}(0), \beta\right). \tag{7}
\]
2.5 The boundary-value problem
Combining (5), (6) and (7), we can write down the boundary-value problem for the solution segments. It is given by
\[
\dot{z}_{k,q} = f(z_{k,q}, z_{\eta(k,q)}, \beta), \tag{8}
\]
\[
z_{k,q+1}(0) = \begin{cases} z_{k,q}\!\left(\frac{1}{p_2}\right), & q < p_1 - 1 \\[2pt] z_{k-1,q}\!\left(\frac{1}{p_2}\right) + g\!\left(z_{k-1,q}\!\left(\frac{1}{p_2}\right), z_{\eta(k,0)}(0), \beta\right), & q = p_1 - 1, \end{cases} \tag{9}
\]
where we have introduced a similar convention to $z_{-1,q} = z_{m-1,q}$ from the previous section: we define $z_{k,p_1} \equiv z_{k,0}$ so that the case $q = p_1 - 1$ has a sensible meaning.
Since $f$ is analytic, we may conclude by the Cauchy–Kovalevskaya theorem that:
Lemma 2. Assume H.1 and H.2. Every solution of the boundary-value problem (8)–(9) is analytic.
Conversely, given a solution of the boundary-value problem, one can reverse the argument from Section 2.2 and obtain a periodic solution $\psi$ of the impulsive delay differential equation (1)–(2) by extending (3) to a periodic function on $\mathbb{R}$. The argument is straightforward, and the following lemma is therefore proven.
Lemma 3. Assume H.1 and H.2. Every periodic solution of (1)–(2) of period $mp$ for $m \in \mathbb{N}$ is uniquely associated with an analytic solution $z : [0, 1/p_2] \to (\mathbb{R}^d)^{mp_1}$ of the boundary-value problem (8)–(9).
Remark 2. The construction we have completed in this section crucially relies on the period $p$ being rational. In fact, if the period is not rational then not only is our solution segment idea no longer applicable, but periodic solutions can be highly irregular. See for example Proposition 2.3.1 of [7].
As a final bit of preparation for what is to come, we will perform a change of variables so that the domain of the solution $z$ of the boundary-value problem becomes $[-1,1]$. The reason for this is that we will eventually expand solutions of the BVP as Chebyshev series, and these have very nice convergence properties on $[-1,1]$.
Corollary 4. Assume H.1 and H.2. With the reparameterization of time $t \mapsto \frac{t+1}{2p_2}$, solutions $z_{k,q}$ of the boundary-value problem (8)–(9) are transformed to solutions $\phi_{k,q} : [-1,1] \to \mathbb{R}^d$ of
\[
\dot{\phi}_{k,q} = \frac{1}{2p_2} f(\phi_{k,q}, \phi_{\eta(k,q)}, \beta), \tag{10}
\]
\[
\phi_{k,q+1}(-1) = \begin{cases} \phi_{k,q}(1), & q < p_1 - 1 \\[2pt] \phi_{k-1,q}(1) + g\!\left(\phi_{k-1,q}(1), \phi_{\eta(k,0)}(-1), \beta\right), & q = p_1 - 1. \end{cases} \tag{11}
\]
3 Rigorous numerics setup
In this section we review some basics of Chebyshev series before converting the boundary-value problem (10)–(11) into the problem of finding a zero of a function $F$ on a weighted sequence space. We then show how $F$ can be discretized so that approximate solutions can be found with Newton's method. Then, we state the general-purpose a-posteriori solution validation tool that can be used to validate branches of zeroes of $F$. We will then explain some subtleties of the method as it applies to the BVP (10)–(11) before proceeding to an example.
3.1 Preliminaries on Chebyshev series
In what follows, sequences of vectors in $\mathbb{R}^d$ will be indicated with curly braces: $\{a_n\}_{n \in \mathbb{N}}$. The Chebyshev polynomials are the polynomials $T_n$ for $n \in \mathbb{N}$ defined by the recursion
\[
\begin{aligned}
T_{n+1}(x) &= 2x T_n(x) - T_{n-1}(x), \quad n \ge 1, \\
T_1(x) &= x, \\
T_0(x) &= 1.
\end{aligned}
\]
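As a quick illustration, the recursion above translates directly into an evaluation routine (a throwaway helper, not part of the method itself):

```python
import math

def cheb_T(n, x):
    """Evaluate T_n(x) by the three-term recursion
    T_{n+1} = 2x T_n - T_{n-1}, with T_0 = 1 and T_1 = x."""
    if n == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

# Sanity check against the closed form T_n(cos s) = cos(n s).
```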
If $h : [-1,1] \to \mathbb{R}^d$ is analytic, then it can be uniquely extended to a Chebyshev series
\[
h(t) = h_0 + 2 \sum_{n \ge 1} h_n T_n(t) \tag{12}
\]
that is uniformly convergent on the Bernstein $\nu$-ellipse (in the complex plane) for some $\nu > 1$ [21], for some sequence $\{h_n\}_{n \in \mathbb{N}}$. Moreover, there exists some $\nu > 1$ such that the quantity
\[
\|h\|_\nu = \sum_{n \ge 0} \omega_n |h_n| \tag{13}
\]
is finite, where the sequence of weights $\omega$ is defined by $\omega_0 = 1$ and $\omega_n = 2\nu^n$ for $n \ge 1$, and $|\cdot|$ is any norm on $\mathbb{R}^d$. The symbol $\ell^1_\nu$ will denote the normed vector space of all sequences $\{h_n\}_{n \in \mathbb{N}}$ for which the norm $\|\cdot\|_\nu$ is finite. This is a Banach space. We will sometimes write $\ell^1_\nu(\mathbb{R}^d)$ when we want to emphasize the dimension of terms of the sequence.
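For a finitely supported scalar sequence, the norm and its geometric weights look as follows (a small illustrative helper with our own naming):

```python
def l1nu_norm(h, nu):
    """Weighted norm ||h||_nu = sum_n w_n |h_n| with w_0 = 1 and w_n = 2 nu^n,
    for a finitely supported scalar sequence h = [h_0, h_1, ...]."""
    return abs(h[0]) + 2.0 * sum(nu**n * abs(h[n]) for n in range(1, len(h)))
```

Geometric decay of the coefficients, $|h_n| \lesssim C\nu_0^{-n}$, is exactly what makes this sum finite for any $1 < \nu < \nu_0$.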
Let $\{a_n\}_{n \in \mathbb{N}}$ and $\{b_n\}_{n \in \mathbb{N}}$ be the coefficients of two Chebyshev series. The product of those series can be written such that
\[
\left(a_0 + 2\sum_{n \ge 1} a_n T_n\right)\left(b_0 + 2\sum_{n \ge 1} b_n T_n\right) = (a * b)_0 + 2\sum_{n \ge 1} (a * b)_n T_n, \tag{14}
\]
where we define the convolution by
\[
(a * b)_n = \sum_{k \in \mathbb{Z}} a_{|k|} b_{|n-k|}.
\]
Likewise, the convolution operator defines a bilinear map on $\ell^1_\nu$ and one can show that $\|a * b\|_\nu \le \|a\|_\nu \|b\|_\nu$, so $(\ell^1_\nu, *)$ is in fact a Banach algebra.
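The discrete convolution above is straightforward to implement for finitely supported coefficient sequences (again an illustrative sketch, not the paper's code):

```python
def cheb_convolve(a, b):
    """Chebyshev product coefficients (a*b)_n = sum_{k in Z} a_{|k|} b_{|n-k|}
    for finitely supported scalar sequences, in the normalization
    h = h_0 + 2 * sum_{n>=1} h_n T_n."""
    N = len(a) + len(b) - 1
    c = [0.0] * N
    for n in range(N):
        for k in range(-(len(a) - 1), len(a)):
            j = abs(n - k)
            if j < len(b):
                c[n] += a[abs(k)] * b[j]
    return c

# Example: t = T_1 has coefficients [0, 1/2], and t*t = (1 + T_2)/2,
# whose coefficients in this normalization are [1/2, 0, 1/4].
```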
Let $h(t)$ be an analytic function that can be extended to a Chebyshev series such that
\[
h(t) = a_0 + 2\sum_{n \ge 1} a_n T_n(t), \qquad h'(t) = b_0 + 2\sum_{n \ge 1} b_n T_n(t).
\]
Note that for $n \ge 2$,
\[
\int T_n(t)\,dt = \frac{1}{2}\left(\frac{T_{n+1}(t)}{n+1} - \frac{T_{n-1}(t)}{n-1}\right).
\]
Hence,
\[
\begin{aligned}
h(t) = a_0 + 2\sum_{n \ge 1} a_n T_n(t) &= \int h'(t)\,dt \\
&= \int \left(b_0 + 2b_1 t + 2\sum_{n \ge 2} b_n T_n(t)\right) dt \\
&= C + b_0 T_1(t) + \frac{b_1}{2} T_2(t) + \sum_{n \ge 2} b_n \left(\frac{T_{n+1}(t)}{n+1} - \frac{T_{n-1}(t)}{n-1}\right).
\end{aligned}
\]
Since this is true for all $t \in [-1,1]$, for $n \ge 1$ we have
\[
2n a_n = b_{n-1} - b_{n+1}. \tag{15}
\]
The above equation precisely relates the Chebyshev series coefficients of a function and those of its derivative. Finally, we should mention that the Chebyshev polynomials satisfy
\[
T_n(1) = 1, \qquad T_n(-1) = (-1)^n \tag{16}
\]
for all $n \ge 0$.
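Read backwards from a finite coefficient vector, relation (15) yields the derivative's coefficients via the descending recurrence $b_{n-1} = b_{n+1} + 2n a_n$ starting from $b_N = b_{N+1} = 0$ (a small illustrative helper):

```python
def cheb_derivative(a):
    """Coefficients b of h' from coefficients a of h = a_0 + 2 sum a_n T_n,
    using 2n a_n = b_{n-1} - b_{n+1} (n >= 1) with b_N = b_{N+1} = 0."""
    N = len(a) - 1
    b = [0.0] * (N + 2)
    for n in range(N, 0, -1):
        b[n - 1] = b[n + 1] + 2.0 * n * a[n]
    return b[:max(N, 1)]

# Example: h = T_3 has a = [0, 0, 0, 1/2], and h' = 3 + 6 T_2,
# i.e. b = [3, 0, 3] in this normalization.
```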
3.2 Boundary-value problem to zero-finding problem
In what follows, we will assume hypotheses H.1 and H.2. Denote $X = (\ell^1_\nu)^{mp_1}$, where $p_1$ is as in H.2 and we seek periodic solutions of period $mp$. We will sometimes write instead $X = X_\nu$ when we want to emphasize the base $\nu$ of the weight $\omega_n = 2\nu^n$ from the space $\ell^1_\nu$. Also we will let $Y = \Omega^{mp_1}$, where the norm in $\Omega$ is the weighted supremum norm
\[
\|h\|_\infty = \sup_{n \in \mathbb{N}} \frac{\omega_n |h_n|}{\max\{1, 2n\}}.
\]
Equipped with the $\|\cdot\|_\infty$ norm, $\Omega$ is a Banach space and so is $Y$ when given the induced max norm. Elements $a$ of $X$ will be identified as follows:
\[
a = \{a_{i,j} : i = 0, \ldots, m-1,\ j = 0, \ldots, p_1-1\},
\]
where each $a_{i,j} \in \ell^1_\nu$. In this case, $a_{i,j,n} \equiv (a_{i,j})_n$ will denote the $n$th element of the sequence. Equip $X$ with the norm $\|a\|_X := \max_{i,j} \|a_{i,j}\|_\nu$. $X$ is a Banach space and the following inclusion property is elementary.
Proposition 5. There is a continuous embedding $X_{\nu_2} \hookrightarrow X_{\nu_1}$ whenever $\nu_1 \le \nu_2$. There is also a continuous embedding $X_\nu \hookrightarrow Y$.
Remark 3. If $a \in X_{\nu_0}$ and $\nu_0 > 1$, then $a \in X_\nu$ for all $1 < \nu \le \nu_0$. This fact will be important later when we consider continuation of zeroes.
Since solutions of the boundary-value problem (10)–(11) are analytic (Lemma 3), we are free to make the Chebyshev expansion
\[
\phi_{k,q}(t) = a_{k,q,0} + 2\sum_{n \ge 1} a_{k,q,n} T_n(t), \tag{17}
\]
for $a \in X$ and some $\nu > 1$ (recall, the weight sequence $\omega$ depends on $\nu$). The next step is to substitute this expansion into the boundary-value problem. To facilitate this, assume that we can make the convergent Chebyshev expansion
\[
f(\phi_{k,q}(t), \phi_{\eta(k,q)}(t), \beta) = f_{k,q,0}(a, \beta) + 2\sum_{n \ge 1} f_{k,q,n}(a, \beta) T_n(t). \tag{18}
\]
This is not truly an assumption since we have required $f$ to be analytic, although there are some subtleties. For example, if $f$ has poles in the complex plane then it might be that if $a \in X_\nu$ then $f(a,\beta) \notin X_\nu$ for the same value of $\nu$. This will be discussed further in Section 3.4. For the nonlinearity $g$, we write
\[
g(\phi_{k-1,p_1-1}(1), \phi_{\eta(k,0)}(-1), \beta) = g_k(a, \beta). \tag{19}
\]
Remark 4. The coefficients $f_{k,q,n}$ and $g_k$ will generally depend nonlinearly on the coefficients of $a$ and on $\beta$, which is why we have included the explicit dependence. If $f$ is polynomial, then the coefficients $f_{k,q,n}$ are expressible in terms of convolutions (see (14) for the scalar case). On the other hand, if $f$ is expressible in terms of elementary functions then one can embed it in a polynomial vector field using automatic differentiation [11, 14]. Whether $g$ is polynomial or not is of no consequence (although polynomials do make things easier) since the evaluations in (19) are at $+1$ and $-1$, which leads to an explicit expression for $g_k$ in terms of all coefficients of $a_{k-1,p_1-1}$ and $a_{\eta(k,0)}$ due to (16).
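Concretely, property (16) means that a Chebyshev series can be evaluated at the endpoints directly from its coefficients, which is all the boundary map needs (an illustrative helper for the scalar case):

```python
def cheb_eval_pm1(a):
    """Endpoint values of h = a_0 + 2 sum_{n>=1} a_n T_n, using T_n(1) = 1
    and T_n(-1) = (-1)^n. Returns the pair (h(1), h(-1))."""
    h_plus = a[0] + 2.0 * sum(a[1:])
    h_minus = a[0] + 2.0 * sum((-1) ** n * a[n] for n in range(1, len(a)))
    return h_plus, h_minus

# Example: h(t) = t has coefficients [0, 1/2]; its endpoint values are (1, -1).
```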
We can now substitute the Chebyshev expansion of $\phi$ into the boundary-value problem. Substituting into the ODE (10), we get
\[
\dot{\phi}_{k,q}(t) = \frac{1}{2p_2} f(\phi_{k,q}(t), \phi_{\eta(k,q)}(t), \beta) = \frac{1}{2p_2}\left(f_{k,q,0}(a,\beta) + 2\sum_{n \ge 1} f_{k,q,n}(a,\beta) T_n(t)\right). \tag{20}
\]
On the other hand, from (15) we get
\[
2n a_{k,q,n} = b_{k,q,n-1} - b_{k,q,n+1}
\]
for $n \ge 1$, where $b$ are the Chebyshev series coefficients of $\dot{\phi}_{k,q}$. Since these can be directly extracted from (20), we get
\[
2n a_{k,q,n} = \frac{1}{2p_2}\left(f_{k,q,n-1}(a,\beta) - f_{k,q,n+1}(a,\beta)\right), \quad n \ge 1. \tag{21}
\]
To get an equation for the order $n = 0$ modes, we need to use the boundary conditions. Substituting the Chebyshev expansion (17) into (11) and using (16) and (19), we get
\[
a_{k,q+1,0} + 2\sum_{n=1}^{\infty} (-1)^n a_{k,q+1,n} = \begin{cases} a_{k,q,0} + 2\sum_{n=1}^{\infty} a_{k,q,n}, & q < p_1 - 1 \\[2pt] g_k(a,\beta) + a_{k-1,q,0} + 2\sum_{n=1}^{\infty} a_{k-1,q,n}, & q = p_1 - 1. \end{cases} \tag{22}
\]
Note that in the above equation, we must remember that our cyclic variable convention means that $a_{k,p_1,n} = a_{k,0,n}$ and $a_{-1,q,n} = a_{m-1,q,n}$ for all $n \ge 0$.
To conclude, if $\phi$ is a solution of the BVP (10)–(11) then its sequence of Chebyshev coefficients must satisfy (21) and (22), and $a \in X$ for some $\nu > 1$. One can similarly reverse the argument; any sequence $a \in X$ for some $\nu > 1$ that satisfies both of those equations generates a solution $\phi$ of the boundary-value problem by (17). Motivated by this, we formally define a nonlinear map $F$ as follows:
\[
F(a,\beta)_{k,q,n} = \begin{cases} a_{k,q,0} - a_{k,q+1,0} + 2\sum_{j=1}^{\infty} \left(a_{k,q,j} - (-1)^j a_{k,q+1,j}\right), & n = 0,\ q < p_1 - 1 \\[2pt] g_k(a,\beta) + a_{k-1,q,0} - a_{k,q+1,0} + 2\sum_{j=1}^{\infty} \left(a_{k-1,q,j} - (-1)^j a_{k,q+1,j}\right), & n = 0,\ q = p_1 - 1 \\[2pt] 2n a_{k,q,n} - \frac{1}{2p_2}\left(f_{k,q,n-1}(a,\beta) - f_{k,q,n+1}(a,\beta)\right), & n \ge 1. \end{cases} \tag{23}
\]
Observe that if $F(a,\beta) = 0$ for some $a \in X$, then $a$ satisfies both (21) and (22). The converse also holds, and we have the following lemma.
Lemma 6. Every solution of the boundary-value problem (10)–(11) is uniquely associated to some $a \in X_\nu$ satisfying $F(a,\beta) = 0$, for some $\nu > 1$.
While a given solution $a$ of $F(\cdot,\beta) = 0$ might be an element of $X_\nu$, the map $F(\cdot,\beta)$ generally does not map $X_\nu$ into itself. However, it does map into $Y$ provided the nonlinearity $f$ is entire.
Lemma 7. Suppose $f$ is entire. The linear map $L : X_\nu \to Y$ with $L(a)_{k,q,n} = 2n a_{k,q,n}$ is well-defined, as is the map $F : X_\nu \times \mathbb{R} \to Y$. Also, this map is $C^\infty$.
Proof. Define $\tilde{F} = F - L$. We will show that $\tilde{F}$ has range in $X_\nu$ and $L$ is well-defined, which will give the required result for $F$. If $a \in X_\nu$, then
\[
\|L(a)_{k,q}\|_\infty = \sup_{n \ge 1} \frac{\omega_n}{2n} \cdot 2n |a_{k,q,n}| = \sup_{n \ge 1} 2\nu^n |a_{k,q,n}| \le \|a_{k,q}\|_\nu,
\]
from which the well-definition of $L$ follows. As for that of $\tilde{F}$, observe that it is sufficient to prove that $f(a,\beta) = \{f_{k,q,n}(a,\beta)\} \in X_\nu$. From here on we will suppress the input $\beta$ for brevity. Since $f$ is entire, we have
\[
f(\mathbf{x}) = \sum_{j \in \mathbb{N}^{2d}} c_j \mathbf{x}^j
\]
for $c_j \in \mathbb{R}^d$, $\mathbf{x} = (x, y)$ and $\mathbf{x}^j$ the standard scalar-valued multiindex power. At the level of the Chebyshev coefficients,
\[
f_{k,q,n} = \sum_{j \in \mathbb{N}^{2d}} c_j \left(\mathbf{a}_{k,q}^{*j}\right)_n,
\]
where $\mathbf{a}_{k,q} = (a_{k,q}, a_{\eta(k,q)})$ and the convolution power is defined componentwise (i.e. in the components in $(\mathbb{R}^d)^2$) such that it is compatible with the previous power series. However, since $a_{k,q}$ and $a_{\eta(k,q)}$ are elements of $\ell^1_\nu$, each of their components is summable with respect to the weight sequence $\omega$. Write $\mathbf{a} = (a^{(1)}, \ldots, a^{(2d)})$ for these components, so that each $a^{(i)} \in \ell^1_\nu(\mathbb{R})$. Then define $r \in \mathbb{R}^{2d}$ by $r_i = \|a^{(i)}\|_\nu$. Using the Banach algebra property and the assumption that $f$ is entire, one can then verify
\[
\|f_{k,q}\|_\nu \le \sum_{j} |c_j| \cdot \left\|\mathbf{a}_{k,q}^{*j}\right\|_\nu \le \sum_{j} |c_j|\, r^j < \infty.
\]
Since each $f_{k,q}$ is an element of $\ell^1_\nu$, it follows that $f(a,\beta) \in X_\nu$ as required. Smoothness of $\tilde{F}$ is a consequence of the regularity of $f$ and $g$.
The above lemma covers the cases where $f$ is polynomial, of course. By Remark 4, many non-polynomial nonlinearities can be handled using polynomial embeddings.
3.3 Finite-dimensional projection and numerical zeroes
Let $N \in \mathbb{N}$ be fixed. Define the projection map $\pi^N : X \to X$ according to
\[
\pi^N(a)_{k,q,n} = \begin{cases} a_{k,q,n}, & n \le N \\ 0, & n > N. \end{cases}
\]
Then, define the complementary projector $\pi^\infty : X_\nu \to X_\nu$ by $\pi^\infty = I_{X_\nu} - \pi^N$. Let $X^N = \pi^N(X)$ (we will write $X^N_\nu$ if we want to emphasize the value of $\nu$) and introduce the “computational isomorphism” $i_N : X^N \to ((\mathbb{R}^d)^{N+1})^{m \cdot p_1}$ as follows. First, for a given $x \in ((\mathbb{R}^d)^{N+1})^{m \cdot p_1}$ we make the indexing convention $x_{k,q,n} \in \mathbb{R}^d$ for $k \in \{0,\ldots,m-1\}$, $q \in \{0,\ldots,p_1-1\}$ and $n \in \{0,\ldots,N\}$. We can then define
\[
i_N(a)_{k,q,n} = a_{k,q,n}.
\]
Whenever we want to think of an element of $X$ with zero tail (i.e. $a_n = 0$ for $n > N$) as being a vector in some finite-dimensional space, we can apply the isomorphism $i_N$ to it. Similarly, we can apply the inverse
\[
i_N^{-1} : X_N \to X^N, \qquad X_N := ((\mathbb{R}^d)^{N+1})^{mp_1} \tag{24}
\]
to embed a finite-dimensional vector of appropriate dimension into $X^N$.
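The projection $\pi^N$ and the identification $i_N$ amount to simple truncation and flattening of the coefficient data. A minimal sketch with our own (hypothetical) data layout, a dict mapping segment indices $(k,q)$ to scalar coefficient lists, is:

```python
def project_N(a, N):
    """pi^N: keep Chebyshev modes of order <= N in each solution segment,
    zeroing the tail. `a` maps (k, q) -> [a_{k,q,0}, a_{k,q,1}, ...]."""
    return {kq: [c if n <= N else 0.0 for n, c in enumerate(seq)]
            for kq, seq in a.items()}

def i_N(a, N):
    """Flatten a truncated element into one coefficient vector (the
    'computational isomorphism'), ordered by segment then by mode."""
    return [a[kq][n] if n < len(a[kq]) else 0.0
            for kq in sorted(a) for n in range(N + 1)]

# Example with two segments and N = 1.
coeffs = {(0, 0): [1.0, 2.0, 3.0], (0, 1): [4.0]}
flat = i_N(project_N(coeffs, 1), 1)
```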
In what follows, we will use bars to denote “numerical” objects (i.e. objects that in practice will be represented as finite matrices or vectors) while quantities without bars will typically be analytical. Define the maps $F^N : X \times \mathbb{R} \to X$ and $\overline{F}^N : X_N \times \mathbb{R} \to X_N$ by
\[
F^N(a, \beta) = \pi^N F(\pi^N a, \beta), \qquad \overline{F}^N(a, \beta) = i_N F^N(i_N^{-1} a, \beta). \tag{25}
\]
Intuitively, $F^N$ is the restriction of $F$ to Chebyshev series with $N$ nonzero modes, with the output truncated to $N$ modes. $\overline{F}^N$ is the representation of this nonlinear map on the Euclidean space $X_N$. By Lemma 7, each of these maps is $C^\infty$.
Since $F^N \to F$ pointwise, it seems reasonable that approximate zeroes of $\overline{F}^N(\cdot, \beta)$ in $X_N$ for $N$ large enough should yield good approximations to zeroes of $F(\cdot, \beta)$. Equivalently, approximate zeroes of $\overline{F}^N(\cdot, \beta)$, on embedding by $i_N^{-1}$, should be good approximations to zeroes of $F(\cdot, \beta)$. In practice (and, indeed, in our application in Section 4), such approximate solutions can be computed by implementing $\overline{F}^N$ (and its derivative) in a computer and applying Newton's method.
3.4 A-posteriori analysis for branches of zeroes
If a numerical branch of zeroes has been computed – that is, one has a discrete set of parameters $\beta_0, \ldots, \beta_M$ and a corresponding discrete set of numerical zeroes $a_0, \ldots, a_M$ such that $\overline{F}^N(a_j, \beta_j) \approx 0$ – we want a means of proving that there is a unique, continuous branch of true zeroes (i.e. of $F$) nearby. The primary theoretical tool we will use for the a-posteriori analysis of numerical branches of zeroes is often called the radii polynomial approach. It is a twist on the standard Newton–Kantorovich theorem.
We will present the result in general and subsequently apply it to our problem. To begin, let $\mathcal{X}$, $\mathcal{Y}$ be Banach spaces, let $\mathcal{F} : \mathcal{X} \times \mathbb{R} \to \mathcal{Y}$ be a nonlinear map depending on a real parameter (note: $\mathcal{F}$ is distinct from the objects of the previous sections) and suppose $x_0$ and $x_1$ satisfy $\mathcal{F}(x_0, \lambda_0) \approx 0$ and $\mathcal{F}(x_1, \lambda_1) \approx 0$ for some $\lambda_0, \lambda_1 \in \mathbb{R}$. The meaning of the symbol $\approx$ is essentially arbitrary, although the intuition is of course that $x_0$ and $x_1$ are approximate zeroes of $\mathcal{F}$ at the parameters $\lambda_0$ and $\lambda_1$. Define the convex predictors
\[
x_s = (1-s)x_0 + sx_1, \qquad \lambda_s = (1-s)\lambda_0 + s\lambda_1. \tag{26}
\]
The following is a more explicit (and in some sense, slightly more general) version of a theorem from [16], although the proof is identical and as such, will be omitted.
Theorem 8. Let $\mathcal{X}$ and $\mathcal{Y}$ be Banach spaces with a continuous embedding $\mathcal{X} \hookrightarrow \mathcal{Y}$. Let $\mathcal{F} \in C^k(\mathcal{X} \times \mathbb{R}, \mathcal{Y})$ for some $k \ge 1$ and assume there exist bounded linear operators $A^\dagger \in B(\mathcal{X}, \mathcal{Y})$ and $A \in B(\mathcal{Y}, \mathcal{Y})$ such that the following range hypotheses are satisfied:
• $A\mathcal{F}$ has range in $\mathcal{X}$ and $A : \mathcal{Y} \to \mathcal{Y}$ is injective,
• $AA^\dagger$ has range in $\mathcal{X}$,
• $A D_x\mathcal{F}(x, \lambda)$ has range in $\mathcal{X}$ for all $x \in \mathcal{X}$ and $\lambda \in \mathbb{R}$.
Suppose there exist $Y_0, Z_0, Z_1$ and $Z_2 \ge 0$ such that
\[
\|A\mathcal{F}(x_s, \lambda_s)\|_{\mathcal{X}} \le Y_0, \quad \forall s \in [0,1], \tag{27}
\]
\[
\|I - AA^\dagger\|_{B(\mathcal{X},\mathcal{X})} \le Z_0, \tag{28}
\]
\[
\|A[D_x\mathcal{F}(x_0, \lambda_0) - A^\dagger]\|_{B(\mathcal{X},\mathcal{X})} \le Z_1, \tag{29}
\]
\[
\|A[D_x\mathcal{F}(x_s + b, \lambda_s) - D_x\mathcal{F}(x_0, \lambda_0)]\|_{B(\mathcal{X},\mathcal{X})} \le Z_2(r), \quad \forall s \in [0,1],\ b \in B_r(0). \tag{30}
\]
Define the radii polynomial
\[
p(r) = Z_2(r)r + (Z_1 + Z_0 - 1)r + Y_0. \tag{31}
\]
If there exists $r_0 > 0$ such that $p(r_0) < 0$, then there exists a $C^k$ function
\[
\tilde{x} : [0,1] \to \bigcup_{s \in [0,1]} B_{r_0}(x_s)
\]
such that $\mathcal{F}(\tilde{x}(s), \lambda_s) = 0$. Furthermore, these are the only zeroes of $\mathcal{F}$ in the tube $\bigcup_{s \in [0,1]} B_{r_0}(x_s)$.
Since such computer-assisted a-posteriori error analysis is somewhat new in the
impulsive dynamical systems literature, we would do well to explain how this the-
orem works and what the individual pieces (e.g. Aand A) represent. The basic
idea is that we would like to apply a uniform Newton-Kantorovich argument; that
is, we would like to prove convergence of the iterated map
x7→ xDxF(x0, λ0)1F(x, λs)
for all s[0,1]. If this could be done, then we would be guaranteed the existence
of ˜x(s) such that F(˜x(s), λs) = 0 for all s[0,1] as desired. The problem is that
proving the convergence of this method is complicated by the fact that the inverse
of D_xF(x_0, λ_0) is difficult or impossible to express analytically. However, there is
no reason why we should need to use the exact inverse of this operator; we could
instead make use of an approximation. In practice, we therefore take A† ∈ B(X, Y)
to be an approximate derivative: A† ≈ D_xF(x_0, λ_0). Similarly, A ∈ B(Y, Y) is
thought of as an approximate inverse of D_xF(x_0, λ_0). We explicitly construct A†
so that it is easy to invert.
There is a bit of a subtle point here: the derivative D_xF(x_0, λ_0) is a bounded
linear map from X to Y, so D_xF(x_0, λ_0)^{−1} ∈ B(Y, X). In practice, it might be
difficult to construct an approximate inverse A ≈ D_xF(x_0, λ_0)^{−1} that maps into
X or explicitly prove that the latter is true. However, to prove the theorem it
is sufficient to require A ∈ B(Y, Y) and that the compositions of A with each of
F, A† and D_xF(x, λ) have range in X. The bound inequalities (27)–(30) will fail
automatically if these range conditions are not satisfied anyway, while in most cases,
establishing the bounds will indirectly verify the range conditions.
With these constructions complete, the theorem is proven by showing that if
p(r_0) < 0, the operator T : X × [0,1] → X defined by

T(x, s) = x − AF(x, λ_s)

is a uniform (in s) contraction on the tube ⋃_{s∈[0,1]} B_{r_0}(x_s). Note that the conditions
that AF and AD_xF(x, λ) have range in X ensure that both T and the derivative
D_xT(x, s) : X → X are well-defined. For details, the reader should consult [16].
As for the bounds Y0,Z0,Z1and Z2, they each have a fairly straightforward
interpretation. Y0is a bound on the numerical defect across the convex predictor.
Since A is basically an approximate left-inverse of A†, the bound Z_0 measures the
quality of this approximate inverse. Z_1 is a bound on the error between the deriva-
tive D_xF(x_0, λ_0) and the approximate derivative A†. As for Z_2, it is essentially
a uniform (in s) local bound on the second derivative of F(or in the C1case, a
uniform local Lipschitz constant for the first derivative).
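To make the logic of the radii polynomial concrete, here is a minimal floating-point sketch. The real implementation evaluates everything in interval arithmetic (INTLAB), and the bound values below are hypothetical placeholders, with Z_2 taken affine in r:

```python
# Floating-point sketch of the radii polynomial check of Theorem 8.
# The actual proofs evaluate these quantities in interval arithmetic;
# Y0, Z0, Z1 and the coefficients of Z2 below are hypothetical.

def radii_polynomial(r, Y0, Z0, Z1, Z2):
    """p(r) = Z2(r)*r + (Z1 + Z0 - 1)*r + Y0, with Z2 a function of r."""
    return Z2(r) * r + (Z1 + Z0 - 1.0) * r + Y0

Y0, Z0, Z1 = 1e-8, 1e-12, 0.05
Z2 = lambda r: 2.0 * r + 0.3          # an affine Z2, as a stand-in

# Scan a grid of candidate radii; keep those with p(r) < 0.
candidates = [10.0 ** (-k) for k in range(1, 14)]
valid = [r for r in candidates if radii_polynomial(r, Y0, Z0, Z1, Z2) < 0.0]
r0 = min(valid)   # smallest validated radius gives the tightest error bound
print(r0)
```

Any r_0 with p(r_0) < 0 is admissible; the smallest one gives the sharpest a-posteriori error bound on the approximate zeroes, while larger admissible radii give the largest tube in which uniqueness is guaranteed.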
Remark 5. We have not elaborated on how the numerical branch of zeroes can
be extended globally. Theorem 8 only applies to one segment of the branch –
that is, between two numerical zeroes x0and x1for parameters λ0and λ1. In
fact, one can show (see [16]) that if Theorem 8 is successfully applied to the zeroes
{(x_0, λ_0), (x_1, λ_1)} and to the zeroes {(x_1, λ_1), (x_2, λ_2)} so that the theorem grants
the existence of two C^k curves of zeroes, then the curve obtained by “gluing” them
together is C^k.
3.5 Construction of A† and A for the boundary-value problem
For the boundary-value problem (10)–(11), there is a straightforward way to con-
struct A† and A such that the range hypotheses of Theorem 8 are satisfied for F the
nonlinear map (23) with domain and codomain as stated in Lemma 7. To begin,
suppose x_0 ∈ X_N is an approximate (i.e. numerical) zero of F^N(·, λ_0); this might be
computed using Newton’s method applied to F^N(·, λ_0). Denote x̄_0 = i_N^{−1} x_0;
this object has the same interpretation as x_0 from Theorem 8. Define the finite-
dimensional linear map (representable by a matrix)

Ā† = D_xF^N(x_0, λ_0).  (32)

Define the linear map L : X → Y by

L(a)_{k,q,n} = 2n a_{k,q,n},  (33)

and define the linear map A† : X → Y by

A† = i_N^{−1} Ā† π_N + L π^∞.  (34)
Formally, A† “applies the finite-dimensional truncation Ā†” to the first N modes,
and applies the diagonal operator L to the tail (the modes N + 1 onward). This
structure results in A† being diagonally dominant.
To construct A, we first let Ā be a numerical inverse of Ā†. That is, Ā is a
matrix of the same dimension as Ā† such that ‖I − Ā Ā†‖ ≈ 0. Let L^+ : Y → X
be the linear map

L^+(a)_{k,q,n} = { 0, n = 0
                  (2n)^{−1} a_{k,q,n}, n > 0.

Note that L^+ behaves like a Moore–Penrose pseudoinverse of L, hence the use of
the symbol L^+. We can now define A : Y → X as follows:

A = i_N^{−1} Ā π_N + L^+ π^∞.  (35)
Lemma 9. A† and A are well-defined and bounded, and the range hypotheses are
satisfied provided Ā has maximal rank.
Proof. It is easy to check that L^+ : Y → X is bounded. Since Ā is bounded and
π_N has range in X_N, boundedness of A follows. Checking that A† is well-defined
and bounded is similar. To prove that A is injective, one makes use of the fact
that Ā is injective (since it has maximal rank) and the restriction of L^+ to π^∞(Y)
is injective. There is no need to verify the range hypotheses because A already has
range in X.
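The tail structure can be made concrete with a minimal sketch: a single scalar coefficient sequence on a finite truncation (the paper's operators act componentwise over the indices (k, q, n)), showing that L^+L reproduces every mode except mode zero:

```python
import numpy as np

# Sketch of the diagonal tail operators L and L^+ from Section 3.5 on a
# finite truncation of one scalar component: (L a)_n = 2n*a_n, and
# (L^+ a)_n = a_n/(2n) for n >= 1 with (L^+ a)_0 = 0.

def L(a):
    n = np.arange(len(a))
    return 2.0 * n * a

def L_plus(a):
    n = np.arange(len(a))
    out = np.zeros(len(a), dtype=float)
    out[1:] = a[1:] / (2.0 * n[1:])
    return out

a = np.array([1.0, -0.5, 0.25, 0.125])
b = L_plus(L(a))
print(b)   # mode 0 is annihilated, modes 1..3 are reproduced exactly
```

This is exactly the "pseudoinverse" behaviour described above: L^+L acts as the identity on the tail modes n ≥ 1, which is all that is needed since A uses L^+ only on the tail π^∞.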
Remark 6. The choice of Y is not unique and we could just as easily have used the
space (ℓ^1_ν)^{mp_1} instead, since there is an embedding X ↪ (ℓ^1_ν)^{mp_1}. However, if this were
done, then we would have A : Y → Y rather than A : Y → X and we would have
had to explicitly verify the range hypotheses. This is not difficult, but our choice for
the space Y is in some sense “minimal”. The inclusion of the range hypotheses in
Theorem 8 is a reflection of this non-uniqueness of the space Y and the consequent
robustness of the radii polynomial approach.
With these choices of A† and A and the condition that Ā has maximal rank, the
range hypotheses of Theorem 8 are satisfied for the boundary-value problem (10)–
(11). In fact, the range hypotheses and at least C² smoothness of f and g imply that
the bounds Y_0, Z_0, Z_1 and Z_2 from (27)–(30) do indeed exist; they just may not
be small enough to ensure p(r_0) < 0 for some positive r_0. Computing these bounds
is another matter entirely, but as it has become a somewhat standard routine in
the field of rigorous numerics we will not compute the fully general bounds for the
map F from (23). Rather, we will compute them in Section 4 for a specific example.
3.6 Implementation
Here we will comment on the role of ν and the standard approach to implementing
Theorem 8 in the computer. ν serves a few different purposes. First, it char-
acterizes the regularity of the functions that can be represented by the Chebyshev
series in coefficients of X: higher values of ν correspond to more regular functions,
since the coefficients of the Chebyshev series decay geometrically with a faster rate.
Second, if a zero of F is very irregular it might require a very large number N of
modes to represent accurately. However, if ν is fixed and N is increased, some of the
technical bounds (27)–(30) might explode (in practice) because of large numerical
roundoff errors near the limits of the floating point number system, as these bounds
always require computing powers ν^n for n up to N. While we will always rigorously
track numerical roundoff by using interval arithmetic in our implementation, even-
tually a limit might be reached that can only be overcome by decreasing ν. The
good news is that the embedding property of Proposition 5 guarantees we can not
lose zeroes of F by decreasing ν. However, we can lose zeroes by increasing ν. For
example, if a solution of (10)–(11) has its Chebyshev coefficients in X_ν if and only
if ν < 1.1 (for example, based on the precise location of the poles of the solution), then
the computer-assisted proof will always fail to validate a representative numerical
zero with ν = 1.11, unless there is a (human) error in the implementation of the
bounds.
In practice, we implement Theorem 8 in a computer by first determining ex-
plicit formulas for the bounds Y_0, Z_0, Z_1 and Z_2 using a combination of analytical
estimates and finite-dimensional computations such as matrix norms. We then im-
plement these bounds on the computer using interval arithmetic. In MATLAB,
we use the package INTLAB [18] to accomplish this. After computing the numer-
ical branch of zeroes of F^N using a double-precision implementation of Newton’s
method, we feed all of the data into our bounds. The result is that we obtain rigorously
verified over-estimates of our explicit bounds. We can then rigorously enclose the
roots of the radii polynomial (if Z_2(r) is a polynomial in r) or, at worst, reliably
compute the sign of p(r_0) for a given r_0 (if Z_2(r) is not a polynomial), thereby
checking all conditions of the theorem.
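The non-rigorous front end of this pipeline, Newton's method marching along the parameter to produce the numerical zeroes that are then fed into the bounds, can be sketched as follows; a toy one-dimensional map stands in for F^N, and nothing here is specific to the paper's equations:

```python
import numpy as np

# Predictor-corrector sketch: generate a numerical branch of zeroes of a
# finite-dimensional map by Newton's method, stepping in the parameter.
# The toy map F(x, lam) = x^2 - lam stands in for F^N.

def newton(F, dF, x, lam, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        step = np.linalg.solve(dF(x, lam), F(x, lam))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

F  = lambda x, lam: np.array([x[0]**2 - lam])
dF = lambda x, lam: np.array([[2.0 * x[0]]])

branch = []
x = np.array([1.0])                 # start at the known zero for lam = 1
for lam in np.linspace(1.0, 0.25, 16):
    x = newton(F, dF, x, lam)       # previous zero is the predictor
    branch.append((x.copy(), lam))  # consecutive pairs feed Theorem 8

print(branch[-1][0][0])   # ~ sqrt(0.25)
```

Consecutive pairs (x_k, λ_k), (x_{k+1}, λ_{k+1}) along `branch` play the roles of (x_0, λ_0) and (x_1, λ_1) in Theorem 8, and the gluing argument of Remark 5 joins the validated segments.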
4 Transcritical branch in the pulse-harvested
Hutchinson equation
The Hutchinson equation, sometimes called the delay logistic equation, is the scalar
delay differential equation

ẋ(t) = r x(t) [1 − x(t − τ)/K],

where x represents a single-species population. The parameters r and K are positive, and
are called respectively the intrinsic growth rate and carrying capacity. τ > 0 is a
delay that takes into account such factors as maturation or gestation time. If a
linear impulsive harvesting is introduced such that the population is reduced to a
proportion β ∈ [0,1] each T > 0 time units, we get the impulsive delay differential
equation

ẋ(t) = r x(t) [1 − x(t − τ)/K],  t ∉ TZ  (36)
Δx = (β − 1)x(t⁻),  t ∈ TZ.  (37)
The equation is referred to as the Hutchinson equation with pulse harvesting because
we can also interpret h= 1 β[0,1] as being the proportion of the population
that is removed (i.e. harvested) at times kT .
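To get a feel for the model, a crude (non-rigorous) forward-Euler simulation of (36)–(37) can be run; the step size, the constant initial history and the choice β = 0.9 below are arbitrary illustrative values, with (r, τ, T) taken from the first row of Table 1 and K = 1:

```python
# Crude forward-Euler simulation of the pulse-harvested Hutchinson
# equation (36)-(37); all numerical choices here are illustrative only.
r, K, tau, T, beta = 0.3, 1.0, 0.1, 1.0, 0.9

dt = tau / 100.0                  # resolve the delay with 100 grid points
lag = 100                         # number of steps spanning one delay
steps_per_period = round(T / dt)
hist = [0.5] * (lag + 1)          # constant history x(t) = 0.5 on [-tau, 0]

x = hist[-1]
for step in range(1, 10 * steps_per_period + 1):
    x_delay = hist[0]                            # approximates x(t - tau)
    x = x + dt * r * x * (1.0 - x_delay / K)     # Euler step of (36)
    if step % steps_per_period == 0:             # impulse times t in T*Z
        x = beta * x                             # jump (37): x -> beta*x
    hist = hist[1:] + [x]

print(x)   # remains positive: beta = 0.9 exceeds beta* = exp(-rT)
```

With β above the bifurcation value e^{−rT} the simulated population settles onto a positive T-periodic regime; taking β below it drives the population to extinction, consistent with Theorem 10 below.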
We can perform a few changes of variables to eliminate some of the parameters.
If we define

y(t) = (1/K) x(τt),  α = rτ,  u = T/τ,

then the Hutchinson equation with pulse harvesting becomes

ẏ = α y(t)[1 − y(t − 1)],  t ∉ uZ  (38)
Δy = (β − 1)y(t⁻),  t ∈ uZ.  (39)
Doing this, we eliminate two parameters and have re-scaled the delay to unity. Using
our rigorous numerical method for continuation of periodic solutions in addition
to analytical results on the pulse-harvested Hutchinson equation (which we will
develop in Section 4.1), we will ultimately prove the following theorem.
Theorem 10. Let K > 0 be arbitrary and let the parameters r, τ, T be one of the
sets from (the rows of) Table 1, define β* = e^{−rT} and consider the pulse-harvested
Hutchinson equation (36)–(37). The solution x = 0 undergoes a transcritical bifur-
cation with a nontrivial periodic solution of period T at β = β*. Let this branch be
denoted x_β. The following are true.
• For 0 < β < β*, there are no nontrivial non-negative periodic solutions, x = 0
is the global attractor on R_+, and any nontrivial periodic solution x in this
parameter regime satisfies

inf_{t∈R} x(t) ≤ K (1 + log(β)/(rT)).

• β ↦ x_β is continuous¹ for β ∈ [β* − β_tol, 1], with β_tol = 0.001. This branch
has no folds, x_1 = K, and this is the only branch of periodic solutions that
crosses through the trivial solution x = 0.
If β = 0, it is easy to see that every solution converges to zero uniformly in finite
time, so the dynamics are trivial and we do not mention this in the theorem. The
existence of the transcritical bifurcation and the first conclusion of the theorem will
be established by analytical means. The nonlocal part concerning the branch x_β
will be proven using our rigorous numerical method.
4.1 Analytical results
Since our focus with this publication is on the rigorous numerical method for periodic
solution continuation rather than the analysis of the pulse-harvested Hutchinson
equation, we will skip many of the details for the proofs of the analytical parts and
refer the reader to relevant background to fill in the gaps. The first result concerns
the hyperbolicity of the equilibrium solution y= 0 in (38)–(39). The following can
be proven by linearizing (38)–(39) about y= 0 and using the theory from [5, 6].
Lemma 11. Define β* = e^{−αu}. The equilibrium solution y = 0 is hyperbolic if
β ≠ β*. When β = β* the centre fibre bundle is one-dimensional.
Lemma 12. Let {β_n : n ∈ N} be a convergent sequence of parameters and suppose
y_{β_n} is a sequence of nontrivial periodic solutions of period u for parameter β_n. If
y_{β_n} → 0 then β_n → β*.
¹Here, continuity is with respect to the supremum norm on the function space consisting of bounded
φ : R → R that are continuous from the right with limits on the left.
Proof #   r     τ     T   Proof runtime (seconds)   u_1   β*
1         3/10  1/10  1   79.7391                   10    0.7408
2         1     1/2   1   22.9848                   2     0.6065
3         1     1/4   1   63.2405                   4     0.6065
4         2     1/2   1   460.1527                  2     0.1353
5         3/10  7/5   2   157.4058                  10    0.5488
6         3/10  7/5   3   327.9192                  15    0.4066
7         1     3/2   1   20.3223                   2     0.6065
8         1     5/3   1   33.3345                   3     0.6065

Table 1: Parameters (r, τ, T) for the rigorous branch continuation of the pulse-harvested
Hutchinson equation for Theorem 10. The function run proofs.m uses the Proof # refer-
ences. Runtimes are for a machine with an AMD Ryzen 5 3600XT CPU and 16 GB of
DDR4 memory. Larger values of u_1 result in generally slower proofs because the dimen-
sion of the system being handled is larger: for example, proofs far away from β* involve
matrices of size u_1(N+1) × u_1(N+1). Smaller values of β* also require more computa-
tion time because a longer section of the branch of periodic solutions must be computed
and validated, namely for β ∈ [β* − β_tol, 1].
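The derived columns of Table 1 follow from (r, τ, T). A small sketch, assuming that u_1/u_2 is the representation of u = T/τ in lowest terms (which matches the u_1 column) and using β* = e^{−rT} from Theorem 10:

```python
import math
from fractions import Fraction

# Sketch: recover derived quantities of Table 1 from (r, tau, T), with
# u = T/tau = u1/u2 in lowest terms and beta* = exp(-r*T).

def derived(r, tau, T):
    u = Fraction(T) / Fraction(tau)
    beta_star = math.exp(-float(Fraction(r)) * float(Fraction(T)))
    return u.numerator, u.denominator, beta_star

# Row 1 of Table 1: r = 3/10, tau = 1/10, T = 1.
u1, u2, beta_star = derived("3/10", "1/10", "1")
print(u1, u2, round(beta_star, 4))   # 10 1 0.7408

# Row 5: r = 3/10, tau = 7/5, T = 2 gives u = 10/7, so u1 = 10.
print(derived("3/10", "7/5", "2")[:2])
```

Exact rationals are used deliberately: u_1 and u_2 enter the discretization as integers (number of Chebyshev subdomains per period), so they should not be computed in floating point.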
Proof. Suppose not; let lim_{n→∞} β_n = b ≠ β*. Let S(·, β) : X → X denote the
time u map starting from initial time t_0 = 0 at parameter β, where X is the set of
right-continuous functions with limits on the left with domain [−1, 0] and codomain
R. This map is C¹ [5] and its differential at y = 0 is the monodromy operator.
Since b ≠ β*, the equilibrium y = 0 is hyperbolic, so 1 is not an eigenvalue of
DS(0, b). Since the latter is compact, it follows that DS(0, b) − I is a Banach space
isomorphism, which implies that the equation S(y, β) = y has a unique C¹ solution
curve (y_β, β) defined on a neighbourhood of β = b. This is a contradiction, since
S(0, β) = 0 for all β ∈ R but we know that S(y_{β_n}, β_n) = y_{β_n} with y_{β_n} ≠ 0 and y_{β_n} → 0.
The next result concerns the (essential) sign-constancy of solutions. Its proof is
simple and omitted.
Lemma 13. If β ∈ [0,1], every solution of (38)–(39) is eventually either strictly
positive, strictly negative, or zero. In particular, every periodic solution is either
strictly positive, strictly negative, or identically zero.
Lemma 14. If β < e^{−αu} then:
• y = 0 is the global attractor in R_+;
• any nontrivial periodic solution y satisfies inf_{t∈R} y(t) ≤ 1 + (αu)^{−1} log(β).
Proof. If y is a nonnegative solution, then y satisfies the impulsive integral inequality

y(t) ≤ y(0) + ∫₀ᵗ α y(s) ds,  t ∉ uZ
Δy = (β − 1)y(t⁻),  t ∈ uZ.

Solving this with the impulsive Gronwall–Bellman inequality [1], we get y(u) ≤
e^{αu} β y(0) < y(0). It follows that for any initial condition y(0) > 0, we have
lim_{n→∞} y(nu) = 0 for integer n. On the other hand, y(nu + t) ≤ e^{αu} y(nu) for
t ∈ [0, u). These two facts together imply lim_{t→∞} y(t) = 0, so y = 0 is the global
attractor as claimed.
To prove the other claim, first observe that by Proposition 1, any nontrivial
periodic solution must have its period be an integer multiple of u. Suppose the
period of such a solution is ku for some k ∈ N. Let C = −inf_{t∈[0,ku]} y(t). Since we
have already proven that y = 0 is the global attractor in R_+, any nontrivial periodic
solution must be strictly negative by the previous lemma. It follows that |y(t)| ≤ C.
Let w(t) = −y(t). Then

w(t) ≤ w(0) + ∫₀ᵗ α w(s)(1 + C) ds,  t ∉ uZ
Δw = (β − 1)w(t⁻),  t ∈ uZ.

Applying the Gronwall–Bellman inequality again, we get w(ku) ≤ w(0)β^k e^{αku(1+C)}.
But since w(ku) = w(0) by periodicity, this implies 1 ≤ (βe^{αu(1+C)})^k and, con-
sequently, C ≥ −1 − (αu)^{−1} log β. Since |y(t)| ≤ C and y is negative, we get
inf_{t∈R} y(t) = −C ≤ 1 + (αu)^{−1} log β, as claimed.
Lemma 15. The solution y = 0 undergoes a transcritical bifurcation with a branch
of nontrivial periodic solutions y_β of period u at β = β*. There is a neighbourhood
U ⊂ R of zero such that for |β − β*| small, the only solutions y : R → R that are
defined for all time and contained in U are the trivial solution and y_β.
Proof. At β = β*, we know the centre fibre bundle is one-dimensional. The dy-
namics on the centre manifold can be computed using the theory from [6]; they are
given to quadratic order by

ż = −α φ(t − 1, 0) z² + O(z³),  φ(t, 0) = e^{αu(⌊t/u⌋ − t/u)}.
Since the quadratic term is strictly negative, the associated quadratic coefficient
of the time u map restricted to the parameter-dependent centre manifold does not
vanish. On the other hand, the dominant Floquet multiplier of y = 0 (in fact,
the only Floquet multiplier) is given precisely by μ(β) = βe^{αu}. This crosses the
unit circle transversally at β = β*. It follows that a transcritical bifurcation occurs
at β = β* in the parameter-dependent centre manifold at the level of the time u
(Poincaré) map, which in turn forces [6] a transcritical bifurcation with a nontrivial
periodic solution in the impulsive delay differential equation.
4.2 Rigorous branch continuation far away from β = β*
Here we apply the theory from Section 3 to the rescaled impulsive Hutchinson
equation (38)–(39). We will set up the rigorous continuation scheme for periodic
solutions of period u, so in the notation of the aforementioned section this means
that m= 1.
4.2.1 The F= 0 map
The boundary-value problem (10)–(11) for the impulsive Hutchinson equation can
be written

φ̇_q = (α/(2u_2)) φ_q (1 − φ_{q+δ})  for q = 0, 1, ..., u_1 − 1  (40)
φ_q(1) = φ_{q+1}(−1)  for q = 0, 1, ..., u_1 − 2
β φ_{u_1−1}(1) = φ_0(−1),

where δ is defined according to

δ = min ({ku_1 − u_2 : k ∈ N} ∩ R_+)  (41)

and the indices on the φ are subject to the cyclic condition φ_k ≡ φ_{[k]_{u_1}} for [·]_{u_1}
the remainder modulo u_1. Note that the delay shift function η satisfies for k =
0, ..., u_1 − 1 the equation

η(k, 0) = [k + δ]_{u_1},

hence our decision to impose this cyclic variable convention.
Transforming now to the F = 0 map, we get

F_{q,n}(a, β) = { (a_q)_0 − (a_{q+1})_0 + 2 Σ_{m≥1} [(a_q)_m − (a_{q+1})_m(−1)^m],  n = 0,
                 2n(a_q)_n + (γ_q)_{n+1} − (γ_q)_{n−1},  n ≥ 1,

for q = 0, ..., u_1 − 2, and

F_{u_1−1,n}(a, β) = { β(a_{u_1−1})_0 − (a_0)_0 + 2 Σ_{m≥1} [β(a_{u_1−1})_m − (a_0)_m(−1)^m],  n = 0,
                     2n(a_{u_1−1})_n + (γ_{u_1−1})_{n+1} − (γ_{u_1−1})_{n−1},  n ≥ 1,

where the nonlinear function f(x, y) = (α/(2u_2)) x(1 − y) that defines the right-hand side
of the differential equation in (40) has been converted into Chebyshev form and
stored in the sequence γ, which is defined according to

(γ_q)_n = (α/(2u_2)) [(a_q)_n − (a_q ∗ a_{q+δ})_n].  (42)
Note that the multiplications in φ have turned into convolutions (denoted ∗) in a. Define T :
X → X (or T : Y → Y) as follows:

(Ta)_{q,n} = { 0,  n = 0
              a_{q,n+1} − a_{q,n−1},  n ≥ 1.  (43)

This operator allows us to express the operator F a bit more cleanly and will be
useful in computing the bounds (27) through (30). Note that for this example,
X = (ℓ¹_ν)^{u_1}. The operator T is a componentwise tridiagonal operator.
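On the weighted space ℓ¹_ν with norm ‖a‖_ν = |a_0| + 2Σ_{n≥1}|a_n|ν^n, a direct column-sum estimate shows that T amplifies norms by at most 2ν + ν^{−1}; the sketch below (one scalar component, finite truncation) checks this on random samples:

```python
import numpy as np

# Sketch: the componentwise tridiagonal operator T of (43) on one
# truncated ell^1_nu component, with weights w_0 = 1, w_n = 2*nu^n, and
# a numerical check of the estimate ||T|| <= 2*nu + 1/nu.

def T_op(a):
    out = np.zeros(len(a))
    for k in range(1, len(a)):
        out[k] = (a[k + 1] if k + 1 < len(a) else 0.0) - a[k - 1]
    return out

nu, N = 1.1, 60
w = np.array([1.0] + [2.0 * nu**n for n in range(1, N)])

rng = np.random.default_rng(0)
ratios = []
for _ in range(200):
    a = rng.standard_normal(N)
    norm_a = np.sum(w * np.abs(a))          # ell^1_nu norm
    ratios.append(np.sum(w * np.abs(T_op(a))) / norm_a)

print(max(ratios))   # never exceeds 2*nu + 1/nu
```

The worst amplification comes from mode zero, whose image lands in mode one with weight ratio 2ν; the interior modes contribute the familiar ν + ν^{−1} shift factor.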
4.2.2 A few technical estimates
The following will be beneficial in the computation of the bounds Y_0, Z_0, Z_1 and
Z_2. First, the tridiagonal operator T : X → X satisfies

‖T‖_{B(X)} ≤ 2ν + ν^{−1}.  (44)

The proof of this bound is a standard exercise and is omitted. The next result
essentially allows for the computation of the norm of the “finite part” of an operator
on X. Its proof can be accomplished using Fubini’s theorem and some careful
bookkeeping, and is also omitted. We will state it specifically as it applies to the
present situation where X = (ℓ¹_ν)^{u_1} with ℓ¹_ν consisting of scalar sequences, but of
course there are more general versions; see for example [7, 15].
Lemma 16. Suppose L : X_N → X_N is represented in the form

[L(a)]_{q,n} = Σ_{j=0}^{u_1−1} Σ_{m=0}^{N} (L_{q,j})_{n,m} (a_j)_m

for reals (L_{q,j})_{n,m}. Define the norm ‖·‖_{X_N} on X_N according to ‖a‖_{X_N} = ‖i_N a‖_X.
Define the quantities

B_{m,j}(L) = max_{0≤n≤N} ω_n^{−1} Σ_{r=0}^{N} |(L_{m,j})_{r,n}| ω_r,  ω_0 = 1, ω_n = 2ν^n (n ≥ 1).

Then

‖L‖_{B(X_N)} ≤ max_{m=0,...,u_1−1} Σ_{j=0}^{u_1−1} B_{m,j}(L).
We will typically not write down the explicit bound appearing in Lemma 16, but
will rather use the symbol || · ||B(XN)as a proxy for the associated upper bound.
The following is a consequence of (or can be used to prove) the previous lemma. Its
proof is also omitted.
Lemma 17. Let L : X_N → X_N be represented as in Lemma 16. With the same
notation as in that lemma, for h = (h_0, ..., h_{u_1−1}) ∈ X_N, we have the bound

‖[L(h)]_m‖_ν ≤ Σ_{j=0}^{u_1−1} B_{m,j}(L) ‖h_j‖_ν.
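The estimate behind these lemmas is the classical fact that, on a weighted ℓ¹ space, the operator norm of a matrix is the largest weighted column sum; a single-component sketch (u_1 = 1) verifying this numerically:

```python
import numpy as np

# Single-component sketch of the Lemma 16-style estimate: on a weighted
# ell^1 space with weights w, the matrix operator norm equals
# max_m (1/w_m) * sum_n w_n*|L[n, m]| (the largest weighted column sum).

nu, N = 1.2, 8
w = np.array([1.0] + [2.0 * nu**n for n in range(1, N)])

rng = np.random.default_rng(1)
L = rng.standard_normal((N, N))

bound = max(np.sum(w * np.abs(L[:, m])) / w[m] for m in range(N))

# The norm ratio on random vectors never exceeds the bound.
ok = all(
    np.sum(w * np.abs(L @ a)) <= bound * np.sum(w * np.abs(a)) + 1e-9
    for a in rng.standard_normal((300, N))
)
print(ok)
```

Lemma 16 simply packages this column-sum computation componentwise over the u_1 blocks, which is what makes the "finite part" norms rigorously computable with interval matrix arithmetic.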
Next, we need a result concerning the dual of XN. Its proof will be omitted.
Lemma 18. For a linear functional U : X_N → R with

Uh = Σ_{j=0}^{u_1−1} Σ_{m=0}^{N} U_{j,m} (h_j)_m

for reals U_{j,m}, define B*_j(U) := max_{m≥0} ω_m^{−1} |U_{j,m}|. Then

|Uh| ≤ Σ_{j=0}^{u_1−1} B*_j(U) ‖h_j‖_ν.
The next result will be useful in the Z_2 bound calculation. Its proof is straightfor-
ward and is omitted.
Lemma 19. Let L : Y → X be an operator of the form L = i_N^{−1} L̄ π_N + L^+ π^∞.
Let h = (h_0, ..., h_{u_1−1}) ∈ Y. If L̄ : X_N → X_N is represented as in Lemma 16,
then with the notation from that lemma, we have

‖[L(Th)]_m‖_ν ≤ (2ν + ν^{−1}) ( Σ_{j=0}^{u_1−1} B_{m,j}(L̄) ‖h_j‖_ν + (2(N+1))^{−1} ‖h_m‖_ν ).
The final result we will need concerns bounds for convolutions of sequences with
particular structure.
Lemma 20. Suppose a ∈ π_N(ℓ¹_ν) and h ∈ π^∞(ℓ¹_ν) with ‖h‖_ν ≤ 1. Then for
k ∈ {0, ..., N + 1},

|(a ∗ h)_k| ≤ max_{N+1 ≤ m ≤ N+k} |a_{m−k}| / (2ν^m) =: ā_k,

where the right-hand side is treated as zero when k = 0.
Proof. By definition,

(a ∗ h)_k = Σ_{k_1+k_2=k, k_1,k_2∈Z} a_{|k_1|} h_{|k_2|} = Σ_{m≥N+1} h_m (a_{m−k} + a_{m+k}) = Σ_{m=N+1}^{N+k} a_{m−k} h_m,

where the second equality is due to h ∈ π^∞(ℓ¹_ν) and the third because a ∈ π_N(ℓ¹_ν).
We can then make the estimate

|(a ∗ h)_k| ≤ Σ_{m=N+1}^{N+k} |a_{m−k}| |h_m| ≤ ( max_{N+1≤m≤N+k} |a_{m−k}| / (2ν^m) ) Σ_{m≥N+1} 2ν^m |h_m| ≤ ā_k.
4.2.3 The operators A† and A
To apply Theorem 8 and compute the bounds (27)–(30), we need to define the
linear operators A† and A as in Section 3.5. To do so, we first need to compute two
numerical approximations of the solution, (a_0, β_0) and (a_1, β_1), such that F(a_i, β_i) ≈
0 for i = 0, 1.
We define the linear operators A† and A as in (34) and (35) respectively from
Section 3.5, using the approximate solution (a_0, β_0). We now have everything to
compute the bounds from Theorem 8.
4.2.4 The bound Y0
For the bound Y_0, we have

AF(a_s, β_s) = i_N^{−1} Ā F^N(a_s, β_s) + L^+ π^∞ F(a_s, β_s) =: Y_0^{(1)} + Y_0^{(2)},

where a_s and β_s are convex predictors defined in the same way as (26). Using the
mean-value inequality, we can bound the first term by

‖Y_0^{(1)}‖_X = ‖Ā F^N(a_s, β_s)‖_{X_N}
 ≤ ‖Ā F^N(a_0, β_0)‖_{X_N} + ‖Ā [F^N(a_s, β_s) − F^N(a_0, β_0)]‖_{X_N}
 ≤ ‖Ā F^N(a_0, β_0)‖_{X_N} + ∫₀¹ ‖Ā D_{(a,β)}F^N(a_0 + tsΔa, β_0 + tsΔβ) · s(Δa, Δβ)‖_{X_N} dt
 ≤ ‖Ā F^N(a_0, β_0)‖_{X_N} + sup_{s∈[0,1]} ‖Ā D_{(a,β)}F^N(a_s, β_s)(Δa, Δβ)‖_{X_N}  (45)

with Δa = a_1 − a_0 and Δβ = β_1 − β_0, where the norm on X_N is defined as in
Lemma 16 and D_{(a,β)} is the Fréchet derivative. For the second term, we have

‖Y_0^{(2)}‖_X = max_q Σ_{n≥N+1} (2n)^{−1} |(Tγ_q(a_s))_n| 2ν^n ≤ max_q (N+1)^{−1} Σ_{n=N+1}^{2N+1} |(Tγ_q(a_s))_n| ν^n.  (46)

This last inequality comes from the fact that for n > 2N+1, we have (Tγ_q(a_s))_n = 0.
As mentioned in Section 3.6, we can rigorously compute (45) and (46) using interval
arithmetic, and we can define

Y_0 := ‖Ā F^N(a_0, β_0)‖_{X_N} + sup_{s∈[0,1]} ‖Ā D_{(a,β)}F^N(a_s, β_s)(Δa, Δβ)‖_{X_N}
 + max_q (N+1)^{−1} Σ_{n=N+1}^{2N+1} |(Tγ_q(a_s))_n| ν^n,

which satisfies the bound (27).
4.2.5 The bound Z0
For the bound Z_0, we have

I − AA† = i_N^{−1} (I − Ā Ā†) π_N.

Using Lemma 16, we can rigorously compute a numerical bound Z_0 using interval
arithmetic such that ‖I − AA†‖_{B(X)} ≤ Z_0.
4.2.6 The bound Z1
To simplify the notation, let z := [D_aF(a_0, β_0) − A†]h with ‖h‖_X ≤ 1. Then

(z_q)_n =
 { 2 Σ_{k≥N+1} ( [1 + (β_0 − 1)δ_{q,u_1−1}](h_q)_k − (h_{q+1})_k(−1)^k ),  n = 0,
   (α/(2u_2)) ( T[ h^I_q − (h^I_q ∗ (a_0)_{q+δ} + (a_0)_q ∗ h^I_{q+δ}) ] )_n,  1 ≤ n ≤ N,
   (α/(2u_2)) ( T[ h_q − (h_q ∗ (a_0)_{q+δ} + (a_0)_q ∗ h_{q+δ}) ] )_n,  n ≥ N+1,

with h^I_i = h_i − π_N(h_i) for i = 0, ..., u_1 − 1, where δ_{q,u_1−1} is the Kronecker
delta and the indices are interpreted cyclically. Then, we can write

Az = i_N^{−1} Ā z̄_0 + i_N^{−1} Ā (z̄_N − z̄_0) + L^+ π^∞ z =: Z^{(1)} + Z^{(2)} + Z^{(3)},

where z̄_k = i_N π_k z for k = 0, N. For the first term, we need to find a bound for the
terms (z_q)_0. To achieve this, we see that

1 ≥ ‖h_q‖_ν ≥ |(h_q)_0| + 2 Σ_{k≥1} |(h_q)_k| ν^k  ⟹  Σ_{k≥N+1} |(h_q)_k| ≤ 1/(2ν^{N+1}).

Using this result, we get

|(z_q)_0| ≤ 2 Σ_{k≥N+1} ( |1 + (β_0 − 1)δ_{q,u_1−1}| |(h_q)_k| + |(h_{q+1})_k| )
 ≤ [2 + (β_0 − 1)δ_{q,u_1−1}] ν^{−(N+1)} ≤ 2ν^{−(N+1)},

where the last inequality comes from the fact that β_0 ∈ [0,1]. Now, let e =
(e_0, ..., e_{u_1−1}) ∈ X be such that

(e_q)_n = { 1 if n = 0
           0 if n ≥ 1

for q = 0, ..., u_1 − 1; then we can bound the first term by

‖Z^{(1)}‖_X = ‖Ā z̄_0‖_{X_N} ≤ 2ν^{−(N+1)} ‖ |Ā| e ‖_{X_N},

where |Ā| denotes the entrywise absolute value of Ā. Using Lemma 20, we define ā_q and ā_{q+δ} such that

(ā_q)_n ≥ |((a_0)_q ∗ h^I_{q+δ})_n|,  (ā_{q+δ})_n ≥ |((a_0)_{q+δ} ∗ h^I_q)_n|,  0 ≤ n ≤ N + 1,

for q = 0, ..., u_1 − 1. Then, for 1 ≤ n ≤ N we have

|(z_q)_n| ≤ (α/(2u_2)) ( δ_{n,N} (2ν^{N+1})^{−1} + (ā_q)_{n+1} + (ā_q)_{n−1} + (ā_{q+δ})_{n+1} + (ā_{q+δ})_{n−1} ),

and we define ẑ = (ẑ_0, ..., ẑ_{u_1−1}) ∈ X_N such that

(ẑ_q)_n = { 0 if n = 0
           the bound above if 1 ≤ n ≤ N.

Now using these results, we get

‖Z^{(2)}‖_X = ‖Ā (z̄_N − z̄_0)‖_{X_N} ≤ ‖ |Ā| ẑ ‖_{X_N}.

For the last term, we have

‖Z^{(3)}‖_X = max_q Σ_{n≥N+1} (2n)^{−1} |(z_q)_n| 2ν^n
 ≤ (α/(2u_2)) (ν + ν^{−1})(2(N+1))^{−1} max_{q=0,...,u_1−1} { 1 + ‖(a_0)_{q+δ}‖_ν + ‖(a_0)_q‖_ν }.

We define the bound Z_1 satisfying (29) by

Z_1 := 2ν^{−(N+1)} ‖ |Ā| e ‖_{X_N} + ‖ |Ā| ẑ ‖_{X_N}
 + (α/(2u_2)) (ν + ν^{−1})(2(N+1))^{−1} max_{q=0,...,u_1−1} { 1 + ‖(a_0)_{q+δ}‖_ν + ‖(a_0)_q‖_ν },

and we use interval arithmetic to do the numerical computations.
and we use interval arithmetic to do the numerical computations.
4.2.7 The bound Z2
For Z_2, first note that

([D_aF(a_s + b, β_s) − D_aF(a_0, β_0)]h)_{q,n} =
 { 0 if n = 0,
   −(α/(2u_2)) ( T[ h_q ∗ (s(Δa) + b)_{q+δ} + (s(Δa) + b)_q ∗ h_{q+δ} ] )_n if n ≥ 1.

Then, for ‖h‖_X ≤ 1, s ∈ [0,1] and ‖b‖_X ≤ r, we have

(α/(2u_2)) ‖ h_q ∗ (s(Δa) + b)_{q+δ} + (s(Δa) + b)_q ∗ h_{q+δ} ‖_ν
 ≤ (α/(2u_2)) ( ‖(Δa)_q‖_ν + ‖(Δa)_{q+δ}‖_ν + 2r ).

Using Lemma 19 and (44), we can define the bound Z_2 by

Z_2(r) = (α/u_2)(2ν + ν^{−1}) max_{m=0,...,u_1−1} ( Σ_{j=0}^{u_1−1} B_{m,j}(Ā) + (2(N+1))^{−1} ) r
 + (α/(2u_2))(2ν + ν^{−1}) max_{m=0,...,u_1−1} { ( ‖(Δa)_m‖_ν + ‖(Δa)_{m+δ}‖_ν ) ( Σ_{j=0}^{u_1−1} B_{m,j}(Ā) + (2(N+1))^{−1} ) }.

This bound satisfies (30), and once again all the computations are finite, meaning
that we can use interval arithmetic to rigorously compute this bound.
4.3 Rigorous branch continuation near β = β*
At β* = e^{−αu}, we have from Lemma 15 that the trivial solution y = 0 of the
impulsive DDE (38)–(39) undergoes a transcritical bifurcation. We can use the
rigorous continuation from Section 4.2 to do continuation in β ∈ [β* + δ, 1] for δ > 0
large enough, but as β → e^{−αu} the transcritical branch gets O(|β − β*|) close to
the solution y = 0, resulting in a lack of isolation of zeroes of F. Since the radii
polynomial approach is based on the contraction mapping principle, the computer-
assisted proof will eventually fail when β gets close to β*. In this section we explore
a way to resolve this issue.
4.3.1 Interlude: desingularizing the bifurcation
To handle the lack of isolation of zeroes near β* = e^{−αu}, we will use desingularization
to quotient out the known trivial solution a = 0. To do this, we first perform a re-
scaling at the level of the boundary-value problem (40). Introduce a quasi-amplitude
parameter ϵ and define {z_0, ..., z_{u_1−1}} by the equation φ_q = ϵ z_q. The boundary-
value problem becomes

ż_q = (α/(2u_2)) z_q (1 − ϵ z_{q+δ})  for q = 0, 1, ..., u_1 − 1  (48)
z_q(1) = z_{q+1}(−1)  for q = 0, 1, ..., u_1 − 2
β z_{u_1−1}(1) = z_0(−1).

There are now two parameters: β and ϵ.
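The rescaling can be checked in one line. Writing the differential equation of the boundary-value problem as above and substituting φ_q = ϵz_q, then dividing by ϵ (valid for ϵ ≠ 0, while the resulting equation remains well-defined even at ϵ = 0):

```latex
\dot\varphi_q = \frac{\alpha}{2u_2}\,\varphi_q\left(1-\varphi_{q+\delta}\right)
\;\Longrightarrow\;
\epsilon\dot z_q = \frac{\alpha}{2u_2}\,\epsilon z_q\left(1-\epsilon z_{q+\delta}\right)
\;\Longrightarrow\;
\dot z_q = \frac{\alpha}{2u_2}\, z_q\left(1-\epsilon z_{q+\delta}\right).
```

The boundary conditions are linear in φ, so they carry over to z unchanged. The point of the desingularization is that the trivial branch φ ≡ 0 no longer forces z ≡ 0: it is absorbed into the parameter value ϵ = 0.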
4.3.2 The F= 0 map
Our nonlinear map that encodes the solutions of the boundary-value problem as
zeroes is very similar to the previous one from Section 4.2. The main difference is
in the γ coefficient from (42). The required modification is

(γ_q(a, ϵ))_n = (α/(2u_2)) [(a_q)_n − ϵ(a_q ∗ a_{q+δ})_n].

Then, {z_0, ..., z_{u_1−1}} is a solution of the BVP (48) if and only if its Chebyshev
coefficients a ∈ X satisfy F(a, ϵ, β) = 0, where

F_{q,n}(a, ϵ, β) =
 { (a_q)_0 − (a_{q+1})_0 + 2 Σ_{m≥1} [(a_q)_m − (a_{q+1})_m(−1)^m],  n = 0, q ≠ u_1 − 1,
   β(a_{u_1−1})_0 − (a_0)_0 + 2 Σ_{m≥1} [β(a_{u_1−1})_m − (a_0)_m(−1)^m],  n = 0, q = u_1 − 1,
   2n(a_q)_n + (γ_q(a, ϵ))_{n+1} − (γ_q(a, ϵ))_{n−1},  n ≥ 1.

In other words, zeroes of a ↦ F(a, ϵ, β) are uniquely associated to periodic solutions
with specific quasi-amplitude ϵ.
Since zeroes of F(·, ·, β) are not isolated, we will introduce a phase condition.
The function F : (X × R) × R → (Y × R) defined by

F(a, ϵ, β) = ( F(a, ϵ, β), 1 − (a_0)_0 − 2 Σ_{m≥1} (a_0)_m )

will be called the desingularized periodic solution map. Note that if F(a, ϵ, β) = 0
then F(a, ϵ, β) = 0, which by the previous discussion means a is uniquely associated
with a periodic solution with quasi-amplitude ϵ. In terms of the original map F, this
implies F(ϵa, β) = 0.
4.3.3 Properties of the map F
Before we proceed with the rigorous numerics, we will develop a few properties of
the map F. These will be useful later in the more analytical aspects of the branch
continuation proof.
Proposition 21. If F(a, β) = 0 and a ≠ 0, then g(a) := (a_0)_0 + 2 Σ_{m=1}^∞ (a_0)_m ≠ 0.
Proof. Zeroes of F(·, β) correspond uniquely to solutions of (40). Suppose by way of
contradiction that (a_0)_0 + 2 Σ_{m=1}^∞ (a_0)_m = 0. By properties of Chebyshev polyno-
mials, this implies φ_0(1) = 0. Since the differential equations of the boundary-value
problem are smooth, this implies φ_0 ≡ 0 and consequently, a_0 = 0. The bound-
ary condition then implies that φ_1(−1) = 0 and by the same argument, we get
φ_1 ≡ 0 and therefore a_1 = 0. A simple inductive argument then gives a_q = 0 for
q = 0, ..., u_1 − 1. But a ≠ 0, which is a contradiction.
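Since T_m(1) = 1 and T_m(−1) = (−1)^m, the functional g is just evaluation of the Chebyshev series at the right endpoint; a quick sketch under the series convention φ(t) = a_0 + 2Σ_{m≥1} a_m T_m(t) used throughout:

```python
import numpy as np

# Sketch: with the convention phi(t) = a_0 + 2*sum_{m>=1} a_m*T_m(t),
# the functional g of Proposition 21 is evaluation at t = 1, since
# T_m(1) = 1 for every Chebyshev polynomial T_m.

def g(a):
    return a[0] + 2.0 * np.sum(a[1:])

def eval_cheb(a, t):
    # direct evaluation via numpy's Chebyshev basis (Clenshaw recurrence)
    coeffs = np.concatenate(([a[0]], 2.0 * a[1:]))
    return np.polynomial.chebyshev.chebval(t, coeffs)

a = np.array([0.3, -0.1, 0.05, 0.01])
print(np.isclose(g(a), eval_cheb(a, 1.0)))   # True
```

Likewise φ(−1) = a_0 + 2Σ_m (−1)^m a_m, which is exactly the combination appearing in the n = 0 rows of the map F.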
The following proposition guarantees that nontrivial zeroes of F(·, β) can be
transformed into zeroes of F(·, ·, β) for some ϵ. Specifically, we have the following.
Lemma 22. The transformation

d : a ↦ ( g(a)^{−1} a, g(a) )  (52)

maps nontrivial zeroes a of F(·, β) to zeroes of F(·, ·, β).
Proof. Let ϵ = g(a). By the previous proposition, ϵ ≠ 0. The second component
of F(ϵ^{−1}a, ϵ, β) is zero by design, since it is precisely 1 − g(ϵ^{−1}a) = 0. For the first
component, it is simple to check that F_{q,0}(ϵ^{−1}a, ϵ, β) = 0, while for n ≥ 1,

F_{q,n}(ϵ^{−1}a, ϵ, β) = 2nϵ^{−1}(a_q)_n + (γ_q(ϵ^{−1}a, ϵ))_{n+1} − (γ_q(ϵ^{−1}a, ϵ))_{n−1}
 = ϵ^{−1} [ 2n(a_q)_n + (γ_q(a))_{n+1} − (γ_q(a))_{n−1} ]
 = ϵ^{−1} F_{q,n}(a, β)
 = 0,

where we used (γ_q(ϵ^{−1}a, ϵ))_n = ϵ^{−1}(γ_q(a))_n. We conclude F(ϵ^{−1}a, ϵ, β) = 0 as claimed.
As we will see in the computer-assisted proof, in the bifurcation regime β ≈ e^{−αu}
the map F(·, ·, β) has an isolated zero with small quasi-amplitude ϵ ≈ 0. Also
of importance, we have a result that guarantees folds in a branch of nontrivial
zeroes of the map F must induce a fold in the analogous branch of zeroes of F.
To be more precise, we will say a map G defined on U × R, for U a metric space,
has a branch point at (y, z) if there exist sequences y_{1,n}, y_{2,n} ∈ U and z_n ∈ R
for n ∈ N such that the following hold: G(y_{j,n}, z_n) = 0 and y_{1,n} ≠ y_{2,n} for all n,
lim_{n→∞} y_{j,n} = y for j = 1, 2, and lim_{n→∞} z_n = z. Branch points include folds but also higher
codimension singularities.
Lemma 23. Let b ∈ R be given, and let F(a, b) = 0 for a ≠ 0. If the map
F : X × R → Y has a branch point at (a, b), then F : (X × R) × R → (Y × R)
has a branch point at (d(a), b).
Proof. Let there be sequences a_{j,n} for j = 1, 2 such that a_{1,n} ≠ a_{2,n}, a_{j,n} → a, and
a parameter sequence b_n → b with F(a_{j,n}, b_n) = 0. Since g(a) ≠ 0 and ker(g) is
closed, the sequences a_{j,n} are eventually (i.e. for large finite n) contained in the open
set X \ ker(g). The map (52) is continuous and injective on X \ ker(g), from which
it follows that F has a branch point at (d(a), b), as evidenced by the sequences
d(a_{j,n}) and b_n.
Remark 7. The requirement a ≠ 0 of the lemma is crucial. We know from Lemma
15 that F has a branch point at (0, e^{−αu}), because there is a transcritical bifurcation
of periodic solutions there. The contrapositive of Lemma 23 is the result we will
make the most use of: any point that is not a branch point of F is either not a
branch point of F, or corresponds to the trivial zero of F(·, β) by way of the map
(a, ϵ) ↦ ϵa.
4.3.4 Discretization of F and the operators 𝒜† and 𝒜
Now that we have defined the desingularized periodic solution map F : (X × R) × R →
(Y × R), we will need to perform the setup for Theorem 8 and compute the bounds
from (27)–(30). Strictly speaking, since the structure of F is different from our
general map F of (23) from Section 3, we can not use the definition of A† and A
from Section 3.5 and will have to construct them from scratch. Thankfully, since
the structure of the map F is so similar, the changes are not too dramatic. Before
we continue though, we should emphasize that now, the space (X × R) plays the
role of the space X from Theorem 8, while the space (Y × R) plays the role of the
space Y from the theorem. We will use script letters 𝒜† and 𝒜 for the new operators to avoid
confusion with A† and A from Section 4.2.3.
To begin, we define the finite-dimensional projections F^N : (X × R) × R → (Y × R)
and F̄^N : (X_N × R) × R → (X_N × R) much like (25):

F^N(a, ϵ, β) = [π_N 0; 0 1] F(π_N a, ϵ, β),  F̄^N(a, ϵ, β) = [i_N 0; 0 1] F^N(i_N^{−1} a, ϵ, β),

with the natural identification between X_N and the range of π_N.
We then define 𝒜̄† as the differential

𝒜̄† = D_{(x,ϵ)} F̄^N(x_0, ϵ_0, β_0),

where x_0 ∈ X_N and ϵ_0 ∈ R are such that

F̄^N(x_0, ϵ_0, β_0) ≈ 0.

That is, they are an approximate (i.e. numerical) zero of the finite-dimensional
projection at parameter β_0. We can then define 𝒜† and 𝒜 in a formally analogous
way to what we did in Section 3.5. We set

𝒜† = [i_N^{−1} 0; 0 1] 𝒜̄† [π_N 0; 0 1] + [L π^∞ 0; 0 0],  (53)
𝒜 = [i_N^{−1} 0; 0 1] 𝒜̄ [π_N 0; 0 1] + [L^+ π^∞ 0; 0 0],  (54)

where 𝒜̄ is a numerical inverse of 𝒜̄†. One can then check that 𝒜† : (X × R) →
(Y × R), 𝒜 : (Y × R) → (Y × R) and the nonlinear map F : (X × R) × R → (Y × R)
satisfy the range hypotheses of Theorem 8.
In the sections that follow, we will denote x_s = (a_s, ϵ_s) ∈ (X × R) for

a_s = (1 − s) i_N^{−1} a_0 + s i_N^{−1} a_1,  ϵ_s = (1 − s)ϵ_0 + s ϵ_1,

for s ∈ [0,1], given numerical zeroes (a_0, ϵ_0) and (a_1, ϵ_1) for parameters β_0 and β_1.
We also define the predictor

x̄_s = (1 − s) x_0 + s x_1 ∈ (X_N × R)

that lives in the computer. The parameter predictor is β_s = (1 − s)β_0 + s β_1.
4.3.5 The bound Y0
For the bound (27), we can use the definitions of F^N and 𝒜 to write