
arXiv:0909.3901v1 [math.AP] 22 Sep 2009

Gradient Estimates for the Perfect and Insulated Conductivity

Problems with Multiple Inclusions

Ellen ShiTing Bao∗

YanYan Li†

Biao Yin‡

Abstract

In this paper, we study the perfect and the insulated conductivity problems with multiple inclusions embedded in a bounded domain in R^n, n ≥ 2. For these two extreme cases of the conductivity problems, the gradients of the solutions may blow up as two inclusions approach each other. We establish gradient estimates for the perfect conductivity problems and an upper bound on the gradients for the insulated conductivity problems in terms of the distances between any two closely spaced inclusions.

0 Introduction

In this paper, a continuation of [5], we establish gradient estimates for the perfect conductivity problems

in the presence of multiple closely spaced inclusions in a bounded domain in Rn(n ≥ 2). We also

establish an upper bound of the gradients for the insulated conductivity problems. For these two

extreme cases of the conductivity problems, the electric field, which is represented by the gradient of

the solutions, may blow up as the inclusions approach each other; the blow-up rates of the electric field have been studied in [1, 3, 5, 19, 20]. In particular, when there are only two strictly convex inclusions at distance ε, the optimal blow-up rates of the gradients for the perfect conductivity problem, as ε approaches zero, were established to be ε^{−1/2}, (ε|ln ε|)^{−1} and ε^{−1} for n = 2, 3 and n ≥ 4, respectively. A criterion, in terms of a functional of the boundary data, for the situation where the blow-up rate is realized was also given. See e.g. the introductions of [5] and [20] for

a more detailed description of these results. More recently, Lim and Yun in [15] have obtained further

estimates with explicit dependence of the blow-up rates on the size of the inclusions for the perfect

conductivity problem (see also [1] for results of this type), and H. Ammari, H. Kang, H. Lee, M. Lim

and H. Zribi in [2] have given more refined estimates of the gradient of solutions.

The partial differential equations for the conductivity problems arise also in the study of composite

materials. In R2, as explained in [14], if we use the bounded domain to represent the cross-section of a

fiber-reinforced composite and use the inclusions to represent the cross-sections of the embedded fibers,

then by a standard anti-plane shear model, the conductivity equations can be derived, in which the

electric potential corresponds to the out-of-plane elastic displacement and the electric field corresponds

to the stress tensor. Therefore, the gradient estimates for the conductivity problems provide valuable

information about the stress intensity inside the composite materials.

When the conductivities of the inclusions stay away from zero and infinity, the boundedness of the gradients was observed numerically by Babuska, Anderson, Smith and Levin [4]. Bonnetier and Vogelius [6] proved it when the inclusions are two touching balls in R^2. General results were established by Li and Vogelius [14] for second order divergence form elliptic equations with piecewise smooth coefficients,

∗School of Mathematics, University of Minnesota, 206 Church St SE, Minneapolis, MN 55455, email: shbao@math.umn.edu
†Department of Mathematics, Rutgers University, 110 Frelinghuysen Rd, Piscataway, NJ 08854, email: yyli@math.rutgers.edu
‡Department of Mathematics, University of Connecticut, 196 Auditorium Rd, Storrs, CT 06269, email: yin@math.uconn.edu


and then by Li and Nirenberg [13] for second order divergence form elliptic systems, including linear systems of elasticity, with piecewise smooth coefficients. See also [12] and [16] for related studies.

Acknowledgment: We would like to thank Haim Brezis, Luis Caffarelli, Hyeonbae Kang and Michael Vogelius for their suggestions, comments and encouragement. The work of Y.Y. Li is partially supported by NSF grant DMS-0701545.

1 Mathematical set-up and the main results

Let Ω be a domain in R^n with C^{2,α} boundary, n ≥ 2, 0 < α < 1. Let {D_i} (1 ≤ i ≤ m) be m strictly convex open subsets of Ω with C^{2,α} boundaries, m ≥ 2, satisfying

    the principal curvatures of ∂D_i ≥ κ_0,
    ε_ij := dist(D_i, D_j) > 0    (i ≠ j),
    dist(D_i, ∂Ω) > r_0,    diam(Ω) < 1/r_0,    (1.1)

where κ_0, r_0 > 0 are universal constants independent of {ε_ij}. We also assume that the C^{2,α} norms of ∂D_i are bounded by some universal constant independent of {ε_ij}. This implies that each D_i contains a ball of radius r*_0 for some universal constant r*_0 > 0 independent of {ε_ij}.

We state more precisely what it means to say that the boundary of a domain, say Ω, is C^{2,α} for 0 < α < 1: in a neighborhood of every point of ∂Ω, ∂Ω is the graph of some C^{2,α} function of n − 1 variables. We define the C^{2,α} norm of ∂Ω, denoted by ‖∂Ω‖_{C^{2,α}}, as the smallest positive number 1/a such that in the 2a-neighborhood of every point of ∂Ω, identified as 0 after a possible translation and rotation of the coordinates so that x_n = 0 is the tangent to ∂Ω at 0, ∂Ω is given by the graph of a C^{2,α} function, denoted by f, defined on |x′| < a, the a-neighborhood of 0 in the tangent plane. Moreover, ‖f‖_{C^{2,α}(|x′|<a)} ≤ 1/a.

Denote

    Ω̃ := Ω \ ∪_{i=1}^m D_i.

Given ϕ ∈ C^{1,α}(∂Ω), the conductivity problem can be modeled by the following equation:

    div(a_k(x) ∇u_k) = 0    in Ω,
    u_k = ϕ                 on ∂Ω,    (1.2)

where k = (k_1, ..., k_m) and

    a_k(x) = { k_i ∈ (0,∞)   in D_i,
               1              in Ω̃.    (1.3)

The existence and uniqueness of solutions to the above equation are well known. Moreover, ‖u_k‖_{H^1(Ω)} ≤ C‖ϕ‖_{C^{1,α}(∂Ω)} for some constant C independent of k. Therefore, after passing to a subsequence, u_k ⇀ u_∞ in H^1(Ω) as k_i → ∞ for all 1 ≤ i ≤ m, where u_∞ ∈ H^1(Ω) is the solution to the following perfect conductivity problem:

    Δu = 0                       in Ω̃,
    u|_+ = u|_−                  on ∂D_i (i = 1,2,...,m),
    ∇u ≡ 0                       in D_i (i = 1,2,...,m),
    ∫_{∂D_i} ∂u/∂ν |_+ = 0       (i = 1,2,...,m),
    u = ϕ                        on ∂Ω,    (1.4)

where

    ∂u/∂ν |_+ := lim_{t→0+} [u(x + tν) − u(x)] / t.


Here and throughout this paper ν is the outward unit normal to the domain and the subscript ±

indicates the limit from outside and inside the domain, respectively. For the derivation of the above

equation, readers can refer to the Appendix of [5]. Note that the proof there is for k_1 = k_2 = ··· = k_m, but it works for the general case with minor modification.

Since the high stress concentration only occurs in the narrow regions between the fibers, we only need

to focus on those narrow regions.

For i ≠ j, let x^i_{ij} ∈ ∂D_i and x^j_{ij} ∈ ∂D_j be points realizing the distance,

    dist(x^i_{ij}, x^j_{ij}) = dist(D_i, D_j) = ε_ij > 0,

and denote

    x^0_{ij} := (x^i_{ij} + x^j_{ij}) / 2.

It is easy to see that there exists some positive constant δ < 1/4, which depends only on κ_0, r_0 and {‖∂D_i‖_{C^{2,α}}} but is independent of {ε_ij}, such that

    if ε_ij < 2δ, then B(x^0_{ij}, 2δ) intersects only D_i and D_j.    (1.5)

Denote

    ρ_n(ε) = { 1/√ε           for n = 2,
               1/(ε |ln ε|)   for n = 3,
               1/ε            for n ≥ 4.    (1.6)

Then we have the following gradient estimates for the perfect conductivity problem.

Theorem 1.1 Let Ω, {D_i} ⊂ R^n, n ≥ 2, and {ε_ij} be defined as in (1.1), let ϕ ∈ L^∞(∂Ω), and let δ be the universal constant satisfying (1.5). Suppose u_∞ ∈ H^1(Ω) is the solution to equation (1.4). Then for any ε_ij < δ we have

    ‖∇u_∞‖_{L^∞(Ω̃ ∩ B(x^0_{ij}, δ))} ≤ C ρ_n(ε_ij) ‖ϕ‖_{L^∞(∂Ω)},

where C is a constant depending only on n, κ_0, r_0 and {‖∂D_i‖_{C^{2,α}}}, but independent of ε_ij.

Note that if ε_ij ≥ δ, the maximum principle and the boundary estimates for harmonic functions immediately give ‖∇u_∞‖_{L^∞(Ω̃ ∩ B(x^0_{ij}, δ))} ≤ C‖u_∞‖_{L^∞(Ω̃)} ≤ C‖ϕ‖_{L^∞(∂Ω)}. Here we have used the fact that u_∞ is constant on each ∂D_i. Then by Theorem 1.1 and standard boundary Schauder estimates, see e.g. Theorem 8.33 in [9], we have the global gradient estimate of u_∞ in Ω̃.

Corollary 1.1 Let Ω, {D_i} ⊂ R^n, n ≥ 2, and {ε_ij} be defined as in (1.1), let ε := min_{i≠j} ε_ij > 0, and let ϕ ∈ C^{1,α}(∂Ω), 0 < α < 1. Let u_∞ ∈ H^1(Ω) be the solution to equation (1.4). Then

    ‖∇u_∞‖_{L^∞(Ω̃)} ≤ C ρ_n(ε) ‖ϕ‖_{C^{1,α}(∂Ω)},

where C is a constant depending only on n, m, κ_0, r_0, ‖∂Ω‖_{C^{2,α}} and {‖∂D_i‖_{C^{2,α}}}, but independent of ε.
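For readers who want a feel for how sharply the three rates in (1.6) separate, here is a quick numerical illustration (our own sketch, not part of the paper; the function name rho is ours):

```python
import math

def rho(n: int, eps: float) -> float:
    """Blow-up rate rho_n(eps) from (1.6); eps is the distance between inclusions."""
    if not 0.0 < eps < 1.0:
        raise ValueError("expect 0 < eps < 1")
    if n == 2:
        return 1.0 / math.sqrt(eps)
    if n == 3:
        return 1.0 / (eps * abs(math.log(eps)))
    return 1.0 / eps  # n >= 4

# For small eps the rates separate: rho_2 << rho_3 << rho_n (n >= 4).
for eps in (1e-2, 1e-4, 1e-6):
    assert rho(2, eps) < rho(3, eps) < rho(4, eps)
```

The ordering reflects the theorem: the gradient blows up mildly in dimension 2 and like 1/ε in dimensions 4 and higher.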

Remark 1.1 The proof of Theorem 1.1 does not need D_i and D_j to be strictly convex; the strict convexity is only used in a fixed neighborhood of x^0_{ij} (the size of the neighborhood is independent of {ε_ij}). In fact, our proof of Theorem 1.1 also applies, with minor modification, to more general situations where two closely spaced inclusions D_i and D_j are not necessarily convex near the points on the boundaries where the minimal distance ε is realized; see the discussion after the proof of Theorem 1.1 in Section 2.

Next, we study the insulated conductivity problem. Similar to the perfect conductivity problem, the solution to the insulated conductivity problem is the weak limit of u_k in H^1(Ω̃) as k approaches 0. Here we consider the insulated conductivity problem with anisotropic conductivity.


Let Ω, {D_i} ⊂ R^n and ε_ij be defined as in (1.1), and let ϕ ∈ C^{1,α}(∂Ω). Suppose A(x) := (a_ij(x)) is a symmetric matrix function on Ω̃, where a_ij ∈ C^α(Ω̃) and, for some constants Λ ≥ λ > 0,

    ‖a_ij‖_{C^α(Ω̃)} ≤ Λ,    a_ij(x) ξ_i ξ_j ≥ λ|ξ|²,    ∀ ξ ∈ R^n, x ∈ Ω̃.

Then the anisotropic insulated conductivity problem can be described by the following equation:

    ∂_i(a_ij ∂_j u) = 0      in Ω̃,
    a_ij ∂_j u ν_i = 0       on ∂D_i (i = 1,2,...,m),
    u = ϕ                    on ∂Ω.    (1.7)

The existence and uniqueness of solutions to equation (1.7) are elementary; see the Appendix.

As mentioned before, blow-up can only occur in the narrow regions between two closely spaced inclusions. Therefore, we only derive gradient estimates for the solution of (1.7) in those regions. Without loss of generality, we consider the insulated conductivity problem in the narrow region between D_1 and D_2. Assume

    ε = dist(D_1, D_2).

After a possible translation and rotation, we may assume

(ε/2,0′) ∈ ∂D1, (−ε/2,0′) ∈ ∂D2.

Here and throughout this paper, by writing x = (x_1, x′) we mean that x′ denotes the last n − 1 coordinates of x.

We denote the narrow region between D_1 and D_2, together with its boundary portions on ∂D_1 and ∂D_2, as follows:

    O(r) := Ω̃ ∩ {x ∈ R^n : |x′| < r},
    Γ_+ := ∂D_1 ∩ {x ∈ R^n : |x′| < r},
    Γ_− := ∂D_2 ∩ {x ∈ R^n : |x′| < r},    (1.8)

where r is some universal constant depending only on {‖∂D_i‖_{C^{2,α}}}.

With the above notation, we consider the following problem:

    ∂_i(a_ij ∂_j u) = 0      in O(r),
    a_ij ∂_j u ν_i = 0       on Γ_+ ∪ Γ_−.    (1.9)

Then we have:

Theorem 1.2 If u_0 ∈ H^1(O(r)) is a weak solution of (1.9), then

    |∇u_0(x)| ≤ C ‖u_0‖_{L^∞(O(r))} / √(ε + |x′|²)    for all x ∈ O(r/2),    (1.10)

where C is a constant depending only on n, κ_0, r_0, Λ, λ, r and ‖∂D_i‖_{C^{2,α}} (i = 1,2), but independent of ε.

Remark 1.2 Theorem 1.2 also remains true for general second order elliptic systems; the proof is essentially the same as for equations.

A consequence of Theorem 1.2 is the following global gradient estimates for the insulated conductivity

problem.

Corollary 1.2 Let Ω, {D_i} ⊂ R^n and {ε_ij} be defined as in (1.1), let ε := min_{i≠j} ε_ij > 0 and ϕ ∈ C^{1,α}(∂Ω), and let u_0 ∈ H^1(Ω̃) be the weak solution to equation (1.7). Then

    ‖∇u_0‖_{L^∞(Ω̃)} ≤ (C/√ε) ‖ϕ‖_{C^{1,α}(∂Ω)},    (1.11)

where C is a constant depending only on n, κ_0, r_0, ‖∂Ω‖_{C^{2,α}} and {‖∂D_i‖_{C^{2,α}}}, but independent of ε.


Note that throughout this paper we often use C to denote different constants, but all these constants

are independent of ε.

The paper is organized as follows. In Section 2 we consider the perfect conductivity problem and

prove Theorem 1.1. In Section 3 we show Theorem 1.2 for the insulated case. Finally in the Appendix

we present some elementary results for the insulated conductivity problem.

2 The perfect conductivity problem with multiple inclusions

In this section, we consider the perfect conductivity problem (1.4). Note that from equation (1.4), we

know that u ≡ Ci on Di, 1 ≤ i ≤ m, where {Ci} are some unknown constants. In order to prove

Theorem 1.1, we first estimate |Ci− Cj| for 1 ≤ i ?= j ≤ m, which later will allow us to control the

gradient of u in the narrow region between Diand Dj.

2.1 A Matrix Result

To estimate |C_i − C_j|, the following proposition plays a crucial role.

Let m be a positive integer and P = (p_{ij}) an m × m real symmetric matrix satisfying

    (A1) p_{ij} = p_{ji} ≤ 0 (i ≠ j);
    (A2) 0 < r_1 ≤ p̄_i := Σ_{j=1}^m p_{ij} ≤ r_2,

where r_1 and r_2 are some positive constants.

Remark 2.1 An m × m matrix P satisfying |p_{ii}| > Σ_{j≠i} |p_{ij}| is called a diagonally dominant matrix. Such a matrix is nonsingular; see [10]. Conditions (A1) and (A2) imply that the matrix P is diagonally dominant.

Proposition 2.1 Let P = (p_{ij}) be an m × m real symmetric matrix satisfying (A1) and (A2), m ≥ 1. For β ∈ R^m, let α be the solution of

    Pα = β.    (2.1)

Then

    |α_i − α_j| ≤ m(m − 1) (r_2/r_1) |β| / (|p_{ij}| + r_1),    (2.2)

where |β| = max_i |β_i|.

Before proving the proposition, we introduce the following lemmas. Denote

    I(l) = {all l × l diagonal matrices whose diagonal entries are 1 or −1},
    I_e(l) = {I ∈ I(l) : I has an even number of −1's on its diagonal},
    I_o(l) = {I ∈ I(l) : I has an odd number of −1's on its diagonal}.
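Proposition 2.1 lends itself to a numerical sanity check. The sketch below (our own; the Gaussian-elimination solver and the random matrix generation are illustrative, not part of the paper) draws random symmetric matrices satisfying (A1) and (A2) and verifies the bound (2.2):

```python
import random

def solve(P, beta):
    """Solve P alpha = beta by Gaussian elimination with partial pivoting."""
    m = len(P)
    A = [row[:] + [b] for row, b in zip(P, beta)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for k in range(c, m + 1):
                A[r][k] -= f * A[c][k]
    alpha = [0.0] * m
    for r in range(m - 1, -1, -1):
        alpha[r] = (A[r][m] - sum(A[r][k] * alpha[k] for k in range(r + 1, m))) / A[r][r]
    return alpha

random.seed(0)
m, r1, r2 = 4, 1.0, 2.0
for _ in range(100):
    # random symmetric off-diagonal part with p_ij <= 0  (condition (A1))
    off = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            off[i][j] = off[j][i] = -random.random()
    # choose p_ii so that every row sum pbar_i lies in [r1, r2]  (condition (A2))
    P = [row[:] for row in off]
    for i in range(m):
        P[i][i] = random.uniform(r1, r2) - sum(off[i])
    beta = [random.uniform(-1.0, 1.0) for _ in range(m)]
    alpha = solve(P, beta)
    B = max(abs(b) for b in beta)
    for i in range(m):
        for j in range(m):
            if i != j:
                bound = m * (m - 1) * (r2 / r1) * B / (abs(P[i][j]) + r1)
                assert abs(alpha[i] - alpha[j]) <= bound + 1e-9
```

The check passes on every draw, consistent with (2.2); of course it is no substitute for the proof below.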

Lemma 2.1 For any x ∈ R and any l × l matrix A, l ≥ 1,

    Σ_{I ∈ I_e(l)} det(xI + IA) ≡ 2^{l−1}(x^l + det A);
    Σ_{I ∈ I_o(l)} det(xI + IA) ≡ 2^{l−1}(x^l − det A).


Proof: We prove it by induction. The above identities can be easily checked for l = 1. Suppose that they hold for l = k − 1 ≥ 1; we will prove them for l = k. Observe that the identities hold when x = 0. To prove them for all x, it suffices to show that the derivatives with respect to x of both sides coincide. For any I ∈ I(k),

    (det(xI + IA))′ = Σ_{i=1}^k det(xI + I_i A_i),

where A_i and I_i are the submatrices obtained by eliminating the ith row and the ith column of A and I respectively.

Notice that if I runs through all the elements of I_e(k), then I_i runs through all the elements of I(k − 1) for every fixed i ∈ {1,2,...,k}, so we have

    Σ_{I ∈ I_e(k)} (det(xI + IA))′
        = Σ_{i=1}^k [ Σ_{I ∈ I_e(k−1)} det(xI + IA_i) + Σ_{I ∈ I_o(k−1)} det(xI + IA_i) ]
        = Σ_{i=1}^k [ 2^{k−2}(x^{k−1} + det A_i) + 2^{k−2}(x^{k−1} − det A_i) ]    (by induction)
        = k 2^{k−1} x^{k−1} = 2^{k−1}(x^k + det A)′.

Therefore, we have proved the first identity. The second one follows from the first by changing the sign of one row of A.
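The identities of Lemma 2.1 are also easy to test by brute force. In the sketch below (our own; we read xI as x times the identity and IA as the sign matrix I acting on the rows of A, which is consistent with the l = 1 case), both identities are verified on random matrices:

```python
from itertools import product
import random

def det(M):
    """Determinant by Laplace expansion along the first row (small sizes only)."""
    l = len(M)
    if l == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(l))

def check(l, x, A, tol=1e-8):
    """Check both identities of Lemma 2.1 for the given x and A."""
    even = odd = 0.0
    for signs in product((1.0, -1.0), repeat=l):
        # entry (i, j) of xI + IA: x*delta_ij + signs[i]*A[i][j]
        M = [[x * (i == j) + signs[i] * A[i][j] for j in range(l)] for i in range(l)]
        if signs.count(-1.0) % 2 == 0:
            even += det(M)
        else:
            odd += det(M)
    dA = det(A)
    assert abs(even - 2 ** (l - 1) * (x ** l + dA)) < tol
    assert abs(odd - 2 ** (l - 1) * (x ** l - dA)) < tol

random.seed(1)
for l in (1, 2, 3, 4):
    A = [[random.uniform(-1.0, 1.0) for _ in range(l)] for _ in range(l)]
    check(l, random.uniform(-2.0, 2.0), A)
```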

As a consequence of Lemma 2.1, we have

Corollary 2.1 Let A be an l × l matrix. If det(I + IA) ≥ 0 for every I ∈ I(l), then |det A| ≤ 1.

Lemma 2.2 Given integers m > l ≥ 1, let Q = (q_{ij}) be an m × l real matrix which satisfies, for j = 1,2,...,l,

    q_{jj} > Σ_{i≠j} |q_{ij}|.    (2.3)

Let A be the set of all l × l submatrices of Q and let S_1 ∈ A be the matrix formed by the first l rows of Q. Then

    det S_1 = max_{S ∈ A} |det S|.

Proof : For any S ∈ A, by rearranging the order of its rows we do not change |detS|. Thus we can

treat S as a matrix obtained by replacing some rows of S1by some other rows of Q. Note that S and

S1could have no rows in common, which means S is obtained by replacing all the rows of S1by some

other rows of Q.

Given any I ∈ I(l), we claim:

    det(S_1 + IS) ≥ 0.

Proof of the claim: There are two cases.

Case 1. S_1 and S have no rows in common. Then by (2.3), S_1 + IS is diagonally dominant, therefore det(S_1 + IS) > 0.

Case 2. S_1 and S have some rows in common; denote the indices of these rows by 1 ≤ i_1 < ··· < i_s ≤ l, 1 ≤ s ≤ l. If row i_{s_0} of IS is opposite to row i_{s_0} of S for some 1 ≤ s_0 ≤ s, then row i_{s_0} of S_1 + IS is 0, therefore det(S_1 + IS) = 0. Otherwise row i_t of IS is the same as that of S and S_1 for every 1 ≤ t ≤ s; taking out the common factor 2 in these rows when computing det(S_1 + IS), we have

    det(S_1 + IS) = 2^s det(S_1 + IŜ),

where Ŝ is the matrix obtained by replacing rows i_t of S by 0 for all 1 ≤ t ≤ s. By (2.3), S_1 + IŜ is diagonally dominant, so det(S_1 + IŜ) > 0, which yields det(S_1 + IS) > 0. Therefore, the claim is proved.

Since det S_1 > 0 and

    det(S_1 + IS) = det(I + I S S_1^{−1}) det S_1,

the claim gives, for any I ∈ I(l),

    det(I + I S S_1^{−1}) ≥ 0.

By Corollary 2.1,

    |det(S S_1^{−1})| ≤ 1,

therefore

    det S_1 ≥ |det S|.
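Lemma 2.2 can likewise be confirmed by brute force. The following sketch (our own; the random construction is illustrative) builds m × l matrices satisfying (2.3) and checks that the top block maximizes the absolute determinant:

```python
from itertools import combinations
import random

def det(M):
    """Determinant by Laplace expansion along the first row (small sizes only)."""
    l = len(M)
    if l == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(l))

random.seed(2)
m, l = 5, 3
for _ in range(50):
    Q = [[random.uniform(-0.2, 0.2) for _ in range(l)] for _ in range(m)]
    # enforce (2.3): q_jj exceeds the sum of |q_ij| over the other rows i
    for j in range(l):
        Q[j][j] = sum(abs(Q[i][j]) for i in range(m) if i != j) + 0.01 + random.random()
    S1 = Q[:l]
    # |det S| over every l x l row-submatrix S of Q; rearranging rows does
    # not change |det S|, so increasing row indices suffice
    best = max(abs(det([Q[i] for i in rows])) for rows in combinations(range(m), l))
    assert det(S1) > 0
    assert abs(det(S1) - best) < 1e-9
```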

Now we are ready to prove Proposition 2.1.

Proof of Proposition 2.1: For m = 1 the inequality is automatically true. For m = 2 we have, by Cramer's rule (writing det[a, b; c, d] for the determinant of the 2 × 2 matrix with rows (a, b) and (c, d)),

    α_1 − α_2 = ( det[β_1, p_{12}; β_2, p_{22}] − det[p_{11}, β_1; p_{21}, β_2] ) / det[p_{11}, p_{12}; p_{21}, p_{22}]
              = det[β_1, p̄_1; β_2, p̄_2] / det[p̄_1, p_{12}; p̄_2, p_{22}],

where the second equality follows from expanding the numerator and adding the second column to the first column of the denominator. Since r_1 ≤ p̄_i ≤ r_2 by condition (A2),

    |det[β_1, p̄_1; β_2, p̄_2]| = |β_1 p̄_2 − β_2 p̄_1| ≤ 2 r_2 |β|.

On the other hand, by conditions (A1) and (A2) (using p_{22} = p̄_2 − p_{21} ≥ r_1 + |p_{12}|),

    det[p̄_1, p_{12}; p̄_2, p_{22}] = p̄_1 p_{22} − p̄_2 p_{12} ≥ p̄_1 p_{22} ≥ r_1 (r_1 + |p_{12}|).

Therefore, Proposition 2.1 for m = 2 follows from the above.

For m ≥ 3, we only estimate |α_1 − α_2|, since the other estimates can be obtained by switching columns of P. Since α satisfies (2.1), Cramer's rule gives α_1 − α_2 = (det P^{(1)} − det P^{(2)}) / det P, where P^{(i)} is obtained from P by replacing its ith column with β. By multilinearity, the difference of the two numerator determinants is a single determinant whose first column is β and whose second column has entries p_{i1} + p_{i2}, the remaining columns being the last m − 2 columns of P. Adding those last m − 2 columns to the second column turns it into (p̄_1, ..., p̄_m)^T, so

    α_1 − α_2 = det P̃ / det P,    where P̃ := ( β, p̄, p_{·3}, ..., p_{·m} ),    p̄ := (p̄_1, ..., p̄_m)^T.

Next we estimate the determinants of the above two matrices separately.

Expanding detP with respect to the first column, we have

    det P = Σ_{j=1}^m p_{j1} P_{j1},

where P_{j1} is the cofactor of p_{j1}.

Applying Lemma 2.2 to the m × (m − 1) matrix obtained by eliminating the first column of P, we know that, among the cofactors P_{j1}, the cofactor P_{11} > 0 has the largest absolute value. Since p_{j1} = p_{1j} ≤ 0 (j ≠ 1) and p_{11} > 0 by conditions (A1) and (A2), we have

    det P ≥ Σ_{j=1}^m p_{j1} P_{11} = p̄_1 P_{11}.

For the same reason, we have

    P_{11} = det( p_{ij} )_{2≤i,j≤m} ≥ ( Σ_{j=2}^m p_{2j} ) det( p_{ij} )_{3≤i,j≤m}.

Combining the above two inequalities and using condition (A1) and (A2), we have

    det P ≥ p̄_1 ( Σ_{j=2}^m p_{2j} ) det( p_{ij} )_{3≤i,j≤m}
          = p̄_1 ( p̄_2 − p_{21} ) det( p_{ij} )_{3≤i,j≤m}
          ≥ r_1 ( |p_{12}| + r_1 ) det( p_{ij} )_{3≤i,j≤m}.    (2.4)

By Laplace expansion, see e.g. page 130 of [17], we can expand det P̃ with respect to its first two columns, namely,

    det P̃ = Σ_{1≤i_1<i_2≤m} det[β_{i_1}, p̄_{i_1}; β_{i_2}, p̄_{i_2}] P̃^{i_1 i_2}_{12},    (2.5)

where P̃^{i_1 i_2}_{12} is the cofactor of the 2nd-order minor in rows i_1, i_2 and columns 1, 2 of P̃. Applying Lemma 2.2 to the m × (m − 2) matrix obtained by eliminating the first two columns of P̃, we know that, among all these cofactors, det( p_{ij} )_{3≤i,j≤m} has the largest absolute value. Since 0 < p̄_i ≤ r_2 by condition (A2),

    |det[β_{i_1}, p̄_{i_1}; β_{i_2}, p̄_{i_2}]| ≤ 2 r_2 |β|,

so by (2.5) we have

    |det P̃| ≤ m(m − 1) r_2 |β| det( p_{ij} )_{3≤i,j≤m}.    (2.6)

By (2.4) and (2.6), we have

    |α_1 − α_2| = |det P̃| / |det P| ≤ m(m − 1) (r_2/r_1) |β| / (|p_{12}| + r_1).

2.2 Proof of Theorem 1.1

As in [5], we decompose u_∞ into m + 1 parts:

    u_∞ = v_0 + Σ_{i=1}^m C_i v_i,    (2.7)

where v_i ∈ H^1(Ω̃) (i = 0,1,2,...,m) are determined by the following equations: for i = 0,

    Δv_0 = 0    in Ω̃,
    v_0 = 0     on ∂D_1, ∂D_2, ..., ∂D_m,
    v_0 = ϕ     on ∂Ω;    (2.8)

and for i = 1,2,...,m,

    Δv_i = 0    in Ω̃,
    v_i = 1     on ∂D_i,
    v_i = 0     on ∂D_j for j ≠ i,
    v_i = 0     on ∂Ω.    (2.9)

Since u_∞ satisfies the integral conditions in equation (1.4), using the decomposition formula (2.7) we know that the vector (C_1, C_2, ..., C_m) satisfies the following system of linear equations:

    ( a_{ij} )_{1≤i,j≤m} (C_1, ..., C_m)^T = (b_1, ..., b_m)^T,    (2.10)

where

    a_{ij} := ∫_{∂D_j} ∂v_i/∂ν    (i, j = 1,2,...,m),    (2.11)
    b_i := −∫_{∂D_i} ∂v_0/∂ν    (i = 1,2,...,m).    (2.12)

Similar to the two-inclusion case in [5], we first investigate the properties of v_i (i = 0,1,...,m), the matrix A = (a_{ij}) and the vector b defined by (2.11) and (2.12). We state the following lemma; for its proof, readers may refer to Lemma 2.4 in [5].

Lemma 2.3 For 1 ≤ i, j ≤ m, let a_{ij} and b_i be defined by (2.11) and (2.12). Then they satisfy the following:

    (1) a_{ii} < 0, a_{ij} = a_{ji} > 0 (i ≠ j);
    (2) −C ≤ Σ_{1≤j≤m} a_{ij} ≤ −1/C;
    (3) |b_i| ≤ C ‖ϕ‖_{L^∞(∂Ω)},

where C > 0 is a universal constant depending only on n, κ_0, r_0 and ‖∂Ω‖_{C^{2,α}}, but independent of ε_ij.

Remark 2.2 From properties (1) and (2) in Lemma 2.3, we know that A is diagonally dominant, therefore it is nonsingular.

Lemma 2.4 Let v_0, v_i (i = 1,...,m) be the solutions of equations (2.8) and (2.9) respectively, and let δ be the constant satisfying (1.5). Then there exists a universal constant C depending only on n, m, r_0, κ_0, {‖∂D_i‖_{C^{2,α}}} and ‖∂Ω‖_{C^{2,α}}, but independent of {ε_ij}, such that

    (1) ‖∇v_0‖_{L^∞(Ω̃)} ≤ C;
    (2) ‖∇v_i‖_{L^∞(B(x^0_{ij}, δ) ∩ Ω̃)} ≤ C/ε_ij if ε_ij < δ;
    (3) |∇v_i| ≤ C on Ω̃ \ ( ∪_{j≠i, ε_ij<δ} B(x^0_{ij}, δ) ).

Proof: The proof of (1) is the same as the proof of Lemma 2.3 in [5]. Note that ‖v_i‖_{L^∞(Ω̃)} = 1 and δ is the constant satisfying (1.5); if ε_ij < δ, then by (1.5), B(x^0_{ij}, δ) only intersects D_i and D_j, and B(x^0_{ij}, δ) is at least δ away from the other inclusions. Then (2) follows from the maximum principle and standard boundary estimates for harmonic functions. For the same reason, to prove (3) we only need to show that ‖∇v_i‖_{L^∞(B(x^0_{kl}, δ) ∩ Ω̃)} ≤ C if k, l ≠ i and ε_kl < δ. Without loss of generality, we assume k = 1, l = 2, i = 3. Let ṽ_3 be the solution of the following equation:

    Δṽ_3 = 0    in Ω \ (D_1 ∪ D_3),
    ṽ_3 = 0     on ∂D_1,
    ṽ_3 = 1     on ∂D_3,
    ṽ_3 = 0     on ∂Ω.

Then ṽ_3 ≥ v_3 on ∂Ω̃, so by the maximum principle ṽ_3 ≥ v_3 in Ω̃. Since ṽ_3 = v_3 = 0 on ∂D_1, we have

    ∂ṽ_3/∂ν ≥ ∂v_3/∂ν ≥ 0    on ∂D_1.

But |∇ṽ_3| < C on ∂D_1 ∩ B(x^0_{12}, δ) by the boundary estimates for harmonic functions, so

    ‖∇v_3‖_{L^∞(∂D_1 ∩ B(x^0_{12}, δ))} = ‖∂v_3/∂ν‖_{L^∞(∂D_1 ∩ B(x^0_{12}, δ))} < C.    (2.13)

Similarly, we have

    ‖∇v_3‖_{L^∞(∂D_2 ∩ B(x^0_{12}, δ))} = ‖∂v_3/∂ν‖_{L^∞(∂D_2 ∩ B(x^0_{12}, δ))} < C.    (2.14)

Furthermore, by gradient estimates and boundary estimates for harmonic functions, we have

    ‖∇v_3‖_{L^∞(∂B(x^0_{12}, δ) ∩ Ω̃)} < C.    (2.15)

Since ∇v_3 is still harmonic on B(x^0_{12}, δ) ∩ Ω̃, by (2.13), (2.14), (2.15) and the maximum principle, we have

    ‖∇v_3‖_{L^∞(Ω̃ ∩ B(x^0_{12}, δ))} < C.

Next, we derive some further estimates of A = (aij).


Lemma 2.5 Let a_{ij} be defined as in (2.11). Then there exists a universal constant C > 0, depending only on n, r_0, κ_0, {‖∂D_i‖_{C^{2,α}}} and ‖∂Ω‖_{C^{2,α}}, but independent of {ε_ij}, such that for 1 ≤ i ≠ j ≤ m:

    for n = 2:   −C/√(min_{k≠i} ε_ik) < a_{ii} < −1/(C √(min_{k≠i} ε_ik)),
                 1/(C √ε_ij) < a_{ij} < C/√ε_ij;
    for n = 3:   −C |ln(min_{k≠i} ε_ik)| < a_{ii} < −(1/C) |ln(min_{k≠i} ε_ik)|,
                 (1/C) |ln ε_ij| < a_{ij} < C |ln ε_ij|;
    for n ≥ 4:   −C < a_{ii} < −1/C,
                 1/C < a_{ij} < C.

Proof: Without loss of generality, we assume i = 1, j = 2. The proof of the estimates for a_{11} is the same as in Lemma 2.5, Lemma 2.6 and Lemma 2.7 in [5]; here we prove the estimate for a_{12}. In the following, we use C to denote a universal constant depending only on n, r_0, κ_0, {‖∂D_i‖_{C^{2,α}}} and ‖∂Ω‖_{C^{2,α}}, but independent of {ε_ij}.

Notice that if ε_12 is larger than some universal constant, then the proof is trivial. Therefore, we can assume ε_12 < δ, where δ < 1/4 is the universal constant satisfying (1.5). By (1.5), we know that B(x^0_{12}, δ) only intersects D_1 and D_2.

Denote

    Γ_i := ∂D_i ∩ B(x^0_{12}, δ)  (i = 1,2),    Γ_3 := ∂B(x^0_{12}, δ) \ (D_1 ∪ D_2).

Since B(x^0_{12}, 2δ) does not intersect D_i (i ≥ 3) or ∂Ω by (1.5), we have

    dist(Γ_3, ∪_{i=3}^m ∂D_i) > δ,    dist(Γ_3, ∂Ω) > δ;

hence, by standard gradient estimates and boundary estimates for harmonic functions, we have

    ‖∇v_1‖_{L^∞(Γ_3)} < C.    (2.16)

By Lemma 2.4, we have ‖∇v_1‖_{L^∞(∂D_2 \ Γ_2)} < C. Therefore,

    a_{12} = ∫_{∂D_2} ∂v_1/∂ν = ∫_{Γ_2} ∂v_1/∂ν + ∫_{∂D_2 \ Γ_2} ∂v_1/∂ν = ∫_{Γ_2} ∂v_1/∂ν + O(1).    (2.17)

By the harmonicity of v_1 on B(x^0_{12}, δ) ∩ Ω̃ and (2.16), we have

    0 = ∫_{Γ_1} ∂v_1/∂ν + ∫_{Γ_2} ∂v_1/∂ν + ∫_{Γ_3} ∂v_1/∂ν = ∫_{Γ_1} ∂v_1/∂ν + ∫_{Γ_2} ∂v_1/∂ν + O(1).    (2.18)

Meanwhile, by Green's formula and (2.16), we have

    −∫_{B(x^0_{12},δ) ∩ Ω̃} |∇v_1|² = ∫_{Γ_1} v_1 ∂v_1/∂ν + ∫_{Γ_2} v_1 ∂v_1/∂ν + ∫_{Γ_3} v_1 ∂v_1/∂ν
                                   = ∫_{Γ_1} ∂v_1/∂ν + ∫_{Γ_3} v_1 ∂v_1/∂ν
                                   = ∫_{Γ_1} ∂v_1/∂ν + O(1)    (2.19)

(using v_1 = 1 on Γ_1 and v_1 = 0 on Γ_2). Therefore, by combining (2.17), (2.18) and (2.19), we have

    a_{12} = ∫_{B(x^0_{12},δ) ∩ Ω̃} |∇v_1|² + O(1).

Similar to the energy estimates given in Lemma 1.5, Lemma 1.6 and Lemma 1.7 in [5], we have

    1/(C √ε_12) < ∫_{B(x^0_{12},δ) ∩ Ω̃} |∇v_1|² < C/√ε_12          for n = 2,
    (1/C) |ln ε_12| < ∫_{B(x^0_{12},δ) ∩ Ω̃} |∇v_1|² < C |ln ε_12|  for n = 3,
    1/C < ∫_{B(x^0_{12},δ) ∩ Ω̃} |∇v_1|² < C                        for n ≥ 4.

Therefore,

    1/(C √ε_12) < a_{12} < C/√ε_12          for n = 2,
    (1/C) |ln ε_12| < a_{12} < C |ln ε_12|  for n = 3,
    1/C < a_{12} < C                        for n ≥ 4.

Knowing enough properties of the system of linear equations (2.10) from Lemma 2.3 and Lemma 2.5, we have

Proposition 2.2 Let u_∞ ∈ H^1(Ω) be the weak solution to equation (1.4) and let C_i be the value of u_∞ on D_i. Then for any 1 ≤ i ≠ j ≤ m, there exists a universal constant C > 0 depending only on n, κ_0, r_0, ‖∂Ω‖_{C^{2,α}} and {‖∂D_i‖_{C^{2,α}}}, but independent of {ε_ij}, such that

    |C_i − C_j| ≤ C √ε_ij ‖ϕ‖_{L^∞(∂Ω)}              for n = 2,
    |C_i − C_j| ≤ C (1/|ln ε_ij|) ‖ϕ‖_{L^∞(∂Ω)}      for n = 3,
    |C_i − C_j| ≤ C ‖ϕ‖_{L^∞(∂Ω)}                    for n ≥ 4.    (2.20)

Proof: By Lemma 2.3, we know that the matrix −A satisfies conditions (A1) and (A2). Applying Proposition 2.1 to (2.10), we have, for any 1 ≤ i ≠ j ≤ m,

    |C_i − C_j| ≤ (C/a_{ij}) ‖ϕ‖_{L^∞(∂Ω)},

where C is some constant depending on n, κ_0, r_0, ‖∂Ω‖_{C^{2,α}} and {‖∂D_i‖_{C^{2,α}}}, but independent of {ε_ij}. By Lemma 2.5, we immediately finish the proof.

Now we are ready to complete the proof of Theorem 1.1.

Proof of Theorem 1.1: We prove the estimates in dimension 2; the proof in the higher dimensional cases is similar. Without loss of generality, we assume i = 1, j = 2 and ε_12 < δ, and we need to prove the gradient estimate for u_∞ in the narrow region between D_1 and D_2. For simplicity, we assume ‖ϕ‖_{L^∞(∂Ω)} = 1.

By the decomposition formula (2.7), we have

    ∇u_∞ = (C_1 − C_2) ∇v_1 + C_2 ∇(v_1 + v_2) + Σ_{i=3}^m C_i ∇v_i + ∇v_0.

By Lemma 2.4, we have

    ‖∇v_1‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))} < C/ε_12,    ‖∇v_0‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))} < C,    (2.21)

where C is some universal constant. For i = 3,...,m, we have, by Lemma 2.4,

    ‖∇v_i‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))} < C.    (2.22)

Since v_1 + v_2 = 1 on both ∂D_1 and ∂D_2, similar to the proof of Lemma 2.4 we can show that

    ‖∇(v_1 + v_2)‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))} < C.    (2.23)

By Proposition 2.2, (2.21), (2.22) and (2.23), we have

    ‖∇u_∞‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))}
        ≤ |C_1 − C_2| ‖∇v_1‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))} + |C_2| ‖∇(v_1 + v_2)‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))}
          + Σ_{i=3}^m |C_i| ‖∇v_i‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))} + ‖∇v_0‖_{L^∞(Ω̃ ∩ B(x^0_{12},δ))}
        ≤ C √ε_12 · (1/ε_12) + C
        ≤ C/√ε_12.

As we mentioned in Remark 1.1, the strict convexity assumption on the two inclusions can be weakened. In fact, our proof of Theorem 1.1 applies, with minor modification, to more general inclusions as below.

In R^n, n ≥ 2, for two closely spaced inclusions D_i and D_j which are not necessarily strictly convex, assume ∂D_i ∩ B(0,r) and ∂D_j ∩ B(0,r) can be represented by the graphs of

    x_1 = f(x′) + ε_ij/2    and    x_1 = −g(x′) − ε_ij/2,

respectively; then f(0′) = g(0′) = 0 and ∇(g + f)(0′) = 0. Assume further that

    λ_1 |x′|^{2l} ≤ g(x′) + f(x′) ≤ λ_2 |x′|^{2l},    ∀ |x′| ≤ r/2,    (2.24)

where λ_2 ≥ λ_1 > 0 and l ∈ Z^+.

Under the above assumption, let u_∞ ∈ H^1(Ω) be the solution to equation (1.4). Then, for ε_ij sufficiently small, we have

    ‖∇u_∞‖_{L^∞(Ω̃ ∩ B(x^0_{ij},δ))} ≤ C ‖ϕ‖_{L^∞(∂Ω)} ε_ij^{−(n−1)/(2l)}         if n − 1 < 2l,
    ‖∇u_∞‖_{L^∞(Ω̃ ∩ B(x^0_{ij},δ))} ≤ C ‖ϕ‖_{L^∞(∂Ω)} · 1/(ε_ij |ln ε_ij|)       if n − 1 = 2l,
    ‖∇u_∞‖_{L^∞(Ω̃ ∩ B(x^0_{ij},δ))} ≤ C ‖ϕ‖_{L^∞(∂Ω)} / ε_ij                     if n − 1 > 2l,    (2.25)

where C is a constant depending on n, λ_1, λ_2, r_0, ‖∂D_i‖_{C^{2,α}} and ‖∂D_j‖_{C^{2,α}}, but independent of ε_ij.

For the proof, readers may refer to the corresponding discussion after the proof of Theorems 0.1 and 0.2 in [5].

3 The insulated conductivity problem

In this section, we consider the anisotropic insulated conductivity problem, which is described by

Equation (1.7). As we mentioned in the introduction, the gradient can only blow up when two inclusions

are close to each other. In order to establish the gradient estimates for this problem, we first consider

the local version of the problem, namely Equation (1.9).

To make the problem easier, we first consider the equation in a strip. In this case, by using a

“flipping” technique, we derive the gradient estimates in the strip.

Denote, for any integer l,

    Q_l := {z ∈ R^n : (2l − 1)δ < z_1 < (2l + 1)δ, |z′| ≤ 1},
    Γ^−_l := {z ∈ R^n : z_1 = (2l + 1)δ and |z′| ≤ 1},
    Γ^+_l := {z ∈ R^n : z_1 = (2l − 1)δ and |z′| ≤ 1},

and

    Q := {z ∈ R^n : |z_1| ≤ 1 and |z′| ≤ 1}.

We consider the following equation in Q_0:

    ∂_{z_i}( b_{ij}(z) ∂_{z_j} w ) = 0    in Q_0,
    b_{1j} ∂_{z_j} w = 0                  on Γ^±_0,    (3.1)


where (b_{ij}) ∈ C^α(Q_0) (0 < α < 1) is a symmetric matrix function on Q_0, and there exist constants Λ_2 ≥ λ_2 > 0 such that

    ‖b_{ij}‖_{C^α(Q_0)} ≤ Λ_2,    λ_2 |ξ|² ≤ b_{ij}(z) ξ_i ξ_j,    ∀ z ∈ Q_0, ξ ∈ R^n.

Then we have

Lemma 3.1 Suppose w ∈ H^1(Q_0) ∩ L^∞(Q_0) is a weak solution of (3.1). Then there exists a constant C > 0 depending only on n, λ_2, Λ_2, but independent of δ, such that

    ‖∇w‖_{L^∞(Q_0(1/2))} ≤ C ‖w‖_{L^∞(Q_0)},

where Q_0(1/2) := {z ∈ R^n : |z_1| ≤ δ and |z′| ≤ 1/2}.

Proof: For any integer l, we construct a new function ŵ by "flipping" w evenly in each Q_l. We define

    ŵ(z) = w( (−1)^l (z_1 − 2lδ), z′ ),    ∀ z ∈ Q_l.

In this way ŵ is defined piecewise on all of Q.
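The flipping construction is easy to visualize in one dimension. The sketch below (our own simplification; the helper flip and the sample profile w are illustrative only) extends a function on the slab |z_1| ≤ δ by even reflection and checks that the extension is continuous across the interfaces z_1 = (2l ± 1)δ:

```python
import math

def flip(w, z1, delta):
    """Evaluate the evenly reflected extension w_hat at z1 (1-D sketch):
    w_hat(z1) = w((-1)**l * (z1 - 2*l*delta)) for (2l-1)*delta < z1 < (2l+1)*delta."""
    l = round(z1 / (2.0 * delta))  # index of the slab Q_l containing z1
    return w(((-1) ** l) * (z1 - 2.0 * l * delta))

delta = 0.1
w = lambda t: t * t + t  # any continuous profile on [-delta, delta]

# On Q_0 the extension agrees with w itself.
assert flip(w, 0.05, delta) == w(0.05)
# The even reflection is continuous across the interface z1 = delta.
assert abs(flip(w, delta - 1e-9, delta) - flip(w, delta + 1e-9, delta)) < 1e-6
```

Note that the reflection also flips the sign of the normal derivative, which is why the mixed coefficients b_{1α} below pick up the factor (−1)^l.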

We define the corresponding elliptic coefficients as follows:

    b̂_{α1}(z) = b̂_{1α}(z) = (−1)^l b_{1α}( (−1)^l (z_1 − 2lδ), z′ )    for α = 2,3,...,n, ∀ z ∈ Q_l,
    b̂_{ij}(z) = b_{ij}( (−1)^l (z_1 − 2lδ), z′ )                       for all other indices, ∀ z ∈ Q_l.

Under the above definitions of ŵ and b̂_{ij}, we can easily check that, for any integer l,

    ∂_{z_i}( b̂_{ij}(z) ∂_{z_j} ŵ ) = 0    in Q_l,
    b̂_{1j} ∂_{z_j} ŵ = 0                  on Γ^±_l.

Then for any test function ψ ∈ C^∞_0(Q), we have

    ∫_Q b̂_{ij}(z) ∂_{z_j} ŵ ∂_{z_i} ψ = Σ_l ∫_{Q_l} b̂_{ij}(z) ∂_{z_j} ŵ ∂_{z_i} ψ = 0

(by the definition of weak solution). Therefore ŵ ∈ H^1(Q) satisfies

    ∂_{z_i}( b̂_{ij}(z) ∂_{z_j} ŵ ) = 0    in Q.

Following [13], we first introduce a new equation

    ∂_{z_i}( B̂_{ij}(z) ∂_{z_j} u ) = 0    in Q,

where

    B̂_{ij}(z) = lim_{y∈Q_l, y→((2l−1)δ, 0′)} b̂_{ij}(y)    for z ∈ Q_l, l > 0,
    B̂_{ij}(z) = b̂_{ij}(0)                                 for z ∈ Q_0,
    B̂_{ij}(z) = lim_{y∈Q_l, y→((2l+1)δ, 0′)} b̂_{ij}(y)    for z ∈ Q_l, l < 0,

and we define the norm

    ‖F‖_{Y^{s,p}} = sup_{0<r<1} r^{1−s} ( ⨍_{rQ} |F|^p )^{1/p}.

Since b_{ij} ∈ C^α(Q_0), b̂_{ij} is piecewise C^α continuous in Q, and we can immediately check that

    ‖b̂_{ij} − B̂_{ij}‖_{Y^{1+α,2}} < C,

where C is some constant depending only on Λ_2. Using Proposition 4.1 in [13], we have

    ‖∇ŵ‖_{L^∞((1/2)Q)} ≤ C ‖ŵ‖_{L^2(Q)} ≤ C ‖ŵ‖_{L^∞(Q)}.

Then by the definition of ŵ, we have

    ‖∇w‖_{L^∞(Q_0(1/2))} ≤ C ‖w‖_{L^∞(Q_0)},

where C > 0 depends on n, λ_2, Λ_2, but is independent of δ.

Since D_1 and D_2 are strictly convex domains, we can write O(r), which is defined by (1.8), as

    O(r) = {x ∈ R^n : −g(x′) − ε/2 < x_1 < f(x′) + ε/2, |x′| < r},

with the side boundaries Γ_+ and Γ_− given by

    Γ_+ = {x ∈ R^n : x_1 = f(x′) + ε/2, |x′| < r},    Γ_− = {x ∈ R^n : x_1 = −g(x′) − ε/2, |x′| < r},

where f(x′) and g(x′) are strictly convex functions satisfying

    f(0′) = g(0′) = 0,    ∇f(0′) = ∇g(0′) = 0.

Under the above notation, we prove Theorem 1.2.

Proof of Theorem 1.2: Fix one point (0, x′_0) ∈ O(r/2) and let δ = √(f(x′_0) + g(x′_0) + ε). Since f(x′) and g(x′) are strictly convex, there exists a universal constant C depending only on ‖∂D_1‖_{C^{2,α}} and ‖∂D_2‖_{C^{2,α}} such that

    (1/C) √(|x′_0|² + ε) < δ < C √(|x′_0|² + ε).    (3.2)

We shift the origin to (0, x′_0) and rescale the coordinates by δ; the new coordinates y = (y_1, y′) can be written as

    y_1 = x_1/δ,    y′ = (x′ − x′_0)/δ.    (3.3)

Let

    v(y) = u_0(δy_1, x′_0 + δy′),    ã_{ij}(y) = a_{ij}(δy_1, x′_0 + δy′).

Denote

    Õ(r̃) := {y ∈ R^n : −ε/2 − g(x′_0 + δy′) < δy_1 < ε/2 + f(x′_0 + δy′), |y′| < r̃},

with its side boundaries

    Γ̃_+ := {y ∈ R^n : δy_1 = ε/2 + f(x′_0 + δy′), |y′| < r̃},
    Γ̃_− := {y ∈ R^n : δy_1 = −ε/2 − g(x′_0 + δy′), |y′| < r̃}.

By (3.2), we can find some universal constant r̃ depending only on ∂D_1 and ∂D_2 such that Õ(r̃) lies in the image of O(r) under the above transform. Thus we have

    ∂_{y_i}( ã_{ij} ∂_{y_j} v(y) ) = 0    in Õ(r̃),
    ã_{ij} ∂_{y_j} v ν_i = 0              on Γ̃_+ ∪ Γ̃_−,    (3.4)

where the coefficients ã_{ij} satisfy, for some universal constant C,

    ‖ã_{ij}‖_{C^α(Õ(r̃))} ≤ C ‖a_{ij}‖_{C^α(O(r))} ≤ CΛ,    λ|ξ|² ≤ ã_{ij}(y) ξ_i ξ_j    (∀ y ∈ Õ(r̃), ∀ ξ ∈ R^n).