
Consistent Weak Forms for Meshfree Methods: Full Realization of h-refinement, p-refinement, and a-refinement in Strong-type Essential Boundary Condition Enforcement

Michael Hillman∗† Kuan-Chung Lin‡

Highlights

• Two weak forms are introduced that are consistent with meshfree approximations
• Higher-order optimal h-refinement, previously unavailable
• p-refinement, previously unavailable
• A new ability to increase accuracy, called a-refinement

Abstract

Enforcement of essential boundary conditions in many Galerkin meshfree methods is non-trivial due to the fact that field variables are not guaranteed to coincide with their coefficients at nodal locations. A common approach to overcome this issue is to strongly enforce the boundary conditions at these points by employing a technique that modifies the approximation such that this is possible. However, with these methods, test and trial functions do not strictly satisfy the requirements of the conventional weak formulation of the problem, as the desired imposed values can actually deviate between nodes on the boundary. In this work, it is first shown that this inconsistency results in the loss of Galerkin orthogonality and the best approximation property, and correspondingly, failure to pass the patch test. It is also shown that this induces an O(h) error in the energy norm in the solution of second-order boundary value problems that is independent of the order of completeness in the approximation. As a result, this limits the global order of accuracy of Galerkin meshfree solutions to that of linear consistency. That is, with these methods, it is not possible to attain the higher-order accuracy offered by meshfree approximations in the solution of boundary value problems. To remedy this deficiency, two new weak forms are introduced that relax the requirements on the test and trial functions in the traditional weak formulation. These are employed in conjunction with strong enforcement of essential boundary conditions at nodes, and several benchmark problems are solved to demonstrate that the optimal accuracy and convergence rates associated with the order of approximation can be restored using the proposed method. In other words, this approach allows p-refinement, and h-refinement with pth-order rates, with strong enforcement of boundary conditions beyond linear (p > 1) for the first time. In addition, a new concept termed a-refinement is introduced, where improved accuracy is obtained by increasing the kernel measure in meshfree approximations, previously unavailable.

Keywords: Meshfree methods, essential boundary conditions, refinement, transformation method

1 Introduction

Galerkin meshfree methods [9] are a unique class of numerical methods based on a purely point-based discretization. They offer advantages in classes of problems where mesh-based finite elements encounter difficulty, such as those involving extreme deformation, evolving multi-body contact, and fragmentation, among others; they also offer other attractive features such as arbitrary smoothness or roughness uncoupled with the order of accuracy, ease of discretization, ease of adaptivity, and intrinsic enrichment [3, 7, 9, 24]. However, their implementation is less trivial than the finite element method. For instance, careful attention needs to be paid to numerical quadrature and to the enforcement of essential boundary conditions (cf. [9]). The focus of this work is the latter issue.

∗ Kimball Assistant Professor, Department of Civil and Environmental Engineering, The Pennsylvania State University, University Park, PA 16802, USA.
† Corresponding author; email: mhillman@psu.edu; postal address: 224A Sackett Building, The Pennsylvania State University, University Park, PA 16802, USA.
‡ Graduate Student Researcher, Department of Civil and Environmental Engineering, The Pennsylvania State University, University Park, PA 16802, USA.

Enforcement of essential (or Dirichlet) boundary conditions is non-trivial in Galerkin meshfree methods since, in the general case, the nodal coefficients of the shape functions do not coincide with their field variables at nodal locations. Colloquially, this is described as lacking the Kronecker delta property, or the weak Kronecker delta property (although an even weaker condition is sufficient to impose values at nodes on the boundary, as will be discussed). Therefore, unlike in the finite element method, essential boundary conditions cannot be directly enforced on the shape functions' coefficients. Several techniques have been proposed to overcome this difficulty.

In general, these methods can be classified into two categories: (1) strong enforcement of essential boundary conditions at nodal locations [1, 11, 27, 31, 34], and (2) weak enforcement of boundary conditions, such as the Lagrange multiplier method [4], the penalty method [34], and Nitsche's method [15, 29]. In the first category, the idea is to modify the approximations such that nodal coefficients correspond to field variables on the essential boundary. In the second, the test and trial functions are not required to satisfy any particular requirement related to the essential boundary, and boundary conditions are instead imposed weakly, i.e., in the sense of a distribution.

The first method proposed for enforcing essential boundary conditions in meshfree methods was the Lagrange multiplier approach used in the element-free Galerkin (EFG) method [4]. While this circumvents the aforementioned difficulties in a relatively straightforward manner, additional degrees of freedom are introduced, and the stiffness matrix is also positive semi-definite. The choice of the approximation for these multipliers is also subject to the Ladyzhenskaya-Babuška-Brezzi (LBB) stability condition, an inf-sup condition necessary for well-posedness of the discrete problem [2, 6]; an approximation to the multiplier that is not "well-balanced" with the discretization of the primary variable will not yield a stable solution. Shortly after, a modified variational principle [26] was proposed to overcome these shortcomings. The idea of this method is to substitute the physical meaning of the Lagrange multiplier (the constraint "forces"), expressed in terms of the primary variable, back into the weak form; the problem then does not involve any additional degrees of freedom and is not subject to the LBB condition. However, this method does not guarantee stability either, as it is equivalent to using a penalty value of zero in Nitsche's method, while a minimum penalty value is necessary for stability [17].

The penalty method is also a straightforward way to enforce essential boundary conditions, augmenting the potential with a weak penalty on the constraint. However, the solution is strongly dependent on the value of the penalty parameter: low values lead to large errors on the essential boundary, while large values lead to an ill-conditioned system matrix [15]. Nitsche's method can be viewed, in some sense, as a combination of the modified variational principle and the penalty method. The solution error is much less sensitive to the value of the penalty parameter than in the penalty method, as the parameter plays the alternate role of ensuring solution stability rather than enforcing boundary conditions. Nevertheless, an extremely large or small parameter leads to the same issues as the penalty method [15]. A reliable way to select the parameter is based on an eigenvalue problem related to the discretization [17]. However, an important corollary is that the parameter depends on the discretization, and for meshfree methods, which have a variety of free parameters, this entails the distribution of points, order of approximation, kernel measure, kernel function, etc. In the authors' experience, it is difficult to choose a suitable penalty parameter (one that maintains the desired convergence rates) a priori for accuracy higher than linear. More details on the effect and choice of the penalty value for these methods can be found in [15, 17].

So far, the methods discussed all fall in the class of weak enforcement of essential boundary conditions. Strong methods have been developed as well, which modify the approximation such that enforcement is similar to the finite element method. The transformation method, also known as the collocation method, was first introduced in [11]. This method constructs the relationship between nodal coefficients and their field values in order to achieve the Kronecker delta property in the approximation. This, however, requires the inverse of a somewhat dense system-size matrix to solve the problem at hand. The technique was independently derived and discussed by several researchers later [1, 27, 31, 34]. To avoid inverting a dense system-size matrix, techniques have been introduced to greatly reduce the density of the final system matrix after the transformation procedure [12, 34], which has been termed the mixed transformation method. It is worth mentioning that the work in [12] offers convenient and simple implementations of these transformation methods via row-swap operations on the system matrix. Using these techniques is equivalent to employing Lagrange multipliers to enforce the essential boundary constraint point-wise at nodal locations [12].

Alternatively, approximations can also be constructed so that direct imposition of essential boundary conditions can be performed without inverting any matrices. These techniques are most convenient for explicit dynamic calculations, for obvious reasons. Approaches include coupling meshfree shape functions with finite elements near the essential boundary [5, 21, 33], employing singular kernel functions for nodes on the essential boundary of the domain [12], and constructing moving least-squares approximations with the interpolation property via primitive functions [8]. Forcing the correction function to be zero on the essential boundary has also been introduced [16], which yields the interpolation property (for a discussion on this aspect of meshfree approximations see [28]), but this technique is difficult to use in high dimensions and for complex geometry. More recently, a conforming kernel approximation has been introduced which possesses the weak Kronecker delta property, and can thus strictly satisfy the requirements on the test function (and, for simple boundary conditions, the trial function) in the weak formulation [20]. Finally, outside of these two classes of methods, a novel way to impose boundary conditions using D'Alembert's principle was introduced in [18].

The most common method employed in the literature appears to be strong enforcement at nodal points. So far, to the best of the authors' knowledge, there has been no published work examining the accuracy of higher-order meshfree approximations used with these strong-type methods, except one paper [8]. There it was reported that while using a quadratic basis to approximate a function can yield the expected convergence rates, employing it in the Galerkin equation results in only first-order accuracy, a discrepancy which was attributed to failure to satisfy the desired conditions on the test and trial functions in between the nodes.

In this paper, this assertion, and the effect of this discrepancy in the strong-type approach, is closely examined, where it is shown that the requirements on the test and trial functions in the weak form are indeed not verified between nodal locations. In fact, the difference between the desired values is of order h on the boundary (h is the nodal spacing), independent of the approximation order p. It is further shown that this discrepancy results in failure to pass the patch test, and loss of Galerkin orthogonality. Patch tests performed demonstrate that the L_2 norm of the error in the domain is restricted to order O(h^2) due to these inconsistencies, and to order O(h) in the energy norm, regardless of the order of approximation employed. Correspondingly, much lower rates of convergence than expected are obtained for meshfree basis functions of order higher than linear (p > 1), and the rate of convergence is limited to that of employing approximations with linear consistency. To remedy these deficiencies, two weak forms are introduced that allow for larger spaces of test and trial functions. When employed with the strong-type methods, optimal convergence rates (for sufficiently regular solutions) are obtained. This technique thus allows, for the first time using strong methods, p-refinement, and h-refinement with pth-order optimal rates beyond linear. Further, it is shown that the proposed method provides improved accuracy with increasing kernel measure a in the meshfree approximation, previously unavailable, which is termed a-refinement.

The remainder of this paper is organized as follows. The reproducing kernel approximation is first introduced in Section 2 as a basis for examination of a typical meshfree method, and issues with strong essential boundary condition enforcement are discussed. In Section 3, two weak forms are introduced which allow the enlargement of the approximation space to include meshfree approximations constructed under the strong-type enforcement techniques. Numerical procedures are described in Section 4, and numerical results are then given in Section 5 to demonstrate the effectiveness of the proposed methods. Section 6 provides concluding remarks.

2 Background

2.1 Reproducing kernel approximation

In this work, the reproducing kernel is chosen as a model approximation that does not strictly meet the requirements of the commonly used weak statement of a problem that includes Dirichlet boundary conditions.

Let a domain Ω̄ = Ω ∪ ∂Ω be discretized by a set of N_P nodes S = {x_1, ..., x_{N_P} | x_I ∈ Ω̄} with corresponding node numbers η = {I | x_I ∈ S}. The pth order discrete reproducing kernel (RK) approximation u^h(x) of a function u(x) is defined as [11, 25]:

u^h(x) = Σ_{I∈η} Ψ_I^{[p]}(x) u_I    (1)

where {Ψ_I^{[p]}(x)}_{I∈η} is the set of RK shape functions, and {u_I}_{I∈η} are the associated coefficients.

The shape functions in (1) are constructed as the product of a kernel function Φ_a(x − x_I) and a correction function C^{[p]}(x; x − x_I):

Ψ_I^{[p]}(x) = Φ_a(x − x_I) C^{[p]}(x; x − x_I).    (2)

The correction function is composed of a linear combination of monomials up to order p, which allows the exact reproduction of these monomials and pth order accuracy in the approximation (1). In matrix form this function can be expressed as:

C^{[p]}(x; x − x_I) = H^{[p]}(x − x_I)^T b^{[p]}(x)    (3)

where H^{[p]}(x) is a column vector of complete pth order monomials and b^{[p]}(x) is a column vector of coefficients. The coefficients are obtained by enforcing the following reproducing conditions:

Σ_{I∈η} Ψ_I^{[p]}(x) H^{[p]}(x_I) = H^{[p]}(x),    (4)

or equivalently,

Σ_{I∈η} Ψ_I^{[p]}(x) H^{[p]}(x − x_I) = H^{[p]}(0).    (5)

Employing (2)-(5), the RK shape functions in (1) are constructed as:

Ψ_I^{[p]}(x) = H^{[p]}(0)^T {M^{[p]}(x)}^{−1} H^{[p]}(x − x_I) Φ_a(x − x_I)    (6)

where

M^{[p]}(x) = Σ_{I∈η} H^{[p]}(x − x_I) H^{[p]}(x − x_I)^T Φ_a(x − x_I)    (7)

is called the moment matrix. Without modification, the approximation is in general non-interpolatory, that is, u^h(x_I) ≠ u_I. A simple demonstration of this property is given in Figure 1.

[Plot: nodal data u_I and the corresponding RK approximation u^h(x) on the interval [0, 10].]

Figure 1: Example of a meshfree approximation of data u_I = x_I sin(x_I).
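The construction in (6)-(7) and the non-interpolatory behavior illustrated in Figure 1 can be reproduced with a short one-dimensional sketch. This is hypothetical illustrative code, not the authors' implementation; it assumes a cubic B-spline kernel and linear basis (p = 1):

```python
import numpy as np

def cubic_bspline(z):
    # Cubic B-spline kernel in terms of the normalized distance z = |x - xI| / a.
    z = abs(z)
    if z < 0.5:
        return 2/3 - 4*z**2 + 4*z**3
    if z < 1.0:
        return 4/3 - 4*z + 4*z**2 - (4/3)*z**3
    return 0.0

def rk_shape(x, nodes, a, p=1):
    # pth order 1D RK shape functions at x, following (6)-(7):
    # Psi_I(x) = H(0)^T M(x)^{-1} H(x - x_I) Phi_a(x - x_I)
    phi = np.array([cubic_bspline((x - xI) / a) for xI in nodes])
    H = np.array([[(x - xI)**k for k in range(p + 1)] for xI in nodes])
    M = (H * phi[:, None]).T @ H                 # moment matrix, eq. (7)
    H0 = np.zeros(p + 1); H0[0] = 1.0            # H(0)
    b = np.linalg.solve(M, H0)
    return phi * (H @ b)

nodes = np.linspace(0.0, 10.0, 11)
a = 3.0 * (nodes[1] - nodes[0])                  # normalized support of 3
uI = nodes * np.sin(nodes)                       # nodal data of Figure 1

Psi = rk_shape(nodes[5], nodes, a)
print(Psi.sum())                 # partition of unity: sums to 1 (to roundoff)
print(Psi @ nodes)               # linear reproduction: recovers x = nodes[5]
print(abs(Psi @ uI - uI[5]))     # u^h(x_5) differs from u_5: non-interpolatory
```

The first two prints verify the reproducing conditions (4)-(5) for the monomials 1 and x; the last shows that the approximation does not pass through the nodal coefficient, the property depicted in Figure 1.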

2.2 Strong enforcement of essential boundary conditions at nodal locations

2.2.1 Model problem: Poisson’s equation

Without loss of generality, in this work we consider the strong form (S) of Poisson's equation as a model boundary value problem, which asks: given s : Ω → ℝ, h : ∂Ω_h → ℝ, and g : ∂Ω_g → ℝ, find u : Ω̄ → ℝ such that the following conditions hold:

∇²u + s = 0  in Ω    (8a)
∇u · n = h  on ∂Ω_h    (8b)
u = g  on ∂Ω_g    (8c)

where ∇² ≡ ∇ · ∇, and ∂Ω_h and ∂Ω_g denote the natural boundary and essential boundary, respectively, with ∂Ω_g ∩ ∂Ω_h = ∅, ∂Ω = ∂Ω_g ∪ ∂Ω_h, and Ω̄ = Ω ∪ ∂Ω.

2.2.2 Conventional Galerkin approximation

A weak form (W) of Poisson's equation (8) can be constructed that seeks u ∈ H^1_g, H^1_g = {u | u ∈ H^1(Ω), u = g on ∂Ω_g}, such that for all v ∈ H^1_0, H^1_0 = {v | v ∈ H^1(Ω), v = 0 on ∂Ω_g}, the following equation holds:

a(v, u)_Ω = (v, s)_Ω + (v, h)_{∂Ω_h}    (9)

where

a(v, u)_Ω = ∫_Ω ∇v · ∇u dΩ,    (10a)
(v, s)_Ω = ∫_Ω v s dΩ,    (10b)
(v, h)_{∂Ω_h} = ∫_{∂Ω_h} v h dΓ.    (10c)


With approximations v^h of the test functions v and u^h of the trial functions u, with v^h = 0 on ∂Ω_g and u^h = g on ∂Ω_g, a proper Galerkin approximation to (9) can be constructed which employs finite-dimensional subsets S_g ⊂ H^1_g and S_0 ⊂ H^1_0, and seeks u^h ∈ S_g such that for all v^h ∈ S_0 the following equation holds:

a(v^h, u^h)_Ω = (v^h, s)_Ω + (v^h, h)_{∂Ω_h}.    (11)

In approximations which possess the Kronecker delta property, and in particular the weak Kronecker delta property, a subset of H^1_0 is usually easily constructed. For instance, in linear finite elements, the boundary of the computational domain is defined by element edges where nodal values are linearly interpolated, so enforcement of a value of zero at nodes on the boundary ensures v^h = 0 on ∂Ω_g. For any method with the weak Kronecker delta property and the partition of unity, the same argument follows. For construction of a subset of H^1_g, a common choice is to let the approximation interpolate values of g on the essential boundary, so that S_g is also a subset of H^1_g, or closely resembles one.

For meshfree methods, which generally do not possess these properties, it is apparent from these discussions that the construction of subsets of H^1_0 and H^1_g is non-trivial.

2.3 Strong nodal imposition in meshfree methods

Strong imposition of essential boundary conditions at nodal locations is a popular choice in meshfree methods to (approximately, as will be shown) construct admissible test and trial functions for the conventional weak formulation (9). Essentially, these techniques entail a modification of the meshfree shape functions such that nodal degrees of freedom on the essential boundary coincide with their field variables. For this to be the case, the Kronecker delta property is not actually necessary [8, 12]; instead, the set of modified shape functions {Ψ̂_I^{[p]}(x)}_{I∈η} only needs to verify the requirements:

Ψ̂_J^{[p]}(x_I) = 0  ∀ I ∈ η_g, J ∈ η\η_g    (12)

and

Ψ̂_I^{[p]}(x_J) = δ_IJ  ∀ I ∈ η_g, J ∈ η_g    (13)

where δ_IJ is the Kronecker delta function, and η\η_g is the complement of the set of node numbers η_g = {I | x_I ∈ S_g} for the nodes S_g = {x_I | x_I ∈ ∂Ω_g} located on the essential boundary. The above means that all "interior nodes" should not contribute to the approximation at "boundary nodes", while all "boundary nodes" need to verify the delta property at nodal locations on the boundary.

It is important to note that (12) and (13) only verify the prescribed conditions at nodal locations, but not in between nodes. Therefore one may enforce boundary conditions on nodal coefficients, as is done in the literature, but cannot ensure that proper approximation spaces are constructed.
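For concreteness, the way the full transformation method [11] achieves (12)-(13) can be illustrated with a small one-dimensional sketch (hypothetical code, not from the cited references). Note that in 1D the essential boundary reduces to isolated points, so the sketch can only verify the nodal delta property; the deficiency between boundary nodes is inherently a multi-dimensional issue:

```python
import numpy as np

def cubic_bspline(z):
    # Cubic B-spline kernel, support |z| < 1, with z = |x - xI| / a.
    z = abs(z)
    if z < 0.5:
        return 2/3 - 4*z**2 + 4*z**3
    if z < 1.0:
        return 4/3 - 4*z + 4*z**2 - (4/3)*z**3
    return 0.0

def rk_row(x, nodes, a, p=1):
    # Row vector of RK shape functions Psi_J(x), eqs. (6)-(7).
    phi = np.array([cubic_bspline((x - xJ) / a) for xJ in nodes])
    H = np.array([[(x - xJ)**k for k in range(p + 1)] for xJ in nodes])
    M = (H * phi[:, None]).T @ H
    b = np.linalg.solve(M, np.eye(p + 1)[0])
    return phi * (H @ b)

nodes = np.linspace(-1.0, 1.0, 9)
a = 3.0 * (nodes[1] - nodes[0])

# Full transformation: A_IJ = Psi_J(x_I), and hat{Psi}(x) = Psi(x) A^{-1}
A = np.array([rk_row(xI, nodes, a) for xI in nodes])
Ainv = np.linalg.inv(A)

# hat{Psi}_J(x_I) = delta_IJ at every node, so (12) and (13) both hold
D = np.array([rk_row(xI, nodes, a) @ Ainv for xI in nodes])
print(np.allclose(D, np.eye(len(nodes))))

# Between nodes, several transformed functions are still non-zero, so
# nodal enforcement controls nothing at non-nodal points
mid = 0.5 * (nodes[0] + nodes[1])
print(np.sum(np.abs(rk_row(mid, nodes, a) @ Ainv) > 1e-12))
```

The transformed functions interpolate at every node, yet at the midpoint between nodes multiple shape functions contribute, which is precisely why conditions imposed only at nodes cannot control the approximation in between.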

In contrast, the above properties are distinct from the weak Kronecker delta property, where only boundary shape functions contribute to the approximation on the entire essential boundary:

Ψ̂_J^{[p]}(x) = 0  ∀ x ∈ ∂Ω_g, J ∈ η\η_g.    (14)

From the above, it is apparent that approximations with (14) will have little issue with constructing the proper subsets (or very close approximations thereof) necessary for the weak formulation (9). Meanwhile, for meshfree approximations with only (12) and (13), and not (14), as is most common, constructing proper subsets is not possible.

2.3.1 Test function construction

Using these modified shape functions, in an attempt to construct a test space satisfying S_0 ⊂ H^1_0, the following approximation is typically employed:

v^h(x) = Σ_{I∈η\η_g} Ψ̂_I^{[p]}(x) v_I    (15)

where {Ψ̂_I^{[p]}(x)}_{I∈η} is the set of modified shape functions with properties (12) and (13), and {v_I}_{I∈η\η_g} are the coefficients of the test function.

Due to (12) and (13), the test functions verify v^h(x_I) = 0 ∀ I ∈ η_g. However, for these meshfree approximations, the value of v^h(x) is, in the general case, non-zero between nodes on the essential boundary, and therefore violates the construction S_0 ⊂ H^1_0.

To illustrate this, consider a domain Ω̄ = [−1,1] × [−1,1] discretized uniformly in each direction by 9 nodes, with 9 × 9 = 81 nodes total. A linear RK approximation (p = 1 in (15)) is employed using a cubic B-spline kernel function with a normalized support of 3. A test function with the arbitrary coefficients set to unity is constructed using the transformation method, with ∂Ω = ∂Ω_g. As seen in Figure 2, the test functions are in fact non-zero between nodes along ∂Ω_g with the employment of (15). According to the norms computed in Table 1, the "error" (defined as non-zero values on the essential boundary) does converge at about a rate of one (O(h)) in the L_2(∂Ω_g) norm, yet the magnitude of the error (in L_∞(∂Ω_g)) stays about the same regardless of the discretization. According to [30], the L_2(∂Ω_g) error should be O(h^{3/2}); however, it appears to be O(h) when observed numerically, at least for meshfree approximations.

Figure 2: Example of a test function in meshfree methods using the transformation method.
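The deficiency shown in Figure 2 can be reproduced with a compact script. This is an illustrative sketch, not the authors' code, assuming the same setup: a 9 × 9 grid on [−1,1]², linear RK with a cubic B-spline kernel and normalized support 3, the full transformation method, and unit test-function coefficients at interior nodes:

```python
import numpy as np

def bspline(z):
    # Cubic B-spline kernel in terms of the normalized distance z = |d| / a.
    z = np.abs(z)
    return np.where(z < 0.5, 2/3 - 4*z**2 + 4*z**3,
           np.where(z < 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))

def rk_row(x, y, XI, YI, a):
    # Linear (p = 1) 2D RK shape functions Psi_J(x, y), eqs. (6)-(7),
    # with H = [1, x - xJ, y - yJ]^T and a tensor-product kernel.
    phi = bspline((x - XI) / a) * bspline((y - YI) / a)
    H = np.stack([np.ones_like(XI), x - XI, y - YI])    # 3 x N
    M = (phi * H) @ H.T                                 # 3 x 3 moment matrix
    b = np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))   # M^{-1} H(0)
    return phi * (b @ H)

xs = np.linspace(-1.0, 1.0, 9)                          # 9 x 9 = 81 nodes
X, Y = np.meshgrid(xs, xs)
XI, YI = X.ravel(), Y.ravel()
h = xs[1] - xs[0]
a = 3.0 * h                                             # normalized support 3

# Full transformation: A_IJ = Psi_J(x_I), so hat{Psi}(x) = Psi(x) A^{-1}
A = np.array([rk_row(XI[I], YI[I], XI, YI, a) for I in range(len(XI))])
Ainv = np.linalg.inv(A)

bdry = (np.abs(XI) == 1.0) | (np.abs(YI) == 1.0)
vI = np.where(bdry, 0.0, 1.0)       # coefficients: zero at boundary nodes

def vh(x, y):
    # Test function (15) built from the transformed shape functions
    return rk_row(x, y, XI, YI, a) @ Ainv @ vI

print(vh(-1.0, -0.75))              # at a boundary node: zero to roundoff
print(vh(h / 2, -1.0))              # between two boundary nodes: non-zero
```

The second value is visibly non-zero (compare the L_∞ test column of Table 1), while the first vanishes to machine precision, confirming that nodal enforcement does not produce v^h = 0 on ∂Ω_g.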

Table 1: Norms of error for boundary conditions imposed by test and trial functions, p = 1, varying h.

              L_2(∂Ω_g)                       L_∞(∂Ω_g)
h        test      rate      trial     rate      test      trial
0.5000   0.01821   -         0.03615   -         0.03516   0.08443
0.2500   0.01014   0.84513   0.02137   0.75856   0.03645   0.10125
0.1250   0.00523   0.95507   0.01119   0.93372   0.03630   0.10495
0.0625   0.00262   0.99535   0.00573   0.96561   0.03628   0.10688

Next, the same setup is tested with p = 2 and p = 3, since a "linear" error occurs for the previous test and a linear basis was employed. The same norms are computed, shown in Tables 2 and 3, respectively, for the two cases. Again an O(h) error is observed, and it is seen that this error is apparently independent of the order of approximation. Later, it will be shown that this error can be directly related to the error in the energy norm of the problem, which will limit the rate of convergence for higher order (p > 1) approximations. This will then be confirmed numerically.

Table 2: Norms of error for boundary conditions imposed by test and trial functions, p = 2, varying h.

              L_2(∂Ω_g)                       L_∞(∂Ω_g)
h        test      rate      trial     rate      test      trial
0.5000   0.01105   -         0.01935   -         0.02249   0.05458
0.2500   0.00352   1.65251   0.00640   1.59508   0.01476   0.03715
0.1250   0.00172   1.03368   0.00330   0.95787   0.01441   0.03723
0.0625   0.00086   1.00177   0.00176   0.90788   0.01440   0.03990

Table 3: Norms of error for boundary conditions imposed by test and trial functions, p = 3, varying h.

              L_2(∂Ω_g)                        L_∞(∂Ω_g)
h        test      rate       trial     rate       test      trial
0.5000   0.00666   -          0.00975   -          0.01241   0.02231
0.2500   0.01016   -0.60978   0.01614   -0.72816   0.04419   0.09271
0.1250   0.00317   1.68073    0.00702   1.20050    0.02588   0.07773
0.0625   0.00163   0.95634    0.00353   0.99222    0.02653   0.07701

Finally, as a test, the kernel measure a is varied, with p = 1 and h = 1/4 fixed; the results are shown in Table 4. One can first observe that if a ≈ 1 then the error (not shown to full significant digits) is machine precision; in this case the RK approximation closely resembles a bilinear finite element discretization. Then, as the kernel measure increases, the error on the boundary increases as well. It is generally expected that in the solution of PDEs, increasing the measure of an approximation will increase the accuracy of the solution; however, this is not observed in practice, and an "optimal" value is instead found in meshfree methods [25]. The increasing error on the boundary suggests that there exist two competing mechanisms: increasing error with increasing a due to failure to satisfy the requirements on the test functions, and increasing accuracy of the approximation with increasing a.

Table 4: Norms of error for boundary conditions imposed by test and trial functions, h = 1/4, p = 1, varying a.

         L_2(∂Ω_g)           L_∞(∂Ω_g)
a        test      trial     test      trial
1.01     0.00000   0.00000   0.00000   0.00000
1.50     0.00118   0.00207   0.00592   0.01363
2.00     0.00483   0.00873   0.01821   0.04438
2.50     0.00863   0.01693   0.03007   0.08018
3.00     0.01014   0.02137   0.03645   0.10125
3.50     0.01085   0.02303   0.04106   0.11543
4.00     0.01207   0.02563   0.04533   0.13379

2.3.2 Trial function construction

Strong enforcement at boundary nodes, u^h(x_I) = g(x_I), is also typically introduced, and in an attempt to construct S_g ⊂ H^1_g, the following approximation is employed:

u^h(x) = Σ_{I∈η\η_g} Ψ̂_I^{[p]}(x) u_I + g^h(x),    (16)

where

g^h(x) = Σ_{I∈η_g} Ψ̂_I^{[p]}(x) g_I,    (17)

the values {u_I}_{I∈η\η_g} are the trial function coefficients, and g_I ≡ g(x_I) is the prescribed value of g(x) at an essential boundary node x_I ∈ S_g. Because of the properties (12) and (13), the trial functions verify u^h(x_I) = g(x_I) ∀ I ∈ η_g.

While the essential boundary conditions for the trial functions are verified at nodal locations, the condition u^h = g is again not enforced between the nodes. Figure 3 depicts a linear function prescribed as g(x) = x + 2y and approximated by (17) using the same discretization that was employed for the test function. Again it can be seen that along the boundary, the solution is collocated only at nodal points. As shown in Table 1, the L_2(∂Ω_g) norm of the difference between g and g^h also converges at a rate of approximately one (O(h)), just as for the test function, while the magnitude of the error (in L_∞(∂Ω_g)) also stays roughly the same despite refinement. It should be noted that even though linear bases are employed, the function is not exactly represented, due to the influence of the interior nodes on the value of the meshfree approximation on the essential boundary between nodes. That is, it should be clear from Figure 3 that the RK approximation under the transformation framework does not possess the weak Kronecker delta property.


Figure 3: Approximation g^h(x) in meshfree methods using the transformation method.

Next, p = 2 and p = 3 are tested, with the same norms computed and shown in Tables 2 and 3, respectively. Again an O(h) error is observed, and it is seen that this error in representing the essential boundary conditions is also apparently independent of the order of approximation. The kernel measure a is again varied, with p = 1 and h = 1/4 fixed, and the results are shown in Table 4. Again, for a ≈ 1 the boundary conditions are represented quite well, as the RK approximation simply interpolates the boundary condition in the limit a → 1. Then, as the kernel measure increases, the error on the boundary increases as before.

In the next section, it will be shown that the errors on the boundary in the test and trial functions are directly related to the error in the solution of PDEs. That is, while O(h) in L_2(∂Ω_g), the errors manifest as errors of O(h^2) in L_2(Ω) and O(h) in H^1(Ω), limiting the rate of convergence of the solution.

2.3.3 Error assessment of inconsistencies

As a point of departure in considering the error induced by these inconsistencies, we first examine the weighted residual formulation, which is a more general way to arrive at a weak formulation than a potential. The latter point of view will be revisited.

Integrating the product of an arbitrary weight function v and the residual of (8a) over Ω, we have:

(v, ∇²u + s)_Ω = 0.    (18)

Integrating (18) by parts and employing the divergence theorem, one obtains

a(v, u)_Ω = (v, s)_Ω + (v, n · ∇u)_{∂Ω}.    (19)

Per the usual procedures, employing (8b), v = 0 on ∂Ω_g, and the boundary decomposition, we have the weak form (W) in (9), which asks to find u ∈ H^1_g such that for all v ∈ H^1_0 the following equation holds:

a(v, u)_Ω = (v, s)_Ω + (v, h)_{∂Ω_h}.

Provided u is sufficiently smooth, the above equation can be integrated by parts to obtain

(v, ∇²u + s)_Ω + (v, h − ∇u · n)_{∂Ω_h} − (v, ∇u · n)_{∂Ω_g} = 0    (20)

where

(v, ∇u · n)_{∂Ω_g} = ∫_{∂Ω_g} v ∇u · n dΓ.    (21)

Employing the fact that v = 0 on ∂Ω_g, u = g on ∂Ω_g, and the arbitrary nature of v, one obtains the strong form (8); that is, we have the equivalence

(W) ⇔ (S).

However, in meshfree methods it is difficult to achieve v^h = 0 on ∂Ω_g in the Galerkin discretization, as discussed previously. And, in fact, as shown in [12], the transformation method is actually consistent with a weak formulation that only attests to strong enforcement of essential boundary conditions at nodal locations, rather than on the entire essential boundary as in the true strong form.

Either way, to demonstrate one significant consequence of employing (9), consider the following relation, found by using Green's first identity and the conditions in (8):

a(v^h, u)_Ω = −(v^h, ∇²u)_Ω + (v^h, n · ∇u)_{∂Ω} = (v^h, s)_Ω + (v^h, h)_{∂Ω_h} + (v^h, n · ∇u)_{∂Ω_g}.    (22)

Subtracting (22) from the Galerkin equation (11) gives

a(v^h, u^h − u)_Ω = −(v^h, n · ∇u)_{∂Ω_g},    (23)

which is the relation given in [30], and demonstrates that if v^h ≠ 0 on ∂Ω_g, Galerkin orthogonality is lost. Using this relation, it can easily be shown that the best approximation property no longer holds, i.e., the minimum error in the norm induced by a(·,·) is not obtained by the Galerkin solution. One immediate consequence is that the patch test will fail.
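The loss of the best approximation property can be spelled out with a standard Cea-type argument. The following is a brief sketch under the present setting, with e = u − u^h denoting the solution error:

```latex
% For any w^h in the trial space, v^h = w^h - u^h acts as a discrete test
% function (it vanishes at the nodes on the essential boundary). Then
a(e, e)_{\Omega} = a(e,\, u - w^h)_{\Omega} + a(e,\, w^h - u^h)_{\Omega} .
% If Galerkin orthogonality held, the second term would vanish and the
% Cauchy-Schwarz inequality would yield the best approximation property
a(e, e)_{\Omega}^{1/2} \le a(u - w^h,\, u - w^h)_{\Omega}^{1/2}
  \qquad \forall\, w^h .
% By (23), however, a(e, w^h - u^h)_{\Omega} equals a boundary term on
% \partial\Omega_g that does not vanish when the discrete test functions
% are non-zero between the nodes, so the Galerkin solution no longer
% minimizes the error in the norm induced by a(\cdot,\cdot).
```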

Now, as discussed in [30], the left-hand side is bounded by a(u^h − u, u^h − u)^{1/2}_Ω. Since the discrepancy on the boundary induced by the inadmissibility of the test functions has been numerically observed to be O(h), one should expect an O(h) error in the energy norm of the problem. This will be confirmed numerically in the next subsection.

Remark 1. To further elucidate the failure of the patch test, consider the viewpoint of variational consistency presented in [10]. Starting from (9), and following [10], it can be shown using (5) and (8) that the requirement for obtaining an exact solution u^{[p]} of order p using the traditional weak formulation is

a⟨v^h, u^{[p]}⟩_Ω = −⟨v^h, ∇²u^{[p]}⟩_Ω + ⟨v^h, n · ∇u^{[p]}⟩_{∂Ω_h}    (24)

where a⟨·,·⟩_Ω, ⟨·,·⟩_Ω, and ⟨·,·⟩_{∂Ω_h} denote the quadrature versions of a(·,·)_Ω, (·,·)_Ω, and (·,·)_{∂Ω_h}, respectively. However, using integration by parts with sufficiently high-order (e.g., machine precision) quadrature, it is apparent that

a⟨v^h, u^{[p]}⟩_Ω ≈ −(v^h, ∇²u^{[p]})_Ω + (v^h, n · ∇u^{[p]})_{∂Ω} ≠ −(v^h, ∇²u^{[p]})_Ω + (v^h, n · ∇u^{[p]})_{∂Ω_h}    (25)

and a patch test will fail unless v^h = 0 on ∂Ω_g. That is, no matter how high-order the quadrature (or even with exact integration), one will not be able to pass the patch test.

2.3.4 Numerical assessment of the order of errors in boundary value problems

To examine the effect of these inconsistencies on the numerical solution of PDEs, and to verify the assertions made in the previous section, a few patch tests are first performed, with the solution obtained using the transformation method.

Twenty by twenty Gaussian quadrature per background cell is used for domain integration over a two-dimensional domain Ω, with twenty Gauss points on each cell boundary intersecting ∂Ω for integration of boundary terms. Gauss cells are coincident with the nodal spacing such that each cell is associated with four nodes. This "overkill" quadrature is employed to avoid the effect of numerical integration (which has a strong effect on solution accuracy and convergence, cf. [10, 14]) and isolate the issue of boundary condition enforcement. That is, the twenty by twenty Gauss integration employed is sufficient to eliminate errors due to quadrature [10], and any remaining error should be due to other variational crimes. For the test cases below, the only inconsistency present is the inability to satisfy the requirements on the test and trial functions in the weak form [30]. Cubic B-spline kernels are employed for the kernel function in the RK approximation. Unless otherwise stated, these parameters are employed throughout this manuscript.
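The tensor-product rule just described can be sketched as follows. This is an illustrative snippet assuming NumPy's Gauss-Legendre routine; a 20-point rule per direction is exact for polynomials up to degree 39 on each cell:

```python
import numpy as np

def gauss_cell(x0, x1, y0, y1, n=20):
    # n x n tensor-product Gauss-Legendre points and weights on the
    # rectangular background cell [x0, x1] x [y0, y1].
    p, w = np.polynomial.legendre.leggauss(n)       # rule on [-1, 1]
    px = 0.5 * (x1 - x0) * p + 0.5 * (x1 + x0)      # affine map to the cell
    py = 0.5 * (y1 - y0) * p + 0.5 * (y1 + y0)
    wx = 0.5 * (x1 - x0) * w
    wy = 0.5 * (y1 - y0) * w
    X, Y = np.meshgrid(px, py)
    W = np.outer(wy, wx)                            # combined 2D weights
    return X.ravel(), Y.ravel(), W.ravel()

# Sanity check on one cell of a 0.25-spaced grid: integrate x^2 y^2 exactly
X, Y, W = gauss_cell(-1.0, -0.75, -1.0, -0.75)
val = np.sum(W * X**2 * Y**2)
exact = (((-0.75)**3 - (-1.0)**3) / 3.0) ** 2       # separable exact integral
print(abs(val - exact))                             # machine precision
```

One rule of this kind per background cell, with cells coincident with the nodal spacing, reproduces the domain integration described above; boundary terms use the corresponding one-dimensional rule on each cell edge intersecting ∂Ω.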

Consider the Poisson problem (8) on the domain ¯

Ω=[−1,1] ×[−1,1] with the pure essential boundary condition

∂Ωg=∂Ω. First, let the prescribed body force and boundary conditions be consistent with the linear solution

u= 0.1x+ 0.3y:

u= 0.1x+ 0.3yon ∂Ωg,(26a)

s= 0 in Ω.(26b)

The problem is solved with linear basis, which can exactly represent the solution, and the “overkill” quadrature employed should, according to conventional wisdom, result in passing the patch test. The errors in the L2(Ω) norm and H1(Ω) semi-norm are shown in Figure 4. First, it can be seen that the patch tests are indeed not passed, which is attributable to the errors in constructing the proper approximation spaces, since there are no other

variational crimes committed. It is also seen that, through refinement of the discretization (decreasing h), the error induced by the inconsistency in the boundary conditions on the test and trial functions manifests as O(h²) and O(h) in the L2(Ω) norm and H1(Ω) semi-norm, respectively. That is, the errors reduce with refinement, at

a rate consistent with employing linear basis. One may thus expect that these errors will have no inﬂuence on the

convergence rates in the solution of PDEs with linear basis, which will be conﬁrmed later.
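The rates quoted here (and in the figure legends throughout) are slopes on log–log plots of error versus h; a minimal sketch of estimating such a rate by least squares (illustrative helper, not from the paper):

```python
import math

def convergence_rate(hs, errors):
    """Least-squares slope of log(error) vs. log(h): the observed rate."""
    xs = [math.log(h) for h in hs]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Errors decaying as O(h^2) yield an observed rate near 2
hs = [0.2, 0.1, 0.05, 0.025]
errs = [3.0 * h**2 for h in hs]
```

Applied to the sequence of errors from uniform refinements, this reproduces the indicated rates such as the 2.0 and 1.0 above.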

Figure 4: Norms of error for transformation method in linear patch test of Poisson problem, rate of convergence indicated. [Plots of log(L2 error) and log(semi-H1 error) vs. log(h); indicated rates 2.0 and 1.0, respectively.]

Next, consider a quadratic patch test with quadratic basis, which should, according to conventional wisdom, also result in a solution with machine-precision error when high-order quadrature is employed. Here the quadratic solution u = 0.1x + 0.3y + 0.8x² + 1.2xy + 0.6y² is considered. The following conditions result in this solution:

u = 0.1x + 0.3y + 0.8x² + 1.2xy + 0.6y² on ∂Ω_g, (27a)

s = 2.8 in Ω. (27b)

The errors in the L2(Ω) norm and H1(Ω) semi-norm are shown in Figure 5; here, again, it can be seen that the inconsistent enforcement of boundary conditions results in errors of O(h²) and O(h), respectively. That is, the errors again decrease at a rate consistent with “linear” accuracy, despite the fact that higher-order accurate basis functions

are employed. One may then expect that these errors will limit the order of convergence in the solution of PDEs,

which will be conﬁrmed later.

Figure 5: Norms of error for transformation method in quadratic patch test of Poisson problem, rate of convergence indicated. [Plots of log(L2 error) and log(semi-H1 error) vs. log(h); indicated rates 2.0 and 1.0, respectively.]

To conclude, the patch test results indicate that the error due to the inability to construct proper approximation spaces manifests as errors of linear order in the solution of PDEs.

To examine the possible, and now expected, effect on convergence rates, consider (8) with the source term and pure essential boundary ∂Ω_g = ∂Ω on the domain Ω̄ = [0,1] × [0,1]:

g(x, 0) = sin(πx), g(x, 1) = g(0, y) = g(1, y) = 0 on ∂Ω_g, (28a)

s = 0 in Ω. (28b)

The exact solution of this problem is high order [32]:

u = {cosh(πy) − coth(π) sinh(πy)} sin(πx). (29)

Linear, quadratic, and cubic bases are employed with the transformation method, with uniform refinements of the domain. Various normalized support sizes (denoted “a” in the figure legends) are employed to examine the effect of varying the measure of Φ_a(x − x_I), as it is well known that the approximation with linear basis degenerates to linear finite elements as the normalized measure a approaches unity. Thus, larger values of a are expected to show more pronounced error due to boundary condition enforcement, since finite elements have little to no difficulty in constructing proper approximation spaces, or at least ones which do not induce significant solution errors.

Figure 6 shows the convergence for linear basis in the L2(Ω) norm and H1(Ω) semi-norm; it can be seen that the

optimal rates of two and one are essentially maintained, regardless of the kernel measure.

Figure 6: Convergence of transformation method with linear basis with various kernel measures a: rates indicated in legend. [L2-norm rates: a = 1.5: 2.04, a = 2.0: 2.09, a = 2.5: 2.44, a = 3.0: 1.97 (optimal: 2). Semi-H1 rates: a = 1.5: 1.06, a = 2.0: 1.48, a = 2.5: 1.17, a = 3.0: 0.98 (optimal: 1).]

For quadratic basis, it can be seen in Figure 7 that these same linear rates are also generally obtained, yet the

optimal rates for quadratic basis should be three and two for the L2(Ω) norm and H1(Ω) semi-norm, respectively.

Therefore, optimal rates are not obtained in this case; instead, the solution exhibits linear rather than quadratic accuracy.

Figure 7: Convergence of transformation method with quadratic basis with various kernel measures a: rates indicated in legend. [L2-norm rates: a = 2.5: 2.62, a = 3.0: 2.08, a = 3.5: 1.97, a = 4.0: 1.44 (suboptimal/linear: 2, optimal: 3). Semi-H1 rates: a = 2.5: 1.84, a = 3.0: 1.15, a = 3.5: 0.99, a = 4.0: 0.35 (suboptimal/linear: 1, optimal: 2).]

For the case of cubic basis, shown in Figure 8, it can again be seen that the rates obtained are far lower than expected; the linear rates of two and one are again obtained in most cases for the L2(Ω) norm and H1(Ω) semi-norm, respectively, whereas the optimal convergence rates associated with employing approximations with cubic completeness in displacements are four and three, respectively. Again, the solution exhibits linear accuracy rather than cubic.

Figure 8: Convergence of transformation method with cubic basis with various kernel measures a: rates indicated in legend. [L2-norm rates: a = 3.5: 1.98, a = 4.0: 1.52, a = 4.5: 1.94, a = 5.0: 1.87 (suboptimal/linear: 2, optimal: 4). Semi-H1 rates: a = 3.5: 0.98, a = 4.0: 0.51, a = 4.5: 0.96, a = 5.0: 0.91 (suboptimal/linear: 1, optimal: 3).]

Higher-order bases were also tested but are not shown here for conciseness; the transformation method again provided only linear solution accuracy.

To conclude, the numerical results in this section indicate that the error due to the inability to satisfy the requirements of the conventional weak form (9) is characterized as O(h²) error in the L2(Ω) norm and O(h) error in the H1(Ω) semi-norm, limiting the rate of convergence for bases higher than linear.

It seems that, owing to the popular choice of linear basis in meshfree approximations over the past two decades, this observation has been overlooked or hardly reported in the literature. To the best of the authors' knowledge, only [8] reports results with quadratic basis and strong enforcement of boundary conditions (using the RK approximation with interpolation property), where the same trend was observed.

3 Consistent weak forms for meshfree methods

3.1 Consistent weak form I: A consistent weak formulation for inadmissible test func-

tions

A consistent weak formulation for test functions inadmissible in the conventional weak form can be derived by considering the possibility of v^h ≠ 0 on ∂Ω_g in between nodes. First, consider the weighted residual of (8), as before:

(v, ∇²u + s)_Ω = 0.

Integrating (18) by parts and employing the divergence theorem, one obtains

a(v, u)_Ω = (v, s)_Ω + (v, n·∇u)_∂Ω.

Now, by employing (8b) and allowing v ≠ 0 on ∂Ω_g, a consistent weak form which we denote (W¹_C) is arrived at, which asks to find u ∈ H¹_g such that for all v ∈ H¹, the following equation holds:

a(v, u)_Ω − (v, n·∇u)_∂Ω_g = (v, s)_Ω + (v, h)_∂Ω_h (30)

where the requirement v ∈ H¹_0 has been relaxed to simply v ∈ H¹, where H¹ = H¹(Ω), which allows the employment of (15) for the test function without committing a variational crime.

It is important to note that when (30) is integrated by parts, it is straightforward to show that the weak form (30) attests to (8), and the equivalence of the weak form and the strong form is verified, that is, W¹_C ⇔ S:

(v, ∇²u + s)_Ω + (v, h − ∇u·n)_∂Ω_h = 0. (31)

Since v in the above is arbitrary and u ∈ H¹_g, the strong form (8) is recovered.

The corresponding Galerkin approximation seeks u^h ∈ S_g, S_g ⊂ H¹_g, such that for all v^h ∈ S, S ⊂ H¹, the following holds:

a(v^h, u^h)_Ω − (v^h, n·∇u^h)_∂Ω_g = (v^h, s)_Ω + (v^h, h)_∂Ω_h (32)

where v^h is constructed from (15) and u^h is constructed from (16).

In this formulation, we have relaxed the condition on the test function, but still attempt to construct approximation spaces that satisfy the usual conditions. That is, the present weak formulation (W¹_C) can be considered a consistent way to employ the condition v^h = 0 on ∂Ω_g strongly at nodes.

So far, the inconsistency in the construction of the trial function has been neglected, yet the numerical examples in Section 5 show that this has little consequence on solution accuracy.

Remark 2 Subtracting (22) from (30) gives

a(v^h, u^h − u)_Ω = 0 (33)

and Galerkin orthogonality is restored (compare to (23)). Recalling that the left-hand side is bounded by a(u^h − u, u^h − u)^{1/2}_Ω, this indicates that the limiting error on the boundary in (23) will be released and the proper convergence rates associated with the approximation space should be achieved.

Remark 3 From a potential point of view, it is easy to show that (33) is equivalent to the minimization of the following energy functional for the present problem:

Π_{(W¹_C)}(u^h) = (1/2) a(u^h − u, u^h − u)_Ω (34)

and the best approximation property is also restored. This relation will also be useful for comparison purposes later.

Remark 4 The consistent weighted residual procedure generalizes easily to various boundary value problems (see

Appendix A).

3.2 Consistent weak form II: A consistent weak formulation for inadmissible test and

trial functions with symmetry

The employment of (30) yields a non-symmetric stiffness matrix, which is often undesirable. In addition, unless the trial functions can satisfy the essential boundary conditions exactly, we do not have W¹_C ⇔ S, and strictly speaking W¹_C is still not consistent with a meshfree discretization.

To address these two issues, consider a more general form of the weighted residual formulation with weights v_Ω on Ω and v_g on ∂Ω_g:

(v_Ω, ∇²u + s)_Ω + (v_g, u − g)_∂Ω_g = 0. (35)

Various weights can be chosen; however, the choice v_Ω = v and v_g = n·∇v yields a symmetric weak form, as will be shown. Further impetus is provided by the fact that the flux term n·∇u is the “work-conjugate” to u in terms of the potential associated with (8), and yields consistent “units” for the problem at hand. With this choice, (35) is expressed as

(v, ∇²u + s)_Ω + (n·∇v, u − g)_∂Ω_g = 0. (36)

Integrating (36) by parts and employing the natural boundary condition (8b), one obtains a symmetric weak form that we denote (W²_C), which asks to find u ∈ H¹ such that for all v ∈ H¹, the following equation holds:

a(v, u)_Ω − (v, n·∇u)_∂Ω_g − (n·∇v, u)_∂Ω_g = (v, s)_Ω + (v, h)_∂Ω_h − (n·∇v, g)_∂Ω_g. (37)

The above constitutes a complete relaxation to simply requiring v ∈ H¹ and u ∈ H¹, and now both (15) and (16) can be employed without committing a variational crime.

Applying integration by parts to a(·,·) in (37) yields

(v, ∇²u + s)_Ω + (v, n·∇u − h)_∂Ω_h + (n·∇v, u − g)_∂Ω_g = 0 (38)

where it is immediately apparent that the strong form of the problem can be recovered, hence (W²_C) ⇔ (S).

The weak form (W²_C) is the same one identified in reference [26], and can also be derived from a variational viewpoint. Here, the key difference between this work and that in [26] is that the weak form is employed with (15) and (16) so as to rectify the deficiencies of the standard use of these approximations. We also note that employing (37) alone does not guarantee stability [17].

The corresponding Galerkin approximation seeks u^h ∈ S such that for all v^h ∈ S, S ⊂ H¹, the following holds:

a(v^h, u^h)_Ω − (v^h, n·∇u^h)_∂Ω_g − (n·∇v^h, u^h)_∂Ω_g = (v^h, s)_Ω + (v^h, h)_∂Ω_h − (n·∇v^h, g)_∂Ω_g (39)

where v^h is again constructed from (15) and u^h from (16). It is easy to see that when a Bubnov–Galerkin approximation is employed, (39) leads to a symmetric system matrix.

With the complete relaxation on test and trial functions, this weak formulation (W²_C) can be considered a consistent way to employ both the conditions v^h = 0 on ∂Ω_g and u^h = g on ∂Ω_g strongly at nodes.

Remark 5 Rather than satisfying Galerkin orthogonality, by employing (22), the Galerkin discretization of the consistent weak form (W²_C) satisfies the following:

a(v^h, u^h − u)_Ω = (n·∇v^h, u^h − g)_∂Ω_g + (v^h, n·∇(u^h − u))_∂Ω_g. (40)

Note that if u^h = g on ∂Ω_g, then the standard orthogonality relation is recovered.

Remark 6 The relation (40) leads to the insight that a Galerkin discretization of (W²_C) minimizes the error in the norm induced by a(·,·), augmented by the “work” of the error on the essential boundary (compare to (34)):

Π_{(W²_C)}(u^h) = (1/2) a(u^h − u, u^h − u)_Ω − (u^h − u, n·∇(u^h − u))_∂Ω_g. (41)

That is, (W²_C) can be obtained by minimization of the above potential with respect to u^h. This illuminates the possibility of balancing errors on the domain and boundary, following [19], although the numerical examples in Section 5 indicate that this is likely not necessary since optimal rates are obtained; that is, with (41), the order of errors due to the imposition of conditions on the domain and boundary may already be balanced.

Remark 7 The potential associated with (39) can also be stated in a more conventional manner:

Π_{(W²_C)}(u^h) = (1/2) a(u^h, u^h)_Ω − (u^h, s)_Ω − (u^h, h)_∂Ω_h − (u^h − g, n·∇u^h)_∂Ω_g (42)

where it can be seen that the last term accounts for the work done by the error on the essential boundary. Thus, considering the possibility of error on the boundary is one way to arrive at a consistent weak form. The other is to minimize the error in both the domain and the boundary, in terms of appropriate work-conjugates, as in (41).

Remark 8 This weak form can also be generalized to other boundary value problems; for a discussion, refer to the Appendix.

Remark 9 The employment of (W²_C) or (W¹_C) is consistent with the variationally consistent framework proposed in [10], which requires that the weak form attest to the strong form. In contrast, the pure transformation method does not.

In summary, two weak forms have been developed, which are consistent with the inability of an approximation to

meet the requirements of the conventional weak form. The ﬁrst considers the fact that the weight function is possibly

non-zero on the essential boundary, but that the essential boundary conditions still hold strongly. This results in a

non-symmetric stiﬀness matrix, but is more consistent with meshfree approximations. This weak form attests to the

strong form, and is shown to restore Galerkin orthogonality and the best approximation property. The second weak

form relaxes the requirements on both the test and trial functions, and they only need to be constructed to possess

square-integrable derivatives. The particular form taken here results in a symmetric system, at least for the model

problem at hand (see the Appendix for a brief discussion). This weak form attests to the strong form, and is shown

to satisfy a diﬀerent orthogonality relation, which illuminates that it minimizes the error in the domain in terms of

the energy norm, as well as the error on the boundary in terms of the field variable and its corresponding “flux” (or work-conjugate) term.


4 Numerical procedures

In this section, the matrix forms for the consistent weak forms are given and boundary condition enforcement

procedures are discussed. As a starting point, let us ﬁrst deﬁne terms common to the weak formulations discussed:

let ddenote a column vector of {uI}I∈η, ΨIand BIdenote the Ith shape function and the column vector of it’s

derivatives respectively, and let nrepresent the unit normal to ∂Ωgin column vector form. In two dimensions this

yields:

d=

d1

d2

.

.

.

dNP

,BI=ΨI,1

ΨI,2,n=n1

n2.(43)

The following final system of matrix equations is also common to all formulations:

Kd = f (44)

where the system size is N_p × N_p. The above system is purposefully left statically uncondensed, as special procedures are needed to apply boundary conditions in meshfree methods. These techniques are discussed in Section 4.4.

4.1 Conventional weak formulation

Under the conventional weak formulation (9), the scalar entries of K and f in (44) are computed as

K_IJ = ∫_Ω B_I^T(x) B_J(x) dΩ, (45a)

f_I = ∫_Ω Ψ_I(x) s dΩ + ∫_{∂Ω_h} Ψ_I(x) h dΓ. (45b)

4.2 Consistent weak form I (CWF I)

For consistent weak form I (32), the scalar entries of K and f in (44) are computed as

K_IJ = ∫_Ω B_I^T(x) B_J(x) dΩ − ∫_{∂Ω_g} Ψ_I(x) n^T B_J(x) dΓ, (46a)

f_I = ∫_Ω Ψ_I(x) s dΩ + ∫_{∂Ω_h} Ψ_I(x) h dΓ. (46b)

Comparing (46) to (45), it can be seen that only one new term is added to the stiﬀness matrix of the system. Later,

it will be seen that the addition of this one term results in a drastic increase in solution accuracy and is able to

restore optimal convergence rates. Indeed, the main problem with the inability to construct proper subspaces in the

conventional weak formulation is due to the term in (23), which this weak form corrects for.
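As an illustration of the assembly in (46), the following sketch builds K and f for a 1D Poisson analogue on [0, 1], where the essential-boundary integral reduces to point terms with outward normal n = −1 at x = 0 and n = +1 at x = 1. All names are illustrative, and linear hat functions stand in for the meshfree shape functions purely to keep the sketch self-contained (they are not the RK functions of the paper):

```python
import numpy as np

def make_hats(nodes):
    """Linear hat functions on a uniform grid; placeholders for Psi_I."""
    n, h = len(nodes), nodes[1] - nodes[0]
    def shape(I, x):
        return np.interp(x, nodes, np.eye(n)[I])
    def dshape(I, x):
        out = np.zeros_like(x, dtype=float)
        for k, xv in enumerate(x):
            el = min(int(xv / h), n - 2)   # element containing xv
            if I == el:
                out[k] = -1.0 / h
            elif I == el + 1:
                out[k] = 1.0 / h
        return out
    return shape, dshape

def assemble_cwf1(nodes, shape, dshape, s, nquad=4):
    """Assemble K and f per (46): domain terms by Gauss quadrature over
    background cells, minus the boundary point terms Psi_I * n * Psi'_J."""
    n = len(nodes)
    K, f = np.zeros((n, n)), np.zeros(n)
    pts, wts = np.polynomial.legendre.leggauss(nquad)
    for a, b in zip(nodes[:-1], nodes[1:]):
        x = 0.5 * (b - a) * pts + 0.5 * (a + b)
        w = 0.5 * (b - a) * wts
        for I in range(n):
            f[I] += np.sum(w * shape(I, x) * s(x))
            for J in range(n):
                K[I, J] += np.sum(w * dshape(I, x) * dshape(J, x))
    for xb, nb in ((nodes[0], -1.0), (nodes[-1], 1.0)):
        xq = np.array([xb])
        for I in range(n):
            for J in range(n):
                K[I, J] -= shape(I, xq)[0] * nb * dshape(J, xq)[0]
    return K, f
```

Strong enforcement of u at the two boundary nodes (e.g., by row replacement, as discussed in Section 4.4) then closes the system, and a linear patch solution such as u = 0.1 + 0.3x is reproduced to machine precision.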

4.3 Consistent weak form II (CWF II)

For the discretization of consistent weak form II (39), the scalar entries of K and f in (44) are computed as

K_IJ = ∫_Ω B_I^T(x) B_J(x) dΩ − ∫_{∂Ω_g} B_I^T(x) n Ψ_J(x) dΓ − ∫_{∂Ω_g} Ψ_I(x) n^T B_J(x) dΓ, (47a)

f_I = ∫_Ω Ψ_I(x) s dΩ + ∫_{∂Ω_h} Ψ_I(x) h dΓ − ∫_{∂Ω_g} B_I^T(x) n g dΓ. (47b)

In the above, it can be seen that compared to (45), both the stiﬀness matrix and the force vector contain new terms.

For the stiﬀness matrix, the two additional terms are the transpose of each other, so that only one of these matrices

needs to be constructed for the analysis (or just the upper triangle of the entire system matrix). In addition, since

the original stiﬀness matrix is symmetric, the resulting system matrix will also be symmetric, and eﬃcient solvers

can be employed with this method.
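The transpose structure of the two stiffness corrections in (47a) can be sketched as follows, for a single essential boundary point in 1D with the shape and derivative values at that point supplied directly (the helper name and the sample arrays are illustrative, not from the paper):

```python
import numpy as np

def apply_cwf2_terms(K, f, psi_b, b_b, n_b, g_b):
    """Add the CWF II boundary terms of (47) for one essential boundary
    point: psi_b[I] = Psi_I(x_b), b_b[I] = Psi'_I(x_b), n_b = outward
    normal (+/-1 in 1D), g_b = prescribed value g(x_b)."""
    C = np.outer(b_b * n_b, psi_b)   # B_I^T n Psi_J contribution
    K -= C + C.T                     # the two corrections are transposes
    f -= b_b * n_b * g_b             # -(B_I^T n g) load term
    return K, f

# A symmetric conventional stiffness stays symmetric under CWF II
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
f = np.zeros(3)
psi_b = np.array([1.0, 0.0, 0.0])    # shape function values at x_b
b_b   = np.array([-1.0, 1.0, 0.0])   # derivative values at x_b
K2, f2 = apply_cwf2_terms(K.copy(), f.copy(), psi_b, b_b, -1.0, 0.5)
```

Only the rank-one correction C need be formed; its transpose is added for free, and the symmetry of the assembled system is preserved as claimed above.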


4.4 Enforcement of boundary conditions

Procedurally, due to the nature of the approximations involved, it is uncommon to employ the formal deﬁnitions

of test and trial approximations in (15) and (16) directly in the weak form for meshfree methods. Rather, the full

systems are formed with the RK approximation defined over all nodes (1), leading to (44), and boundary conditions are applied afterward. That is to say, the system in (44) represents a statically uncondensed system and cannot be solved

directly.

Instead, two favorable possibilities to enforce boundary conditions on the uncondensed systems are recommended

here: (1) meshfree transformation procedures can be applied—the reader is referred to [13] for more details, where

a simple and convenient row-swap implementation of the transformation method is presented; or (2) straightforward

static condensation with direct enforcement of boundary conditions is possible (equivalent of course to using (15) and

(16) directly in the weak form), provided either singular kernels [12] or shape functions with interpolation property [8]

are introduced for nodes that lie on the essential boundary.
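A minimal sketch of option (2), direct enforcement with static condensation on the uncondensed system (44), assuming boundary nodes whose coefficients coincide with the prescribed values (as with singular kernels [12] or interpolating shape functions [8]); the helper name is illustrative:

```python
import numpy as np

def enforce_essential(K, f, bc):
    """Direct enforcement on the uncondensed system K d = f: for each
    (node, value) pair, condense the known coefficient to the right-hand
    side, then replace the row by the constraint d_I = g_I."""
    K, f = K.copy(), f.copy()
    for I, g in bc.items():
        f -= K[:, I] * g      # move the known column to the RHS
        K[:, I] = 0.0
        K[I, :] = 0.0
        K[I, I] = 1.0
        f[I] = g
    return K, f

# 1D Laplacian example: enforce d_0 = 1 and d_3 = 4, then solve
K = 2.0 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
f = np.zeros(4)
Kc, fc = enforce_essential(K, f, {0: 1.0, 3: 4.0})
d = np.linalg.solve(Kc, fc)
```

Zeroing both the row and the column (rather than the row alone) keeps a symmetric system symmetric, which matters when CWF II is used with symmetric solvers.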

5 Numerical examples

For the following examples, the parameters of the RK approximation and the numerical integration method have

been discussed in Section 2.3.3 in detail, but are brieﬂy recalled here: twenty-by-twenty Gaussian integration per

background cell is employed with cells aligned with uniformly distributed nodes. Cubic B-spline kernels are used in

the RK approximation, with varying nodal spacing denoted h, kernel measures normalized with respect to h denoted a, and order of bases denoted p.

Three main methods based on the transformation method [12] are compared:

•The transformation method (denoted as T)

•The transformation method with consistent weak form I (denoted as T+CWF I)

•The transformation method with consistent weak form II (denoted as T+CWF II)

Later, the boundary singular kernel method [12] is employed to complete the study and demonstrate that the approach works with other types of strong enforcement, with permutations denoted following the same convention:

•The boundary singular kernel method (denoted as B)

•The boundary singular kernel method with consistent weak form I (denoted as B+CWF I)

•The boundary singular kernel method with consistent weak form II (denoted as B+CWF II)

The errors in the L2(Ω) norm and the H1(Ω) semi-norm are assessed, computed using the same quadrature rules as those used to form the system matrices.
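The error measures just described can be sketched in 1D as follows, with hypothetical callables for the numerical and exact solutions and their derivatives (2D is analogous with tensor-product Gauss points):

```python
import numpy as np

def error_norms(cells, uh, u, duh, du, nquad=20):
    """Quadrature approximation of the L2 norm and H1 semi-norm of the
    error u^h - u, using Gauss points on each background cell."""
    pts, wts = np.polynomial.legendre.leggauss(nquad)
    l2 = h1 = 0.0
    for a, b in cells:
        x = 0.5 * (b - a) * pts + 0.5 * (a + b)
        w = 0.5 * (b - a) * wts
        l2 += np.sum(w * (uh(x) - u(x)) ** 2)
        h1 += np.sum(w * (duh(x) - du(x)) ** 2)
    return np.sqrt(l2), np.sqrt(h1)

# Example: uh = x^2 against u = x on [0, 1]
cells = [(0.0, 0.5), (0.5, 1.0)]
e_l2, e_h1 = error_norms(cells, lambda x: x**2, lambda x: x,
                         lambda x: 2 * x, lambda x: np.ones_like(x))
```

Using the same Gauss rule as the assembly (here twenty points per cell, matching the text) makes the reported errors quadrature-consistent with the discrete solution.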

5.1 Patch test for the 2D Poisson equation

Consider the Poisson problem (8) on the domain Ω̄ = [−1,1] × [−1,1] with the pure essential boundary condition ∂Ω_g = ∂Ω. Two cases for the patch test are considered: linear and quadratic.

As previously discussed, the “overkill” quadrature in the following numerical examples should, by conventional wisdom, result in passing the patch tests. For an in-depth discussion of Galerkin meshfree formulations and patch tests see [10], where it was shown that the residual of the error in numerical integration drives the error in patch tests. Thus, “overkill” quadrature drives the residual to machine precision in the limit, resulting in the method being variationally consistent (passing the patch test) to machine precision. However, in [10] it was also discussed that the weak form must also attest to the strong form, which is not the case for the pure transformation method.

5.1.1 Linear solution

Let the prescribed body force and boundary conditions be consistent with an exact linear solution u = 0.1x + 0.3y (see (26) for the conditions).

The errors in the L2(Ω) norm and H1(Ω) semi-norm for the three versions of the transformation method with linear basis are shown in Figure 9. It is seen that both the proposed T+CWF I and T+CWF II are able to pass the linear patch test (with machine-precision error). The transformation method fails to pass the patch test and, meanwhile, shows error associated with linear accuracy, as discussed previously.

Figure 9: Norms of error for various methods in linear patch test: rates for T indicated. [Plots of log(L2 error) and log(Semi-H1 error) vs. log(h) for T, T+CWF I, and T+CWF II; T converges at rates 2.00 and 1.00, while T+CWF I and T+CWF II sit at machine precision.]

5.1.2 Quadratic solution

For the quadratic patch test, the following quadratic solution is considered: u = 0.1x + 0.3y + 0.8x² + 1.2xy + 0.6y² (see (27) for the associated prescribed conditions).

Here, quadratic basis is introduced into the RK approximation; the L2(Ω) norm and H1(Ω) semi-norm of the error are shown in Figure 10. Again, it is seen that T+CWF I and T+CWF II are able to pass the patch test (with machine-level error), while the transformation method does not. Again, the error due to the inconsistent weak form tends to manifest as linear, even though quadratic basis is employed.

Figure 10: Norms of error for various methods in quadratic patch test of Poisson problem: rates for T indicated. [Plots of log(L2 error) and log(Semi-H1 error) vs. log(h) for T, T+CWF I, and T+CWF II; T converges at rates 2.00 and 1.00, while T+CWF I and T+CWF II sit at machine precision.]

For both tests, it can be noted that the failure and success of these methods in passing the patch test is consistent with the meshfree patch test results reported in [20], where test functions were identically zero on the essential boundary. Additionally, the ability of both T+CWF I and T+CWF II to pass the patch test, and the failure of the transformation method alone to do so, is consistent with the orthogonality relations (23), (33), and (40), where the resulting best approximation properties, or lack thereof, indicate which methods should or should not pass the patch tests. Thus the results of the patch tests are consistent with the discussions in Section 2.

5.2 Poisson equation with high-order solution

Now consider the Poisson problem (8) on Ω̄ = [0,1] × [0,1], with the source term and pure essential boundary condition the same as in (28):

g(x, 0) = sin(πx), g(x, 1) = g(0, y) = g(1, y) = 0 on ∂Ω_g,

s = 0 in Ω.

The exact solution of this problem is high order:

u = {cosh(πy) − coth(π) sinh(πy)} sin(πx).

In this study, the eﬀect of the three weak forms is examined in terms of convergence rates with respect to varying

the support sizes a, order of basis functions p, and nodal spacing h.

5.2.1 p-reﬁnement and h-reﬁnement

First consider linear, quadratic, cubic, and quartic bases (denoted p = 1, p = 2, p = 3, and p = 4, respectively), with normalized support sizes a = p + 1. h-refinement is performed for each of the bases, starting with an 11 × 11 uniform node distribution. The solution errors in the L2(Ω) norm and H1(Ω) semi-norm for the various bases are plotted in Figures 11 and 12, showing that T+CWF I and T+CWF II can yield optimal convergence rates (p + 1 in L2 and p in semi-H1), while the traditional weak form (T) only yields linear rates (2 in L2 and 1 in semi-H1), regardless of the order of basis. Therefore, the present approach can yield h-refinement with pth-order optimal rates of convergence.

In addition, it can be seen in Figures 11b, 11c, 12b, and 12c that by increasing p, for any given h (with the exception of one case), more accuracy is obtained, yielding the ability to also provide p-refinement. These two features of the present approach are in stark contrast to the results in Figures 11a and 12a, where increasing p does not give consistently more accurate results; in fact, moving from p = 1 to p = 2 provides only marginal improvement in accuracy, while increasing p from two to three and three to four actually provides worse results. Comparing to Tables 1, 2, and 3, it can be inferred that this is due to the additional error in the representation of boundary conditions in the test and trial functions, which decreases from p = 1 to p = 2 and increases from p = 2 to p = 3.

Finally, it can be noted that both T+CWF I and T+CWF II can provide p-reﬁnement and h-reﬁnement with

pth order optimal rates with nearly the same levels of error, and one may select either based on need or preference

(T+CWF I has only one new term, but yields a non-symmetric system, while T+CWF II yields a symmetric system,

but has three additional terms).

Figure 11: Convergence with various bases in the L2 norm: rates indicated in legend. [(a) T: linear 2.44, quadratic 2.08, cubic 1.52, quartic 1.91. (b) T+CWF I: linear 2.50, quadratic 3.16, cubic 4.25, quartic 5.05. (c) T+CWF II: linear 2.50, quadratic 3.17, cubic 4.28, quartic 5.03.]

Figure 12: Convergence with various bases in the H1 semi-norm: rates indicated in legend. [(a) T: linear 1.48, quadratic 1.15, cubic 0.51, quartic 0.95. (b) T+CWF I: linear 1.50, quadratic 1.97, cubic 3.26, quartic 4.09. (c) T+CWF II: linear 1.50, quadratic 1.98, cubic 3.25, quartic 3.87.]

5.2.2 Dilation analysis

The effect of varying normalized support sizes in the proposed method is now examined, since, as shown previously, increased support sizes in the RK approximation can yield different behavior on the essential boundary of the domain for both test and trial functions. In addition, the present test is to show that the previous results were not a special case: window functions and their measure can have an effect on accuracy and convergence rates [25], and even superconvergence can be obtained for special values of window functions [22,23]. Thus, the current permutations of a and p will examine the robustness of the formulation under the variety of free parameters in the RK approximation. For this study, the discretizations and solution technique of the previous example are employed, refining h as before, while varying a and p.

First, linear basis (p = 1) is tested. The errors in the L2(Ω) norm and H1(Ω) semi-norm are plotted in Figures 13 and 14, respectively, for T, T+CWF I, and T+CWF II. First, it can be seen that optimal rates are obtained for all cases of a, for all methods. Also, when comparing to the results for the transformation method alone (T), much lower levels of error are obtained with the present approach: nearly an order of magnitude when a is sufficiently large. The error also decreases monotonically with increasing a; this point will be revisited. Finally, it can be seen that little difference in the solution error is observed between T+CWF I and T+CWF II, as in the previous cases.

Figure 13: Convergence for linear basis (p = 1) with various a in the L2 norm: rates indicated in legend. [(a) T: a = 1.5: 2.04, a = 2.0: 2.09, a = 2.5: 2.44, a = 3.0: 1.97. (b) T+CWF I: a = 1.5: 2.05, a = 2.0: 2.50, a = 2.5: 2.54, a = 3.0: 2.17. (c) T+CWF II: a = 1.5: 2.08, a = 2.0: 2.50, a = 2.5: 2.55, a = 3.0: 2.32. Optimal: 2.]


Figure 14: Convergence for linear basis (p = 1) with various a in the H1 semi-norm: rates indicated in legend. [(a) T: a = 1.5: 1.06, a = 2.0: 1.48, a = 2.5: 1.17, a = 3.0: 0.98. (b) T+CWF I: a = 1.5: 1.07, a = 2.0: 1.50, a = 2.5: 1.51, a = 3.0: 1.14. (c) T+CWF II: a = 1.5: 1.07, a = 2.0: 1.50, a = 2.5: 1.51, a = 3.0: 1.15. Optimal: 1.]

Next, quadratic basis (p = 2) is tested for various values of a; the same error measures are presented in Figures 15 and 16. Here it can be seen that the use of T+CWF I and T+CWF II provides a large improvement in performance over T alone, regardless of the value of a. The proposed methods provide optimal convergence rates consistently, independent of the dilation parameter. Meanwhile, with T alone, consistently worse rates are obtained with increasing kernel measure a. Finally, from the figures, it is starkly apparent that the magnitude of error can be reduced by anywhere from one to two orders of magnitude by employing the proposed techniques.

Figure 15: Convergence for quadratic basis (p = 2) with various a in the L2 norm: rates indicated in legend. [(a) T: a = 2.5: 2.62, a = 3.0: 2.08, a = 3.5: 1.97, a = 4.0: 1.44 (suboptimal/linear: 2). (b) T+CWF I: a = 2.5: 3.36, a = 3.0: 3.16, a = 3.5: 2.88, a = 4.0: 2.90. (c) T+CWF II: a = 2.5: 3.36, a = 3.0: 3.17, a = 3.5: 2.88, a = 4.0: 2.89. Optimal: 3.]


[Figure 16 panels: log(semi-H1 error) vs. log(h) for quadratic basis. Legend rates — (a) T: a = 2.5: 1.84, a = 3.0: 1.15, a = 3.5: 0.99, a = 4.0: 0.35 (suboptimal (linear): 1; optimal: 2); (b) T+CWF I: a = 2.5: 2.25, a = 3.0: 1.97, a = 3.5: 1.82, a = 4.0: 1.81 (optimal: 2); (c) T+CWF II: a = 2.5: 2.25, a = 3.0: 1.98, a = 3.5: 1.82, a = 4.0: 1.81 (optimal: 2).]

Figure 16: Convergence for quadratic basis (p = 2) with various a in the H1 semi-norm: rates indicated in legend.

Finally, cubic (p = 3) basis is tested; the same error measures are presented in Figures 17 and 18 for all cases. Again, the two proposed methods consistently provide optimal convergence rates regardless of the value of a, although in this case the particular value of a has little effect on solution accuracy. On the other hand, the transformation method (T) provides only linear rates, as expected, and the value of a again has little effect. Similar to the last example, it is apparent from Figures 17 and 18 that these techniques provide the ability to reduce the solution error by several orders of magnitude, in this case by three orders, or 99.9%.

[Figure 17 panels: log(L2 error) vs. log(h) for cubic basis. Legend rates — (a) T: a = 3.5: 1.98, a = 4.0: 1.52, a = 4.5: 1.94, a = 5.0: 1.87 (suboptimal (linear): 2; optimal: 4); (b) T+CWF I: a = 3.5: 3.39, a = 4.0: 4.25, a = 4.5: 4.06, a = 5.0: 4.21 (optimal: 4); (c) T+CWF II: a = 3.5: 3.37, a = 4.0: 4.28, a = 4.5: 4.34, a = 5.0: 4.19 (optimal: 4).]

Figure 17: Convergence for cubic basis (p = 3) with various a in the L2 norm: rates indicated in legend.


[Figure 18 panels: log(semi-H1 error) vs. log(h) for cubic basis. Legend rates — (a) T: a = 3.5: 0.98, a = 4.0: 0.51, a = 4.5: 0.96, a = 5.0: 0.91 (suboptimal (linear): 1; optimal: 3); (b) T+CWF I: a = 3.5: 2.96, a = 4.0: 3.26, a = 4.5: 3.21, a = 5.0: 3.25 (optimal: 3); (c) T+CWF II: a = 3.5: 2.96, a = 4.0: 3.25, a = 4.5: 3.35, a = 5.0: 3.09 (optimal: 3).]

Figure 18: Convergence for cubic basis (p = 3) with various a in the H1 semi-norm: rates indicated in legend.

5.2.3 A new concept: a-reﬁnement

From the previous study, it can be noted that increasing the support size tends to yield lower error. This seems counterintuitive, as reported results in the meshfree community indicate an "optimal" dilation (e.g., see [25]); this apparent contradiction motivates the current study.

Here, a fixed nodal spacing of h = 1/10 is employed, while the normalized support a is varied for different values of p. Figure 19 shows the error for linear basis, where it is seen that by increasing a, lower error can be obtained with T+CWF I and T+CWF II. On the other hand, with T alone, the optimal value appears to be a = 2.5, which likely strikes a balance between approximation accuracy and the error due to the inability to construct the proper spaces required of the weak form.

As shown in Figure 20, the trends are similar for quadratic basis. However, this time, increasing a consistently yields larger errors for the transformation method. Meanwhile, for both T+CWF I and T+CWF II, the error is generally reduced monotonically by increasing a.

Finally, the results for cubic basis are presented in Figure 21. Here it is seen that the kernel measure has comparatively little effect on solution accuracy for all three methods, although for the transformation method, increasing the kernel measure still monotonically increases the error. In any case, the proposed methods obtain robust results for any selection of a with cubic basis.

To conclude, with the transformation method alone, there is an optimal value of a for linear basis, while for higher-order approximations, increasing the kernel measure seems to always increase the solution error. For the proposed methods, increasing a for both linear and quadratic basis very consistently yields lower error, while for cubic basis, the solution is relatively unaffected. In this work, we term this effect, the ability to decrease the solution error by increasing the kernel measure, a-refinement. Thus, with the proposed methods, users may have confidence in consistent behavior of meshfree approximations in the Galerkin solution.
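For reference, the convergence rates quoted in the figure legends of this section are the least-squares slopes of log(error) versus log(h). A minimal sketch of this computation, using hypothetical (h, error) data (not values from the paper), is:

```python
import numpy as np

# Hypothetical (h, error) pairs, as might be read off a log-log convergence plot.
h   = np.array([1/4, 1/8, 1/16, 1/32])
err = np.array([2.1e-2, 2.7e-3, 3.4e-4, 4.2e-5])

# The observed convergence rate is the least-squares slope of
# log(err) vs. log(h) -- the number reported in the figure legends.
rate = np.polyfit(np.log10(h), np.log10(err), 1)[0]
print(rate)  # close to 3, e.g. optimal L2 rate for quadratic basis
```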

[Figure 19 plots: log(L2 error) and log(semi-H1 error) vs. kernel measure a (1.5 to 3.5) for T, T+CWF I, and T+CWF II.]

Figure 19: Norms of error of various methods with linear basis and various kernel measures a.


[Figure 20 plots: log(L2 error) and log(semi-H1 error) vs. kernel measure a (2.5 to 4.5) for T, T+CWF I, and T+CWF II.]

Figure 20: Norms of error of various methods with quadratic basis and various kernel measures a.

[Figure 21 plots: log(L2 error) and log(semi-H1 error) vs. kernel measure a (3.5 to 5.5) for T, T+CWF I, and T+CWF II.]

Figure 21: Norms of error of various methods with cubic basis and various kernel measures a.


5.3 Boundary singular kernel method

The boundary singular kernel method is another strong-type approach to boundary condition enforcement. Singular kernels are introduced into the reproducing kernel shape functions at essential boundary nodes, which recovers the properties (12)-(13). The imposition of boundary conditions in this method is therefore similar to the finite element method. However, since (12)-(13) do not imply the weak Kronecker delta property, the imposed values may actually deviate between the nodes, just as in the transformation method.

Here we also consider the Poisson equation with the high-order solution given in section 5.2, solved with the boundary singular kernel method (B), the boundary singular kernel method with consistent weak form one (B+CWF I), and the boundary singular kernel method with consistent weak form two (B+CWF II). h-refinement is performed as before, varying p, with a = p + 1 fixed.

Figures 22 and 23 show the errors in the L2(Ω) norm and H1(Ω) semi-norm, respectively. Here it can be seen that for B alone, the convergence rates are far from optimal, as expected from the previous results and discussions, and are, in fact, linear. When CWFs are considered, both B+CWF I and B+CWF II yield optimal convergence rates. That is, they allow h-refinement with pth-order rates in the boundary singular kernel method. In addition, since accuracy increases monotonically with increasing p (again with one case as an exception), both B+CWF I and B+CWF II offer the ability to perform p-refinement.
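The interpolation mechanism of the singular kernel can be illustrated with a minimal sketch. The paper employs singular kernels within full reproducing kernel shape functions; the version below instead uses zeroth-order Shepard functions with a cubic B-spline kernel (an illustrative simplification, not the paper's implementation). Dividing the boundary node's kernel by the distance to that node forces its normalized shape function toward 1 there, while the partition of unity is preserved by the Shepard normalization:

```python
import numpy as np

def cubic_bspline(z):
    """Cubic B-spline kernel with support |z| < 1."""
    z = abs(z)
    if z < 0.5:
        return 2.0/3.0 - 4.0*z**2 + 4.0*z**3
    if z < 1.0:
        return 4.0/3.0 - 4.0*z + 4.0*z**2 - (4.0/3.0)*z**3
    return 0.0

nodes = np.linspace(0.0, 1.0, 11)   # nodal spacing h = 0.1
support = 2.0 * 0.1                 # normalized support a = 2
b = 0                               # essential-boundary node at x = 0

def shape_functions(x):
    w = np.array([cubic_bspline((x - xi) / support) for xi in nodes])
    # Singular kernel at the boundary node: dividing by the distance
    # makes w[b] dominate as x -> nodes[b] (x != nodes[b] assumed here).
    w[b] /= abs(x - nodes[b])
    return w / w.sum()              # Shepard normalization

psi = shape_functions(1e-8)         # evaluate just off the boundary node
print(psi[b])                       # approximately 1: interpolation recovered
print(psi.sum())                    # exactly 1: partition of unity preserved
```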

[Figure 22 panels: log(L2 error) vs. log(h). Legend rates — (a) B: Linear: 1.97, Quadratic: 1.99, Cubic: 1.79, Quartic: 1.85 (suboptimal (linear): 2); (b) B+CWF I: Linear: 1.97, Quadratic: 3.01, Cubic: 3.87, Quartic: 4.61; (c) B+CWF II: Linear: 2.53, Quadratic: 3.39, Cubic: 4.40, Quartic: 7.09.]

Figure 22: Convergence with various bases in the L2 norm for the boundary singular kernel method: rates indicated in legend.

[Figure 23 panels: log(semi-H1 error) vs. log(h). Legend rates — (a) B: Linear: 1.57, Quadratic: 1.09, Cubic: 0.72, Quartic: 0.90 (suboptimal (linear): 1); (b) B+CWF I: Linear: 1.57, Quadratic: 2.35, Cubic: 3.46, Quartic: 3.95; (c) B+CWF II: Linear: 1.53, Quadratic: 2.35, Cubic: 3.32, Quartic: 5.81.]

Figure 23: Convergence with various bases in the H1 semi-norm for the boundary singular kernel method: rates indicated in legend.

6 Conclusion

In this work, it has first been shown that traditional strong enforcement of boundary conditions at nodal locations in meshfree methods is inconsistent with the traditional weak formulation of the problem. That is, without the weak Kronecker delta property, large, non-trivial deviations from the desired conditions on the test and trial functions exist between nodes. This was shown to result in the loss of Galerkin orthogonality and an O(h) error in the L2(∂Ωg) norm, which in turn produces an O(h) error in the energy norm of the problem at hand. This error was also shown to be independent of the order of the approximation employed. Thus, when solving PDEs, it was expected that this error would limit the rate of convergence of the numerical solution.

It was then demonstrated through patch tests and convergence tests that this O(h) energy norm error indeed appeared in the solution, limiting the rate of convergence of meshfree methods to that of linear basis. Thus, this inconsistency resulted in an accuracy barrier for meshfree approximations, limiting solutions to linear accuracy in the energy norm of the problem.

To remedy this deﬁciency, two new weak forms were introduced. The ﬁrst accounts for the inconsistency in

the test function construction. Here, the weak form relaxes the requirements on the test functions, to include the

approximations introduced in the Galerkin equation under the strong-form enforcement framework. This weak form

attests to the strong form of the problem at hand, and also was shown to restore Galerkin orthogonality and the

best approximation property. Only one new term is required in the matrix formulation; however, this results in a non-symmetric system matrix, even for self-adjoint problems.

The second weak form introduced relaxes the requirements on both the test and trial functions, to include both

approximations in the strong-form enforcement methods. This weak form also attests to the strong form, and results

in a symmetric system, which is favorable. Interestingly, this method results in an alternate orthogonality relation

related to the boundary conditions, and an alternate best approximation property. The latter feature demonstrates

that the method simultaneously minimizes the error in the energy norm, and the error on the boundary.

In numerical tests, it was ﬁrst shown that the two proposed methods can restore the ability to pass the patch test

to machine precision. It was then demonstrated that pth-order optimal convergence rates under h-reﬁnement could

be obtained, which is in stark contrast to the existing strong-type methods under the conventional weak formulation.

In addition, by increasing pfor a ﬁxed h, it was shown that lower error can be obtained, thus providing the ability to

perform p-reﬁnement for the ﬁrst time under this framework. It was also shown that these results were independent

of the particular dilation achosen, and in fact, lower error can be obtained by increasing a, which was termed a-

reﬁnement. Taken together, the proposed method provides the ability to perform p-reﬁnement, h-reﬁnement with pth

order rates, and a new capability called a-reﬁnement.

Finally, it should be noted that in this work, high-order quadrature was employed, which is atypical of a practical

meshfree implementation. In future work, this aspect should be investigated: for instance, what is the lowest order

quadrature required to maintain these high-order properties? And, with methods such as variationally consistent

integration, which can greatly reduce the burden of quadrature, what would be the order required? It is noteworthy

that the present approach is compatible with the variationally consistent approach, in that the weak forms attest to

the strong form of the problem, which is in contrast to traditional strong enforcement of boundary conditions. Lastly,

this method was tested for the Poisson equation, but can be applied to other boundary value problems as well, as

described in the appendix.

Acknowledgments The authors greatly acknowledge the support of this work by the L. Robert and Mary L.

Kimball Early Career Professorship, and the College of Engineering at Penn State.

Appendix

Consider the following abstract boundary value problem governing a scalar u:

Lu + s = 0 in Ω (49a)

Bu = h on ∂Ω_h (49b)

u = g on ∂Ω_g (49c)

where L is a scalar differential operator acting in the domain Ω ⊂ R^d, s is a source term, g is the prescribed value of u on the essential boundary ∂Ω_g, B is a scalar boundary operator acting on the natural boundary ∂Ω_h, ∂Ω_g ∩ ∂Ω_h = ∅, and ∂Ω = ∂Ω_g ∪ ∂Ω_h.

Consider the weighted residual of the boundary value problem:

(v, Lu + s)_Ω = 0. (50)

Manipulation yields a bilinear form a(·, ·), which results from the integration by parts formula (v, Lu)_Ω = (v, Bu)_{∂Ω} − a(v, u)_Ω, and the following problem statement for (W^1_C): find u ∈ H^k_g, H^k_g = {u | u ∈ H^k(Ω), u = g on ∂Ω_g}, such that for all v ∈ H^k the following equation holds:

a(v, u)_Ω − (v, Bu)_{∂Ω_g} = (v, s)_Ω + (v, h)_{∂Ω_h} (51)


where H^k is an adequate Sobolev space. The above is a consistent weighted residual of (49), as v = 0 on ∂Ω_g is not required to verify (49). Note that this procedure does not require the governing equation to emanate from a potential.
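The integration by parts formula underlying (51) can be checked numerically. The sketch below makes illustrative assumptions not in the text (L = d²/dx² on Ω = (0, 1), Bu = u′n, a(v, u) = ∫ v′u′ dx, and arbitrary smooth u and v with v not vanishing on the boundary), and verifies (v, Lu)_Ω = (v, Bu)_{∂Ω} − a(v, u)_Ω by quadrature:

```python
import numpy as np

# Composite trapezoidal rule (kept explicit for portability across NumPy versions).
def trap(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, 1.0, 20001)
v, vp = x**2 + 1.0, 2.0 * x                     # test function; v != 0 at x = 0, 1
u, up = np.sin(2.0 * x), 2.0 * np.cos(2.0 * x)  # trial function
Lu = -4.0 * np.sin(2.0 * x)                     # Lu = u''

lhs = trap(v * Lu, x)                           # (v, Lu)_Omega
bdy = v[-1] * up[-1] - v[0] * up[0]             # (v, u'n): n = -1 at 0, +1 at 1
rhs = bdy - trap(vp * up, x)                    # (v, Bu)_bdy - a(v, u)_Omega
print(abs(lhs - rhs) < 1e-6)  # True
```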

To take a concrete example, consider the equations for elasticity:

∇·σ + b = 0 in Ω (52a)

σ·n = h on ∂Ω_h (52b)

u = g on ∂Ω_g (52c)

where u is the displacement, b is the body force, h is the traction, g is the prescribed displacement, n is the unit normal to the domain, and σ = C : ∇^s u is the Cauchy stress tensor; C is the elasticity tensor and ∇^s u = (1/2)(∇ ⊗ u + u ⊗ ∇) is the strain tensor.

The following form for (W^1_C) can be obtained following the given procedures: find u ∈ S_g, S_g = {u | u ∈ H^1(Ω), u_i = g_i on ∂Ω_{g_i}}, such that for all w ∈ H^1 the following equation holds:

a(w, u)_Ω − (w, n·σ(u))_{∂Ω_g} = (w, b)_Ω + (w, h)_{∂Ω_h} (53)

where

a(w, u)_Ω = ∫_Ω ∇^s w : C : ∇^s u dΩ, (54a)

(w, b)_Ω = ∫_Ω w·b dΩ, (54b)

(w, h)_{∂Ω_h} = ∫_{∂Ω_h} w·h dΓ, (54c)

(w, n·σ(u))_{∂Ω_g} = ∫_{∂Ω_g} w·(n·σ(u)) dΓ. (54d)

For the symmetric weak form of the abstract boundary value problem (49), consider a more general weighted residual:

(v_Ω, Lu + s)_Ω + (v_g, u − g)_{∂Ω_g} = 0. (55)

Choosing v_Ω = v and v_g = Bv, one obtains the following formulation for (W^2_C): find u ∈ H^k such that for all v ∈ H^k the following equation holds:

a(v, u)_Ω − (v, Bu)_{∂Ω_g} − (Bv, u)_{∂Ω_g} = (v, s)_Ω + (v, h)_{∂Ω_h} − (Bv, g)_{∂Ω_g} (56)

where H^k is again an adequate Sobolev space. The above verifies (49) without the use of v = 0 on ∂Ω_g or u = g on ∂Ω_g. Note that if L is non-self-adjoint, a(·, ·) is not symmetric, and the resulting Galerkin system matrix will not be symmetric.
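As a minimal sketch of (56), consider the 1D Poisson problem −u″ = s on (0, 1) with essential conditions at both ends. For simplicity the sketch uses linear finite element hat functions rather than the paper's meshfree approximation (an illustrative assumption), and, as in (56), no penalty term is added. The assembled system is symmetric and reproduces a linear exact solution:

```python
import numpy as np

# 1D Poisson: -u'' = s on (0,1), u(0) = g0, u(1) = g1, with s = 0 here.
# Symmetric form in the spirit of (56):
#   K_ij = a(phi_i, phi_j) - (phi_i, phi_j' n)_g - (phi_i' n, phi_j)_g
#   f_i  = (phi_i, s) - (phi_i' n, g)_g
n_el = 4
n, h = n_el + 1, 1.0 / n_el
g0, g1 = 1.0, 3.0               # exact solution: u(x) = 1 + 2x

K, f = np.zeros((n, n)), np.zeros(n)
for e in range(n_el):           # standard stiffness a(v, u) = int v'u' dx
    K[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

# Boundary terms at the two essential-boundary points:
# (node index, outward normal, dofs supported there, their derivatives, g).
ends = [(0,   -1.0, (0, 1),     (-1.0/h, 1.0/h), g0),
        (n-1, +1.0, (n-2, n-1), (-1.0/h, 1.0/h), g1)]
for node, nrm, dofs, dphi, g in ends:
    for j, dp in zip(dofs, dphi):
        K[node, j] -= nrm * dp      # -(phi_i, phi_j' n): phi_node = 1 there
        K[j, node] -= nrm * dp      # symmetric counterpart -(phi_i' n, phi_j)
        f[j]       -= nrm * dp * g  # -(phi_i' n, g)

u = np.linalg.solve(K, f)
x = np.linspace(0.0, 1.0, n)
print(np.allclose(K, K.T), np.allclose(u, 1.0 + 2.0 * x))  # True True
```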

To take an example, consider the elasticity equations (52) again. The (W^2_C) formulation can be derived as: find u ∈ H^1 such that for all w ∈ H^1 the following equation holds:

a(w, u)_Ω − (w, σ(u)·n)_{∂Ω_g} − (σ(w)·n, u)_{∂Ω_g} = (w, b)_Ω + (w, h)_{∂Ω_h} − (σ(w)·n, g)_{∂Ω_g}. (57)

Again, this procedure does not require the governing equation to emanate from a potential, although from the discussions in the manuscript, it appears that this is always possible when the original governing equation does.

References

[1] S. N. Atluri, J. Y. Cho, and H. G. Kim. Analysis of thin beams, using the meshless local Petrov-Galerkin

method, with generalized moving least squares interpolations. Computational Mechanics, 24(5):334–347, 1999.

[2] I. Babuˇska. The ﬁnite element method with Lagrangian multipliers. Numerische Mathematik, 20(3):179–192,

1973.

[3] T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, and P. Krysl. Meshless methods: An overview and recent

developments. Computer Methods in Applied Mechanics and Engineering, 139(1-4):3–47, 1996.

[4] T. Belytschko, Y. Y. Lu, and L. Gu. Element-free Galerkin methods. International Journal for Numerical Methods in Engineering, 37(2):229–256, 1994.


[5] T. Belytschko, D. Organ, and Y. Krongauz. A coupled ﬁnite element-element-free Galerkin method. Computa-

tional Mechanics, 17(3):186–195, 1995.

[6] F. Brezzi. On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers. Publications mathématiques et informatique de Rennes, (S4):1–26, 1974.

[7] J.-S. Chen, S.-W. Chi, and H.-Y. Hu. Recent developments in stabilized Galerkin and collocation meshfree

methods. Computer Assisted Methods in Engineering and Science, 18(1/2):3–21, 2017.

[8] J.-S. Chen, W. Han, Y. You, and X. Meng. A reproducing kernel method with nodal interpolation property.

International Journal for Numerical Methods in Engineering, 56(7):935–960, 2003.

[9] J.-S. Chen, M. Hillman, and S.-W. Chi. Meshfree methods: Progress made after 20 years. Journal of Engineering

Mechanics, 143(4):04017001, 2016.

[10] J.-S. Chen, M. Hillman, and M. Rüter. An arbitrary order variationally consistent integration for Galerkin meshfree methods. International Journal for Numerical Methods in Engineering, 95(5):387–418, 2013.

[11] J.-S. Chen, C. Pan, C.-T. Wu, and W. K. Liu. Reproducing kernel particle methods for large deformation

analysis of non-linear structures. Computer Methods in Applied Mechanics and Engineering, 139(1-4):195–227,

1996.

[12] J.-S. Chen and H.-P. Wang. New boundary condition treatments in meshfree computation of contact problems.

Computer Methods in Applied Mechanics and Engineering, 187(3-4):441–468, 2000.

[13] J.-S. Chen, C.-T. Wu, and T. Belytschko. Regularization of material instabilities by meshfree approximations

with intrinsic length scales. International Journal for Numerical Methods in Engineering, 47(7):1303–1322, 2000.

[14] J.-S. Chen, C.-T. Wu, S. Yoon, and Y. You. A stabilized conforming nodal integration for Galerkin mesh-free methods. International Journal for Numerical Methods in Engineering, 50(2):435–466, 2001.

[15] S. Fernández-Méndez and A. Huerta. Imposing essential boundary conditions in mesh-free methods. Computer Methods in Applied Mechanics and Engineering, 193(12):1257–1275, 2004.

[16] J. Gosz and W. K. Liu. Admissible approximations for essential boundary conditions in the reproducing kernel

particle method. Computational Mechanics, 19(1):120–135, 1996.

[17] M. Griebel and M. A. Schweitzer. A particle-partition of unity method part V: boundary conditions. In Geometric

analysis and nonlinear partial diﬀerential equations, pages 519–542. Springer, Berlin, Heidelberg, 2003.

[18] F. C. G¨unther and W. K. Liu. Implementation of boundary conditions for meshless methods. Computer Methods

in Applied Mechanics and Engineering, 163(1-4):205–230, 1998.

[19] H. Y. Hu, J. S. Chen, and W. Hu. Weighted radial basis collocation method for boundary value problems. International Journal for Numerical Methods in Engineering, 69(13):2736–2757, 2007.

[20] J. J. Koester and J.-S. Chen. Conforming window functions for meshfree methods. Computer Methods in Applied

Mechanics and Engineering, 347:588–621, 2019.

[21] Y. Krongauz and T. Belytschko. Enforcement of essential boundary conditions in meshless approximations using

ﬁnite elements. Computer Methods in Applied Mechanics and Engineering, 131(1-2):133–145, 1996.

[22] Y. Leng, X. Tian, and J. T. Foster. Super-convergence of reproducing kernel approximation. Computer Methods

in Applied Mechanics and Engineering, 352:488–507, 2019.

[23] S. Li and W. K. Liu. Moving least-square reproducing kernel method Part II: Fourier analysis. Computer Methods in Applied Mechanics and Engineering, 139(1-4):159–193, 1996.

[24] S. Li and W. K. Liu. Meshfree and particle methods and their applications. Applied Mechanics Reviews,

55(1):1–34, 2002.

[25] W. K. Liu, S. Jun, and Y. F. Zhang. Reproducing kernel particle methods. International Journal for Numerical

Methods in Fluids, 20(8-9):1081–1106, 1995.

[26] Y. Y. Lu, T. Belytschko, and L. Gu. A new implementation of the element free Galerkin method. Computer

Methods in Applied Mechanics and Engineering, 113(3-4):397–414, 1994.


[27] Y. X. Mukherjee and S. Mukherjee. On boundary conditions in the element-free Galerkin method. Computational

Mechanics, 19(4):264–270, 1997.

[28] B. Nayroles, G. Touzot, and P. Villon. Generalizing the ﬁnite element method: Diﬀuse approximation and diﬀuse

elements. Computational Mechanics, 10(5):307–318, 1992.

[29] J. Nitsche. Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 36(1):9–15, 1971.

[30] G. Strang and G. J. Fix. An Analysis of the Finite Element Method. Prentice-Hall, Englewood Cliffs, NJ, 1973.

[31] G. J. Wagner and W. K. Liu. Application of essential boundary conditions in mesh-free methods: A corrected

collocation method. International Journal for Numerical Methods in Engineering, 47(8):1367–1379, 2000.

[32] G. J. Wagner and W. K. Liu. Hierarchical enrichment for bridging scales and mesh-free boundary conditions.

International Journal for Numerical Methods in Engineering, 50(3):507–524, 2001.

[33] L. T. Zhang, G. J. Wagner, and W. K. Liu. A parallelized meshfree method with boundary enrichment for

large-scale CFD. Journal of Computational Physics, 176(2):483–506, 2002.

[34] T. L. Zhu and S. N. Atluri. A modiﬁed collocation method and a penalty formulation for enforcing the essential

boundary conditions in the element free Galerkin method. Computational Mechanics, 21(3):211–222, 1998.
