
Synthesis of Symbolic Controllers:

A Parallelized and Sparsity-Aware Approach⋆

Mahmoud Khaled1, Eric S. Kim2, Murat Arcak2, and Majid Zamani3,4

1Department of Electrical and Computer Engineering, Technical University of

Munich, Germany

khaled.mahmoud@tum.de

2Department of Electrical Engineering and Computer Sciences, University of

California Berkeley, Berkeley, CA, USA

{eskim,arcak}@berkeley.edu

3Department of Computer Science, University of Colorado Boulder, USA

4Department of Computer Science, Ludwig Maximilian University of Munich,

Germany

majid.zamani@colorado.edu

Abstract. The correctness of control software in many safety-critical applications such as autonomous vehicles is crucial. One approach to achieve this goal is "symbolic control", where complex physical systems are approximated by finite-state abstractions. Then, using those abstractions, provably-correct digital controllers are algorithmically synthesized for concrete systems, satisfying complex high-level requirements. Unfortunately, the complexity of constructing such abstractions and synthesizing their controllers grows exponentially with the number of state variables in the system, which limits the applicability of this approach to simple physical systems.

This paper presents a unified approach that utilizes the sparsity of the interconnection structure in dynamical systems for both the construction of finite abstractions and the synthesis of symbolic controllers. In addition, parallel algorithms are proposed to target high-performance computing (HPC) platforms and cloud-computing services. The results show remarkable reductions in computation times. In particular, we demonstrate the effectiveness of the proposed approach on a 7-dimensional model of a BMW 320i car by designing a controller that keeps the car in the travel lane unless the lane is blocked.

1 Introduction

Recently, the world has witnessed many emerging safety-critical applications such as smart buildings, autonomous vehicles, and smart grids. These applications are examples of cyber-physical systems (CPS). In CPS, embedded control software plays a significant role by monitoring and controlling several physical variables, such as pressure or velocity, through multiple sensors and actuators, and by communicating with other systems or with supporting computing servers. A novel approach to designing provably-correct embedded control software in an automated fashion is via formal-method techniques [10, 11], and in particular symbolic control.

⋆ This work was supported in part by the H2020 ERC Starting Grant AutoCPS.

Symbolic control algorithmically provides provably-correct controllers based on the dynamics of physical systems and given high-level requirements. In symbolic control, physical systems are approximated by finite abstractions, and then discrete (a.k.a. symbolic) controllers are automatically synthesized for those abstractions using automata-theoretic techniques [5]. Finally, those controllers are refined to hybrid controllers applicable to the original physical systems. Unlike traditional design-then-test workflows, merging the design phase with formal verification ensures that controllers are certified-by-construction. Current implementations of symbolic control, unfortunately, take a monolithic view of systems, where the entire system is modeled, abstracted, and a controller is synthesized from the overall state sets. This view interacts poorly with the symbolic approach, whose complexity grows exponentially with the number of state variables in the model. Consequently, the technique is limited to small dynamical systems.

1.1 Related Work

Recently, two promising techniques were proposed for mitigating the computational complexity of symbolic controller synthesis. The first technique [2] utilizes the sparsity of the internal interconnection of dynamical systems to efficiently construct their finite abstractions. It is presented only for constructing abstractions, while controller synthesis is still performed monolithically without taking the sparse structure into account. The second technique [4] provides parallel algorithms targeting high-performance computing (HPC) platforms, but suffers from the state-explosion problem when the number of parallel processing elements (PEs) is fixed. We briefly discuss each of those techniques and propose an approach that efficiently utilizes both of them.

Many abstraction techniques implemented in existing tools, including SCOTS [9], traverse the state space in a brute-force way and suffer from an exponential runtime with respect to the number of state variables. The authors of [2] note that a majority of continuous-space systems exhibit a coordinate structure, where the governing equation of each state variable is defined independently. When each equation depends on only a few continuous variables, the system is said to be sparse. They proposed a modification of the traditional brute-force procedure that takes advantage of such sparsity, but only in constructing abstractions. Unfortunately, the authors do not leverage sparsity to improve the synthesis of symbolic controllers, which is, in practice, more computationally complex. In this paper, we propose a parallel implementation of their technique to utilize HPC platforms. We also show how sparsity can be utilized, using a parallel implementation, during the controller synthesis phase as well.

The framework pFaces [4] was introduced as an acceleration ecosystem for implementations of symbolic control techniques. Parallel implementations of the abstraction and synthesis algorithms, originally done serially in SCOTS [9], are introduced as computation kernels in pFaces. The proposed algorithms treat the problem as a data-parallel task and scale remarkably well as the number of PEs increases. pFaces allows controlling the complexity of symbolic controller synthesis by adding more PEs. The results introduced in [4] outperform all existing tools for abstraction construction and controller synthesis. However, for a fixed number of PEs, the algorithms still suffer from the state-explosion problem.

In this paper, we propose parallel algorithms that utilize the sparsity of the interconnection structure in both the construction of abstractions and controller synthesis. In particular, the main contributions of this paper are twofold:

(1) We introduce a parallel algorithm for constructing abstractions with a distributed data container. The algorithm utilizes sparsity and can run on HPC platforms. We implement it in the framework pFaces, and it shows a remarkable reduction in computation time compared to the results in [2].

(2) We introduce a parallel algorithm that integrates the sparsity of dynamical systems into the controller synthesis phase. Specifically, a sparsity-aware preprocessing step concentrates computational resources on a small relevant subset of the state-input space. This algorithm returns the same result as the monolithic procedure, while exhibiting lower runtimes. To the best of our knowledge, the proposed algorithm is the first to merge parallelism with sparsity in the context of symbolic controller synthesis.

2 Preliminaries

Given two sets $A$ and $B$, we denote by $|A|$ the cardinality of $A$, by $2^A$ the set of all subsets of $A$, by $A \times B$ the Cartesian product of $A$ and $B$, and by $A \setminus B$ the Pontryagin difference between the sets $A$ and $B$. The set $\mathbb{R}^n$ represents the $n$-dimensional Euclidean space of real numbers. This symbol is annotated with subscripts to restrict it in the obvious way, e.g., $\mathbb{R}^n_+$ denotes the (component-wise) positive $n$-dimensional vectors. We denote by $\pi_A : A \times B \to A$ the natural projection map on $A$ and define it, for a set $C \subseteq A \times B$, as follows: $\pi_A(C) = \{a \in A \mid \exists b \in B\ (a, b) \in C\}$. Given a map $R : A \to B$ and a set $\mathcal{A} \subseteq A$, we define $R(\mathcal{A}) := \bigcup_{a \in \mathcal{A}} \{R(a)\}$. Similarly, given a set-valued map $Z : A \to 2^B$ and a set $\mathcal{A} \subseteq A$, we define $Z(\mathcal{A}) := \bigcup_{a \in \mathcal{A}} Z(a)$.

We consider general discrete-time nonlinear dynamical systems given in the form of the update equation:

$$\Sigma : x^+ = f(x, u), \qquad (1)$$

where $x \in X \subseteq \mathbb{R}^n$ is a state vector and $u \in U \subseteq \mathbb{R}^m$ is an input vector. The system is assumed to start from some initial state $x(0) = x_0 \in X$, and the map $f$ is used to update the state of the system every $\tau$ seconds. Let the set $\bar{X}$ be a finite partition of $X$ constructed by a set of hyper-rectangles of identical widths $\eta \in \mathbb{R}^n_+$, and let the set $\bar{U}$ be a finite subset of $U$. A finite abstraction of (1) is a

Fig. 1: The sparsity graph of the vehicle example as introduced in [2]. Vertices represent the inputs $u_1, u_2$, the states $x_1, x_2, x_3$, and the updated states $x_1^+, x_2^+, x_3^+$ of $\Sigma$.

finite-state system $\bar{\Sigma} = (\bar{X}, \bar{U}, T)$, where $T \subseteq \bar{X} \times \bar{U} \times \bar{X}$ is a transition relation crafted so that there exists a feedback-refinement relation (FRR) $R \subseteq X \times \bar{X}$ from $\Sigma$ to $\bar{\Sigma}$. Interested readers are referred to [8] for details about FRRs and their usefulness in synthesizing controllers for concrete systems using their finite abstractions.

For a system $\Sigma$, an update-dependency graph is a directed graph whose vertices represent input variables $\{u_1, u_2, \cdots, u_m\}$, state variables $\{x_1, x_2, \cdots, x_n\}$, and updated state variables $\{x_1^+, x_2^+, \cdots, x_n^+\}$, and whose edges connect input (resp. state) variables to the affected updated state variables based on the map $f$. For example, Figure 1 depicts the update-dependency graph of the vehicle case study presented in [2] with the update equation:

$$\begin{bmatrix} x_1^+ \\ x_2^+ \\ x_3^+ \end{bmatrix} = \begin{bmatrix} f_1(x_1, x_3, u_1, u_2) \\ f_2(x_2, x_3, u_1, u_2) \\ f_3(x_3, u_1, u_2) \end{bmatrix},$$

for some nonlinear functions $f_1$, $f_2$, and $f_3$. The state variable $x_3$ affects all updated state variables $x_1^+$, $x_2^+$, and $x_3^+$. Hence, the graph has edges connecting $x_3$ to $x_1^+$, $x_2^+$, and $x_3^+$, respectively. As update-dependency graphs become denser, the sparsity of their corresponding abstract systems is reduced. The same graph applies to the abstract system $\bar{\Sigma}$.

We sometimes refer to $\bar{X}$, $\bar{U}$, and $T$ as the monolithic state set, monolithic input set, and monolithic transition relation, respectively. A generic projection map

$$P_i^f : A \to \pi_i(A)$$

is used to extract elements of the corresponding subsets affecting the updated state $\bar{x}_i^+$. Note that $A \subseteq \bar{X} := \bar{X}_1 \times \bar{X}_2 \times \cdots \times \bar{X}_n$ when we are interested in extracting subsets of the state set, and $A \subseteq \bar{U} := \bar{U}_1 \times \bar{U}_2 \times \cdots \times \bar{U}_m$ when we are interested in extracting subsets of the input set. When extracting subsets of the state set, $\pi_i$ is the projection map $\pi_{\bar{X}_{k_1} \times \bar{X}_{k_2} \times \cdots \times \bar{X}_{k_K}}$, where $k_j \in \{1, 2, \cdots, n\}$, $j \in \{1, 2, \cdots, K\}$, and $\bar{X}_{k_1} \times \bar{X}_{k_2} \times \cdots \times \bar{X}_{k_K}$ is the subset of states affecting the updated state variable $\bar{x}_i^+$. Similarly, when extracting subsets of the input set, $\pi_i$ is the projection map $\pi_{\bar{U}_{p_1} \times \bar{U}_{p_2} \times \cdots \times \bar{U}_{p_P}}$, where $p_i \in \{1, 2, \cdots, m\}$, $i \in \{1, 2, \cdots, P\}$, and $\bar{U}_{p_1} \times \bar{U}_{p_2} \times \cdots \times \bar{U}_{p_P}$ is the subset of inputs affecting the updated state variable $\bar{x}_i^+$.

For example, assume that the monolithic state (resp. input) set of the system $\bar{\Sigma}$ in Figure 1 is given by $\bar{X} := \bar{X}_1 \times \bar{X}_2 \times \bar{X}_3$ (resp. $\bar{U} := \bar{U}_1 \times \bar{U}_2$) such that for any $\bar{x} := (\bar{x}_1, \bar{x}_2, \bar{x}_3) \in \bar{X}$ and $\bar{u} := (\bar{u}_1, \bar{u}_2) \in \bar{U}$, one has $\bar{x}_1 \in \bar{X}_1$, $\bar{x}_2 \in \bar{X}_2$, $\bar{x}_3 \in \bar{X}_3$, $\bar{u}_1 \in \bar{U}_1$, and $\bar{u}_2 \in \bar{U}_2$. Now, based on the dependency graph, $P_1^f(\bar{x}) := \pi_{\bar{X}_1 \times \bar{X}_3}(\bar{x}) = (\bar{x}_1, \bar{x}_3)$ and $P_1^f(\bar{u}) := \pi_{\bar{U}_1 \times \bar{U}_2}(\bar{u}) = (\bar{u}_1, \bar{u}_2)$. We can also apply the map to subsets of $\bar{X}$ and $\bar{U}$, e.g., $P_1^f(\bar{X}) = \bar{X}_1 \times \bar{X}_3$ and $P_1^f(\bar{U}) = \bar{U}_1 \times \bar{U}_2$.
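To make the projection maps concrete, the following illustrative Python sketch (not the paper's implementation; container layout and 0-based indexing are assumptions) applies $P_i^f$ to state and input tuples, with the dependency sets taken from the graph in Figure 1:

```python
# Sketch of the projection map P_i^f for the system of Figure 1.
# deps_x[i] / deps_u[i] list which state/input components affect x_i^+
# (read off the update-dependency graph); indices are 0-based here.
deps_x = {0: (0, 2), 1: (1, 2), 2: (2,)}    # x1+ <- x1,x3 ; x2+ <- x2,x3 ; x3+ <- x3
deps_u = {0: (0, 1), 1: (0, 1), 2: (0, 1)}  # every x_i+ depends on u1 and u2

def proj_x(i, x):
    """P_i^f on a state tuple: keep only the components affecting x_i^+."""
    return tuple(x[k] for k in deps_x[i])

def proj_u(i, u):
    """P_i^f on an input tuple."""
    return tuple(u[k] for k in deps_u[i])

x_bar = ("x1", "x2", "x3")
u_bar = ("u1", "u2")
print(proj_x(0, x_bar))  # ('x1', 'x3')
print(proj_u(0, u_bar))  # ('u1', 'u2')
```

Applying the same functions element-wise over a product of component sets yields the subset versions $P_i^f(\bar{X})$ and $P_i^f(\bar{U})$.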

For a transition element $t = (\bar{x}, \bar{u}, \bar{x}') \in T$, we define $P_i^f(t) := (P_i^f(\bar{x}), P_i^f(\bar{u}), \pi_{\bar{X}_i}(\bar{x}'))$, for any component $i \in \{1, 2, \cdots, n\}$. Note that for $t$, the successor state $\bar{x}'$ is treated differently, as it is related directly to the updated state variable $\bar{x}_i^+$. We can apply the map to subsets of $T$, e.g., for the given update-dependency graph in Figure 1, one has $P_1^f(T) = \bar{X}_1 \times \bar{X}_3 \times \bar{U}_1 \times \bar{U}_2 \times \bar{X}_1$.

On the other hand, a generic recovery map

$$D_i^f : P_i^f(A) \to 2^A$$

is used to recover elements (resp. subsets) from the projected subsets back to their original monolithic sets. Similarly, $A \subseteq \bar{X} := \bar{X}_1 \times \bar{X}_2 \times \cdots \times \bar{X}_n$ when we are interested in subsets of the state set, and $A \subseteq \bar{U} := \bar{U}_1 \times \bar{U}_2 \times \cdots \times \bar{U}_m$ when we are interested in subsets of the input set.

For the same example in Figure 1, let $\bar{x} := (\bar{x}_1, \bar{x}_2, \bar{x}_3) \in \bar{X}$ be a state. Now, define $\bar{x}_p := P_1^f(\bar{x}) = (\bar{x}_1, \bar{x}_3)$. We then have $D_1^f(\bar{x}_p) := \{(\bar{x}_1, \bar{x}_2^*, \bar{x}_3) \mid \bar{x}_2^* \in \bar{X}_2\}$. Similarly, for a transition element $t := ((\bar{x}_1, \bar{x}_2, \bar{x}_3), (\bar{u}_1, \bar{u}_2), (\bar{x}_1', \bar{x}_2', \bar{x}_3')) \in T$ and its projection $t_p := P_1^f(t) = ((\bar{x}_1, \bar{x}_3), (\bar{u}_1, \bar{u}_2), (\bar{x}_1'))$, the recovered set of transitions is $D_1^f(t_p) = \{((\bar{x}_1, \bar{x}_2^*, \bar{x}_3), (\bar{u}_1, \bar{u}_2), (\bar{x}_1', \bar{x}_2'^*, \bar{x}_3'^*)) \mid \bar{x}_2^* \in \bar{X}_2,\ \bar{x}_2'^* \in \bar{X}_2,\ \text{and}\ \bar{x}_3'^* \in \bar{X}_3\}$.
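The recovery map can be sketched similarly. The Python sketch below (tiny explicit component sets are assumptions for illustration) recovers every full state consistent with a projected tuple, filling the missing components with all their possible values:

```python
from itertools import product

# Component state sets X1, X2, X3 (tiny finite sets, chosen for illustration).
X = [[0, 1], [10, 11], [20, 21]]
deps_x = {0: (0, 2)}  # x1^+ depends on x1 and x3 (Figure 1), 0-based indices

def recover_x(i, xp):
    """D_i^f: all full states whose kept components match the projection xp."""
    kept = dict(zip(deps_x[i], xp))
    choices = [[kept[k]] if k in kept else X[k] for k in range(len(X))]
    return set(product(*choices))

# Projecting (x1, x3) = (0, 20) and recovering enumerates every value of X2:
print(recover_x(0, (0, 20)))  # {(0, 10, 20), (0, 11, 20)}
```

The recovery of a projected transition works the same way, with the missing successor components also ranging over their full sets.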

Given a subset $\widetilde{X} \subseteq \bar{X}$, let $[\widetilde{X}] := D_1^f \circ P_1^f(\widetilde{X})$. Note that $[\widetilde{X}]$ is not necessarily equal to $\widetilde{X}$. However, we have that $\widetilde{X} \subseteq [\widetilde{X}]$, i.e., $[\widetilde{X}]$ over-approximates $\widetilde{X}$.

For an update map $f$ in (1), a function $\Omega_f : \bar{X} \times \bar{U} \to X \times X$ characterizes hyper-rectangles that over-approximate the reachable sets starting from a cell $\bar{x} \in \bar{X}$ when the input $\bar{u}$ is applied. For example, if a growth-bound map $\beta : \mathbb{R}^n \times U \to \mathbb{R}^n$ is used, $\Omega_f$ can be defined as follows:

$$\Omega_f(\bar{x}, \bar{u}) = (x_{lb}, x_{ub}) := (-r + f(\bar{x}_c, \bar{u}),\ r + f(\bar{x}_c, \bar{u})),$$

where $r = \beta(\eta/2, \bar{u})$ and $\bar{x}_c \in \bar{x}$ denotes the centroid of $\bar{x}$. Here, $\beta$ is the growth bound introduced in [8, Section VIII]. An over-approximation of the reachable sets can then be obtained by the map $O_f : \bar{X} \times \bar{U} \to 2^{\bar{X}}$ defined by:

$$O_f(\bar{x}, \bar{u}) := Q \circ \Omega_f(\bar{x}, \bar{u}),$$

where $Q$ is a quantization map defined by:

$$Q(x_{lb}, x_{ub}) = \{\bar{x}' \in \bar{X} \mid \bar{x}' \cap [[x_{lb}, x_{ub}]] \neq \emptyset\}, \qquad (2)$$

where $[[x_{lb}, x_{ub}]] = [x_{lb,1}, x_{ub,1}] \times [x_{lb,2}, x_{ub,2}] \times \cdots \times [x_{lb,n}, x_{ub,n}]$.
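A sketch of the quantization map $Q$ on a uniform 2-D grid follows; the grid origin, cell width, and half-open cells $[k\eta, (k+1)\eta)$ are illustrative assumptions, not the tools' exact conventions:

```python
import math

eta = 0.5          # identical cell width per dimension (assumption)
lo = [0.0, 0.0]    # lower-left corner of the partitioned domain (assumption)

def quantize(xlb, xub):
    """Q: indices of all half-open grid cells [k*eta, (k+1)*eta) per dimension
    that intersect the hyper-rectangle [[xlb, xub]]."""
    ranges = []
    for d in range(len(lo)):
        first = math.floor((xlb[d] - lo[d]) / eta)
        last = math.ceil((xub[d] - lo[d]) / eta) - 1
        ranges.append(range(first, last + 1))
    return {(i, j) for i in ranges[0] for j in ranges[1]}

# The rectangle [0.4, 1.1] x [0.0, 0.4] touches cells 0..2 along the first
# axis and only cell 0 along the second:
print(quantize((0.4, 0.0), (1.1, 0.4)))  # {(0, 0), (1, 0), (2, 0)}
```

Composing this with an interval characterization $\Omega_f$ yields a toy version of $O_f = Q \circ \Omega_f$.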

We also assume that $O_f$ can be decomposed component-wise (i.e., for each dimension $i \in \{1, 2, \cdots, n\}$) such that for any $(\bar{x}, \bar{u}) \in \bar{X} \times \bar{U}$, $O_f(\bar{x}, \bar{u}) = \bigcap_{i=1}^{n} D_i^f(O_i^f(P_i^f(\bar{x}), P_i^f(\bar{u})))$, where $O_i^f : P_i^f(\bar{X}) \times P_i^f(\bar{U}) \to 2^{P_i^f(\bar{X})}$ is an over-approximation function restricted to component $i \in \{1, 2, \cdots, n\}$ of $f$. The same assumption applies to the underlying characterization function $\Omega_f$.


Algorithm 1: Serial algorithm for constructing abstractions (SA).

Input: $\bar{X}, \bar{U}, O_f$
Output: A transition relation $T \subseteq \bar{X} \times \bar{U} \times \bar{X}$.

1: $T \leftarrow \emptyset$ ; // Initialize the set of transitions
2: for all $\bar{x} \in \bar{X}$ do
3:   for all $\bar{u} \in \bar{U}$ do
4:     for all $\bar{x}' \in O_f(\bar{x}, \bar{u})$ do
5:       $T \leftarrow T \cup \{(\bar{x}, \bar{u}, \bar{x}')\}$ ; // Add a new transition
6:     end
7:   end
8: end
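Algorithm 1 translates directly into Python; this sketch uses a toy 1-D over-approximation map (an assumption for illustration, not SCOTS's $O_f$):

```python
def build_abstraction(X, U, O_f):
    """Serial construction (SA): enumerate all (x, u) and collect transitions."""
    T = set()
    for x in X:
        for u in U:
            for x_next in O_f(x, u):
                T.add((x, u, x_next))
    return T

# Toy 1-D example: states 0..3, inputs {-1, +1}, successors clipped to the grid.
X = range(4)
U = (-1, 1)
O_f = lambda x, u: {min(max(x + u, 0), 3)}
T = build_abstraction(X, U, O_f)
print(len(T))  # 8 transitions: one per (x, u) pair here
```

The triple loop makes the exponential cost visible: the outer two loops alone enumerate all of $\bar{X} \times \bar{U}$.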

Algorithm 2: Serial sparsity-aware algorithm for constructing abstractions (Sparse-SA) as introduced in [2].

Input: $\bar{X}, \bar{U}, O_f$
Output: A transition relation $T \subseteq \bar{X} \times \bar{U} \times \bar{X}$.

1: $T \leftarrow \bar{X} \times \bar{U} \times \bar{X}$ ; // Initialize the set of transitions
2: for all $i \in \{1, 2, \cdots, n\}$ do
3:   $T_i \leftarrow SA(P_i^f(\bar{X}), P_i^f(\bar{U}), O_i^f)$ ; // Transitions of sub-spaces
4:   $T \leftarrow T \cap D_i^f(T_i)$ ; // Add transitions of sub-spaces
5: end

3 Sparsity-aware distributed construction of abstractions

Traditionally, constructing $\bar{\Sigma}$ is achieved monolithically and sequentially. This includes current state-of-the-art tools, e.g., SCOTS [9], PESSOA [6], CoSyMa [7], and SENSE [3]. More precisely, such tools have implementations that serially traverse each element $(\bar{x}, \bar{u}) \in \bar{X} \times \bar{U}$ to compute a set of transitions $\{(\bar{x}, \bar{u}, \bar{x}') \mid \bar{x}' \in O_f(\bar{x}, \bar{u})\}$. Algorithm 1 presents the traditional serial algorithm (denoted by SA) for constructing $\bar{\Sigma}$.

The drawback of this exhaustive search was mitigated by the technique introduced in [2], which utilizes the sparsity of $\bar{\Sigma}$. The authors suggest constructing $T$ by applying Algorithm 1 to the subsets of each component. Algorithm 2 presents the sparsity-aware serial algorithm (denoted by Sparse-SA) for constructing $\bar{\Sigma}$, as introduced in [2]. If we assume a bounded number of elements in the subsets of each component (i.e., $|P_i^f(\bar{X})|$ and $|P_i^f(\bar{U})|$ from line 3 of Algorithm 2), we would expect a near-linear complexity of the algorithm. However, this is not the case in [2, Figure 3], as the authors decided to use Binary Decision Diagrams (BDDs) to represent the transition relation $T$.

Clearly, representing $T$ as a single storage entity is a drawback of Algorithm 2. All component-wise transition sets $T_i$ eventually need to push their results into $T$. This hinders any attempt to parallelize the algorithm unless a lock-free data structure is used, which affects the performance dramatically.

On the other hand, Algorithm 2 in [4] introduces a technique for constructing $\bar{\Sigma}$ using a distributed data container that maintains the transition set $T$ without constructing it explicitly. In [4], using the continuous over-approximation $\Omega_f$ is


Algorithm 3: Proposed sparsity-aware parallel algorithm for constructing discrete abstractions.

Input: $\bar{X}, \bar{U}, \Omega_f$
Output: A list of characteristic sets: $K := \bigcup_{p=1}^{P} \bigcup_{i=1}^{n} K_{loc,i}^p$.

1: for all $i \in \{1, 2, \cdots, n\}$ do
2:   for all $p \in \{1, 2, \cdots, P\}$ do
3:     $K_{loc,i}^p \leftarrow \emptyset$ ; // Initialize local containers
4:   end
5: end
6: for all $i \in \{1, 2, \cdots, n\}$ in parallel do
7:   for all $(\bar{x}, \bar{u}) \in P_i^f(\bar{X}) \times P_i^f(\bar{U})$ in parallel with index $j$ do
8:     $p = I(i, j)$ ; // Identify target PE
9:     $(x_{lb}, x_{ub}) \leftarrow \Omega_f(\bar{x}, \bar{u})$ ; // Calculate characteristics
10:    $K_{loc,i}^p \leftarrow K_{loc,i}^p \cup \{(\bar{x}, \bar{u}, (x_{lb}, x_{ub}))\}$ ; // Store characteristics
11:  end
12: end
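The distribution step of Algorithm 3 can be sketched sequentially as follows; the round-robin job-to-PE mapping `I` and the tiny projected spaces are assumptions for illustration (pFaces runs the two for-all loops as actual parallel jobs), and PE-local containers are emulated with a plain dictionary:

```python
# Sketch of Algorithm 3: component-wise, distributed characterization.
P = 2                             # number of processing elements (assumption)
I = lambda i, j: (i + j) % P + 1  # round-robin job-to-PE mapping (assumption)

def sparse_parallel_abstraction(proj_spaces, omega):
    """proj_spaces[i] lists P_i^f(X) x P_i^f(U); omega plays Omega_f."""
    K = {(p, i): [] for p in range(1, P + 1) for i in range(len(proj_spaces))}
    for i, space in enumerate(proj_spaces):        # "for all i ... in parallel"
        for j, (x, u) in enumerate(space):         # "in parallel with index j"
            p = I(i, j)                            # identify the target PE
            K[(p, i)].append((x, u, omega(x, u)))  # store (x, u, (xlb, xub))
    return K

# Two components, two tiny projected spaces, a toy interval characterization:
spaces = [[(0, 0), (1, 0)], [(0, 1), (1, 1)]]
omega = lambda x, u: (x + u - 1, x + u + 1)
K = sparse_parallel_abstraction(spaces, omega)
total = sum(len(v) for v in K.values())
print(total)  # 4 characterizations, spread over 2 PEs x 2 components
```

Because each job writes only to its own container $K_{loc,i}^p$, no locking is needed, which is what makes the loop embarrassingly parallel.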

Fig. 2: An example task distribution for the parallel sparsity-aware abstraction. The projected subsets $P_1^f(\bar{X}) = \bar{X}_1 \times \bar{X}_3$, $P_2^f(\bar{X}) = \bar{X}_2 \times \bar{X}_3$, $P_3^f(\bar{X}) = \bar{X}_3$ (with $P_i^f(\bar{U}) = \bar{U}_1 \times \bar{U}_2$ for all $i$) are split among the local containers $K_{loc,1}^1, K_{loc,1}^2, K_{loc,2}^3, K_{loc,2}^4, K_{loc,3}^5, K_{loc,3}^6$.

favored over the discrete over-approximation $O_f$, since it requires less memory in practice. The actual computation of transitions (i.e., using $O_f$ to compute discrete successor states) is delayed to the synthesis phase and done on the fly. The parallel algorithm scales remarkably well with respect to the number of PEs, denoted by $P$, since the task is parallelizable with no data dependency. However, it still handles the problem monolithically, which means that, for a fixed $P$, it will probably not scale as the system dimension $n$ grows.

We therefore introduce Algorithm 3, which utilizes sparsity to construct $\bar{\Sigma}$ in parallel and is a combination of Algorithm 2 in [4] and Algorithm 2. The function $I : \mathbb{N}^+ \setminus \{\infty\} \times \mathbb{N}^+ \setminus \{\infty\} \to \{1, 2, \cdots, P\}$ maps a parallel job (i.e., lines 9 and 10 inside the inner parallel for-all statement), for a component $i$ and a tuple $(\bar{x}, \bar{u})$ with index $j$, to a PE with index $p = I(i, j)$. $K_{loc,i}^p$ stores the characterizations of the abstraction of the $i$-th component and is located in the PE with index $p$. Collectively, $K_{loc,1}^1, \ldots, K_{loc,i}^p, \ldots, K_{loc,n}^P$ constitute a distributed container that stores the abstraction of the system.

Figure 2 depicts an example of the job and task distributions for the example presented in Figure 1. Here, we use $P = 6$ with a mapping $I$ that distributes one partition element of one subset $P_i^f(\bar{X}) \times P_i^f(\bar{U})$ to one PE. We also assume that the used PEs have equal computation power. Consequently, we try to divide each


Fig. 3: Comparison between the serial and parallel algorithms for constructing abstractions of a traffic network model by varying the dimensions (computation time in seconds, 0–200, vs. dimension of the state set, 0–100).

subset $P_i^f(\bar{X}) \times P_i^f(\bar{U})$ into two equal partition elements such that we have, in total, 6 similar computation spaces. Inside each partition element, we indicate which distributed storage container $K_{loc,i}^p$ is used.

To assess the distributed algorithm in comparison with the serial one presented in [2], we implement it in pFaces. We use the same traffic model presented in [2, Subsection VI-B] with the same parameters. For this example, the authors of [2] construct $T_i$ for each component $i \in \{1, 2, \cdots, n\}$ and combine them incrementally in a BDD that represents $T$. A monolithic construction of $T$ from the $T_i$ is required in [2] since symbolic controller synthesis is done monolithically. On the other hand, using $K_{loc,i}^p$ in our technique plays a major role in reducing the complexity of constructing higher-dimensional abstractions. In Section 4, we utilize $K_{loc,i}^p$ directly to synthesize symbolic controllers with no need to explicitly construct $T$.

Figure 3 depicts a comparison between the results reported in [2, Figure 3] and the ones obtained from our implementation in pFaces. We use an Intel Core i5 CPU, which comes equipped with an internal GPU, yielding around 24 PEs utilized by pFaces. The implementation stores the distributed containers $K_{loc,i}^p$ as raw data inside the memories of their corresponding PEs. As expected, the distributed algorithm scales linearly, and we are able to go beyond 100 dimensions in a few seconds, whereas Figure 3 in [2] shows abstractions only up to a 51-dimensional traffic model, because constructing the monolithic $T$ begins to incur an exponential cost for higher dimensions.

Remark 1. Both Algorithms 2 and 3 utilize the sparsity of $\Sigma$ to reduce the space complexity of abstractions from $|\bar{X} \times \bar{U}|$ to $\sum_{i=1}^{n} |P_i^f(\bar{X}) \times P_i^f(\bar{U})|$. However, Algorithm 2 iterates over the space serially. Algorithm 3, on the other hand, handles the computation over the space in parallel using $P$ PEs.
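The space-complexity reduction in Remark 1 is easy to quantify. The sketch below counts both quantities for the dependency structure of Figure 1, under assumed component-set sizes (100 cells per state component, 10 values per input component):

```python
# |X x U| versus sum_i |P_i^f(X) x P_i^f(U)| for the system of Figure 1.
nX, nU = 100, 10              # assumed sizes of each X_k and U_k
deps_x = {1: 2, 2: 2, 3: 1}   # component i -> number of state sets in P_i^f(X)
deps_u = {1: 2, 2: 2, 3: 2}   # component i -> number of input sets in P_i^f(U)

monolithic = (nX ** 3) * (nU ** 2)  # |X x U| for n = 3, m = 2
sparse = sum((nX ** deps_x[i]) * (nU ** deps_u[i]) for i in (1, 2, 3))
print(monolithic, sparse)  # 100000000 vs 2010000
```

Even for this 3-dimensional toy, the sparse enumeration is roughly fifty times smaller; the gap widens exponentially with $n$ when the per-component dependency counts stay bounded.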


4 Sparsity-aware distributed synthesis of symbolic controllers

Given an abstract system $\bar{\Sigma} = (\bar{X}, \bar{U}, T)$, we define the controllable predecessor map $CPre_T : 2^{\bar{X} \times \bar{U}} \to 2^{\bar{X} \times \bar{U}}$ for $Z \subseteq \bar{X} \times \bar{U}$ by:

$$CPre_T(Z) = \{(\bar{x}, \bar{u}) \in \bar{X} \times \bar{U} \mid \emptyset \neq T(\bar{x}, \bar{u}) \subseteq \pi_{\bar{X}}(Z)\}, \qquad (3)$$

where $T(\bar{x}, \bar{u})$ is an interpretation of the transition set $T$ as a map $T : \bar{X} \times \bar{U} \to 2^{\bar{X}}$ that evaluates the set of successor states of a state-input pair. Similarly, we introduce a component-wise controllable predecessor map $CPre_{T_i} : 2^{P_i^f(\bar{X}) \times P_i^f(\bar{U})} \to 2^{P_i^f(\bar{X}) \times P_i^f(\bar{U})}$, for any component $i \in \{1, 2, \cdots, n\}$ and any $\widetilde{Z} := P_i^f(Z) := \pi_{P_i^f(\bar{X}) \times P_i^f(\bar{U})}(Z)$, as follows:

$$CPre_{T_i}(\widetilde{Z}) = \{(\bar{x}, \bar{u}) \in P_i^f(\bar{X}) \times P_i^f(\bar{U}) \mid \emptyset \neq T_i(\bar{x}, \bar{u}) \subseteq \pi_{\bar{X}_i}(\widetilde{Z})\}. \qquad (4)$$
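The controllable predecessor in (3) can be sketched over an explicit transition relation; the toy abstraction below is an assumption for illustration:

```python
def cpre(T, Z):
    """CPre_T(Z): pairs (x, u) whose nonempty successor set lies in pi_X(Z)."""
    proj_Z = {x for (x, u) in Z}   # pi_X(Z)
    winners = set()
    for (x, u), succ in T.items():
        if succ and succ <= proj_Z:  # nonempty and contained in pi_X(Z)
            winners.add((x, u))
    return winners

# Toy abstraction: T maps (state, input) to its set of successor states.
T = {
    (0, 'a'): {1},      # deterministic: always lands in state 1
    (1, 'a'): {1, 2},   # nondeterministic: may leave {1}
    (2, 'a'): set(),    # blocked: no successors
}
Z = {(1, 'a')}          # current winning pairs; pi_X(Z) = {1}
print(cpre(T, Z))       # {(0, 'a')}: the only pair that surely lands in {1}
```

Note how nondeterminism is handled conservatively: a pair wins only if every successor is winning, and blocked pairs never win.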

Proposition 1. The following inclusion holds for any $i \in \{1, 2, \cdots, n\}$ and any $Z \subseteq \bar{X} \times \bar{U}$:

$$P_i^f(CPre_T(Z)) \subseteq CPre_{T_i}(P_i^f(Z)).$$

Proof. Consider an element $z_p \in P_i^f(CPre_T(Z))$. This implies that there exists $z \in \bar{X} \times \bar{U}$ such that $z \in CPre_T(Z)$ and $z_p = P_i^f(z)$. Consequently, $T_i(z_p) \neq \emptyset$ since $T(z) \neq \emptyset$. Also, since $z \in CPre_T(Z)$, then $T(z) \subseteq \pi_{\bar{X}}(Z)$. Now, recall how $T_i$ is constructed as a component-wise set of transitions in line 3 of Algorithm 2. Then, we conclude that $T_i(z_p) \subseteq \pi_{\bar{X}_i}(P_i^f(Z))$. By this, we already satisfy the requirements in (4) such that $z_p = (\bar{x}, \bar{u}) \in CPre_{T_i}(P_i^f(Z))$.

Here, we consider reachability and invariance specifications given by the LTL formulae $\Diamond\psi$ and $\Box\psi$, respectively, where $\psi$ is a propositional formula over a set of atomic propositions $AP$. We first construct an initial winning set $Z_\psi = \{(\bar{x}, \bar{u}) \in \bar{X} \times \bar{U} \mid L(\bar{x}, \bar{u}) \models \psi\}$, where $L : \bar{X} \times \bar{U} \to 2^{AP}$ is some labeling function. In the rest of this section, we focus on reachability specifications for the sake of space; a similar discussion can be pursued for invariance specifications.

Traditionally, to synthesize symbolic controllers for the reachability specification $\Diamond\psi$, a monotone function:

$$G(Z) := CPre_T(Z) \cup Z_\psi \qquad (5)$$

is employed to iteratively compute $Z_\infty = \mu Z.G(Z)$ starting with $Z_0 = \emptyset$. Here, notation from the $\mu$-calculus is used, with $\mu$ as the minimal fixed-point operator and $Z \subseteq \bar{X} \times \bar{U}$ as the operated variable representing the set of winning pairs $(\bar{x}, \bar{u}) \in \bar{X} \times \bar{U}$. The set $Z_\infty \subseteq \bar{X} \times \bar{U}$ represents the set of final winning pairs, obtained after a finite number of iterations. Interested readers can find more details in [5] and the references therein. The transition map $T$ is used in this fixed-point computation and, hence, the technique suffers directly from the state-explosion


Algorithm 4: Traditional serial algorithm to synthesize $C$ enforcing the specification $\Diamond\psi$.

Input: Initial winning domain $Z_\psi \subset \bar{X} \times \bar{U}$ and $T$
Output: A controller $C : \bar{X}_w \to 2^{\bar{U}}$.

1: $Z_\infty \leftarrow \emptyset$ ; // Initialize a running win-pairs set
2: $\bar{X}_w \leftarrow \emptyset$ ; // Initialize a running win-states set
3: do
4:   $Z_0 \leftarrow Z_\infty$ ; // Current win-pairs gets latest win-pairs
5:   $Z_\infty \leftarrow CPre_T(Z_0) \cup Z_\psi$ ; // Update the running win-pairs set
6:   $D \leftarrow Z_\infty \setminus Z_0$ ; // Separate the new win-pairs
7:   foreach $\bar{x} \in \pi_{\bar{X}}(D)$ with $\bar{x} \notin \bar{X}_w$ do
8:     $\bar{X}_w \leftarrow \bar{X}_w \cup \{\bar{x}\}$ ; // Add new win-states
9:     $C(\bar{x}) := \{\bar{u} \in \bar{U} \mid (\bar{x}, \bar{u}) \in D\}$ ; // Add new control actions
10:  end
11: while $Z_\infty \neq Z_0$;

problem. Algorithm 4 depicts a traditional serial algorithm for symbolic controller synthesis for reachability specifications. The synthesized controller is a map $C : \bar{X}_w \to 2^{\bar{U}}$, where $\bar{X}_w \subseteq \bar{X}$ represents a winning (a.k.a. controllable) set of states. The map $C$ is defined as: $C(\bar{x}) = \{\bar{u} \in \bar{U} \mid (\bar{x}, \bar{u}) \in \mu^{j(\bar{x})} Z.G(Z)\}$, where $j(\bar{x}) = \inf\{i \in \mathbb{N} \mid \bar{x} \in \pi_{\bar{X}}(\mu^i Z.G(Z))\}$, and $\mu^i Z.G(Z)$ represents the set of state-input pairs at the end of the $i$-th iteration of the minimal fixed-point computation.
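The minimal fixed-point computation $\mu Z.G(Z)$ with $G(Z) = CPre_T(Z) \cup Z_\psi$ can be sketched as a loop that stops when the winning set stabilizes; the toy chain system below is an assumption for illustration:

```python
def cpre(T, Z):
    """CPre_T(Z) as in (3), over an explicit transition dictionary."""
    proj_Z = {x for (x, u) in Z}
    return {(x, u) for (x, u), succ in T.items() if succ and succ <= proj_Z}

def reach_fixed_point(T, Z_psi):
    """Iterate Z <- CPre_T(Z) | Z_psi until convergence (mu Z. G(Z))."""
    Z = set()
    while True:
        Z_new = cpre(T, Z) | Z_psi
        if Z_new == Z:
            return Z
        Z = Z_new

# Chain 0 -> 1 -> 2 with a single input; the target pair is (2, 'a').
T = {(0, 'a'): {1}, (1, 'a'): {2}, (2, 'a'): {2}}
Z_inf = reach_fixed_point(T, {(2, 'a')})
print(sorted(Z_inf))  # [(0, 'a'), (1, 'a'), (2, 'a')]: every state reaches 2
```

Each iteration re-evaluates $CPre_T$ over the whole abstraction, which is exactly where the monolithic approach pays the exponential cost that the sparsity-aware version avoids.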

A parallel implementation that mitigates the complexity of the fixed-point computation is introduced in [4, Algorithm 4]. Briefly, for a set $Z \subseteq \bar{X} \times \bar{U}$, each iteration of $\mu Z.G(Z)$ is computed via a parallel traversal of the complete space $\bar{X} \times \bar{U}$. Each PE is assigned a disjoint set of state-input pairs from $\bar{X} \times \bar{U}$, and it declares whether or not each pair belongs to the next set of winning pairs (i.e., $G(Z)$). Although the algorithm scales well w.r.t. $P$, it still suffers from the state-explosion problem for a fixed $P$. We present a modified algorithm that utilizes sparsity to reduce the parallel search space at each iteration.

First, we introduce the component-wise monotone function:

$$G_i(Z) := CPre_{T_i}(P_i^f(Z)) \cup P_i^f(Z_\psi), \qquad (6)$$

for any $i \in \{1, 2, \cdots, n\}$ and any $Z \subseteq \bar{X} \times \bar{U}$. Now, an iteration of the sparsity-aware fixed-point can be summarized by the following three steps:

(1) Compute the component-wise sets $G_i(Z)$. Note that $G_i(Z)$ lives in the set $P_i^f(\bar{X}) \times P_i^f(\bar{U})$.

(2) Recover a monolithic set from each $G_i(Z)$, $i \in \{1, 2, \cdots, n\}$, using the map $D_i^f$, and intersect these sets. Formally, we denote this intersection by:

$$[G(Z)] := \bigcap_{i=1}^{n} D_i^f(G_i(Z)). \qquad (7)$$

Note that $[G(Z)]$ is an over-approximation of the monolithic set $G(Z)$, which we prove in Theorem 1.

(3) Now, based on the next theorem, there is no need for a parallel search in $\bar{X} \times \bar{U}$, and the search can be done in $[G(Z)]$. More precisely, the search for new elements of the next winning set can be done in $[G(Z)] \setminus Z$.

Theorem 1. Consider an abstract system $\bar{\Sigma} = (\bar{X}, \bar{U}, T)$. For any set $Z \subseteq \bar{X} \times \bar{U}$, $G(Z) \subseteq [G(Z)]$.

Proof. Consider any element $z \in G(Z)$. This implies that $z \in Z$, $z \in Z_\psi$, or $z \in CPre_T(Z)$. We show that $z \in [G(Z)]$ in each of these cases.

Case 1 [$z \in Z$]: By the definition of the map $P_i^f$, we know that $P_i^f(z) \in P_i^f(Z)$. By the monotonicity of the map $G_i$, $P_i^f(Z) \subseteq G_i(Z)$. This implies that $P_i^f(z) \in G_i(Z)$. Also, by the definition of the map $D_i^f$, we know that $z \in D_i^f(G_i(Z))$. The above argument holds for any component $i \in \{1, 2, \cdots, n\}$, which implies that $z \in \bigcap_{i=1}^{n} D_i^f(G_i(Z)) = [G(Z)]$.

Case 2 [$z \in Z_\psi$]: The same argument used for the previous case applies here as well.

Case 3 [$z \in CPre_T(Z)$]: We apply the map $P_i^f$ to both sides of the inclusion. We then have $P_i^f(z) \in P_i^f(CPre_T(Z))$. Using Proposition 1, we know that $P_i^f(CPre_T(Z)) \subseteq CPre_{T_i}(P_i^f(Z))$. This implies that $P_i^f(z) \in CPre_{T_i}(P_i^f(Z))$. From (6) we obtain that $P_i^f(z) \in G_i(Z)$ and, consequently, $z \in D_i^f(G_i(Z))$. The above argument holds for any component $i \in \{1, 2, \cdots, n\}$. This, consequently, implies that $z \in \bigcap_{i=1}^{n} D_i^f(G_i(Z)) = [G(Z)]$, which completes the proof.

Remark 2. An initial computation of the controllable predecessor is done component-wise in step (1), which utilizes the sparsity of $\bar{\Sigma}$ and can easily be implemented in parallel. Only in step (3) is a monolithic search required. However, unlike the implementation in [4, Algorithm 4], the search is performed only over a subset of $\bar{X} \times \bar{U}$, namely $[G(Z)] \setminus Z$.

Note that dynamical systems possess a locality property (i.e., starting from nearby states, successor states are also nearby), and an initial winning set grows incrementally with each fixed-point iteration. This makes the set $[G(Z)] \setminus Z$ relatively small w.r.t. $|\bar{X} \times \bar{U}|$. We clarify this and the result of Theorem 1 with a small example.

4.1 An Illustrative Example

To illustrate the proposed sparsity-aware synthesis technique, we provide a simple two-dimensional example. Consider a robot described by the following difference equation:

$$\begin{bmatrix} x_1^+ \\ x_2^+ \end{bmatrix} = \begin{bmatrix} x_1 + \tau u_1 \\ x_2 + \tau u_2 \end{bmatrix},$$


Fig. 4: A visualization of one arbitrary fixed-point iteration of the sparsity-aware synthesis technique for a two-dimensional robot system, showing the sets $P_1^f(Z)$, $G_1(Z)$, $P_2^f(Z)$, $G_2(Z)$, $D_1^f(G_1(Z))$, $D_2^f(G_2(Z))$, $G(Z)$, and $[G(Z)]$ over $\bar{X}_1 \times \bar{X}_2$.

Fig. 5: The evolution of the fixed-point sets for the robot example at the end of fixed-point iterations 5 (left) and 228 (right): the target set $Z_\psi$, the obstacles to be avoided, the undiscovered state space, the latest winning set $Z$, the search space $[G(Z)] \setminus Z$ (orange area), and the rectangular search region for the next $Z$ (black box). A video of all iterations can be found at: http://goo.gl/aegznf.

where $(x_1, x_2) \in \bar{X} := \bar{X}_1 \times \bar{X}_2$ is a state vector and $(u_1, u_2) \in \bar{U} := \bar{U}_1 \times \bar{U}_2$ is an input vector. Figure 4 visualizes the sets involved in one fixed-point iteration of the sparsity-aware symbolic controller synthesis technique. The set $Z_\psi$ is the initial winning set (a.k.a. the target set for reachability specifications) constructed from a given specification (e.g., a region in $\bar{X}$ to be reached by the robot), and $Z$ is the winning set of the current fixed-point iteration. For simplicity, all sets are projected on $\bar{X}$; readers can think of $\bar{U}$ as an additional dimension perpendicular to the surface of this paper.

As depicted in Figure 4, the next winning set $G(Z)$ is over-approximated by $[G(Z)]$, as a result of Theorem 1. Algorithm 4 in [4] searches for $G(Z)$ in $(\bar{X}_1 \times \bar{X}_2) \times (\bar{U}_1 \times \bar{U}_2)$. This work suggests searching for $G(Z)$ in $[G(Z)] \setminus Z$ instead.
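For this robot, $[G(Z)]$ is simply the product of the per-axis expansions of $Z$, since each $x_i^+$ depends only on $x_i$ and $u_i$. The sketch below compares the full search space with $[G(Z)] \setminus Z$; the grid size and the one-cell-per-step reachability are assumptions for illustration:

```python
from itertools import product

N = 20  # cells per axis (assumption)

def expand(axis_cells):
    """Per-axis G_i: a cell joins if it can step into the winning axis-set."""
    grown = set(axis_cells)
    for c in axis_cells:  # one-cell motion per step (assumption)
        grown |= {max(c - 1, 0), min(c + 1, N - 1)}
    return grown

# Current winning set Z: a 3x3 block of cells (projected on X, inputs omitted).
Z = set(product(range(8, 11), range(8, 11)))
G1 = expand({x1 for (x1, x2) in Z})  # step (1), component 1
G2 = expand({x2 for (x1, x2) in Z})  # step (1), component 2
G_over = set(product(G1, G2))        # step (2): [G(Z)] = D1(G1) & D2(G2)
search = G_over - Z                  # step (3): search only [G(Z)] \ Z
print(len(search), N * N)  # 16 candidate cells instead of 400
```

The 9-cell winning block grows a one-cell ring of 16 candidates, so the monolithic 400-cell sweep shrinks to a 16-cell search, mirroring the locality argument of Remark 2.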

4.2 A sparsity-aware parallel algorithm for symbolic controller synthesis

We propose Algorithm 5 to parallelize sparsity-aware controller synthesis. The main difference between it and Algorithm 4 in [4] lies in lines 9-12, which correspond to computing $[G(Z)]$ at each iteration of the fixed-point computation.


Algorithm 5: Proposed parallel sparsity-aware algorithm to synthesize $C$ enforcing the specification $\Diamond\psi$.

Input: Initial winning domain $Z_\psi \subset \bar{X} \times \bar{U}$ and $T$
Output: A controller $C : \bar{X}_w \to 2^{\bar{U}}$.

1: $Z_\infty \leftarrow \emptyset$ ; // Initialize a shared win-pairs set
2: $\bar{X}_w \leftarrow \emptyset$ ; // Initialize a shared win-states set
3: do
4:   $Z_0 \leftarrow Z_\infty$ ; // Current win-pairs set gets latest win-pairs
5:   for all $p \in \{1, 2, \cdots, P\}$ do
6:     $Z_{loc}^p \leftarrow \emptyset$ ; // Initialize a local win-pairs set
7:     $\bar{X}_{w,loc}^p \leftarrow \emptyset$ ; // Initialize a local win-states set
8:   end
9:   $[G] \leftarrow \bar{X} \times \bar{U}$ ; // Initialize $[G(Z)]$
10:  for all $i \in \{1, 2, \cdots, n\}$ do
11:    $[G] \leftarrow [G] \cap D_i^f(G_i(Z_\infty))$ ; // Over-approximate
12:  end
13:  for all $(\bar{x}, \bar{u}) \in [G] \setminus Z_\infty$ in parallel with index $j$ do
14:    $p = I(j)$ ; // Identify a PE
15:    $Posts \leftarrow Q \circ K_{loc}^p(\bar{x}, \bar{u})$ ; // Compute successor states
16:    if $Posts \subseteq Z_0 \cup Z_\psi$ then
17:      $Z_{loc}^p \leftarrow Z_{loc}^p \cup \{(\bar{x}, \bar{u})\}$ ; // Record a winning pair
18:      $\bar{X}_{w,loc}^p \leftarrow \bar{X}_{w,loc}^p \cup \{\bar{x}\}$ ; // Record a winning state
19:      if $\bar{x} \notin \pi_{\bar{X}}(Z_0)$ then
20:        $C(\bar{x}) \leftarrow C(\bar{x}) \cup \{\bar{u}\}$ ; // Record a control action
21:      end
22:    end
23:  end
24:  for all $p \in \{1, 2, \cdots, P\}$ do
25:    $Z_\infty \leftarrow Z_\infty \cup Z_{loc}^p$ ; // Update the shared win-pairs set
26:    $\bar{X}_w \leftarrow \bar{X}_w \cup \bar{X}_{w,loc}^p$ ; // Update the shared win-states set
27:  end
28: while $Z_\infty \neq Z_0$;

Line 13 is modified to do the parallel search inside [G(Z)] \ Z instead of X̄ × Ū in the original algorithm. The rest of the algorithm is well documented in [4].
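To make the fixed-point structure concrete, the following is a minimal serial Python sketch of the reachability fixed point, not the pFaces implementation: sets are plain Python sets, a single PE is assumed, and [G(Z)] is modeled simply as the pairs with at least one successor in the current winning states (every truly winning pair has such a successor, so this is a safe over-approximation of the search region).

```python
def synthesize_reach(X, U, post, Z_psi):
    """Serial sketch of the sparsity-aware reachability fixed point.

    X, U: finite state/input sets; post(x, u): set of successor states;
    Z_psi: initial winning (state, input) pairs (the target set).
    Returns a controller mapping each newly won state to its inputs.
    """
    Z = set()   # Z_inf: winning (state, input) pairs found so far
    C = {}      # controller: state -> set of control inputs
    while True:
        Z0 = set(Z)
        win_states = {x for (x, _) in Z0 | Z_psi}
        # search only a stand-in for [G(Z)] \ Z, not the whole of X x U
        frontier = {(x, u) for x in X for u in U
                    if post(x, u) & win_states} - Z0
        for (x, u) in frontier:
            if post(x, u) <= win_states:           # Posts within Z0 or Z_psi
                Z.add((x, u))
                if x not in {s for (s, _) in Z0}:  # state not yet won
                    C.setdefault(x, set()).add(u)
        if Z == Z0:                                # fixed point reached
            return C
```

On a toy line of six states with inputs {-1, +1} and target state 3, the synthesized controller steers every state toward 3.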

The algorithm is implemented in pFaces as updated versions of the kernels GBFP and GBFPmin [4]. We synthesize a reachability controller for the robot example presented earlier. Figure 5 shows an arena with obstacles depicted as red boxes. It depicts the results at fixed-point iterations 5 and 228. The blue box indicates the target set (i.e., Zψ). The purple region indicates the current winning states, and the orange region indicates [G(Z)] \ Z. The black box is the next search region, which is a rectangular over-approximation of [G(Z)] \ Z. We over-approximate [G(Z)] \ Z with such a rectangle because it is straightforward for PEs in pFaces to work with rectangular parallel jobs. The synthesis problem is solved in 322 fixed-point iterations. Unlike the parallel algorithm in [4], which searches for the next winning region inside X̄ × Ū at each iteration, the implementation of the proposed algorithm reduces the parallel search by an average of 87% by searching only inside the black boxes in each iteration.
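The rectangular over-approximation behind the black boxes is an interval hull. A minimal Python sketch, under the assumption that the candidate pairs are available as explicit tuples of grid coordinates (pFaces uses its own internal representation):

```python
def bounding_box(points):
    """Smallest axis-aligned hyper-rectangle containing `points`.

    Returns per-dimension (lo, hi) bounds; this is the rectangular
    over-approximation handed to PEs as a regular parallel job.
    """
    dims = len(next(iter(points)))
    return [(min(p[d] for p in points), max(p[d] for p in points))
            for d in range(dims)]
```

For example, the hull of {(1, 2), (3, 0), (2, 5)} is [(1, 3), (0, 5)]; the hull may contain points outside the original set, which is exactly the over-approximation accepted here for the sake of regular parallel jobs.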

14 M. Khaled, E. S. Kim, M. Arcak, and M. Zamani

Fig. 6: An autonomous vehicle trying to avoid a sudden obstacle on the highway.

5 Case Study: Autonomous Vehicle

We consider a vehicle described by the following 7-dimensional discrete-time single-track (ST) model [1]:
\[
\begin{aligned}
x_1^+ &= x_1 + \tau x_4 \cos(x_5 + x_7),\\
x_2^+ &= x_2 + \tau x_4 \sin(x_5 + x_7),\\
x_3^+ &= x_3 + \tau u_1,\\
x_4^+ &= x_4 + \tau u_2,\\
x_5^+ &= x_5 + \tau x_6,\\
x_6^+ &= x_6 + \frac{\tau \mu m}{I_z (l_r + l_f)} \Big( l_f C_{S,f} (g l_r - u_2 h_{cg})\, x_3 + \big( l_r C_{S,r} (g l_f + u_2 h_{cg}) - l_f C_{S,f} (g l_r - u_2 h_{cg}) \big) x_7\\
&\qquad - \big( l_f^2\, C_{S,f} (g l_r - u_2 h_{cg}) + l_r^2\, C_{S,r} (g l_f + u_2 h_{cg}) \big) \frac{x_6}{x_4} \Big),\\
x_7^+ &= x_7 + \tau \bigg( \frac{\mu}{x_4 (l_f + l_r)} \Big( C_{S,f} (g l_r - u_2 h_{cg})\, x_3 - \big( C_{S,r} (g l_f + u_2 h_{cg}) + C_{S,f} (g l_r - u_2 h_{cg}) \big) x_7\\
&\qquad + \big( C_{S,r} (g l_f + u_2 h_{cg})\, l_r - C_{S,f} (g l_r - u_2 h_{cg})\, l_f \big) \frac{x_6}{x_4} \Big) - x_6 \bigg),
\end{aligned}
\]

where x1 and x2 are the position coordinates, x3 is the steering angle, x4 is the heading velocity, x5 is the yaw angle, x6 is the yaw rate, and x7 is the slip angle. Variables u1 and u2 are inputs that control the steering angle and heading velocity, respectively. Input and state variables are all members of R. The model takes tire slip into account, making it a good candidate for studies that consider planning of evasive maneuvers very close to the physical limits. We consider an update period τ = 0.1 seconds and the following parameters for a BMW 320i car: m = 1093 [kg] as the total mass of the vehicle, μ = 1.048 as the friction coefficient, lf = 1.156 [m] as the distance from the front axle to the center of gravity (CoG), lr = 1.422 [m] as the distance from the rear axle to the CoG, hcg = 0.574 [m] as the height of the CoG, Iz = 1791.0 [kg m²] as the moment of inertia of the entire mass around the z axis, CS,f = 20.89 [1/rad] as the front cornering stiffness coefficient, and CS,r = 19.89 [1/rad] as the rear cornering stiffness coefficient.
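The update equations can be transcribed directly into code. The following Python sketch is illustrative only (it is not part of the paper's toolchain): the gravitational constant g = 9.81 m/s² is an assumption, since g appears in the model but its value is not stated in the text, and the trailing -x6 term of the x7 update is taken inside the τ multiplier, consistent with the continuous-time model in [1].

```python
import math

# BMW 320i parameters from the case study; g = 9.81 is an assumption.
m, mu, lf, lr = 1093.0, 1.048, 1.156, 1.422
h_cg, Iz = 0.574, 1791.0
CSf, CSr = 20.89, 19.89
g, tau = 9.81, 0.1

def st_step(x, u):
    """One discrete-time step of the 7-D single-track model (sketch)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    u1, u2 = u
    Ff = CSf * (g * lr - u2 * h_cg)   # front cornering term
    Fr = CSr * (g * lf + u2 * h_cg)   # rear cornering term
    x6n = x6 + tau * (mu * m / (Iz * (lr + lf))) * (
        lf * Ff * x3 + (lr * Fr - lf * Ff) * x7
        - (lf**2 * Ff + lr**2 * Fr) * x6 / x4)
    x7n = x7 + tau * ((mu / (x4 * (lf + lr))) * (
        Ff * x3 - (Fr + Ff) * x7 + (Fr * lr - Ff * lf) * x6 / x4) - x6)
    return (x1 + tau * x4 * math.cos(x5 + x7),
            x2 + tau * x4 * math.sin(x5 + x7),
            x3 + tau * u1,
            x4 + tau * u2,
            x5 + tau * x6,
            x6n, x7n)
```

As a sanity check, with zero inputs and zero angles the vehicle simply advances along x1 by τ·x4 per step while the lateral dynamics stay at rest.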

To construct an abstract system Σ̄, we consider a bounded version of the state set X := [0, 84] × [0, 6] × [-0.18, 0.8] × [12, 21] × [-0.5, 0.5] × [-0.8, 0.8] × [-0.1, 0.1], a state quantization vector ηX = (1.0, 1.0, 0.01, 3.0, 0.05, 0.1, 0.02), an input set U := [-0.4, 0.4] × [-4, 4], and an input quantization vector ηU = (0.1, 0.5).
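The size of the resulting abstraction can be estimated from the bounds and quantization vectors. The sketch below assumes round(width/η) cells per dimension; this is only an approximation of pFaces' actual symbol-counting convention (Table 2 reports |X̄ × Ū| = 52.9 × 10⁹ for the full state set, while this estimate gives roughly 6.1 × 10¹⁰, i.e., the same order of magnitude).

```python
import math

# Bounds and quantization vectors from the case study
X_BOUNDS = [(0, 84), (0, 6), (-0.18, 0.8), (12, 21),
            (-0.5, 0.5), (-0.8, 0.8), (-0.1, 0.1)]
ETA_X = (1.0, 1.0, 0.01, 3.0, 0.05, 0.1, 0.02)
U_BOUNDS = [(-0.4, 0.4), (-4, 4)]
ETA_U = (0.1, 0.5)

def grid_size(bounds, eta):
    """Estimated number of abstract symbols, assuming round(width / eta)
    cells per dimension (the exact convention in pFaces may differ)."""
    return math.prod(round((hi - lo) / e) for (lo, hi), e in zip(bounds, eta))
```

For instance, `grid_size(X_BOUNDS, ETA_X) * grid_size(U_BOUNDS, ETA_U)` illustrates why the exponential growth in the number of state variables makes the monolithic search space so large.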

We are interested in an autonomous operation of the vehicle on a highway. Consider a situation on a two-lane highway where an accident suddenly happens in the lane in which our vehicle is traveling. The vehicle's controller should find a safe maneuver to avoid crashing into the newly appearing obstacle. Figure 6 depicts such a situation. We over-approximate the obstacle with the hyper-box [28, 50] × [0, 3] × [-0.18, 0.8] × [12, 21] × [-0.5, 0.5] × [-0.8, 0.8] × [-0.1, 0.1].
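Testing whether an abstract state falls inside this obstacle hyper-box is a per-dimension interval check. A minimal sketch (the box copies the bounds above; the interface itself is hypothetical and not the pFaces API):

```python
def in_hyperbox(x, box):
    """True if point x lies in the axis-aligned hyper-box `box`,
    given as a list of (lo, hi) intervals, one per dimension."""
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, box))

# Obstacle over-approximation from the case study
OBSTACLE = [(28, 50), (0, 3), (-0.18, 0.8), (12, 21),
            (-0.5, 0.5), (-0.8, 0.8), (-0.1, 0.1)]
```

States inside this box are excluded from the winning region, which forces the synthesized controller to steer around the blocked stretch of the lane.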


Table 1: Used HW configurations for testing the proposed technique.

Identifier  Description                                          PEs  Frequency
HW1         Local machine: Intel Xeon E5-1620                    8    3.6 GHz
HW2         AWS instance p3.16xlarge: Intel(R) Xeon(R) E5-2686   64   2.3 GHz
HW3         AWS instance c5.18xlarge: Intel Xeon Platinum 8000   72   3.6 GHz

Table 2: Results obtained after running the experiments EX1 and EX2.

EX1 (Memory = 22.1 GB, |X̄ × Ū| = 23.8 × 10⁹):
HW   Time (pFaces/GBFPm)   Time (this work)   Speedup
HW2  2.1 hours             0.5 hours          4.2x
HW3  1.9 hours             0.4 hours          4.7x

EX2 (Memory = 49.2 GB, |X̄ × Ū| = 52.9 × 10⁹):
HW   Time (pFaces/GBFPm)   Time (this work)   Speedup
HW1  ≥24 hours             8.7 hours          ≥2.7x
HW2  8.1 hours             3.2 hours          2.5x

We run the implementation on different HW configurations. We use a local machine and instances from Amazon Web Services (AWS) cloud-computing services. Table 1 summarizes those configurations. We also run two different experiments. For the first one (denoted by EX1), the goal is only to avoid crashing into the obstacle. We use a smaller version of the original state set X := [0, 50] × [0, 6] × [-0.18, 0.8] × [11, 19] × [-0.5, 0.5] × [-0.8, 0.8] × [-0.1, 0.1]. The second one (denoted by EX2) targets the full-sized highway window (84 meters), and the goal is to avoid colliding with the obstacle and to get back to the right lane. Table 2 reports the obtained results. The reported times are for constructing finite abstractions of the vehicle and synthesizing symbolic controllers. Note that our results easily outperform the initial kernels in pFaces, which themselves outperform serial implementations with speedups of up to 30000x, as reported in [4]. The speedup in EX1 is higher because the obstacle occupies a relatively larger volume of the state space. This makes [G(Z)] \ Z smaller and, hence, faster for our implementation to search.
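The speedup column in Table 2 is simply the ratio of the two reported times; the values appear to be truncated (not rounded) to one decimal place. A quick arithmetic check:

```python
import math

def speedup(t_base, t_new, digits=1):
    """Speedup factor t_base / t_new, truncated to `digits` decimals
    (truncation, not rounding, appears to match Table 2)."""
    scale = 10 ** digits
    return math.floor(t_base / t_new * scale) / scale
```

For example, 1.9 hours versus 0.4 hours gives 4.75, which truncates to the reported 4.7x.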

6 Conclusion and Future Work

A unified approach that utilizes the sparsity of the interconnection structure in dynamical systems is introduced for the construction of finite abstractions and the synthesis of their symbolic controllers. In addition, parallel algorithms are designed to target HPC platforms, and they are implemented within the framework of pFaces. The results show remarkable reductions in computation times. We showed the effectiveness of the results on a 7-dimensional model of a BMW 320i car by designing a controller to keep the car in the travel lane unless it is blocked.

The technique still suffers from the memory inefficiency inherited from pFaces. More specifically, the data used during the computation of abstractions and the synthesis of symbolic controllers is not encoded. Using raw data requires larger amounts of memory. Future work will focus on designing distributed data structures that achieve a balance between memory size and access time.


References

1. Althoff, M.: CommonRoad: Vehicle models (version 2018a). Tech. rep., Technical University of Munich, 85748 Garching, Germany (October 2018), https://commonroad.in.tum.de
2. Gruber, F., Kim, E.S., Arcak, M.: Sparsity-aware finite abstraction. In: Proceedings of the 56th IEEE Annual Conference on Decision and Control (CDC). pp. 2366-2371. IEEE, USA (Dec 2017). https://doi.org/10.1109/CDC.2017.8263995
3. Khaled, M., Rungger, M., Zamani, M.: SENSE: Abstraction-based synthesis of networked control systems. In: Electronic Proceedings in Theoretical Computer Science (EPTCS), 272. pp. 65-78. Open Publishing Association (OPA), 111 Cooper Street, Waterloo, NSW 2017, Australia (June 2018). https://doi.org/10.4204/EPTCS.272.6, http://www.hcs.ei.tum.de/software/sense
4. Khaled, M., Zamani, M.: pFaces: An acceleration ecosystem for symbolic control. In: Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control. HSCC '19, ACM, New York, NY, USA (2019). https://doi.org/10.1145/3302504.3311798
5. Maler, O., Pnueli, A., Sifakis, J.: On the synthesis of discrete controllers for timed systems. In: Mayr, E.W., Puech, C. (eds.) 12th Annual Symposium on Theoretical Aspects of Computer Science (STACS 95). pp. 229-242. Springer Berlin Heidelberg, Berlin, Heidelberg (1995). https://doi.org/10.1007/3-540-59042-0_76
6. Mazo, M., Davitian, A., Tabuada, P.: Pessoa: A tool for embedded controller synthesis. In: Touili, T., Cook, B., Jackson, P. (eds.) Computer Aided Verification. pp. 566-569. Springer Berlin Heidelberg, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_49
7. Mouelhi, S., Girard, A., Gössler, G.: CoSyMA: A tool for controller synthesis using multi-scale abstractions. In: Proceedings of the 16th International Conference on Hybrid Systems: Computation and Control. pp. 83-88. HSCC '13, ACM, New York, NY, USA (2013). https://doi.org/10.1145/2461328.2461343
8. Reissig, G., Weber, A., Rungger, M.: Feedback refinement relations for the synthesis of symbolic controllers. IEEE Transactions on Automatic Control 62(4), 1781-1796 (April 2017). https://doi.org/10.1109/TAC.2016.2593947
9. Rungger, M., Zamani, M.: SCOTS: A tool for the synthesis of symbolic controllers. In: Proceedings of the 19th International Conference on Hybrid Systems: Computation and Control. pp. 99-104. HSCC '16, ACM, New York, NY, USA (2016). https://doi.org/10.1145/2883817.2883834
10. Tabuada, P.: Verification and Control of Hybrid Systems: A Symbolic Approach. Springer, USA (2009). https://doi.org/10.1007/978-1-4419-0224-5
11. Zamani, M., Pola, G., Mazo Jr., M., Tabuada, P.: Symbolic models for nonlinear control systems without stability assumptions. IEEE Transactions on Automatic Control 57(7), 1804-1809 (July 2012). https://doi.org/10.1109/TAC.2011.2176409