University of Pennsylvania
ScholarlyCommons
Departmental Papers (CIS), Department of Computer & Information Science
6-12-2008

Robust and Sustainable Schedulability Analysis of Embedded Software

Madhukar Anand
University of Pennsylvania, anandm@cis.upenn.edu
Insup Lee
University of Pennsylvania, lee@cis.upenn.edu

Postprint version. Published in Proceedings of the ACM SIGPLAN/SIGBED 2008 Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES 2008), June 2008.
This paper is posted at ScholarlyCommons. http://repository.upenn.edu/cis_papers/372
For more information, please contact repository@pobox.upenn.edu.
Robust and Sustainable Schedulability Analysis of Embedded Software∗
Madhukar Anand and Insup Lee
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, 19104
{anandm,lee}@cis.upenn.edu
Abstract

For real-time systems, most of the analysis involves efficient or exact schedulability checking. While this is important, analysis is often based on the assumption that the task parameters such as execution requirements and inter-arrival times between jobs are known exactly. In most cases, however, only a worst-case estimate of these quantities is available at the time of analysis. It is therefore imperative that schedulability analysis hold for better parameter values (Sustainable Analysis). On the other hand, if the task or system parameters turn out to be worse off, then the analysis should tolerate some deterioration (Robust Analysis). Robust analysis is especially important, because the implication of task schedulability is often weakened in the presence of optimizations that are performed on its code, or dynamic system parameters.

In this work, we define and address sustainability and robustness questions for analysis of embedded real-time software that is modeled by conditional real-time tasks. Specifically, we show that, while the analysis is sustainable for changes in the task such as lower job execution times and increased relative deadlines, it is not the case for code changes such as job splitting and reordering. We discuss the impact of these results in the context of common compiler optimizations, and then develop robust schedulability techniques for operations where the original analysis is not sustainable.
Categories and Subject Descriptors
D.4.7 [Operating Systems]: Organization and Design – Real-time and embedded systems

General Terms
Design, Theory

Keywords
Schedulability Analysis, Sustainable schedulability analysis, Robust schedulability analysis
1. Introduction

∗This research was supported in part by FA9550-07-1-0216, NSF CNS-0509143, NSF CNS-0720703, and NSF CNS-0720518.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
LCTES'08, June 12–13, 2008, Tucson, Arizona, USA.
Copyright © 2008 ACM 978-1-60558-104-0/08/06...$5.00

Work on schedulability analysis of tasks is mostly focused on efficiency of schedulability checking, or exactness of the result. Most of the techniques proposed make use of worst-case estimates of task parameters such as execution requirements and inter-arrival times between jobs. While it is important that the task set be schedulable in the worst case, it is also important that the analysis
be sustainable if the task parameters turn out to be better than those considered. By sustainability of schedulability analysis, we refer to the property that a system remains schedulable even if some of the parameters of the task or the system turn out to be better than what the analysis accounted for. For instance, if the task finishes execution before its worst-case execution time, or if a job in the task is replaced by one that requires a smaller amount of resource, then the original analysis should not be affected. Although it may seem obvious that the analysis should hold if things get better, it is not always the case, even for simple tasks. For example, Baruah and Burns (Baruah and Burns 2006; Burns and Baruah 2008) have shown that the Leung and Whitehead test for fixed-priority scheduling of periodic tasks (Leung and Whitehead 1982) is not sustainable in many cases, including the case where task periods increase.
Another area of concern with the analysis is tolerating a worsening of parameters. Scaling of processor speeds, the presence of variable jitter between jobs, and additional delays induced due to dependencies all affect the analysis. Task parameters could also change from those considered in the analysis. For example, some classical compiler optimizations, such as those targeted towards reducing execution requirements or memory use in real-time programs, could cause problems as far as analysis of meeting deadlines is concerned. In fact, there are cases where, even though the optimization reduces overall execution requirements, the system may not be schedulable (as explained in Sec. 3). It is also possible that the task set remains schedulable after the optimization, but the original analysis does not hold. Past research effort has focused on ensuring that compiler optimizations are safe (e.g., (Marlowe 1992; Younis et al. 1996)), and on techniques for reducing code size without affecting the real-time constraints (e.g., (Lee et al. 2008)). In contrast, we take the view that if the task parameters turn out to be worse off, then the analysis should be robust enough to tolerate some deterioration. Obviously, no analysis can permit an arbitrary deterioration of parameters, and consideration of higher deterioration leads to more pessimistic results in the analysis.
In this paper, we develop a framework that takes as input the space of all anticipated changes to inter-arrival times and deadlines of tasks due to code changes such as job splitting and reordering, and generates as output the maximum possible load under those modifications. The idea is that, if this maximum load can be supported by the resource supply, then the analysis will be robust and sustainable under any actual modification of the task.

Some of the specific questions in the context of robustness of analysis include: (1) can some of the jobs in a task change by way of increased execution times and still remain schedulable? If so, what is the maximum change that can be tolerated without affecting schedulability? (2) given a set of real-time tasks, what is a priority assignment for tolerating the maximum amount of additional interference from the system? (3) if some of the task parameters such as inter-arrival times between jobs change at run-time, how can schedulability still be guaranteed? and, (4) how do compiler optimizations affect schedulability of the task? In other words, can some operations be identified that are deemed safe with respect to robust schedulability analysis?
In this paper, we address questions of sustainability and robustness of analysis for embedded software modeled as conditional real-time tasks. Specifically, we consider the task to be modeled as a Recurring branching Task with Control variables (RTC) (see (Anand et al. 2008)). We expect that the task model, being a generalization of periodic, sporadic, and multiframe tasks, can capture requirements of real-world applications more closely. The RTC model also extends the recurring branching task model (Baruah 1998) with guarded transitions based on assignments to control variables.
We first define sustainability and robustness in the context of analysis with conditional real-time tasks, and later address some of the concerns. For sustainability of the analysis, we work with the following definition.

Definition 1 (Sustainability). A schedulability test for a scheduling policy is sustainable if any system modeled by a set of conditional tasks deemed schedulable by the schedulability test remains schedulable when the parameters of one or more individual task(s) are changed in any combination of the following ways: (i) decreased execution requirements of individual jobs, (ii) larger inter-arrival times, (iii) larger relative deadlines, and (iv) structural optimizations leading to overall lower resource requirements.

The definition of sustainability is adapted from a similar definition for simpler task models by Baruah and Burns (Baruah and Burns 2006).¹
With respect to robustness, we focus on answering questions (1) and (3) from the list of questions identified earlier. For question (1), we try to find the maximum possible constant scaling factor by which all the execution times can be scaled. For question (3), we formulate a constrained optimization problem to find the task parameters that present, in some sense, the worst possible load on the system, and prove that if the requirements of that task are met, then a task with any assignment of the parameters meeting the constraints is schedulable. Question (2) above relates to the concept of robust priority ordering, which was introduced by Davis and Burns (Davis and Burns 2007) in the context of periodic tasks. A similar technique is applicable to robust priority assignment for conditional tasks. We leave the prospect of answering the final question as future work.
1.1 Related work

Robustness has been introduced with various connotations in the literature (e.g., (Davis and Burns 2007; Marlowe 1992; Rhan and Liu 1994; Buttazzo 2006; Yerraballi et al. 1995; Buttazzo and Stankovic 1993)). For example, Davis and Burns (Davis and Burns 2007) have addressed the question of robust priority assignment for periodic tasks, so that the system can tolerate the maximum amount of additional system interference. Buttazzo and Stankovic (Buttazzo and Stankovic 1993) consider the problem of a robust scheduling strategy, i.e., designing a scheduling strategy that can react best under system overload. There is also a large body of work on analysis under scaling of processor speed (e.g., (Buttazzo 2006; Yerraballi et al. 1995)). While many of these questions are relevant in the context of conditional task models, our primary contribution in this work is on developing robust analysis techniques without having to modify the task set, the priority assignment of tasks, or the scheduling algorithm.

Sustainability of schedulability analysis was introduced by Baruah and Burns (Baruah and Burns 2006). In their work, they looked at sustainability of standard scheduling tests for systems modeled as periodic tasks and their extensions. Ha and Liu (Rhan and Liu 1994) define a property of scheduling algorithms that they call "predictability". Informally, a scheduling algorithm is predictable if any task system that is scheduled by it to meet all deadlines will continue to meet all deadlines if some jobs arrive earlier, or have later deadlines. Other related work on sustainability of scheduling analysis includes the work by Mok and Poon (Mok and Poon 2005) on non-preemptive scheduling of periodic task systems. There is also a lot of work on scheduling anomalies (e.g., (Buttazzo 2006; Andersson 2002; Chen et al. 2005; Racu and Ernst 2006)), especially in the multiprocessor case, under assumptions of variable processor speed, or increased execution times.

¹There are minor changes in terminology. For instance, while the definition in (Baruah and Burns 2006) uses "job release jitter", we refer to that quantity as job inter-arrival time.
Many of the questions about sustainability of analysis stem from the fact that the estimation of many task parameters such as inter-arrival times or worst-case execution times (WCET) is only approximate. In fact, many WCET estimation techniques, such as those based on abstract interpretation (Ferdinand et al. 2001), give us an upper bound. An alternative to using these scheduling tests is to use model-checking-based techniques (e.g., (Ben-Abdallah et al. 1998; Altisen et al. 2002)) to ascertain the schedulability of task sets. As these techniques produce exact analysis (as opposed to worst-case analysis), they do not have sustainability problems. They are, however, not robust for the same reason: any change to the task parameters after the analysis is performed invalidates the analysis.

Finally, in other related work, we would like to mention the Hierarchical Timing Language (HTL) (Ghosal et al. 2006), which has been proposed for real-time tasks. The key idea there is that of task refinement, which results in sustainable analysis by design.

The rest of the paper is organized as follows. We introduce the task models in Section 2. In Section 3, we analyze the sustainability of schedulability analysis with conditional tasks. In Section 4, we develop robust schedulability analysis techniques. We conclude in Section 5.
2. System model and definitions

Embedded real-time programs are typically implemented as some event-driven code embedded within an infinite loop. In many applications, the action to be taken upon the occurrence of external events depends on factors such as the current state of the system, values of external variables, etc. These systems have been traditionally modeled using well-known task frameworks such as periodic/sporadic tasks, task graphs, and timed automata. Periodic/sporadic tasks have been well studied with respect to schedulability, but lack the expressivity of task graphs and timed automata when it comes to modeling embedded real-time programs. On the other hand, task graphs and automata models are more expressive than periodic/sporadic tasks, but schedulability analysis for them is hard. In this work, we use a subclass of task graphs, called recurring branching tasks with control variables. These models are expressive enough to model conditional release of jobs within embedded programs, but at the same time allow efficient demand computation for schedulability checking.
2.1 Recurring branching Task with Control variables (RTC)

In this section, we define our task model and its execution semantics. Our system consists of multiple real-time components sharing a global resource (e.g., CPU, shared network, etc.) under a hierarchical scheduling policy. The shared resource demand of each component can be represented by a set of tasks, each comprising multiple simple tasks as the basis for demand.

The resource supply to the tasks is assumed to be provided according to a resource model (e.g., (Shin and Lee 2003; Lipari and Bini 2003; Feng and Mok 2002)). For example, a periodic resource model (see (Shin and Lee 2003; Lipari and Bini 2003)) Γ = (Π, Θ) is a partitioned resource supply such that it guarantees Θ allocations of time units every Π time units, where the resource period Π is a positive integer and the resource allocation time Θ is a real number in (0, Π]. For a resource model, the minimum resource supply provided by it in an interval of length t is measured by the supply bound function, sbf. For a periodic model Γ, its supply bound function sbfΓ(t) is defined to compute the minimum resource supply for every interval length t as follows:

    sbfΓ(t) = { t − (k+1)(Π−Θ)   if t ∈ [(k+1)Π−2Θ, (k+1)Π−Θ],
              { (k−1)Θ           otherwise,                        (1)

where k = max(⌈(t − (Π−Θ))/Π⌉, 1). For the full processor, the supply function is simply sbf(t) = t.

A simple task T = (e, d) requires e time units of the resource within d time units of its release. Informally, a RTC model is a structure consisting of nodes and transitions between these nodes, where each node defines a release of a simple task and each transition identifies the minimum jitter between successive task releases.

Definition 2 (RTC Model). A RTC model Ω is defined by a tuple ⟨V, v0, VF, E, τ, ρ⟩ where,
• V is a set of nodes,
• v0 ∈ V is the start node,
• VF ⊆ V is a set of final nodes called leaves,
• E ⊆ V × V = ET ∪ ER is a set of transitions where ER is a set of resets,
• τ : V → T is a function from nodes to simple tasks,
• ρ : E → R × G × A is a function from a transition to a minimum jitter, an enabling condition G, and a variable assignment A.

ER = {⟨v, v0⟩ | v ∈ VF} and ET = E \ ER such that the underlying graph (V, ET) is a directed tree. a ∈ A consists of assignments for variables in V and g ∈ G is any decidable function over the variables V. For this model, we assume that any node releases one simple task. Multiple task releases can be handled by transitions with zero jitter. We make the following assumptions for a RTC model: (1) the set of enabling conditions g1, ..., gm on transitions leaving a node must be exhaustive, i.e., ⋁_{j=1..m} gj = true (progress), (2) the enabling conditions and assignments have no overhead in terms of space and time (this assumption simplifies presentation of the paper, and the overhead can be easily integrated into our analysis), and (3) the set of leaf nodes VF is nonempty, and every other node has a run to one of the leaf nodes.

The execution semantics of a RTC model may be described as follows. The execution starts at node v0 where the task τ(v0) is released. After the release, an enabled transition from v0 to one of the descendant nodes of v0 (say, v) is taken after a minimum delay as specified on ρ(⟨v0, v⟩), and this process of task release continues from node v. The enabling conditions/variable assignments on a transition from vi are assumed to be evaluated/executed immediately after the release of task τ(vi), which is instantaneous.

We note that a RTC task model generalizes many known task models such as the periodic, sporadic, multiframe, and recurring branching task models (Baruah 2003). The 3TS system model (Figure 2) gives an example of a RTC model. We now introduce some more definitions related to the RTC model.

Definition 3 (Run). A run r ≡ run(vi, vi+j, t) is a sequence of progression of nodes from vi to vi+j of a RTC model Ω = ⟨V, v0, VF, E, τ, ρ⟩: vi →(ei+1) vi+1 →(ei+2) ... →(ei+j) vi+j, where ∀l ∈ [1, j], ei+l = ⟨vi+l−1, vi+l⟩ ∈ E and t = ∑_{k=1..j} ρ(ei+k)↓1, where ρ(e)↓1 is the projection of the transition function onto its first component, i.e., its jitter value. We also denote by γ(r) the duration t of run r. Also, the resource demand of the run r is defined as ∆(r) = ∑_{l=0..j} τ(vi+l).e. In this definition, τ(vi+l).e represents the execution requirement of task τ(vi+l).
RTC model example. Consider the example of the Three Tanks
System (3TS) (Iercan 2005) shown in Figure 1. The plant consists
of interconnected water tanks, where each tank has evacuation taps
for simulating perturbation. Water can be pumped into two of the tanks (T1 and T2) via the pumps P1 and P2.
The plant is nonlinear and hence it uses three different controllers
for each pump. (1) A controller P (proportional) is used for the
case when there is no perturbation (no water leaves the tank). (2)
Two controllers PI (proportional integrator) are used when there is
some perturbation (water drains out of the tank). When the control
error is large, a controller with fast integration speed is used and
otherwise, a controller with slow integration speed is used.
[Figure: schematic of the three interconnected tanks T1, T2, T3, the pumps P1 and P2, and the taps Tap1, Tap2, Tap3, Tap13, Tap23.]
Figure 1. Overview of 3TS
We show the partial code for the system implemented using HTL in Listing 1. Although the original HTL model is hierarchical, we have expanded the internal modes and created a non-hierarchical RTC model. It can be seen that, for the purposes of computing the resource requirement, these are equivalent. The model for the other pump is similar. While the HTL language does not explicitly mention jitter between job triggering, it allows for such jitter and is mainly concerned with ensuring the order of logical release of jobs. We have therefore included some jitter between releases of jobs in the example.
Figure 2 shows the conditional models for modules in 3TS
(see HTL code (Iercan 2005)). The nodes and transitions between
nodes for the model are illustrated in the figure. The mapping from
transitions to enabling conditions and assignments is listed beside
the figure. The reset transitions are indicated by dashed lines. The
set of leaves are the nodes which have the dashed transitions out
of them. Each node is also annotated with the simple task that is
released at that node. A run of the task would begin at the start node
R and would follow the transition that is enabled. In the case of the
controller, depending on the physical conditions, the controller P or
PI would be released, and the appropriate control variables would
be set.
module M_T1 start m_T1_control_P {
  task t_T1_P input (c_double h1) state ()
    output (c_double u1) function f_P_1 wcet 100;
  task t_T1_PI input (c_double h1) state ()
    output (c_double u1) wcet 150;

  mode m_T1_control_P period 500 {
    invoke t_T1_P input ((h1,1)) output ((u1,4));
    switch (isP_2_PI1(e1, e3, s1)) m_T1_control_PI;
  }
  mode m_T1_control_PI period 500 program P_T1_PI_ref {
    invoke t_T1_PI input ((h1,1)) output ((u1,4));
    switch (isPI_2_P1(e1, e3, s1)) m_T1_control_P;
  }
}

program P_T1_PI_ref {
  module M_T1_PI_ref start m_T1_PI_fast {
    task t_T1_PI_fast input (c_double h1) state ()
      output (c_double u1) function f_PI_fast_1 wcet 150;
    task t_T1_PI_slow input (c_double h1) state ()
      output (c_double u1) function f_PI_slow_1 wcet 100;

    mode m_T1_PI_fast period 500 {
      invoke t_T1_PI_fast input ((h1,1)) output ((u1,4))
        parent t_T1_PI;
      switch (isSlow_PI_T1(h1)) m_T1_PI_slow;
    }
    mode m_T1_PI_slow period 500 {
      invoke t_T1_PI_slow input ((h1,1)) output ((u1,4))
        parent t_T1_PI;
      switch (isFast_PI_T1(h1)) m_T1_PI_fast;
    }
  }
}

Listing 1. Partial HTL Code for the Three Tanks System
Demand computation. The resource demand bound function (dbfΩ(t)) of a RTC model Ω upper bounds the amount of computational resource required to meet the deadlines of all the released jobs in an interval of length t. This computation is done over tasks that are both released and have their deadlines within the interval. The request bound function (rbfΩ) of a RTC model Ω upper bounds the amount of resource demand released in a time interval. The rbf computation takes into account the demand of all the tasks that are released in the interval, including those tasks whose deadlines are outside the interval.

We first introduce the following class of RTC model and explain how efficient demand computation is possible for the class.

Definition 4 (Isochronicity). A RTC model Ω is isochronous if ∀vi, vj ∈ VF, γ(run(v0, vi, t)) + ρ(vi, v0) = γ(run(v0, vj, t)) + ρ(vj, v0). In this case, the smallest t for which this condition is true is called the period of Ω. In all other cases, Ω is anisochronous.
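Definition 4 amounts to requiring that every root-to-leaf run, plus the jitter on the leaf's reset transition, take the same total time. A toy check, assuming those per-leaf cycle durations have already been computed (the function name and data layout are our own illustration):

```python
def is_isochronous(leaf_cycle_durations):
    """An RTC model is isochronous if every root-to-leaf run plus the
    jitter of the leaf's reset transition has the same total duration.
    `leaf_cycle_durations` maps each leaf v to
    gamma(run(v0, v, t)) + rho(v, v0)."""
    return len(set(leaf_cycle_durations.values())) == 1
```

When the check passes, the common duration is the period PΩ of the model.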
For isochronous RTC tasks, the following technique can be used to compute the dbf value. First, we enumerate and tabulate the dbfΩ values for every run which has at most one instance of the root location in it. In the general case, i.e., for runs with more than one instance of the root in them, the run can be broken down into three phases: (1) a run(vi, v0, t1) which ends in the root (v0) s.t. t1 < PΩ, (2) a phase consisting of repeated instances of a run from v0 to a leaf and back with maximum demand and total duration t2, and (3) a run(v0, vj, t3) such that the run begins at v0 s.t. t3 < PΩ and t1 + t2 + t3 = t.

Given t, the duration of the middle phase is at least (⌊t/PΩ⌋ − 1)·PΩ and at most ⌊t/PΩ⌋·PΩ, where PΩ is the period of recurrence of Ω. In the latter case, either t1 = 0 or t3 = 0. We can therefore compute the maximum demand for the overall interval as,

    dbfΩ(t) = (⌊t/PΩ⌋ − 1)·EΩ + max( EΩ + dbfΩ(t − ⌊t/PΩ⌋·PΩ), dbfΩ(t − ⌊t/PΩ⌋·PΩ + PΩ) )    (2)

where EΩ = max over r ≡ run(v0, v0, PΩ) of ∆(r). A similar result also holds for rbfΩ. We would also like to mention that the above demand computation technique is similar in flavor to that proposed for the recurring branching task model (see (Baruah 2003; Anand et al. 2008)).

For anisochronous RTC tasks, computing dbfΩ(t) can be proved to be NP-hard via a reduction from the Integer Knapsack problem in the general case. In this case, an upper bound of the demand can be computed using approximation algorithms. For the rest of the paper, unless otherwise stated, we will assume that the tasks are all isochronous to simplify the presentation.
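Under our reading of equation (2), with ⌊t/PΩ⌋ whole periods fitting in t, the computation can be sketched as follows. Here `dbf_base` stands for the tabulated dbfΩ over runs containing at most one root instance, and the degenerate example of a single node releasing a simple task (e, d) = (2, 5) with PΩ = 10 is our own, not from the paper:

```python
def dbf_isochronous(t, period, e_max, dbf_base):
    """Demand bound of an isochronous RTC task over an interval of
    length t, per equation (2). dbf_base(x) is the tabulated demand
    bound for runs with at most one instance of the root; e_max is
    the maximum demand E of a root-to-root loop of duration `period`."""
    if t < period:
        return dbf_base(t)
    n = t // period          # whole periods that fit in t
    rem = t - n * period
    # The middle phase repeats either n or n - 1 times; take the worse case.
    return (n - 1) * e_max + max(e_max + dbf_base(rem),
                                 dbf_base(rem + period))

# Degenerate example: one node releasing a simple task (e, d) = (2, 5)
# every 10 time units, so the tabulated bound is 2 once the deadline fits.
def dbf_base(x):
    return 2 if x >= 5 else 0
```

For t = 25 this yields a demand of 6, matching the three jobs whose deadlines (at times 5, 15, and 25) fall inside the interval.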
2.2 Schedulability analysis

Consider a system consisting of n RTC tasks along with their priorities. The problem is to determine if the system can be scheduled using a fixed-priority scheduler. An approach similar to that used for recurring branching tasks (Baruah 2003) can be used here. First, given a priority assignment for tasks, schedulability can be decided by considering n problems of determining whether a task is lowest priority viable. The following theorem can then be used for checking whether a task is lowest priority viable.

Theorem 1. Let T = {Ω1, ..., Ωn} be a system of RTC tasks that are preemptively scheduled on a uniprocessor using static priorities with the resource being provided according to a resource model Γ. Task Ωi is lowest priority viable in T if

    ∀t ∈ TS : ∃t′ ≤ t : ( sbfΓ(t′) − ∑_{Ω ∈ T\{Ωi}} rbfΩ(t′) ) ≥ dbfΩi(t)    (3)

where t′ ≥ 0 and TS = { t | 0 ≤ t < 3·∑_{Ω∈T} EΩ / (1 − ∑_{Ω∈T} EΩ/PΩ) }, EΩ being the maximum resource demand along a loop with the largest demand starting from v0, ending at v0, and passing through exactly one leaf, i.e., EΩ = max over r ≡ run(v0, v0, PΩ) of ∆(r). PΩ is the period of recurrence of the task.

Proof. Similar to the proof of Theorem 3 in (Baruah 2003), using sbfΓ(t) for the supply in t.

The following theorem states the schedulability condition under dynamic priority scheduling.

Theorem 2. Let T = {Ω1, ..., Ωn} be a system of RTC tasks that are preemptively scheduled on a uniprocessor using dynamic priorities with a resource supply model Γ. System T is feasible if and only if,

    ∀t ∈ TS : ∑_{Ωi ∈ T} dbfΩi(t) ≤ sbfΓ(t)    (4)

where TS = { t | 0 < t < 2·∑_{Ω∈T} EΩ / (1 − ∑_{Ω∈T} EΩ/PΩ) }, and EΩ and PΩ are as defined in Theorem 1.

Proof. Similar to the proof of Theorem 1 in (Baruah 2003), using sbfΓ(t) for the supply in t.
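As an illustration of condition (4), a brute-force feasibility check over the test set TS might look as follows, assuming discrete time and representing each task by its dbf together with EΩ and PΩ. The periodic dbf helper is the standard demand bound for a periodic task with implicit deadline, used here only as a degenerate RTC task; all names are ours:

```python
import math

def edf_feasible(tasks, sbf):
    """Check condition (4): for every integer t in the test set TS,
    the total demand must not exceed the supply. `tasks` is a list of
    (dbf, E, P) triples; discrete time is assumed."""
    util = sum(e / p for _, e, p in tasks)
    if util >= 1:            # the bound on TS would be undefined otherwise
        return False
    bound = 2 * sum(e for _, e, _ in tasks) / (1 - util)
    for t in range(1, math.ceil(bound)):
        if sum(dbf(t) for dbf, _, _ in tasks) > sbf(t):
            return False
    return True

def periodic_dbf(e, p):
    # Demand bound of a periodic task with implicit deadline d = p.
    return lambda t: 0 if t < p else ((t - p) // p + 1) * e
```

With two periodic-like tasks (e, p) = (1, 4) and (2, 8) on a full processor (sbf(t) = t), the check iterates over 0 < t < 12 and succeeds.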
3. Sustainability of schedulability analysis

3.1 Results on sustainability of analysis

In this section, we discuss the sustainability of schedulability analysis for a task set comprising RTC tasks Ω1, ..., Ωn. We assume that the task set is executed using a fixed-priority algorithm. The sustainability of analysis performed assuming dynamic priorities is similar in flavor, and we omit it due to space constraints.

Before we present the analysis, we introduce some notation. For the remainder of the section, we denote the tasks with modified values of parameters by Ω′i, the task set with these tasks by T′, and the limit up to which we have to check the schedulability condition given by Theorem 1, namely 3·∑_{Ω∈T} EΩ / (1 − ∑_{Ω∈T} EΩ/PΩ), by B. The existence of such a limit is important to bound the schedulability checking of a task set. We also denote the schedulability condition in Theorem 1, ( sbfΓ(t′) − ∑_{Ω ∈ T\{Ωi}} rbfΩ(t′) ) ≥ dbfΩi(t), by SC(Γ, T, Ωi, t′, t).

In the remainder of this section, we seek to prove that ∀t, ∃t′ ≤ t : SC(Γ, T, Ωi, t′, t) ⇒ ∀t : ∃t′ ≤ t : SC(Γ, T′, Ω′i, t′, t) under different modifications to the task set. The lowest priority viability of task Ω′i then follows from this series of implications: ∀t ≤ B : ∃t′ ≤ t : SC(Γ, T, Ωi, t′, t) ⇒ ∀t : ∃t′ ≤ t : SC(Γ, T, Ωi, t′, t) ⇒ ∀t : ∃t′ ≤ t : SC(Γ, T′, Ω′i, t′, t). The last implication shows that Ω′i is lowest priority viable.
Figure 2. RTC task models of the 3TS system: the controller, sensor, and actuator tasks, with nodes annotated by (execution time, deadline) pairs and transitions guarded by mode conditions (e.g., gRP ≡ (10, mode == m T1 control P, ∅) and gRPI ≡ (5, mode == m T1 control PI, ∅)).
Lemma 1. Schedulability analysis of conditional realtime tasks
with the processor demand criteria under fixed priority scheduling
is sustainable with respect to lower execution times.
Proof. We prove the result considering that the execution times of some tasks in T decrease, while others remain the same. If the execution times are smaller than what was considered in the analysis, the dbf and rbf values will be lower, i.e., ∀t : dbfΩ′i(t) ≤ dbfΩi(t) and ∀t : rbfΩ′j(t) ≤ rbfΩj(t). If the execution times of Ωi decrease, then in Equation 3 the RHS will be smaller than what was considered in the analysis, and the inequality still holds. If the execution times of any other task decrease, the rbf could be smaller and the LHS would increase. In either case, if Equation 3 was true for some t′, it remains true for the task set with decreased execution times, proving the existence of such a t′ for every t in the modified task set. We conclude that the analysis is sustainable with respect to lower execution times.
Lemma 2. Schedulability analysis of conditional realtime tasks
with the processor demand criteria under fixed priority scheduling
is sustainable with respect to extended relative deadlines.
Proof. We omit the proof due to space constraints. The idea is similar to the proof of Lemma 1, i.e., to show that the dbf cannot increase when relative deadlines are extended.
Lemma 3. Schedulability analysis of conditional realtime tasks
with the processor demand criteria under fixed priority scheduling
is sustainable with respect to greater job interarrival times.
Proof. If the interarrival times increase, in the general case the model could become anisochronous as a result of paths of different lengths. We show that the analysis is still sustainable by proving that, if a task is modified by increasing the interarrival times, the new dbf and rbf values for the modified task are upper bounded by those of the original task, and the modified system is lowest priority viable if the original system is.
We prove the result considering that the interarrival times are extended for some task Ω ∈ T, such that the overall period of recurrence changes, while the other tasks remain the same. The general result for a set of tasks changing their interarrival times is obtained by repeatedly applying the result for one task at a time. Let us denote the original and modified RTC models by Ω and Ω′ respectively. Consider an interval of length t′ > 0, and let run r′ correspond to dbfΩ′(t′). Since Ω and Ω′ differ only in the increased interarrival times of the latter, consider the run r in Ω which has all the locations of run r′, and let the length of r be t. Observe that t ≤ t′ and that the demands of r and r′ are the same. Therefore, dbfΩ(t′) ≥ dbfΩ′(t′) by the monotonicity of the dbf function (i.e., dbf(t1) ≥ dbf(t2) whenever t1 ≥ t2). A similar property can be established for rbfΩ. This implies that the dbf and rbf values for the modified task are upper bounded by those of the original task.
Now we prove that the modified system is lowest priority viable if the original system is. If for every value of t there exists a t′ ≤ t such that Equation 3 holds for the original system, it holds for the modified system with the same t′. This is because, as shown above, the dbf and rbf values in the modified system are upper bounded by those of the original system. Consequently, the LHS of Equation 3 can only be greater, and the RHS smaller, in the modified system; if the inequality holds for some t′ in the original task set, it holds in the modified system as well. We can therefore conclude that if the original task is schedulable, so is the modified task, and the analysis in this case is sustainable.
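The key step of the proof, that stretching interarrival times only delays absolute deadlines and hence never increases the demand in a window, can be checked on a small example. This is an illustrative sketch of ours (the jobs and separations below are hypothetical, not from the paper):

```python
def window_demand(jobs, t):
    """Total execution of jobs whose absolute deadlines fall in [0, t] for one
    unrolled run of a chain task. jobs: (exec time, rel. deadline, separation)."""
    release, demand = 0, 0
    for e, d, j in jobs:
        if release + d <= t:
            demand += e
        release += j  # the next job is released at least j later
    return demand

# one unrolled run, then the same run with stretched separations
orig      = [(1, 2, 2), (3, 5, 3), (1, 2, 2), (3, 5, 3)]
stretched = [(1, 2, 4), (3, 5, 6), (1, 2, 4), (3, 5, 6)]
# the stretched run demands no more than the original in every window
monotone = all(window_demand(stretched, t) <= window_demand(orig, t)
               for t in range(0, 31))
```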
Lemma 4. Schedulability analysis of conditional realtime tasks
with the processor demand criteria under fixed priority scheduling
is not sustainable with respect to reordering of jobs in the task.
Proof. We prove nonsustainability under reordering of jobs by giving a counterexample. Consider a system with one task Ω : → (1,2) −2→ (3,5) −3→. For this task, dbfΩ(2) = 1, dbfΩ(5) = 3, and dbfΩ(8) = 4. Now let us reorder a path in the task to make Ω′ : → (3,5) −3→ (1,2) −2→. In the reordered task, we can see that dbfΩ′(5) = 4. As the dbf increases, we can no longer be sure that Equation 3 holds, and the analysis is therefore not sustainable.
Figure 3. Figure illustrating nonsustainability of job hoisting: (a) the original task, with jobs at locations v0,...,v5; (b) the task after the common job is hoisted into a single location v24, which makes the model anisochronous.
Lemma 5. Schedulability analysis of conditional realtime tasks
with the processor demand criteria under fixed priority scheduling
is not sustainable with respect to hoisting (i.e., moving a common
job out of conditional branches).
Proof. Moving a common job from inside conditional branches to immediately outside them does not introduce or eliminate runs. If the interarrival times change, however, the dbf values could increase, and the analysis would not be sustainable with respect to this operation. As an example, consider the task in Figure 3(a), for which dbf(9) = 3. Suppose the jobs at v2 and v4 are identical and are hoisted out as shown in Fig. 3(b). For the modified task, dbf(9) = 4. The analysis is therefore not sustainable.
If, however, the interarrival times remain the same, then the dbf values cannot increase, and the analysis would be sustainable. In Sec. 4, when we develop robust techniques for schedulability, we discuss how to handle hoisting, including when the model becomes anisochronous as in Fig. 3(b).
Lemma 6. Schedulability analysis of conditional realtime tasks
with the processor demand criteria under fixed priority scheduling
is not sustainable with respect to splitting of a job for optimization.
Proof. We prove nonsustainability of job splitting by giving a counterexample. Consider a system with one task Ω : → (6,10) −10→. Observe that dbfΩ(5) = 0 and dbfΩ(10) = 6. Let us say the job is split to become Ω′ : → (2.5,5) −5→ (2.5,5) −5→. Although dbfΩ′(10) = 5 is lower than before, we now have dbfΩ′(5) = 2.5, which may break Equation 3. Figure 4 shows the impact of job splitting for this example, with a supply that schedules Ω but not Ω′. We conclude that the analysis is not sustainable.

Figure 4. Figure illustrating nonsustainability of job splitting: the demand bound functions dbfΩ(t) and dbfΩs(t) of the original and split tasks, plotted against the supply sbfΓ(t).
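The counterexample can be reproduced with a small demand-bound computation. The sketch below handles only chain-structured (non-branching) cyclic tasks, with each job given as (execution time, relative deadline, separation to the next job); it is an illustration of ours, not the paper's general dbf procedure:

```python
def chain_dbf(jobs, t):
    """Demand bound function of a cyclic chain task over an interval of
    length t. jobs: list of (exec time, rel. deadline, separation to next);
    runs may start at any node and wrap around (separations must be > 0)."""
    best = 0.0
    for start in range(len(jobs)):
        release, demand, k = 0.0, 0.0, start
        while release <= t:
            e, d, j = jobs[k % len(jobs)]
            if release + d <= t:   # job's absolute deadline falls in the window
                demand += e
            release += j
            k += 1
        best = max(best, demand)
    return best

original = [(6, 10, 10)]            # Omega  : -> (6,10) -10->
split = [(2.5, 5, 5), (2.5, 5, 5)]  # Omega' : -> (2.5,5) -5-> (2.5,5) -5->
# chain_dbf(original, 5) is 0 while chain_dbf(split, 5) is 2.5: splitting
# raises the demand in short intervals even though the total demand drops
```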
We summarize the results of all the above lemmas into the
following theorem.
Theorem 3. Processor demand based analysis for conditional
realtime task models under fixed priority scheduling is sustainable
with respect to lower execution times, extended relative deadlines
and greater interarrival times, but is not sustainable with respect
to structural optimizations such as job hoisting, reordering of jobs
or job splitting.
Proof. Direct, from Lemmas 1–6.
3.2 Discussion: compiler optimizations and sustainability

Compiler optimizations on actual code that affect execution times or memory accesses can be mapped to one or more of the categories considered for sustainability analysis. The sustainability of the operations, however, depends on the correlation between the operation and its impact on the task model. We briefly discuss sustainability for some of the common optimizations below.
Common Subexpression Elimination (CSE). In this operation, the compiler searches for instances of identical expressions (i.e., expressions that all evaluate to the same value) and analyzes whether it is worthwhile replacing them with a single variable holding the computed value. CSE comes in two flavors, local and global. If the operation is local to a job, it only reduces execution times, and the original analysis is sustainable by Lemma 1. Global CSE may render the original analysis unsustainable, as it usually involves initially introducing some instructions, which can be seen as a form of job splitting (Lemma 6).
Copy/Constant propagation. In this case, occurrences of targets of direct assignments are replaced by their values. Typically, this results in a reduction of execution times for individual jobs, which implies that the original analysis is sustainable.
Dead code elimination. In this process, the program size is reduced by removing code which does not affect the program's results. If this is internal to a job, then the analysis is not affected. If one or more jobs get eliminated in the model due to dead code elimination (such as removal of Nop() instructions), then there are a few cases to consider. If the variable assignment is considered in the computation of the dbf, i.e., path feasibility given an assignment of variables, then the operation has no effect on the analysis. If feasibility of a path was not considered in the dbf computation, then elimination of jobs could potentially affect the dbf values. If interarrival times between jobs are unaffected by this operation, then the original analysis is sustainable by Lemma 3; it will not be sustainable if the interarrival times decrease.
Code hoisting. This operation involves moving computations outside of conditional branches to reduce code size; it also refers to moving computation outside of a loop so as to save computing time. In the first case, if the operation does not result in jobs being reordered and does not decrease the interarrival times between jobs, then the analysis is sustainable (Lemma 5). For the latter definition of code hoisting, sustainability depends on how the jobs are modeled. If the operation is modeled within a job, then it only results in decreased execution times for that job, and the original analysis is sustainable. If code hoisting involves introducing a new job, then the analysis is not sustainable (Lemma 6).
Reduction in strength. Under this optimization, a function of
some changing variable is calculated more efficiently by using pre
vious values of the function. If this operation is modeled within a
job, then the analysis is sustainable (Lemma 1). If it involves intro
ducing a new job (for initialization statements) then the analysis is
not sustainable (Lemma 6).
Loop peeling. In this operation, a loop is either simplified or
dependencies eliminated by breaking it into multiple loops which
iterate over different contiguous portions of the index range. The
operation can be analyzed in the same manner as code hoisting.
4. Robust schedulability analysis

In this section, we focus on robust schedulability analysis techniques. First, we discuss the problem intuitively; then we formulate robust analysis as an optimization problem with fixed execution times; finally, we consider robust analysis with increased execution times.
4.1 Motivation for robust analysis
We have seen in the previous section that task optimizations such as job reordering and job splitting are not sustainable, i.e., if the analysis is performed and the task is then subject to such optimizations, the original analysis need not hold. We remedy the problem here by developing a framework that takes as input the space of all anticipated changes to the task and outputs the maximum possible task load under those modifications. The idea is that if this maximum load can be supported by the resource supply, then the analysis will be sustainable under any actual modification that happens to the task.
To model changes in a task, we assume that the task parameters can take values from within a constraint set. For instance, we consider values specified over a range of real numbers with some additional affine constraints. We expect that this model represents many realistic scenarios, such as a task with some variable jitter between the arrival of jobs, or a task subject to optimizations such as job splitting or reordering. Given such a task, the problem is to analyze schedulability so that, no matter what the actual values of the task parameters are, so long as they are within the specified constraints, the task set remains schedulable.
For example, consider the task specified in Fig. 5(a). Let us say that the jobs at nodes v2 and v3 have a segment that can be split into a job with execution requirement 1 unit and hoisted beyond node v1. Further, suppose the deadline and interarrival times for the new job are not known at the time of analysis, but it is known that they preserve the overall timing behavior of the task. In this case, we associate the split job (1,d4) −j4→ with a new node v4, and identify the constraints on interarrival times as j4 + j3 = 15, j4 + j2 = 15, and j4 ≥ 0, j2 ≥ 0, j3 ≥ 0. As the application preserves the overall timing behavior, we also expect that the deadlines d2 and d3 are upper bounded by the old deadline values, and that the deadline d4 is upper bounded by the maximum of the deadlines of the jobs released at v2 and v3. We therefore have 10 ≥ d4 ≥ 3, 10 ≥ d2 ≥ 4, 10 ≥ d3 ≥ 5. The scenario is shown in Fig. 5(b).
Figure 5. Example task which undergoes optimization: (a) the original task with jobs (1,5), (2,5), (3,10), (4,10) at locations v0,...,v3; (b) the task after the split, with the new job (1,d4) at node v4, and variable deadlines d2, d3, d4 and interarrival times j2, j3, j4.
4.2 Robust analysis with fixed execution times

In this section, we discuss robust analysis with fixed execution times. The case where execution times change is dealt with in the next section.

Our approach to the robust analysis problem when execution times remain the same is to compute the maximum task load, i.e., LΩ = max_t dbfΩ(t)/t, over the space of all anticipated changes to interarrival times and deadlines of tasks due to code changes such as job splitting and reordering. Once we have this maximum load, we use it to upper bound dbfΩ for the task with modified parameter values, i.e., dbfΩ(t) ≤ LΩ·t. Specifically, we obtain the maximum task load by solving an optimization problem on the parameters of Ω. We show that, under this formulation, the optimization problem can be solved as a series of convex optimization subproblems. We now present each of these steps in detail.

Setting up the optimization problem. For any interval t, the load presented by task Ω is defined to be dbfΩ(t)/t. The objective of the optimization problem is to find an assignment to the parameter values from within the constraint set so that dbfΩ(t)/t is maximized for any t. The first problem is to compute the interval up to which we have to check in order to obtain the maximum task load. Given an RTC task Ω, Lemma 7 gives us the bound on the load.

Lemma 7. max_t dbfΩ(t)/t = max{ EΩ/PΩ, max_{r ≡ run(vi,vj,t′)} Δ(r)/t′ } for any RTC task Ω, where r is a run of Ω with at most one instance of the root in it, EΩ = max_{r ≡ run(v0,v0,PΩ)} Δ(r), and PΩ is the period of recurrence of task Ω.
Proof. First observe that

max_{t ≤ PΩ} dbfΩ(t)/t = max{ EΩ/PΩ, max_{r ≡ run(vi,vj,t′)} Δ(r)/t′ }    (5)

is true, as r includes all intervals t ≤ PΩ. Now recall the dbf computation procedure for the general case from Sec. 2.1. For t ≥ PΩ,

dbfΩ(t) = ( ⌊t/PΩ⌋ − 1 )·EΩ + max{ EΩ + dbfΩ( t − ⌊t/PΩ⌋·PΩ ), dbfΩ( t − ⌊t/PΩ⌋·PΩ + PΩ ) }    (6)

where EΩ = max_{r ≡ run(v0,v0,PΩ)} Δ(r). Denoting t − ⌊t/PΩ⌋·PΩ as t1, we can observe that

max{ EΩ + dbfΩ(t1), dbfΩ(t1 + PΩ) } / (t1 + PΩ) ≤ max{ EΩ/PΩ, max_{r ≡ run(vi,vj,t′)} Δ(r)/t′ }    (7)

This is true because the run r on the RHS considers all intervals which have one instance of the root in them, and this includes the interval of duration t1 + PΩ in the dbf computation. The result then follows by using EΩ/PΩ ≤ max{ EΩ/PΩ, max_{r ≡ run(vi,vj,t′)} Δ(r)/t′ } and the inequality of Eq. 7 in Eq. 6.
Lemma 7 gives us the bound to compute the maximum task load. Based on these observations, the optimization problem can be formulated as follows:

max_{r ≡ r(vi,vj,t)} Δ(r) / max{ di, ..., Σ_{k=i}^{j−1} jk + dj }    (8)

subject to

∀k : jk ≥ 0, ∀i : di > Ei,
Σ_{r1} jk = J1 (0 ≤ J1 ≤ PΩ), ..., Σ_{rm} jk = Jm (0 ≤ Jm ≤ PΩ),
j0 ∈ [L01,U01], ..., jn ∈ [Ln−1 n,Un−1 n],
d1 ∈ [L1,U1], ..., dn ∈ [Ln,Un]    (9)

where a job at node vi is taken to be (Ei,di) −ji→, with Ei and di denoting the execution requirement and deadline of the job, and ji the minimum time before releasing the next job. In the formulation, we have adopted the notation that all variables are denoted by small letters and all constants by capital letters. We also use the index r(vi,vj,t) to represent a run of Ω which has at most one instance of the root location in it, or a run from the root to a leaf and back. The indices r1,...,rm go over the runs of Ω which involve the variable interarrival times; at most, they go over all runs that either do not involve a reset transition or, if they do, terminate at the root location.

Here are a few observations about the problem formulation. (1) If there are V locations in Ω, we have at most V³ terms in the objective function. This is because runs with at most one root location can be uniquely determined by specifying a start location, a leaf, and a terminal location. (2) We consider deadlines greater than execution times; otherwise, the system can be in full load, and the optimization problem would trivially return this as the maximum load. (3) We assume that the execution times are known a priori and cannot be left as variables. This assumption is necessary to keep the objective function (Eq. 8) convex; we elaborate on this when we discuss the technique to solve the optimization problem. (4) Although we consider the deadlines to take values from an interval, we are not dealing with soft realtime systems: it is assumed that once an assignment of deadlines is made, they are hard deadlines.

The constraints expressed in Eq. 9 capture, for instance, variable interarrival times or deadlines. In addition, they can capture operations such as job splitting and reordering of jobs.

Job splitting. Let us say that a job (Ei,Di) −Ji→ corresponding to location vi can be split into m subjobs with execution times Ei1,...,Eim, which we assume to be known constants. For this case, we introduce m new and consecutive locations vi1,...,vim and remove the existing location vi from the task. In other words, we introduce the job sequence (Ei1,di1) −ji1→ ... −jim−1→ (Eim,dim) −jim→ into the task instead of (Ei,Di) −Ji→. Once this is specified, we can set up the optimization problem as described and add the following additional constraints: ∀k : dik > Eik and ∀k : dik ≤ Di for the deadlines, and ∀k : jik ≥ 0 and Σk jik = Ji for the interarrival times.

Reordering of jobs. Let us say that job (Ei,Di) −Ji→ can be moved to m possible new locations during the reordering of jobs in the task. We create m locations vi1,...,vim in addition to the original one, vi. The jobs assigned to the new locations are (ei1,di1) −ji1→, ..., (eim,dim) −jim→ respectively, where ∀k : eik = Ei, jik = Ji, and dik = Di. The constraints can now be set up as described before. The objective function (Eq. 8) in this case is modified in the following manner. The function considers all the runs as before, including the runs with each of the locations vi1,...,vim. A run involving two or more of the locations in Vi = {vi1,...,vim}, however, considers the impact of only the last one. This means that we consider a demand of only Ei in a run involving one or more locations from the set Vi, and an interarrival time of Ji for the last one. The rationale behind this is that, when a job is moved, we want to consider its worst possible impact on the load. We account for the extra demand introduced by a potential move by considering all runs involving locations in Vi. For runs involving more than one potential location, we only need to account for this extra demand once, as only one job is being moved. We choose the last location because, in a run involving more than one location from Vi, the interarrival time of the last one impacts the deadlines of subsequent jobs the least, thereby posing the maximum load.
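As a small illustration of how the additional job-splitting constraints of Eq. 9 might be assembled, here is a hypothetical helper (the names and representation are ours, not the paper's):

```python
def split_constraints(E_sub, D_i, J_i):
    """Constraints added when a job (E_i, D_i) --J_i--> is split into m subjobs
    with known execution times E_sub. Each subjob deadline d_ik must satisfy
    E_ik < d_ik <= D_i (the lower bound is strict), each separation j_ik >= 0,
    and the separations must sum to J_i."""
    deadline_bounds = [(e, D_i) for e in E_sub]   # (strict lower, upper) per d_ik
    sep_bounds = [(0.0, J_i) for _ in E_sub]      # 0 <= j_ik (and j_ik <= J_i)
    sep_sum = J_i                                 # sum_k j_ik = J_i
    return deadline_bounds, sep_bounds, sep_sum
```

For the split in Lemma 6's example (two subjobs of 2.5 with Ji = 10), this yields deadline bounds (2.5, 10] for each subjob and separations summing to 10.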
Figure 6. Modeling the job split and reordering: locations v4 and v′4 carry the split job (1,d4), the jobs at v′2 and v′3 become (2,d2) and (3,d3), and the interarrival times j2, j3, j4 are left as variables.
As an example, consider the task described earlier in Fig. 5. The job split and reordering could result in the split job being moved either before v0 or after v0. We model these scenarios in Fig. 6 by adding two locations, v4 and v′4, in the figure. We can write the optimization problem for the example as discussed above. Listed below is the objective function with runs involving up to three locations:

max{ 1/d4, 1/5, 2/5, 2/d2, 3/d3, 2/(5+d4), 3/max{d4, j4+5}, 3/max{d2, j2+d4}, 4/(5+d2), 4/max{d3, j3+d4}, ... }    (10)

subject to

j4 + j3 = 15, j4 + j2 = 15,
10 ≥ d4 ≥ 3, 10 ≥ d2 ≥ 4, 10 ≥ d3 ≥ 5,
j4 ≥ 0, j2 ≥ 0, j3 ≥ 0    (11)

Common job hoisting. If a common job can be hoisted out of conditional branches, and the minimum interarrival time for that job differs across the conditions, considering this minimum would result in an anisochronous model. For example, hoisting the jobs at locations v2 and v4 makes the task in Fig. 3 anisochronous. To make the analysis robust, it is sufficient to model the task after hoisting and reduce the interarrival time between the hoisted common job and the subsequent job so that the transformation preserves isochronicity. For the example in Fig. 3(b), the interarrival time between the jobs at v24 and v3 is reduced to 5.

Solving the optimization problem. The second step of the analysis involves solving for the maximum load posed by the task as a convex optimization problem. This is achieved by considering the reciprocal of the objective function in Equation 8. Observe that the objective function is maximized whenever its reciprocal attains its minimum; this is true because all the variables in the system are in the denominators of Equation 8. The new objective function is therefore

min_{r(vi,vj,t)} max{ di, ..., Σ_{k=i}^{j−1} jk + dj } / Δ(r)    (12)
Since the affine function ax + b for a, b, x ∈ R is convex, and the sum of two convex functions is convex, all the terms inside the maximum of the objective function (e.g., Σ_{k=i}^{j−1} jk + dj) are convex. Further, the pointwise maximum of convex functions is also convex, making the objective function max{ di, ..., Σ_{k=i}^{j−1} jk + dj } / Σ_{r(vi,vj,t)} ei convex. The constraints in Eq. 9 are also convex, making the overall problem of finding the maximum load a standard convex optimization problem. The problem can therefore be solved using standard techniques (see Chebyshev approximation, Boyd and Vandenberghe (2004), pg. 293).

We would like to emphasize that the assumption of fixed execution times is important for solving the problem. If the execution times were left as variables, the objective function would be of the linear fractional form, i.e., f(x) = (aᵀx + b)/(cᵀx + d), with cᵀx + d > 0. The linear fractional function is, however, only quasiconvex, and a quasiconvex optimization problem can have local optima that are not globally optimal, which would prevent us from reliably finding the maximum system load. The assumption that execution times are constant is consistent with the worst-case upper bounds on execution times used in practice. Arguments similar to those in Lemma 1 can be used to prove that robust schedulability analysis is sustainable with lower execution times, thereby establishing that using the worst-case upper bounds is sound.

With the objective function set up as in Eq. 12, it can be solved as a series of minimax subproblems, taking the minimum over all of them. This follows from Lemma 8.

Lemma 8. Given functions f1(x) and f2(x) over x ∈ Rⁿ, min_x min{f1(x), f2(x)} = min{ min_x f1(x), min_x f2(x) }.

Once we have solved for the maximum load in Eq. 8, we can relate it to the schedulability of the task as follows.

Theorem 4. Given an RTC task Ω whose task parameters can take values subject to the affine system of constraints in Eq. 9, we have ∀t : dbfΩ(t)/t ≤ LΩ, where LΩ is the optimal value of the load returned by the solution of the system in Eq. 8.

Proof. Direct, from Lemma 7.

We add that the result in Theorem 4 is tight, in the sense that there exists an assignment to the task parameters satisfying the constraints of Eq. 9 so that the load LΩ is achieved exactly.
For the example task in Fig. 5, we have formulated the optimization problem in Eq. 10. Taking the reciprocal of the objective function and reducing, we get the following set of minimax objective functions to be solved subject to the constraints in Eq. 11: min max{d4, j4+5, j4+5+d2}, min max{d4, j4+5, j4+5+d3}, min max{d2, j2+d4, j2+j4+5}, and min max{d3, j3+d4, j3+j4+5}. Since we have the constraints j4+j3 = 15 and j4+j2 = 15, the minimum value of the last function above is 20. Solving for all the variables, we find that the optimal assignment is d4 = 3, d2 = 4, d3 = 5 and j4 = 0, j2 = 15, j3 = 15. The maximum task load is 3/5. Therefore, the task will be schedulable if the supply can support at least this much load at all times, irrespective of the splitting and reordering of jobs within the task.
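The reported optimum can be sanity-checked by brute force. Since the affine constraints of this example admit integer-valued optimal points, a grid search over integer points suffices here (a real implementation would use a convex/LP solver); the code below is an illustrative sketch of ours:

```python
from itertools import product

# last minimax function: min max{d3, j3 + d4, j3 + j4 + 5} under Eq. (11)
def last_fn(j3, d4, d3):
    j4 = 15 - j3                          # constraint j4 + j3 = 15
    return max(d3, j3 + d4, j3 + j4 + 5)

best = min(last_fn(j3, d4, d3)
           for j3, d4, d3 in product(range(16), range(3, 11), range(5, 11)))
# 'best' equals 20, matching the minimum reported for the last function

# load terms of Eq. (10) at the optimal assignment
d4, d2, d3, j2, j3, j4 = 3, 4, 5, 15, 15, 0
terms = [1 / d4, 3 / d3, 2 / d2, 2 / (5 + d4), 3 / max(d4, j4 + 5),
         3 / max(d2, j2 + d4), 4 / (5 + d2), 4 / max(d3, j3 + d4)]
load_max = max(terms)  # 3/5, the maximum task load
```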
We conclude this section by discussing robust schedulability analysis of the task set using the above framework. Let us say that for tasks Ω1,...,Ωn we compute the maximum loads LΩ1,...,LΩn as per their constraints. A similar quantity can also be computed with rbf's instead of dbf's; let us denote max_t rbfΩi(t)/t thus computed by RΩi, for i = 1,...,n. The robust schedulability of the task set can then be defined using these quantities.
Theorem 5. Let T = {Ω1,...,Ωn} be a system of RTC tasks that are preemptively scheduled on a uniprocessor using static priorities, with the resource being provided according to a resource model Γ. Task Ωi is lowest priority viable in T if

∀t ∈ TS : ∃t′ ≤ t : ( sbfΓ(t′) − Σ_{Ω∈T\{Ωi}} RΩ·t′ ) ≥ LΩi·t    (13)

where EΩ, PΩ and TS are as defined before.

The proof of the above theorem is direct from Theorem 1 and the observation that dbfΩ(t) ≤ LΩ·t and rbfΩ(t) ≤ RΩ·t. The analysis in Theorem 5 is sustainable for any values of the task parameters constrained as in Eq. 9. A similar result can also be stated for the dynamic priority case.
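The condition of Theorem 5 is straightforward to evaluate once the loads are known; a minimal sketch of ours, assuming the loads RΩ and LΩi have already been computed and checking at integer points only:

```python
def robust_lowest_priority_viable(sbf, R_others, L_i, bound):
    """Eq. (13): for every t up to the test-set bound there must exist a
    t' <= t with sbf(t') - (sum of the other tasks' R) * t' >= L_i * t."""
    R = sum(R_others)
    return all(
        any(sbf(tp) - R * tp >= L_i * t for tp in range(1, t + 1))
        for t in range(1, bound + 1))
```

With the full supply sbf(t) = t, one other task of load 0.2, and LΩi = 0.5, the check passes (0.8t ≥ 0.5t at t′ = t); with LΩi = 0.9 it fails.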
4.3 Robust analysis with increased execution times

Thus far, we have discussed a technique to compute the maximum task load in the case where task parameters other than execution times can take values from within a real space with affine constraints. Here we consider the case where the execution times increase by a common factor for the task set, while the other parameters are fixed. This view is consistent with many practical scenarios, including applications running on systems which use dynamic voltage scaling of processors, and applications meant to be ported to systems with different target processors. In fact, Yerraballi et al. (1995) have discussed at length how many different questions on the schedulability of systems can be mapped to the problem of finding the maximum scaling factor for the task set.
The main issue in addressing this problem for the RTC task set is that the schedulability criterion depends on the average task load. If the execution times increase, the interval up to which we need to check for schedulability also increases, and therefore the original analysis cannot be directly used. This is the case under both fixed priority and dynamic priority scheduling (Theorems 1 and 2). The rest of the section discusses how to get around this problem for scheduling feasibility with scaled execution times.
Let the common scaling factor be denoted by α. For scheduling
feasibility, Eq. 4, Theorem 2 gives the necessary and sufficient
condition. If the execution times increase by a constant factor, the
dbf for any interval is also scaled up by that factor, i.e., dbfΩ?(t) =
α·dbfΩ(t), where Ω is the original task, and Ω?is the task with
scaled execution times. We now present a series of results that we
can use to compute the maximum common scaling factor.
For the purposes of analysis, we define the following bound on
a resource supply, and prove a few properties of the task model.
Definition 5. We define the linear lower bound function (lsbf)
of a resource supply Γ as a linear function of t which has the
following property : ∀t ≥ 0 : lsbfΓ(t) ≤ sbfΓ(t), and ∃ 0 ≤ t1<
t2: lsbfΓ(t1) = sbfΓ(t1) and lsbfΓ(t2) = sbfΓ(t2).
A linear upper bound of the supply function (usbf) can be
defined similarly.
Lemma 9. Let T = {Ω1,...,Ωn} be a system of RTC tasks that are preemptively scheduled on a uniprocessor using dynamic priorities with the resource being provided according to a resource model Γ.

1. argmax_t ΣΩi dbfΩi(t)/t = argmax_t ΣΩ′i dbfΩ′i(t)/t, where Ω′i is the task with the scaled execution times.

2. Let t∗ be such that lsbfΓ(t∗) = ( ΣΩi dbfΩi(tm)/tm )·t∗, where tm = argmax_t ΣΩi dbfΩi(t)/t. If t∗ ≤ dmin, dmin being the smallest relative deadline in any task of T, then T is schedulable with Γ.

Proof. (1) Proposition 9.1 follows directly from the observation that ∀t > 0 : ΣΩi dbfΩi(tm)/tm ≥ ΣΩi dbfΩi(t)/t, and the fact that dbfΩ′i(t) = α·dbfΩi(t), α being a constant.
(2) Observe that for t < dmin, ΣΩi dbfΩi(t) = 0, and sbfΓ(t) ≥ ΣΩi dbfΩi(t) holds trivially. Observe also that t∗ is the point of intersection of lsbfΓ and the line ( ΣΩi dbfΩi(tm)/tm )·t; the latter line passes through the origin, and lsbfΓ(0) ≥ 0. Therefore, for t > t∗, sbfΓ(t) ≥ lsbfΓ(t) ≥ ( ΣΩi dbfΩi(tm)/tm )·t ≥ ΣΩi dbfΩi(t). For both t < t∗ and t ≥ t∗ we thus have sbfΓ(t) ≥ ΣΩi dbfΩi(t), and T is schedulable by Theorem 2.
Lemma 9 sets us up for finding the common scaling factor. Proposition 9.1 says that the point in time representing the highest load is preserved under scaling. Therefore, the scaling factor for the task set is simply the amount by which the execution times can be scaled at the point of maximum load (tm) without affecting schedulability.
The next question to be answered is how to check for schedu
lability efficiently in the scaled task set. Observe that the limit to
which we have to check schedulability depends on load (Theo
rem 2), therefore, if the execution times scale, the limit also in
creases. Proposition 9.2 tries to get around this problem by giving a
sufficient criteria for schedulability that can be checked efficiently.
The proposition says that if the slope of the lsbfΓis greater than the
slope of the demand function at the point of highest load of the task
set, then the task set is schedulable. More specifically, denoting by
t* the point of intersection of lsbfΓ with the line
(∑ dbfΩi(tm)/tm) · t, the proposition says that if t* < dmin, then
the task set is schedulable. If t* happens to be greater than dmin,
then we need to check for all t < t* that sbfΓ(t) ≥ ∑ dbfΩi(t).
Clearly, sbfΓ(t) ≥ ∑ dbfΩi(t) holds for all t ≥ t*. These concepts
are illustrated in Figure 7.

[Figure 7 plots ∑dbf, the line (∑dbf(tm)/tm) · t, lsbf, sbf, and
usbf against t, marking the intersection points t* and t**.]

Figure 7. Figure illustrating various concepts of Theorem 9
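The test of Proposition 9.2 lends itself to a direct implementation.
The sketch below is illustrative rather than taken from the paper: it
assumes the demand bound functions dbfΩi and the linear supply lower
bound lsbfΓ are available as plain Python callables, and it locates tm
and t* by a simple search over an integer grid.

```python
def max_load_point(dbfs, horizon, step=1):
    """Approximate tm = argmax_t sum_i dbf_i(t)/t over a finite grid.

    dbfs: list of demand bound functions (illustrative stand-ins
    for the dbf of each task Omega_i).
    """
    best_t, best_load = step, 0.0
    t = step
    while t <= horizon:
        load = sum(dbf(t) for dbf in dbfs) / t
        if load > best_load:
            best_t, best_load = t, load
        t += step
    return best_t, best_load


def schedulable_by_slope(dbfs, lsbf, d_min, horizon, step=1):
    """Sufficient test of Proposition 9.2: find t*, the first grid
    point where lsbf meets the load line (sum dbf(tm)/tm)*t through
    the origin; if t* <= d_min, the task set is schedulable."""
    tm, load = max_load_point(dbfs, horizon, step)
    t = step
    while t <= horizon and lsbf(t) < load * t:
        t += step
    t_star = t
    return t_star <= d_min
```

For example, a single sporadic task with execution time 1 and period
equal to deadline equal to 4 has dbf(t) = ⌊t/4⌋; under a supply with
lsbfΓ(t) = 0.5 · (t − 2), the search yields tm = 4 and t* = 4 = dmin,
so the sufficient test succeeds.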
The final question that remains to be answered is that of computing
the point of maximum load (tm) for a task set. Using Theorem 2 with
full supply (sbf(t) = t), we get
tm ≤ 2 · ∑Ω∈T EΩ / (1 − ∑Ω∈T EΩ/PΩ) if the original task set is
schedulable.

For a general resource supply Γ, the following theorem then gives a
method to compute the scaling factor.

Theorem 6. Let T = {Ω1,...,Ωn} be a system of RTC tasks that are
preemptively scheduled on a uniprocessor using dynamic priorities
with a resource supply Γ. The scaling factor of the system T is at
least α, where α is defined as

α = min( min_{0<t<t*} sbfΓ(t) / ∑Ωi dbfΩi(t),
         usbfΓ(dmin) · tm / (dmin · ∑Ωi dbfΩi(tm)) )          (14)

where tm is the point with the maximum task load.
Proof. For t > t*, lsbfΓ(t) ≥ ∑Ωi α · dbfΩi(t). To ensure
schedulability, we need to ascertain that the inequality holds for
t < t*. However, t* depends on the choice of α. To get the result of
the theorem, we set the point of intersection between usbfΓ and the
line u = (∑Ωi α · dbfΩi(tm)/tm) · t at dmin, thereby fixing t*. The
scaling factor is then simply the minimum of the slope of the line u
and the value obtained by checking each of the points t < t*.
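Eq. (14) can be evaluated directly once tm and t* are known. The
sketch below is a minimal illustration, assuming sbfΓ, usbfΓ, and the
dbfΩi are supplied as Python callables and evaluating the inner
minimum over an integer grid; the function name and parameters are
illustrative, not from the paper.

```python
def scaling_factor(dbfs, sbf, usbf, d_min, tm, t_star, step=1):
    """Lower bound alpha on the scaling factor, per Eq. (14):
    the minimum of the pointwise supply/demand ratio for 0 < t < t*
    and the slope term usbf(d_min)*tm / (d_min * sum_dbf(tm))."""
    def total_dbf(t):
        return sum(dbf(t) for dbf in dbfs)

    # Slope term of Eq. (14), obtained by fixing t* via the
    # intersection with usbf at d_min.
    alpha = usbf(d_min) * tm / (d_min * total_dbf(tm))

    # Pointwise supply/demand check below t*; grid points with zero
    # demand impose no constraint and are skipped.
    t = step
    while t < t_star:
        if total_dbf(t) > 0:
            alpha = min(alpha, sbf(t) / total_dbf(t))
        t += step
    return alpha
```

With the full processor supply (sbf(t) = usbf(t) = t and t* = 0), the
loop is vacuous and the bound reduces to tm / ∑Ωi dbfΩi(tm).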
For the full processor supply, t* = 0, and the supply function is
itself its linear lower bound. Therefore, we can get the exact
scaling factor as tm / ∑Ωi dbfΩi(tm), where tm is the point where
the task set poses the maximum load on the system.
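As a concrete illustration of the full-supply case (an assumed
example task, not one from the paper): a single sporadic task with
execution time 1 and period equal to deadline equal to 4 has
dbf(t) = ⌊t/4⌋, with maximum load at tm = 4.

```python
# Full processor supply: sbf(t) = t is its own linear lower bound,
# so the exact scaling factor is tm / sum_i dbf_i(tm).
dbf = lambda t: t // 4   # illustrative task: E = 1, P = D = 4
tm = 4                   # point of maximum load for this task
alpha = tm / dbf(tm)     # = 4.0: execution times may grow 4x
```

That is, the execution time of this task could quadruple before the
full processor is saturated.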
5. Conclusions

In this paper, we have introduced and addressed sustainability and
robustness of schedulability analysis for systems modeled as
recurring branching tasks. We have noted that the analysis is
sustainable with respect to many parameters, such as lower job
execution times, increased job interarrival times, and relaxed
deadlines. Structural changes such as job splitting and reordering
are not sustainable, even though they can result in lower overall
execution times. For such operations, we have developed a robust
schedulability analysis framework that can be used to model and
analyze schedulability, and the results of this analysis are
sustainable.
References
A. K. Mok and Wing-Chi Poon. Non-preemptive robustness under reduced
system load. In Proceedings of the 26th IEEE Real-Time Systems
Symposium (RTSS 2005), Dec. 2005. doi: 10.1109/RTSS.2005.31.

K. Altisen, G. Gössler, and J. Sifakis. Scheduler modeling based on
the controller synthesis paradigm. Real-Time Systems,
23(1-2):55–84, 2002.

Madhukar Anand, Arvind Easwaran, Sebastian Fischmeister, and Insup
Lee. Compositional feasibility analysis for conditional task models.
In Proceedings of the Eleventh IEEE International Symposium on
Object-Oriented Real-Time Distributed Computing (ISORC'08),
Washington, DC, USA, 2008. IEEE Computer Society.

B. Andersson and J. Jonsson. Preemptive multiprocessor scheduling
anomalies. In Proceedings of the International Parallel and
Distributed Processing Symposium (IPDPS 2002), pages 12–19, 2002.
doi: 10.1109/IPDPS.2002.1015483.

Sanjoy Baruah and Alan Burns. Sustainable scheduling analysis. In
RTSS '06: Proceedings of the 27th IEEE International Real-Time
Systems Symposium, pages 159–168, Washington, DC, USA, 2006. IEEE
Computer Society. doi: 10.1109/RTSS.2006.47.

Sanjoy K. Baruah. Dynamic- and static-priority scheduling of
recurring real-time tasks. Real-Time Systems, 24(1):93–128, 2003.
doi: 10.1023/A:1021711220939.

Sanjoy K. Baruah. A general model for recurring real-time tasks. In
RTSS, pages 114–122, 1998. URL citeseer.ist.psu.edu/
baruah98general.html.

Hanene Ben-Abdallah, Jin-Young Choi, Duncan Clarke, Young Si Kim,
Insup Lee, and Hong-Liang Xie. A process algebraic approach to the
schedulability analysis of real-time systems. Real-Time Systems,
15(3):189–219, 1998. doi: 10.1023/A:1008047130023.

Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge
University Press, New York, NY, USA, 2004.

Alan Burns and Sanjoy Baruah. Sustainability in real-time
scheduling. Journal of Computing Science and Engineering,
2(1):72–94, 2008.

G. Buttazzo. Achieving scalability in real-time systems. Computer,
39(5):54–59, May 2006. doi: 10.1109/MC.2006.148.

G. C. Buttazzo and J. Stankovic. RED: A robust earliest deadline
scheduling algorithm. In Proceedings of the Third International
Workshop on Responsive Computing Systems, 1993. URL
citeseer.ist.psu.edu/buttazzo93red.html.

Ya-Shu Chen, Li-Pin Chang, Tei-Wei Kuo, and Aloysius K. Mok.
Real-time task scheduling anomaly: observations and prevention. In
SAC '05: Proceedings of the 2005 ACM Symposium on Applied Computing,
pages 897–898, New York, NY, USA, 2005. ACM.
doi: 10.1145/1066677.1066881.

R. I. Davis and A. Burns. Robust priority assignment for fixed
priority real-time systems. In Proceedings of the 28th IEEE
Real-Time Systems Symposium (RTSS 2007), pages 3–14, Dec. 2007.
doi: 10.1109/RTSS.2007.43.

X. Feng and A. Mok. A model of hierarchical real-time virtual
resources. In Proc. of IEEE Real-Time Systems Symposium, pages
26–35, December 2002.

Christian Ferdinand, Reinhold Heckmann, Marc Langenbach, Florian
Martin, Michael Schmidt, Henrik Theiling, Stephan Thesing, and
Reinhard Wilhelm. Reliable and precise WCET determination for a
real-life processor. In EMSOFT '01: Proceedings of the First
International Workshop on Embedded Software, pages 469–485, London,
UK, 2001. Springer-Verlag.

Arkadeb Ghosal, Alberto Sangiovanni-Vincentelli, Christoph M.
Kirsch, Thomas A. Henzinger, and Daniel Iercan. A hierarchical
coordination language for interacting real-time tasks. In EMSOFT
'06: Proceedings of the 6th ACM & IEEE International Conference on
Embedded Software, pages 132–141. ACM Press, 2006.

Daniel Iercan. TSL Compiler. Master's thesis, Politehnica University
of Timisoara, September 2005.

Sheayun Lee, Insik Shin, Woonseok Kim, Insup Lee, and Sang L. Min. A
design framework for real-time embedded systems with code size and
energy constraints. ACM Transactions on Embedded Computing Systems
(TECS), 2008. To appear.

J. Y.-T. Leung and J. Whitehead. On the complexity of fixed-priority
scheduling of periodic, real-time tasks. Performance Evaluation,
2:237–250, 1982.

G. Lipari and E. Bini. Resource partitioning among real-time
applications. In Proc. of Euromicro Conference on Real-Time Systems,
July 2003.

T. J. Marlowe and S. P. Masticola. Safe optimization for hard
real-time programming. In Proceedings of the Second International
Conference on Systems Integration (ICSI '92), pages 436–445,
Jun 1992. doi: 10.1109/ICSI.1992.217244.

Razvan Racu and Rolf Ernst. Scheduling anomaly detection and
optimization for distributed systems with preemptive task sets. In
RTAS '06: Proceedings of the 12th IEEE Real-Time and Embedded
Technology and Applications Symposium, pages 325–334, 2006.

Rhan Ha and J. W. S. Liu. Validating timing constraints in
multiprocessor and distributed real-time systems. In Proceedings of
the 14th International Conference on Distributed Computing Systems
(ICDCS), pages 162–171, Jun 1994. doi: 10.1109/ICDCS.1994.302407.

I. Shin and I. Lee. Periodic resource model for compositional
real-time guarantees. In Proc. of IEEE Real-Time Systems Symposium,
pages 2–13, December 2003.

R. Yerraballi, R. Mukkamala, K. Maly, and H. A. Wahab. Issues in
schedulability analysis of real-time systems. In Proceedings of the
Seventh Euromicro Workshop on Real-Time Systems, pages 87–92,
Jun 1995. doi: 10.1109/EMWRTS.1995.514297.

M. F. Younis, T. J. Marlowe, G. Tsai, and A. D. Stoyenko. Toward
compiler optimization of distributed real-time processes. In
Proceedings of the Second IEEE International Conference on
Engineering of Complex Computer Systems (ICECCS), pages 35–42,
Oct 1996. doi: 10.1109/ICECCS.1996.558328.