
A Preview of "Introduction to Fundamental Principles of Dynamic Real-Time Systems"

Goal: This is a condensed preview of my work-in-progress monograph "Introduction to Fundamental Principles of Dynamic Real-Time Systems." The preview provides brief edited extracts about some keystone principles of timeliness and predictability in time-constrained systems. See the PDF of the Prologue and Chapter 1 as of 17 April 2017.


Project log

E. Douglas Jensen
added an update
The primary objective of the book and this preview is to introduce a scholarly yet practical foundation for timeliness and predictability of timeliness. Those are the core properties of all real-time systems. Here they are the primary ones which distinguish the general case of dynamic real-time systems from the conventional subset of static ones—and thus are the basis for this foundation.
Such a foundation is important (and this one has been proven to be invaluable) for generalizing conventional real-time system models and practices from their historical focus on the narrow special case of predominantly static periodic-based (informally termed “hard”) real-time systems.
That generalization encompasses all real-time systems—defined in this Introduction (temporarily informally) to be systems for which timeliness and predictability of timeliness are part of the logic of the system.
Although the real-time computing field erroneously dismisses non-hard systems as “soft” in various ad hoc and ambiguous ways, this foundation reveals non-hard (soft, as will be precisely defined herein) to be not just the general, but also the most common, case of real-time systems.
Informally, timeliness of a “happening”—an event (e.g., task completion) or information (e.g., data change)—is a measure of the extent to which its occurrence time can be related to the happening’s usefulness in specific circumstances.
In the conventional real-time computing system special case, timeliness metrics are ordinarily binary: whether a task completion deadline is met; whether the latency of an interrupt response or system call response can be less than a least upper bound.
However, in principle (cf. scheduling theory) and in real-world practice, the logic of a real-time system includes relationships between happenings’ lateness (earliness and tardiness) with respect to a deadline or least upper bound, and the consequent usefulness of the happening when it is manifest.
The book’s foundation furthermore provides for the frequent cases of even richer expressive relationships evident in an application—e.g., relationships whose dependence on lateness is not necessarily linear, and whose completion time constraints are not deadlines or least upper bounds—using Jensen’s time/utility functions and utility accrual optimality criteria. The time/utility function methodology’s major challenge has been predictability of timeliness, which is addressed with this foundation.
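The contrast between binary deadline metrics and richer time/utility relationships can be sketched as follows. The function shapes, names, and numbers are purely illustrative, not taken from the book:

```python
def hard_deadline_tuf(t, deadline, utility=1.0):
    """Binary 'hard' case: full utility if the deadline is met, none otherwise."""
    return utility if t <= deadline else 0.0

def linear_decay_tuf(t, critical_time, zero_time, max_utility):
    """A richer (still simple) case: full utility up to critical_time,
    decaying linearly to zero utility at zero_time."""
    if t <= critical_time:
        return max_utility
    if t >= zero_time:
        return 0.0
    return max_utility * (zero_time - t) / (zero_time - critical_time)

def accrued_utility(completion_times, tuf):
    """Utility accrual criterion: sum the utility earned at each completion."""
    return sum(tuf(t) for t in completion_times)

soft = lambda t: linear_decay_tuf(t, critical_time=5, zero_time=10, max_utility=100)
print(hard_deadline_tuf(7, deadline=5))   # 0.0 -- a miss earns nothing
print(soft(7))                            # 60.0 -- tardy, but still useful
print(accrued_utility([3, 7, 12], soft))  # 160.0
```

The binary deadline is just the degenerate step-function member of this family, which is why the time/utility formulation subsumes the conventional special case.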
Continuing informally, something (notably here, timeliness) is predictable in the manner and to the extent that it can be known a priori. In conventional (i.e., “hard”) real-time computing system models, predictability is static and almost exclusively binary—i.e., it is presumed that all (especially task) completion time constraints will always be met (unless there is a failure). In that field, predictability is often mistakenly thought to mean “deterministic.”
That mistake can be recognized and remedied by considering predictability to be a continuum. In that type of model, the maximum predictability end-point is deterministic a priori knowledge, and the minimum predictability end-point is total absence of a priori knowledge. Everywhere on the predictability continuum in between those end points are degrees of non-determinism. Typically, happenings in dynamic real-time systems have predictability between the two end points of that continuum, according to some formal model- and context-specific metric. A simplified example, using a statistical continuum for predictability: the maximum predictability (deterministic) end-point is represented by the constant distribution; the minimum predictability (maximum non-determinism) end-point is represented by the uniform distribution; the metric is in terms of some model instantiation properties of probability distributions, such as shape, dispersion, etc.
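That simplified statistical continuum can be sketched with normalized entropy as the dispersion-style metric. The choice of entropy is illustrative only (the text deliberately leaves the metric model- and context-specific):

```python
import math

def normalized_entropy(probs):
    """0.0 for the constant (degenerate) distribution, 1.0 for the uniform one."""
    n = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(n)

def predictability(probs):
    """Illustrative metric: 1.0 = deterministic end-point, 0.0 = uniform end-point."""
    return 1.0 - normalized_entropy(probs)

print(predictability([1.0, 0.0, 0.0, 0.0]))            # 1.0 -- constant distribution
print(abs(predictability([0.25] * 4)) < 1e-12)         # True -- uniform distribution
print(round(predictability([0.7, 0.1, 0.1, 0.1]), 3))  # somewhere in between
```

Any other dispersion property (variance, support width, etc.) could serve as the metric; the point is only that predictability is a graded quantity between the two end points, not a binary one.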
Predictability is a vastly deeper and more complex topic than that intuitive continuum model.
Predictability consists of reasoning about kinds and degrees of uncertainty. To non-experts, that usually first brings to mind probability theory, but there are multiple different interpretations and theories of “probability.”
The familiar frequentist interpretation may seem applicable to predicting timeliness (e.g., task completion times) in static periodic-based real-time systems, but even there it can be misleading—and it is not at all applicable to dynamic real-time systems.
The term “dynamic” in the context of the book encompasses two distinct departures from almost all conventional static real-time systems. Both departures are common for timeliness and predictability of timeliness in the real world.
First, the system models’ defaults include that parameters such as task arrivals, task execution durations, task time constraints, task resource conflicts, etc. are not necessarily static. Instead, parameters may be aleatoric—i.e., statistically uncertain ones which may differ (e.g., stochastically) during system execution and for each time the system is run. In most static real-time computing systems, modern processor and computer architectures force task execution times to be a dynamic parameter. Fitting that as “worst-case execution time” into the conventional static orthodoxy is very challenging.
Second, there may be inherent epistemic uncertainties (i.e., ignorance) about aspects of the system and its environment. They are not objectively statistical in nature, in the sense that statistical characterizations do not capture their values and meaning. Reasoning about these uncertainties requires employing an appropriate, often even application-specific, theory of uncertainty from the various extant ones.
The Bayesian theory of probability and inference for reasoning about uncertainty is popular in many fields. Bayesian probability can be assigned to any statement whatsoever, even when no random process is involved, to represent its subjective plausibility or the degree to which the statement is supported by the available evidence. However, it has major limitations—most notably that it cannot model ignorance, uncertain data, conflicting data, and scarce data. Those notable limitations make it unsuitable for predictability of timeliness in many dynamic real-time systems (and for various other applications). They are overcome by several more expressive theories to which the book broadly applies the term belief function theories.
Very simply: Belief function theories for reasoning about the uncertainty of a hypothesis combine evidence from different sources to arrive at an updated degree of belief represented by a mathematical object called a belief function. Many different rules for combining evidence and updating beliefs have been devised to better accommodate varied uncertainty characterizations (e.g., conflicting and contradictory evidence, defeasible reasoning, evidence source trust). Belief function theories began with Dempster-Shafer theory. Evolved theories currently include (in roughly increasing order of expressiveness and generality) the Theory of Hints, the Transferable Belief Model, and Dezert-Smarandache theory.
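A minimal sketch of the earliest of those combination rules, Dempster's rule, over a two-hypothesis frame of discernment. The frame, evidence sources, and mass values are illustrative; note how mass assigned to the whole frame models ignorance, which Bayesian probability cannot express:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses from a common frame."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            if a:  # non-empty intersection supports hypothesis set a
                combined[a] = combined.get(a, 0.0) + mb * mc
            else:  # empty intersection is conflicting evidence
                conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    # Normalize by 1 - K, where K is the total conflict mass.
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Frame of discernment: will the task meet (M) or miss (X) its time constraint?
M, X = frozenset({"meet"}), frozenset({"miss"})
theta = M | X                           # mass on the whole frame = ignorance
sensor = {M: 0.6, theta: 0.4}           # one evidence source, partly ignorant
model  = {M: 0.5, X: 0.2, theta: 0.3}   # a second, partly conflicting source
belief = dempster_combine(sensor, model)
print(belief)
```

The evolved theories named above (Theory of Hints, Transferable Belief Model, Dezert-Smarandache theory) replace or generalize this normalization step to better handle high conflict and other uncertainty characterizations.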
Belief function theories are natively expensive in computation time (#P-complete) and memory space, especially in comparison with reasoning based on classical probability theory and even Bayesian conditional probability. Reasoning time frames can be much longer than those in predominantly static systems (recall that time frame magnitudes are not part of the definition of real-time).
Fortunately, the attractiveness of belief function theories for more comprehensively reasoning about uncertainty has engendered an active field of research and development on efficient algorithms, data structures, and even hardware (GPU, FPGA) implementations, for reducing those costs. Even so, the theories are usually best instantiated in application-specific ways for maximal cost-effectiveness. Any specific application may have kinds and degrees of uncertainties (e.g., of dynamic system model parameters) that outreach all resource-feasible formal reasoning approaches. Ultimately there will always be cases for which the only viable approach is to restructure an application so that uncertainty is manageable—just as is true for simple static real-time systems (e.g., worst-case execution times may be infeasible to ascertain or tolerate).
Belief function theories are increasingly used in a broad range of different fields, despite initial misunderstandings and mistakes in both research and practice.
The framework in the book is novel in accommodating Bayesian and belief function theories, in addition to classical probability theory, for predicting timeliness (i.e., scheduling) of dynamic real-time systems. In particular, it addresses the historical challenge of predicting the individual and collective completion times and consequent utilities for time/value functions.
The book includes examples of employing each of those theories in an important context of military combat systems, namely battle management. These have real-time behaviors with innately dynamic kinds and very high degrees of uncertainty.
[The rest of the Introduction about the book chapters is unchanged.]
 
E. Douglas Jensen
added an update
This is a new (much shorter) Abstract for my work-in-progress preview of a work-in-progress book: Introduction to Fundamentals of Timeliness in Dynamically Real-Time Systems [Jensen 2018].
 
E. Douglas Jensen
added an update
This is a work-in-progress, highly condensed and less formal, preview of my work-in-progress book An Introduction to Fundamental Principles of Timeliness in Dynamically Real-Time Systems [Jensen 2018]. The preview is intended to stimulate further research in this field.
This Introduction first briefly summarizes the keystone ideas that the book and its preview discuss, and why I am writing about them. Then it describes the topic of each chapter. It is prerequisite reading for the rest of the preview.
The book is primarily a scholarly monograph focused on conceptual foundations for timeliness and predictability of timeliness, especially when they are unavoidably dynamic due to intrinsic epistemic uncertainties—e.g., ignorance, imprecision, non-determinism, conflicts—in the system and its application environment. These are the two keystone properties of real-time systems—including, but not limited to, real-time computing systems. The archetypal use of those properties in real-time computing systems is for scheduling tasks or exogenous physical behaviors.
Such dynamically real-time systems are common in (but not limited to) the defense domain, where they are often exceptionally mission- and safety-critical. They require the strongest feasible assurance of satisfactory timeliness and predictability of timeliness despite those adverse uncertainties. Guaranteed ideal timeliness and predictability of timeliness in dynamically real-time systems is a physical impossibility (for example, in the defense domain, cf. The Fog of War).
Traditional real-time computing system concepts and techniques are for a narrow (albeit important) special case, based on presumed a priori omniscience of a predominantly static periodic-based system model. They are not applicable to the general case of dynamically real-time systems which dominate the real world.
There is a strong need for deeper and more general thinking about timeliness and predictability of timeliness, based on first principles (i.e., axiomatically). That is particularly necessary in the context of traditional real-time computing systems where habitual misconceptions of those properties (and, indeed, of "real-time" per se) are ad hoc and very limited (the book provides numerous examples).
The first primary limitation of those misconceptions is the presumption that (at least almost) every aspect which may affect system timeliness and predictability of timeliness--usually of a group of tasks--is certain and known a priori.
One of those aspects, task execution times, receives extra attention because even if a task is simple enough that its execution time per se can be established, its actual execution time on most computers is typically quite variable. The real-time computing community seeks to eliminate that uncertainty by determining a (usually approximate) worst-case upper bound so one or more such tasks can be treated as having static execution times (cf. Procrustean).
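A naive measurement-based version of that practice can be sketched as follows. This is illustrative only, and deliberately embodies the weakness the text describes: a measured maximum plus a margin is not a guaranteed worst case.

```python
def observed_wcet_bound(samples, margin=0.2):
    """Naive measurement-based 'WCET' bound: observed maximum execution
    time inflated by a safety margin. Measurement alone cannot guarantee
    a true worst case on modern processor architectures."""
    if not samples:
        raise ValueError("no measurements")
    return max(samples) * (1.0 + margin)

runs_us = [101, 98, 140, 103, 122]  # measured execution times (microseconds)
print(observed_wcet_bound(runs_us))  # 168.0
```

Industrial WCET analysis uses far more sophisticated static and hybrid techniques, but all of them face the same Procrustean difficulty of forcing an inherently dynamic parameter into a static model.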
The second primary limitation is that, moreover, those static aspects--e.g., task arrivals, task periods, task execution durations, task deadlines--are confined to predominantly periodic instances, and consequently overloads and task resource conflicts are presumed to be avoidable.
That highly constrained system model, referred to in various imprecise ways by the real-time computing community as being "hard real-time," is a special case.
That community has even less understanding of concepts and vocabulary for the general, and more frequent and difficult, case of "not hard real-time"--notably "soft real-time." “Firm real-time” is a misnomer for a subset of soft real-time systems, as shown in the chapter on real-time.
Analogously, categorizing the spectrum of paint colors to be only "black" and "not black" is a very myopic perspective on the real world. Regardless of how much one might be able to say about the special case of the color "black," that categorization lacks the concepts and vocabulary for reasoning about the whole paint color spectrum.
Concepts and vocabulary for hard real-time and soft real-time are rigorously defined in the book, and summarized in this preview.
Most real-time systems in the general sense are inevitably subject to kinds and degrees of inherent uncertainties. Uncertainty can be understood as a condition of limited knowledge--e.g., due to ignorance, randomness--in which it is impossible to exactly describe the state of the world or its future evolution. Uncertainties necessarily cause timeliness and predictability of timeliness to be dynamic--e.g., variable, unknown, or non-deterministic, before and during system operation.
Hence the book emphasizes fundamental principles of timeliness and predictability of timeliness for that general case of dynamically real-time systems subject to uncertainties. Those principles scale down to encompass the niche static special case as well.
(There are aspects of systems other than timeliness and predictability of timeliness that make them dynamic in various ways and degrees—e.g., being adaptable. They are outside the scope of this book.)
Timeliness and predictability of timeliness are the topics of book and preview Chapters 2 and 3, respectively.
Timeliness and predictability of timeliness are the basis for what "real-time" means.  They are integral to the logic of a real-time action (e.g., computational task) or system.
“Integral to” includes, but is not limited to, the correctness (whether binary or not) of that logic. Thus, in principle, an action or system of actions is not confined to either being a real-time one or not being a real-time one.
Herein it is often convenient to refer to that general class of actions and systems simply as “real-time” ones.
Note that the time frame(s)—e.g., whether microseconds or megaseconds—of a real-time action or system are encompassed in “integral to,” and are not per se part of the definition of “real-time.” Imagining that "real-time" is the same as "real fast" is an instance of being confused about one of the semantic aspects of timeliness.
This book provides a unique scholarly exposition of one set of timeliness foundations for general real-time systems, based on explicit first principles, not on historical anecdotal artifacts as is normal in the real-time computing field. That necessarily requires correcting the babel about concepts and terminology which is endemic in that field. Such confusion and inconsistency impede the theory and practice of real-time systems, not only dynamic ones but even many static ones.
The purpose of this preview is to provide early access to some abbreviated and simplified content of the book, to encourage research on its topics.
Fortunately, there is a large body of widely used theory and practice for dynamically real-time systems outside the computing field. Consequently, meeting the demands for dynamically real-time computing systems with inherent uncertainties has been addressed by:
  • application domain (military, industrial automation, telecommunications, etc.) subject matter experts but who usually lack substantive computer science, real-time computing, and uncertainty expertise;
  • researchers in the disparate theoretical topics, such as mathematical uncertainty models, that enable dynamic and particularly dynamically real-time computing systems—but who, like the application domain experts, usually lack substantive computer science and real-time computing expertise, and who also usually lack substantive exposure to dynamic application domains.
The corporate application domain experts almost always hold their dynamically real-time experiences to be proprietary, and the military experiences are usually classified.
The theoreticians write in the jargon of their fields and publish in forums of their fields. Few computer scientists and engineers, and application domain experts, have the time and inclination to even discover these results, much less attempt to understand and apply them.
This book seeks to help bridge at least part of that gulf using a combination of both innovative and extant principles from both sides—together with decades of hard-won personal experience with various extreme instances of wickedly dynamic and uncertain (e.g., military combat) real-time systems.
"Typically, we think reproductively—that is, on the basis of similar problems encountered in the past. When confronted with problems, we fixate on something in our past that has worked before. We ask, "What have I been taught in life, education or work on how to solve the problem?" Then we analytically select the most promising approach based on past experiences, excluding all other approaches, and work within a clearly defined direction towards the solution of the problem. Because of the soundness of the steps based on past experiences, we become arrogantly certain of the correctness of our conclusion. In contrast, [one should] think productively, not reproductively. When confronted with a problem, [one should] ask "How many different ways can I look at it?", "How can I rethink the way I see it?", and "How many different ways can I solve it?" instead of "What have I been taught by someone else on how to solve this?" [Then people] tend to come up with many different responses, some of which are unconventional and possibly unique. In order to creatively solve a problem, the thinker must abandon the initial approach that stems from past experience and re-conceptualize the problem. By not settling with one perspective, [people] do not merely solve existing problems, they identify new ones."
—Michael Michalko, creativitypost.com
To make this preview readily accessible to an intelligent non-specialist audience, three things reduce its breadth and depth.
First, the preview length is drastically limited from the book length—e.g., from an estimated 350 to approximately 50 pages. That entails judicious selection, and concise re-writing, of only the most fundamental and easily understandable topics.
Second, Chapter 1 of the preview is relatively comprehensive and self-contained, given that it is just the first part of a preview (which may lead readers to a sense of déjà vu in subsequent chapters). The intent is that for some readers, Chapter 1 alone might suffice, at least initially.
Third, the prerequisite mathematical and other knowledge is reduced by using language that is a compromise between informality and accuracy.
The book itself is somewhat more demanding in parts than this preview. It requires a modest amount of cognitive maturity for abilities such as: learning unfamiliar concepts; reasoning about abstractions; problem solving; and making both conceptual and engineering trade-offs between algorithms and efficient implementations of them.
Some of the book’s references provide remedial coverage of formal topics that readers may need, such as for optimization (e.g., non-deterministic scheduling algorithms) and subjective probability theories. Other references provide optional additional background outside the main scope of the book. In this preview (but not the book), no-cost on-line references are chosen as much as suitable.
The readers of the preview and the book are expected to have experience with some non-trivial systems which are considered to be real-time ones according to some plausible definition. Examples include: real-time applications such as industrial automation control systems, robotics, military combat platform management, etc.; and real-time operating systems, including substantive ones such as Linux kernel based or UNIX/POSIX based (not just minimal embedded ones), at the level of their internals.
Many years of experience teaching and applying the material in this book have shown that: in some cases, the less that people know about traditional static real-time computing systems, the more natural and easy they find much of this material; while in other cases, the more that people know about traditional static real-time computing systems, the more foreign and difficult it is for them to assimilate this material. Chapter 1 discusses that dichotomy, and the sociology and psychology behind it—but that is not in this preview.
The book’s Chapters 1, 2, and 3 focus on dynamically real-time at the level of individual actions, such as tasks or threads executing in computing systems, and physical behaviors performed by exogenous non-computational (e.g., mechanical) devices.
Chapter 1 is about the general necessity for the concept of real-time actions and their properties, especially dynamic ones, to be based on first principles, mental models, and conceptual frameworks. The minimal basics of epistemic uncertainty and alternative probability interpretations are introduced. As noted above, Chapter 1 also deliberately seeks to serve as an extended abstract for the first three Chapters. (Attempted inclusion of Chapter 4 in Chapter 1’s abstract role has thus far made Chapter 1 too long and difficult, but an eventual revision of Chapter 1 may yet accomplish that inclusion.)
Chapter 2 discusses action completion time constraints, such as deadlines and time/utility functions, and expressing those in ubiquitous priority-based execution environments (e.g., operating systems).
Most of the concepts and techniques in Chapters 3 and 4 have a vast body of deep mathematical and philosophical theory, and a great deal of successful use in a variety of applications outside of real-time systems. The point of these chapters is to briefly introduce readers to using those concepts and techniques in dynamically real-time systems. Innate complexity, divergent viewpoints by probability theory and uncertainty researchers, and unsolved problems make these chapters somewhat difficult.
Chapter 3 focuses primarily on the very complex topic of predictability: dynamic resolution of contention for shared resources (e.g., scheduling) and prediction of the outcomes, particularly timeliness, in the presence of uncertainties.
The book provides an overview of some important probability interpretations for that purpose, and some predictability theories and metrics—especially for acceptably satisfactory predictions of acceptably satisfactory action and schedule completion times. It begins with the familiar viewpoint of traditional probability theory. Then it re-focuses on alternative predictability theories unfamiliar to the book's presumed readership. These are needed for reasoning about richer (e.g., epistemic) uncertainties. They include mathematical theories of evidence (i.e., belief theories), which use application-specific rules for combining imprecise and potentially contradictory multi-source beliefs about the system and its environment to form new beliefs.
As a side effect, Chapter 3 provides the correct definitions of the essential terms predictable and deterministic that are ubiquitously misunderstood in the real-time computing practitioner and even research communities.
Chapter 4 addresses dynamically real-time systems, including computing systems and operating systems, that are based on dynamically real-time actions under uncertainties.
A system is characterized in large part by its system model. That defines properties such as of its actions, and of how the system’s resource management agents (such as a scheduler) handle actions according to specified optimality criteria including (but not limited to) actions’ timeliness and predictability of timeliness.
The essence of system resource management (especially by operating systems) is acceptably satisfactory resolution of concurrent contending action requests to access shared hardware and software resources (e.g., scheduling actions’ use of processor cycles and synchronizers) according to application-specific optimality criteria.
In dynamically real-time systems, that resolution must be acceptably satisfactory despite the number, kinds, and degrees of epistemic uncertainties. Uncertainties must be accommodated by the system resource management agent(s) (e.g., scheduler) obtaining and refining credence of beliefs about the current and time-evolving state of the system, using an appropriate mathematical theory. On that basis, resource management decisions (e.g., action schedules) are made.
Because almost no operational environments (e.g., operating systems) have first-order abstractions for action time constraints (e.g., deadlines) and timeliness predictability, the scheduled actions must then be mapped onto the operational environment’s priority mechanism.
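That mapping can be sketched as follows. The scheduler ordering used (earliest deadline first) and the priority range are illustrative assumptions, not prescriptions from the book:

```python
def deadlines_to_priorities(tasks, highest=99, lowest=1):
    """Map actions ordered by a deadline-based scheduler (here: earliest
    deadline first) onto a descending fixed-priority range of the kind
    exposed by typical operating environments."""
    ordered = sorted(tasks, key=lambda t: t[1])  # each task is (name, deadline)
    prios = {}
    for i, (name, _) in enumerate(ordered):
        prios[name] = max(lowest, highest - i)   # earlier deadline -> higher priority
    return prios

tasks = [("telemetry", 50), ("control", 10), ("logging", 500)]
print(deadlines_to_priorities(tasks))
# {'control': 99, 'telemetry': 98, 'logging': 97}
```

Such a mapping must be recomputed whenever the schedule changes, which is one reason priority mechanisms are a poor (though ubiquitous) substrate for expressing time constraints.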
Embedded systems also are briefly described in Chapter 4 because of their close relationship to real-time systems: most real-time computing systems are embedded; and many embedded systems are real-time to some degree.
Chapter 5 summarizes research on dynamically real-time computing systems by my Ph.D. students, co-advisor colleagues, and me, and identifies some of the future work needed to expand on it. Those research results in their published form may be analytically challenging for some readers, but the chapter summarizes their important contributions.
Chapter 6 illustrates dynamically real-time computing systems and applications with examples inspired by my professional work, using material (re-)introduced in this book. It also shows some examples of how the paradigm is being employed outside the field of real-time systems (e.g., high performance computing, clouds, web server farms, transaction systems, etc.).
Chapter 7 has a brief survey of some important open problems to be solved for dynamically real-time systems. It also notes relevant work by researchers inside and outside the field of real-time systems.
Chapter 8 describes the need for software tools appropriate for thinking about, demonstrating, and designing dynamically real-time computing systems. It describes one open source program for doing so using material from this book [ ]. It mentions generic tools, such as MATLAB [ ], R [ ], Stan [ ], JASP [ ], etc., which can be very useful.
Chapter 9 is an annotated list of selected references, primarily by members of my research teams, plus relevant papers and books by other authors.
 
E. Douglas Jensen
added an update
This document is a discourse about some of my innovative research on real-time (including, but not limited to, computing) systems. It is a highly condensed and simplified work-in-progress preview of a work-in-progress book about that research: "An Introduction to Fundamental Principles of Timeliness in Dynamically Real-Time Systems" [Jensen 2018].
This book is based on my career's extensive experience performing research and development on real-time systems whose timeliness and predictability of timeliness are dynamic due to intrinsic epistemic uncertainties—e.g., ignorance, imprecision, non-determinism, conflicts—in the system and its application environment.
Such systems are common in (but not limited to) the defense domain, where they are exceptionally mission- and safety-critical. They require the strongest feasible assurance of satisfactory timeliness and predictability of timeliness—the keystones of real-time systems—despite those adverse uncertainties.
The book and this preview show that most real-time systems are beneficially understood to be in that category, contrary to traditional real-time computing systems orthodoxy which assumes few if any uncertainties.
Traditional real-time computing system concepts and techniques are for a narrow (albeit important) special case, based on presumed a priori omniscience of a predominantly static periodic-based system model. They are not applicable to the general case of dynamically real-time systems which dominate the real world.
The benefits of quantifying and exploiting uncertainties are very widely recognized in a great many fields of endeavor—e.g., in defense, industrial automation, medicine, finance, economics, transportation, logistics, data networking, and many more. An example of dealing with uncertainties in real-time systems that has recently become familiar to everyone is semi-autonomous navigation of automobiles.
This book provides a principled perspective for the general case of real-time systems. It has been proven uniquely effective for artificial reasoning about timeliness and the predictability of timeliness despite uncertainties. Yet the perspective scales down to not just encompass but also improve the traditional static special case.
An informal dictionary definition of "timeliness" is "The fact or quality of being done or occurring at a favorable or useful time." Obviously, in general there needs to be a formalism for specifying, measuring, and reasoning about "favorable" or "useful" with respect to a completion time.
The traditional static real-time system model has no such formalisms except for its special case of simply either meeting or missing deadlines of periodic actions (e.g., using rate-monotonic scheduling).
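The binary meet-or-miss analysis of that special case can be illustrated with the classic Liu and Layland utilization bound for rate-monotonic scheduling, a sufficient (not necessary) schedulability test; the task set below is an illustrative example:

```python
def rm_utilization_test(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling:
    n independent periodic tasks (worst-case execution time C, period T,
    deadline = period) all meet their deadlines if total utilization
    U = sum(C/T) <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

u, bound, ok = rm_utilization_test([(1, 4), (2, 8), (1, 10)])
print(round(u, 3), round(bound, 3), ok)  # 0.6 0.78 True
```

Note how thoroughly the test depends on the static assumptions criticized above: fixed periods, fixed worst-case execution times, no overloads, and a purely binary notion of timeliness.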
This book and site employ the time/utility (sometimes called time/value) function paradigm [Jensen 77, Jensen 85] as the basis for formalizing timeliness. Although that paradigm has been discussed in many papers and dissertations, much more detail about it is provided here. The expressiveness of this paradigm has proven to be instrumental in a variety of deployed dynamically real-time systems. A case study based on an important class of actual applications is provided.
Time/utility functions also appear in various other contexts, including high performance computing [ ], cloud computing [ ], web services [ ], and others.
Predictability of actions' completion times--and thus of their utilities and of accrued system utility--under uncertainties is that paradigm's most challenging and promising opportunity for continued research and development of theory and engineering. Thus, it is a core focus for the research introduced in this book.
Formalizing predictability of timeliness in dynamically real-time systems requires use of an appropriate mathematical approach for dealing with epistemic uncertainties.
There exists a variety of widely used candidate approaches, each having application-specific properties.
Probability theory typically comes first to mind when considering predictability. However, there are various interpretations of "probability" (i.e., probability theories), and that is an active field of research. The best known to non-specialists in probability is the frequentist interpretation (cf. tossing dice). It is shown herein to be the least appropriate for predictability in dynamically real-time systems, because it is about the outcomes of a sequence of identical events--such sequences are absent in those systems.
Also well-known and widely used is Bayesian probability theory, but it has several drawbacks. One of the strongest criticisms is its inability to distinguish between ignorance and randomness, which is overcome in subsequent theories for dealing with epistemic uncertainty.
Several of those subsequent theories, particularly the popular belief function and mathematical evidence theories (e.g., Dempster-Shafer theory, the Transferable Belief Model, Dezert-Smarandache theory), are extensively employed in various application contexts subject to uncertainties [ ]. These make predictions by combining beliefs or evidence from multiple sources in the system and its operational context, according to specific rules, to form new beliefs.
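The combining of multi-source evidence described above can be sketched with Dempster's rule of combination, the best-known such rule. The following is a minimal illustration only; the tiny frame of discernment ("early" vs. "late" completion) and the mass values are assumptions invented for this example, not taken from the book.

```python
# Sketch: Dempster's rule of combination from Dempster-Shafer theory, for two
# basic mass assignments over a tiny frame of discernment.

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass that would fall on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize away the conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Frame: will the task finish "early" or "late"? Two sources of evidence.
E, L, EL = frozenset({"early"}), frozenset({"late"}), frozenset({"early", "late"})
m1 = {E: 0.6, EL: 0.4}          # source 1: fairly confident in "early"
m2 = {E: 0.3, L: 0.5, EL: 0.2}  # source 2: leans toward "late"
m = combine(m1, m2)
print({tuple(sorted(s)): round(w, 3) for s, w in m.items()})
```

Note how mass assigned to the whole frame (EL) expresses ignorance distinctly from randomness, which is precisely the expressiveness Bayesian theory lacks.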
This book describes pertinent advantages and disadvantages of several belief function and mathematical evidence-based theories in the context of predictability of timeliness under epistemic uncertainties--specifically, for predicting the completion times and consequent utilities of scheduled real-time actions (e.g., computational tasks, exogenous physical behaviors). A notional case study is used as an exemplar application for comparing these approaches in that common class of systems.
Unsurprisingly, greater epistemic uncertainty (e.g., ignorance) leads to greater computation costs for accommodating it. Fortunately, there is a rich body of literature on efficient algorithms for using belief function and evidence theory [ ], which trade off different aspects of the solution space to accelerate computations. Implementing such algorithms in either graphics processing units or silicon has also been done.
The examples show how the principles summarized in this book can be employed for resolving dynamic contention for shared resources in the presence of uncertainties (e.g., scheduling). Experience has demonstrated that this generalized understanding of real-time resource management can provide cost-effective operational mission effectiveness which is not achievable at all using traditional real-time approaches.
 
E. Douglas Jensen
added an update
This Introduction first briefly summarizes the keystone ideas that the book and its preview are about, and why I am writing about them. Then it describes the topic of each chapter. It is prerequisite reading for the rest of the Preview.
The book is primarily a scholarly monograph focused on conceptual foundations for timeliness and predictability of timeliness, which are the essential properties of real-time systems--including, but not limited to, real-time computing systems. The archetypal use of those properties in real-time computing systems is for scheduling tasks.
There is a strong need for deeper and more general thinking about those two properties, based on first principles. That is particularly needed in the context of traditional real-time computing systems where habitual misconceptions of those properties (and, indeed, of "real-time" per se) are ad hoc and very limited (the book provides numerous examples).
The first primary limitation of those misconceptions is the presumption that (at least almost) every aspect which may affect system timeliness and predictability of timeliness--usually of a group of tasks--is certain and known a priori.
The second primary limitation is that, moreover, those static aspects--e.g., task arrivals, task periods, task execution durations, task deadlines--are confined to predominately periodic special-case instances, and consequently overloads and task resource conflicts are presumed to be avoidable.
That highly constrained system model, referred to in various imprecise ways by the real-time computing community as being "hard real-time," is a special case.
That community has even less understanding of concepts and vocabulary for the general and more frequent case of "not hard real-time"--notably "soft real-time."
Analogously, categorizing the spectrum of paint colors to be only "black" and "not black" is a very myopic perspective on the real world. Regardless of how much one might be able to say about the special case of the color "black," that categorization lacks the concepts and vocabulary for reasoning about the whole color spectrum.
Concepts and vocabulary for soft real-time, whose timeliness and predictability of timeliness are much more analytically complex than those for hard real-time, are rigorously defined in the book and summarized in this preview.
Most real-time systems in the general sense are inevitably subject to kinds and degrees of inherent uncertainties. Uncertainty can be understood as a condition of limited knowledge--e.g., due to ignorance, randomness--in which it is impossible to exactly describe the state of the world or its future evolution. 
Uncertainties necessarily cause timeliness and predictability of timeliness to be dynamic--e.g., variable, unknown, or non-deterministic, before and during system operation.
Hence the book emphasizes fundamental principles of timeliness and predictability of timeliness for that general case of dynamically real-time systems subject to uncertainties. Those principles scale down to encompass the niche static special case as well.
(There are aspects of systems other than timeliness and predictability of timeliness that make them dynamic in various ways and degrees—e.g., being adaptable. They are outside the scope of this book.)
Timeliness and predictability of timeliness are briefly introduced next (and are the topics of book and preview Chapters 2 and 3, respectively).
An informal dictionary definition of "timeliness" (say, of a happening) is "The fact or quality of being done or occurring at a favorable or useful time."
Obviously, in general there needs to be a formalism for specifying, measuring, and reasoning about "favorable" or "useful" with respect to a happening's completion time. The traditional static real-time system model has no such formalisms except for its special case ("hard"). The real-time computing academic research community's formalism is based on the deadline construct.
The deadline construct is far richer than the intuitive special case which is commonly used, which limits "favorable" and "useful" to "meet" vs. "miss."
Moreover, the real-time computing community further limits a deadline by adding the special case interpretation of "miss" to be "failure" (often even "system failure"), and of "meet" to be "not failure," thus ignoring the most frequent general case of completion times being more or less "favorable" or "useful."
Even when lateness is recognized and exploited (e.g., in scheduling theory), deadlines still have severe limitations--especially regarding expressiveness of the essential characteristic "favorable" or "useful" time. Lateness (and its two cases, tardiness and earliness) is a linear metric, while "favorable" and "useful" are routinely non-linear.
These limitations of deadlines can be overcome by a more general construct for completion time constraints, of which a deadline is a special case. This book and preview provide the most detailed discussion ever published of such a construct, called time/utility functions (formerly time/value functions) [Jensen 77, Jensen 85]. It has a deployment record of high cost-effectiveness in encompassing a broader than traditional scope of real-time systems--especially (but not limited to) ones whose timeliness is dynamic instead of conventionally static.
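The relationship between deadlines and time/utility functions can be sketched directly: a TUF maps an action's completion time to a utility, and a hard deadline is just the special case of a downward step. The following sketch is illustrative only; the function shapes and all numeric values are assumptions chosen for the example.

```python
# Sketch: time/utility functions (TUFs) as plain Python callables mapping an
# action's completion time to a utility. A classic deadline is the special
# case of a downward step function; a linearly decaying TUF expresses the
# non-linear "more or less useful" general case a binary deadline cannot.

def deadline_tuf(deadline, utility=1.0):
    """Classic deadline: full utility if completed by the deadline, else zero."""
    return lambda t: utility if t <= deadline else 0.0

def linear_decay_tuf(deadline, utility=1.0, zero_at=None):
    """Full utility up to the deadline, decaying linearly to zero at `zero_at`."""
    zero_at = zero_at if zero_at is not None else 2 * deadline
    def tuf(t):
        if t <= deadline:
            return utility
        if t >= zero_at:
            return 0.0
        return utility * (zero_at - t) / (zero_at - deadline)
    return tuf

hard = deadline_tuf(10.0)
soft = linear_decay_tuf(10.0, zero_at=20.0)
print(hard(9.0), hard(11.0))   # 1.0 0.0 : binary meet/miss
print(soft(15.0))              # 0.5 : half utility, halfway to zero_at
```

A scheduler can then optimize an application-specific criterion such as total accrued utility over all actions, rather than merely counting missed deadlines.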
The need to specify, measure, and reason about "favorable" or "useful" with respect to a happening's completion time always includes dealing with the predictability of timeliness.
Predictability is ubiquitously misunderstood within the field of real-time computing.
Something (e.g., a happening) being predictable means only that it is not unpredictable; it provides no information about how that happening's predictability is defined, measured, and reasoned about.
In particular, "predictable" does not mean deterministic, which is one of the formal representations for the maximum end-point on a scale of predictability.
The logic of most real-time systems includes some intrinsic kind(s) and degree(s) of non-deterministic timeliness and predictability of timeliness.
To reason about predictability in the presence of uncertainties, probability theory may first come to mind. However, probability theory is only one of several approaches to predictability; there are multiple alternative theories (interpretations) of probability (e.g., frequentist, Bayesian, Dempster-Shafer); and probability is notoriously difficult and non-intuitive for most people without at least minimal training in it (cf. the Monty Hall problem).
The book addresses predictability (especially of timeliness) from a number of probabilistic and non-probabilistic perspectives. Those include predictability for cases of epistemic uncertainty--e.g., due to ignorance, non-determinism. Such cases lend themselves to mathematical treatment in terms of belief functions--a class of functions each of which represents a body of evidence--and rules for combining belief functions (evidence from different sources) to produce new ones.
Timeliness and predictability of timeliness are the basis for what "real-time" means. 
Timeliness and predictability of timeliness are integral to the logic of an action (e.g., computational task) or system.
“Integral to” includes, but is not limited to, the correctness (whether binary or not) of that logic. Thus, in principle, an action or system of actions is not confined to either being a real-time one or not being a real-time one—any more than paint colors are confined to being either black or not black.
Herein it is often convenient to refer to that general class of actions and systems simply as “real-time” ones.
Note that the time frame(s)—e.g., whether microseconds or megaseconds—of a real-time action or system are encompassed in “integral to,” and are not per se part of the definition of “real-time.” Imagining that "real-time" is the same as "real fast" is an instance of being confused about one of the semantic aspects of timeliness.
This book provides a unique scholarly exposition of one set of timeliness foundations for general real-time systems, based on explicit first principles, not on historical anecdotal artifacts as is normal in the real-time computing field. That necessarily requires correcting the babel about concepts and terminology which is endemic in that field. Such confusion and inconsistency impedes the theory and practice of real-time systems, not only dynamic ones but even many static ones.
The purpose of this preview is to provide early access to some abbreviated and simplified content of the book. Writing the book is arduous, and must compete for time with my consulting tasks (and personal life).
Early access to some keystone content of the book is important to the real-time computing community and to society, because there is great demand in various civilian and military domains for assistance with real-time computing systems that have dynamic timeliness and predictability of timeliness, especially because of uncertainties. However, there is a negligible supply of real-time computer scientists and engineers with the requisite knowledge and experience (or interest) to satisfy those demands--thus creating valuable R&D opportunities.
Instead, almost all practitioners and researchers in real-time computing systems have limited themselves to the minority of predominantly or entirely static ones.
Fortunately, there is a large body of widely used theory and practice for dynamically real-time systems outside the computing field. Consequently, meeting the demands for dynamically real-time computing systems with inherent uncertainties has been addressed by:
  • application domain (military, industrial automation, telecommunications, etc.) subject matter experts but who usually lack substantive computer science, real-time computing, and uncertainty expertise;
  • researchers in the disparate theoretical topics, such as mathematical uncertainty models, that enable dynamic and particularly dynamically real-time computing systems—but who also usually lack the same substantive computer science and real-time computing expertise as the application domain experts, and who usually lack substantive exposure to dynamic application domains as well.
The corporate application domain experts almost always hold their dynamically real-time experiences to be proprietary, and the military experiences are usually classified.
The theoreticians write in the jargon of their fields and publish in forums of their fields. Few computer scientists and engineers, and application domain experts, have the time and inclination to even discover these results, much less attempt to understand and apply them.
Unsurprisingly, most of the resulting dynamically real-time computing systems have left much to be desired functionally and non-functionally in comparison with dynamically real-time systems outside the computing field.
This book seeks to help bridge at least part of that gulf using a combination of both innovative and extant principles from both sides—together with decades of hard-won personal experience contributing to the creation of dynamically real-time (including, but not limited to, computing) systems.
It takes a systematic (with limited formalisms) approach, using first principles, mental models, and a conceptual framework, to facilitate reasoning about dynamically real-time systems and epistemic uncertainties.
That approach has been successfully applied to various extreme instances of wickedly dynamic and uncertain (e.g., military combat) real-time systems for many years. It has also been applied in a number of computing fields outside of conventional real-time computing systems (e.g., cloud computing, web services, high performance computing, transactional systems).
To make this preview readily accessible to an intelligent non-specialist audience, three things reduce its breadth and depth.
First, the preview length is drastically limited from the book length—e.g., from an estimated 350 to approximately 50 printed letter size pages. That entails judicious selection, and concise re-writing, of only the most fundamental and easily understandable topics.
Second, Chapter 1 of the preview is relatively comprehensive and self-contained, given that it is just the first part of a preview (which may lead readers to a sense of déjà vu in subsequent chapters). The intent is that for some readers, Chapter 1 alone might suffice, at least initially.
Third, the prerequisite mathematical and other knowledge is reduced by using language that is a compromise between informality and accuracy.
The book itself is somewhat more demanding in parts than this preview. It requires a modest amount of cognitive maturity for abilities such as: learning unfamiliar concepts; reasoning about abstractions; problem solving; and making both conceptual and engineering trade-offs between algorithms and efficient implementations of them.
Some of the book’s references provide remedial coverage of formal topics that readers may need, such as for optimization (e.g., non-deterministic scheduling algorithms) and subjective probability theories. Other references provide optional additional background outside the main scope of the book. In this preview (but not the book), no-cost on-line references are chosen wherever suitable.
The readers of the preview and the book are expected to have experience with some non-trivial systems which are considered to be real-time ones according to some plausible definition. Examples include: real-time applications such as industrial automation control systems, robotics, military combat platform management, etc.; and real-time operating systems, including substantive ones such as Linux kernel based or UNIX/POSIX based (not just minimal embedded ones), at the level of their internals.
Many years of experience teaching and applying the material in this book have shown that: in some cases, the less that people know about traditional static real-time computing systems, the more natural and easy they find much of this material; while in other cases, the more that people know about traditional static real-time computing systems, the more foreign and difficult it is for them to assimilate this material. Chapter 1 discusses that dichotomy, and the sociology and psychology behind it—but that is not in this preview.
The book’s Chapters 1, 2, and 3 focus on dynamically real-time at the level of individual actions, such as tasks or threads executing in computing systems, and physical behaviors performed by exogenous non-computational (e.g., mechanical) devices.
Chapter 1 is about the general necessity for the concept of real-time actions and their properties, especially dynamic ones, to be based on first principles, mental models, and conceptual frameworks.
The minimal basics of epistemic uncertainty and alternative probability interpretations are introduced.
As noted above, Chapter 1 also deliberately seeks to serve as an extended abstract for the first three Chapters. (Attempted inclusion of Chapter 4 in Chapter 1’s abstract role has thus far made Chapter 1 too long and difficult, but an eventual revision of Chapter 1 may yet accomplish that inclusion.)
Chapter 2 discusses action completion time constraints, such as deadlines and time/utility functions, and expressing those in ubiquitous priority-based execution environments (e.g., operating systems).
Most of the concepts and techniques in Chapters 3 and 4 have a vast body of deep mathematical and philosophical theory, and a great deal of successful use in a variety of applications outside of real-time systems. The point of these chapters is to briefly introduce readers to using those concepts and techniques in dynamically real-time systems. Innate complexity, divergent viewpoints by probability theory and uncertainty researchers, and unsolved problems make these chapters somewhat difficult.
Chapter 3 is focused primarily on the very complex topic of predictability, particularly of timeliness, for making dynamic scheduling predictions and decisions.
The book provides an overview of some important probability interpretations for that purpose, and some predictability theories and metrics—especially for acceptably satisfactory predictions of acceptably satisfactory action completion times.
It begins with the familiar viewpoint of traditional probability theory. Then it re-focuses on alternative predictability theories unfamiliar to the book's presumed readership. These are needed for reasoning about richer (e.g., epistemic) uncertainties. They include mathematical theories of evidence (i.e., belief theories), which use application-specific rules for combining imprecise and potentially contradictory multi-source beliefs about the system and its environment.  
The mathematical theories of evidence discussed begin with Bayesian, and successively generalize to Dempster-Shafer theory, the Transferable Belief Model, and Dezert–Smarandache theory.
As a side effect, Chapter 3 provides the correct definitions of the essential terms predictable and deterministic that are ubiquitously misunderstood in the real-time computing practitioner and even research communities.
Chapter 4 addresses dynamically real-time systems, including computing systems and operating systems, that are based on dynamically real-time actions under uncertainties.
A system is characterized in part by its system model. That model defines properties such as those of its actions, and of how the system’s resource management agents (such as a scheduler) handle actions according to specified optimality criteria, including (but not limited to) actions’ timeliness and predictability of timeliness.
The essence of system resource management (especially by operating systems) is acceptably satisfactory resolution of concurrent contending action requests to access shared hardware and software resources (e.g., scheduling actions’ use of processor cycles and synchronizers) according to application-specific optimality criteria. Most scheduling optimization is NP-hard, so algorithmic scheduling heuristics and action dispatching rules are widely used.
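One simple and widely used family of dispatching rules for NP-hard utility-accrual scheduling is a greedy "utility density" heuristic: run the ready action with the highest expected utility per unit of expected execution time. The sketch below is a generic illustration of that idea, not the book's algorithm; the action names and numbers are invented assumptions.

```python
# Sketch: a greedy dispatching heuristic for utility-accrual scheduling --
# pick the ready action with the highest "utility density" (expected utility
# divided by expected execution time). The task data are illustrative only.

def pick_next(ready):
    """ready: list of (name, expected_utility, expected_exec_time) tuples."""
    return max(ready, key=lambda a: a[1] / a[2])

ready = [("sensor_fusion", 10.0, 2.0),   # density 5.0
         ("route_replan", 12.0, 4.0),    # density 3.0
         ("telemetry", 3.0, 0.5)]        # density 6.0
print(pick_next(ready)[0])  # telemetry: highest utility density
```

Such heuristics run in O(n log n) per decision (or O(n) as above) instead of the exponential cost of optimal sequencing, which is why dispatching rules dominate in practice.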
In dynamically real-time systems, that resolution must be acceptably satisfactory despite the number, kinds, and degrees of epistemic uncertainties. Uncertainties must be accommodated by the system resource management agent(s) (e.g., the scheduler) obtaining and refining credences of beliefs about the current state of the system and its time-evolution, using an appropriate mathematical theory. On that basis, resource management decisions (e.g., action schedules) are made.
Because almost no operational environments (e.g., operating systems) have first-order abstractions for action time constraints (e.g., deadlines) and timeliness predictability, the scheduled actions must then be mapped onto the operational environment’s priority mechanism (which in general is also NP-hard).
Real-time computing practitioners most often do an ad hoc manual assignment of O(10) priorities to actions (e.g., tasks) based on application action semantics, instead of algorithmically scheduling the actions. The actions are then dispatched by the OS in schedule order by simply indexing the actions with semantic-free priorities. The priority indexing can be performed by the scheduler, but that choice (aside from related OS design issues) tends to cause confusion of scheduling with dispatching, and hence of ends vs. means.
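The schedule-to-priority mapping just described can be sketched as follows: the scheduler produces an ordered list of actions, and priorities are merely semantics-free indices that make the OS dispatch in that order, coarsening when there are fewer priority levels than actions. This is a simplified illustration under assumed conventions (the function name, the O(10) priority count, and "numerically lower = higher priority" are all assumptions).

```python
# Sketch: mapping an algorithmically produced schedule (an ordered list of
# actions) onto a small fixed set of semantics-free OS priorities.
# Earlier actions in the schedule get higher priority (numerically lower,
# here); with more actions than levels, adjacent actions share a level,
# coarsening the schedule.

def map_schedule_to_priorities(schedule, num_priorities=10, highest=0):
    """Assign OS priority levels to actions in schedule order."""
    n = len(schedule)
    per_level = -(-n // num_priorities)  # ceiling division
    return {action: highest + i // per_level for i, action in enumerate(schedule)}

schedule = [f"task{i}" for i in range(25)]  # the scheduler's chosen order
prios = map_schedule_to_priorities(schedule, num_priorities=10)
print(prios["task0"], prios["task24"])  # first task highest, last task lowest
```

The coarsening when actions share a level is one reason this mapping is lossy, quite apart from the NP-hardness of optimal mappings noted above.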
The primary distinction of real-time systems resource management is that it is performed according to the application and system’s logic in terms of action (e.g., task) timeliness (e.g., deadlines, time/utility functions) and predictability of timeliness.
For example (here using the conventional interpretation of probability and avoiding the actual mathematical descriptions): that (say) no more than 10% of the most important actions will be more than 25% tardy; or that (say) the actions' accrued utility will be normally distributed with a mean at least 75% of its theoretical maximum, while no more than 50% of the actions will attain less than 80% of their individual utilities. An example using (say) a mathematical theory of evidence requires more prerequisite preparation than can be provided in this Introduction.
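A probabilistic criterion like the first one above can be checked by simple Monte Carlo simulation when a completion-time model is available. The sketch below is illustrative only: the normal completion-time model and all its parameters are assumptions invented for the example, and it uses the conventional (frequentist-style) interpretation, as the example above does.

```python
# Sketch: a Monte Carlo check of a probabilistic timeliness criterion such as
# "no more than 10% of actions are more than 25% tardy." The normal
# completion-time model and its parameters are illustrative assumptions.

import random

def fraction_excessively_tardy(deadline, mean, stddev, trials=100_000, seed=1):
    """Estimate P(completion time > 1.25 * deadline) under a normal model."""
    rng = random.Random(seed)
    limit = 1.25 * deadline  # "more than 25% tardy"
    late = sum(1 for _ in range(trials) if rng.gauss(mean, stddev) > limit)
    return late / trials

p = fraction_excessively_tardy(deadline=100.0, mean=90.0, stddev=20.0)
print(f"estimated fraction more than 25% tardy: {p:.3f}",
      "-> criterion met" if p <= 0.10 else "-> criterion violated")
```

With these assumed parameters the estimate is near the analytic tail probability of about 0.04, so the criterion is met; richer (e.g., evidential) uncertainty models require the machinery of Chapter 3.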
Embedded systems also are briefly described in Chapter 4 because of their close relationship to real-time systems: most real-time computing systems are embedded; and many embedded systems are real-time to some degree.
Chapter 5 summarizes research on dynamically real-time computing systems by my Ph.D. students, co-advisor colleagues, and me, and identifies some of the future work needed to expand on it. Those research results in their published form may be analytically challenging for some readers, but the chapter summarizes their important contributions.
Chapter 6 illustrates dynamic real-time computing systems and applications with examples inspired by my professional work, using material (re-)introduced in this book. It also shows some examples of how my time/utility function paradigm is being employed outside the field of real-time systems (e.g., high performance computing, clouds, web server farms, transaction systems, etc.).
Chapter 7 has a brief survey of some important open problems to be solved for dynamic real-time systems. It also notes relevant work by researchers inside and outside the field of real-time systems.
Chapter 8 describes the need for software tools appropriate for thinking about, demonstrating, and designing dynamic real-time computing systems. It describes one open source program for doing so using material from this book [ ]. It mentions generic tools, such as MATLAB [ ], R [ ], Stan [ ], JASP [ ], etc., which can be very useful.
Chapter 9 is an annotated list of selected references, primarily by members of my research teams, plus relevant papers and books by other authors.
 
E. Douglas Jensen
added a research item
Work-in-progress preview of a work-in-progress book: An Introduction to Fundamental Principles of Timeliness in Dynamically Real-Time Systems [Jensen 2020] E. Douglas Jensen ABSTRACT This book is a discourse about some of my novel research on real-time (including, but not limited to, computing) systems. It is a highly condensed and simplified work-in-progress preview of a work-in-progress book about that research: "An Introduction to Fundamental Principles of Timeliness in Dynamically Real-Time Systems" [Jensen 2018]. This research is based on my career's extensive experience performing research and development on real-time systems which have dynamic timeliness and predictability of timeliness resulting from intrinsic uncertainties--e.g., ignorance, imprecision, non-determinism, conflicts--in the system and its application environment. (My consulting practice web site is time-critical-technologies.com but I have retired from consulting except in certain cases of classified national security.) The book and this preview show that most real-time systems are beneficially recognized to be in that category, contrary to traditional real-time computing systems orthodoxy which presumes no uncertainties. Traditional real-time computing system concepts and techniques are for a narrow (albeit important) special case, based on presumed a priori omniscience of a predominately static periodic-based system model. They are not applicable to the general case of real-time systems which dominate the real world. The benefits of quantifying and exploiting uncertainties are very widely recognized in a great many fields of endeavor, not only real-time systems (e.g., in defense, medicine, finance, geo-science, economics, and many more). An example of dealing with uncertainties in real-time systems which has recently become familiar to everyone is semi-autonomous augmented driving of automobiles.
An informal dictionary definition of “timeliness” is “The fact or quality of being done or occurring at a favorable or useful time.” Obviously, in general there needs to be a formalism for specifying, measuring, and reasoning about “favorable” or “useful” with respect to a completion time. The traditional static real-time system model has no such formalisms except for its special case (“hard”). This book provides a principled perspective for general real-time systems which has been proven uniquely effective for artificial reasoning about timeliness and the predictability of timeliness, despite uncertainties—and yet which scales down to not just encompass but improve the traditional static special case. Reasoning about dynamically real-time actions and systems requires formalization of “timeliness” and “predictability.” This book and preview employ the time/utility (sometimes called time/value) function paradigm [Jensen 77, Jensen 85] as the basis for formalizing timeliness. Although that paradigm has been discussed in many papers and dissertations, much more detail about it is provided here. The expressiveness of this paradigm has proven to be instrumental in a variety of deployed dynamically real-time systems. A case study based on a class of actual applications is provided. Predictability of actions' completion times and thus accrued utility for that paradigm remains its most challenging and promising opportunity for continued research and development of theory and engineering. Formalizing predictability of timeliness in dynamically real-time systems requires use of an appropriate mathematical approach to dealing with uncertainties. There exists a variety of widely used candidate approaches, each having specific conceptual and practical properties for different applications. 
The book describes advantages and disadvantages of some approaches in the context of predictability of timeliness under epistemic uncertainties--specifically, for predicting the completion times and consequent utilities of scheduled real-time actions (e.g., computational tasks, exogenous mechanical motions). A notional case study is used as an exemplar application for comparing these approaches. Probability theory typically comes first to mind when considering predictability. However, there are a number of different interpretations of "probability" (i.e., probability theories), and that is an active field of research. The best known to non-specialists in probability is the frequentist interpretation (cf. tossing dice). It is shown herein to be the least appropriate for predictability in dynamically real-time systems, because it is about the outcome of a sequence of identical events--those are absent in such systems. Also well-known and widely used is Bayesian probability theory, but it has several drawbacks. One of the strongest criticisms of the Bayesian theory is its inability to distinguish between ignorance and randomness, which is overcome in other subsequent theories for dealing with uncertainty. Several of those, particularly the popular belief function and evidence theories (e.g., Dempster-Shafer theory, the Transferable Belief Model, Dezert-Smarandache theory), are candidates for being employed--including by schedulers--in certain dynamically real-time systems. These predict action and schedule timeliness and thus utilities by combining beliefs or evidence from multiple sources in the system and operational context. Unsurprisingly, greater epistemic uncertainty (e.g., ignorance) leads to greater computation costs for accommodating it. Fortunately, there is a rich body of literature on efficient algorithms for using belief function and evidence theory, which trade off different aspects of the solution space to accelerate computations.
Implementing algorithms in either graphical processing units or silicon has also been done. The book includes examples of how using these fundamental principles of timeliness and predictability of timeliness for dynamically resolving contention for shared resources can provide increased cost-effective operational mission effectiveness in certain interesting dynamically real-time systems.
E. Douglas Jensen
added an update
FAQ Q3: "What is the difference between dynamic real-time systems and embedded systems?"
A3: This question is off-topic but since it has been asked three times, I will take the opportunity to address a ubiquitous misunderstanding about embedded systems.
By “embedded system” we normally (but not necessarily) mean “digital computer system.”
First, it is very important to know about a common misunderstanding of the term “embedded system.” There is the literal meaning, and the typical meaning, of “embedded (computing) system.” The distinction between them has a major impact on the practice of embedded computing, as we will see in this answer, so we need a very brief tutorial.
The literal meaning is exactly what the term says: a computing system that is embedded in an application system. An “embedded” system is essentially an invisible black box that performs some functionality for the application system. The users of the application system might be aware of the embedded computing system, but they have no direct visibility or control of it—as far as they know, the embedded computer’s functionality could theoretically be performed by relays or trained monkeys or even by magic. The users of the application system see and use the human interface provided for them by the application system. The literal meaning is the general case.
Most often, an embedded system’s functionality includes both computation and reaction. Generically, that functionality can be thought of as performing some mix of closed-loop and open-loop control of devices that are part of, or exogenous to, the application system. For closed loops, the embedded computer receives data and signals from the application system and the application system’s environment, performs some computations, and outputs data and signals back to the application system and the application system’s environment. The operation of open loops follows self-evidently. This is similar to the familiar interaction between a regular desktop or laptop PC and devices inside it (e.g., drives) and external to it (e.g., scanner, camera). Clearly, in some cases the PC user is involved in some reactive behavior, and in other cases not. However, standalone computers are ordinarily considered primarily computational or transformational rather than reactive.
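The closed-loop pattern described above can be sketched in a few lines. This is a minimal illustration under assumed names (`read_sensor`, `actuate`, a simple proportional control law), not a real embedded controller:

```python
def closed_loop_step(read_sensor, actuate, setpoint, gain=0.8):
    """One iteration of a closed control loop: sense, compute, act."""
    measurement = read_sensor()      # input from the application system
    error = setpoint - measurement   # computation on that input
    command = gain * error           # simple proportional control law
    actuate(command)                 # output back to the application system
    return command

# Hypothetical stand-ins for real sensor and actuator interfaces:
readings = iter([19.0, 19.8, 20.0])   # simulated sensor values
commands = []                         # record of actuator commands
for _ in range(3):
    closed_loop_step(lambda: next(readings), commands.append, setpoint=20.0)
```

An open loop is the same sketch with the `read_sensor` step removed: the computer commands the device without feedback from it.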
I have been asked on RG how embedded systems relate to real-time systems. My reply is posted elsewhere, but here suffice it to say that “real-time system” is not as “all-or-nothing” or “hard” vs. “soft” as popularly imagined; systems are real-time in various ways and degrees—cf. Introduction to Fundamental Principles of Dynamic Real-Time Systems (https://www.researchgate.net/project/A-Preview-of-Introduction-to-Fundamental-Principles-of-Dynamic-Real-Time-Systems). Many application systems include functionality and devices that are considered real-time in the usual sense that correctness requires satisfying constraints on the timeliness and predictability of the application, and hence of the reactive interactions by the embedded system. Other application systems with embedded systems have the different objective of maximal performance (e.g., throughput, non-starvation).
A major distinction between an embedded system and a non-embedded one is that the developers of embedded system hardware and software usually need (potentially quite a lot of) understanding of the application system’s functionality and design, and of the devices therein with which the embedded system interacts. Indeed, often the embedded system’s hardware and software must be co-designed at a low level of detail (ordinarily the application system’s hardware and software are a given), calling for the education and experience of computer engineering as well as of computer science.
All of that having been said, what is the widespread misunderstanding about embedded systems? It is what was *not* said.
The typical meaning of “embedded system” is a special case of the literal meaning. It adds many explicit and even implicit assumptions—about the embedded computer, about the application system, and about the application system’s devices. Most notably, the term “embedded system” is almost always used with the assumption that the embedded computer is very small scale in size/weight/power, that its software is small, and that the application system is also relatively small scale and simple. Those assumptions are often correct, and everyone can immediately think of many examples.
(As a partial digression: it can be (and is) argued that a smartphone is an application system that has an embedded system—and conversely, that it is not, but is instead a handheld computer like a desktop or laptop PC. I leave that as an RG discussion, if it has not already been adequately covered.)
The typical meaning of “embedded system” overlooks the fact that such systems may be, and are, not only small scale but of various scales in capabilities and size/weight/power consumption—all the way up to real-time supercomputers and distributed real-time systems having nodes of various complexities. They are embedded in application systems of various scales, including very large, complex ones—e.g., aircraft carriers, jumbo jets. Such application systems almost always include multiple embedded systems of different scales. Most embedded system developers have not been exposed to large, complex real-time embedded systems and application systems.
That background was needed for me to give my answer to the question, because my experience and expertise in embedded systems is mostly in the literal sense, especially including large, complex, dynamic real-time ones. Those are primarily for military application systems, which are the most wickedly challenging ones, due to being engaged in combat and to the “fog of war.” In addition, such systems are unlikely to be mentioned in other replies to the question, which will use the typical meaning of embedded system.
A real but simplified example from personal experience is modern AESA multi-mission, multi-mode radars on large surveillance aircraft and large warships. Such radars can be expected to have embedded multiprocessor/multicore computers in the multi-million-US$ price range, running millions of lines of software to control the radar, and consuming vast amounts of power. The radar users (on-board and ground-based) certainly know there is an embedded computer in their radar system, but they do not directly see or control it—they see the application system (radar) user interfaces.
Other examples I am very familiar with are application systems for cruise and ballistic missile defense. Some of these have real-time embedded systems that are supercomputers costing upwards of US$1 million, with which the users do not directly interface.
People will probably be surprised to hear that some of the companies that make large scale real-time embedded computers are familiar only in non-real-time, non-embedded contexts (e.g., HP, IBM). People probably have not heard of most of the others (e.g., Mercury).
 
E. Douglas Jensen
added an update
FAQ Q2: A group of similar questions has become the most common one I have received. They are to the effect of: What is the relationship between “dynamic real-time computing systems” and “soft real-time computing systems”?
A2: This topic is discussed in detail beginning in Chapter 4 of the book, and is treated briefly in the Preview of Chapter 4.
There is a brief and thus necessarily incomplete answer: real-time computing systems are soft to the extent that they are not entirely static; and conversely, real-time computing systems are dynamic to the extent that they are not hard real-time. That begs the question [begthequestion.info], because it presumes the terms "hard" and "soft" each have precise scientific definitions.
But, sadly, there are almost as many "definitions" of those words as there are sources (people, publications, etc.) for them. Those multifarious "definitions" are ad hoc and imprecise. The worst example is saying a "soft real-time system" is one that is not "hard"---which is a useless tautology, like saying all colors (in the context of paint, not light) are either black or not black. Both fail because they provide no concepts and vocabulary for reasoning about the great many soft real-time computing systems, or about all the non-black colors.
This book derives precise definitions of "hard" and "soft" scientifically from first principles, first for actions and then for systems, thus providing a basis for cogent reasoning about them, and a formal answer to the question at hand.
Until Chapter 4, to a crude first approximation, the terms "dynamic real-time computing system" and "soft real-time computing system" can temporarily be used as synonyms.
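One way to make the hard/soft contrast concrete, as a rough illustration only and not the book's formal definitions, is in terms of the utility of an action's completion time: a hard constraint yields binary utility, while a soft constraint can let utility degrade gradually with lateness. The particular function shapes below are hypothetical.

```python
def hard_utility(completion_time, deadline):
    """A 'hard' time constraint: binary utility -- full value if the
    action completes by its deadline, no value otherwise."""
    return 1.0 if completion_time <= deadline else 0.0

def soft_utility(completion_time, deadline, decay=0.25):
    """One of many possible 'soft' constraints: full value up to the
    deadline, then utility decaying with lateness instead of
    dropping immediately to zero."""
    if completion_time <= deadline:
        return 1.0
    return max(0.0, 1.0 - decay * (completion_time - deadline))
```

The tautology criticized above corresponds to lumping every non-binary shape into one undifferentiated "soft" category, with no vocabulary for distinguishing among the many possible decay behaviors.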
 
E. Douglas Jensen
added an update
I receive numerous questions about the book I am previewing here. This is one of them, with my answer:
Q: The prerequisites seem much more formally arduous (theory of algorithms, discrete mathematics, probability theory, non-deterministic (e.g., stochastic) scheduling, and more) than those for users and programmers of most real-time computing systems.
A: Correct. The first reason is that dynamic real-time systems can be vastly more complex than the (almost completely) static ones on which both the research and the practitioner real-time computing communities focus. The complexity arises from the fact that everything that is static in the latter systems is dynamic in the former -- and moreover, dynamic real-time systems have principles, properties, and parameters for which there are no familiar static equivalents. The second reason is that most dynamic real-time systems necessarily need many of those dynamic characteristics to be tailored to the application at both configuration time and run time--obviously far more so than do static real-time systems--which requires at least some of the users to have the formal (and informal) skills to accomplish that. Most dynamic real-time systems are developed and used in application domains outside the conventional real-time computing field, and tend to be publicly invisible (proprietary or DoD classified).
 
E. Douglas Jensen
added a project goal
This is a condensed preview of my work-in-progress monograph "Introduction to Fundamental Principles of Dynamic Time-Critical Systems." The preview provides brief edited extracts about some keystone principles of timeliness in time-constrained systems. Real-time systems are a static special case of time-critical systems.