Introduction to Fundamentals of Timeliness and Predictability in Dynamically Real-Time Systems
Work-in-progress preview of a work-in-progress book--replaces previous articles. Updated 15 January 2019.


Abstract (for the preview)

Informally, a system is a “real-time” one if its core properties of timeliness and predictability of timeliness are integral to its logic, not just performance measures. In general, those properties are dynamic due to intrinsic aleatory and epistemic uncertainties about parameters of the system and its application environment; this book refers to "dynamically real-time systems." Despite such uncertainties, dynamically real-time systems have mixed application-specific kinds and degrees of criticality—including even the most extreme safety-critical systems (e.g., for warfare)—which can be expressed in terms of application qualities of services. Traditional real-time computing systems are a narrow special case whose parameters and core properties are predominantly static and periodic—almost all their parameters' values and time evolution are presumed to be known a priori. Those systems have very limited (albeit potentially important) applicability. Many dynamically real-time systems exist, created by application domain experts outside of, and unseen by, the real-time computing field. These experiences are often held as enterprise proprietary or government classified. Most real-time systems suffer from the lack of a coherent foundation for "real-time" per se. The design, implementation, and application of real-time systems (both static and dynamic) can be extended and strengthened by creating such a foundation based on first principles for those core properties. This book introduces one approach to that. Timeliness is made dynamically expressive using my time/utility (née time/value) functions and utility accrual (née value-based) scheduling paradigm [Jensen 77] [Jensen+ 85]. Predictability of dynamic timeliness requires reasoning about it with some formal theory of uncertainty.
This book surveys some popular theories, and focuses on applying mathematical theories of evidence (i.e., Dempster-Shafer Theory and its subsequent evolutions such as the Transferable Belief Model and the Theory of Hints). The foundation introduced here has been successfully employed in various real-time contexts having wickedly dynamic parameters and core properties, where traditional static real-time perspectives and traditions are inadequate and counter-productive.

Introduction (to the preview)

 "Sometimes shifting your perspective is much better than being smart."

-- Astro Teller, TED Talk

This is a work-in-progress, highly condensed and less formal, preview of my work-in-progress book Introduction to a Foundation for Timeliness and Predictability in Dynamic Real-Time Systems [Jensen 201x].

This Introduction first briefly summarizes the keystone ideas that the book and its preview discuss, and why I am writing about them. Then it describes the topic of each chapter. It is prerequisite reading for the rest of the preview.

The primary objective of the book and this preview is to introduce a scholarly yet practical foundation for timeliness and predictability of timeliness at both the action (e.g., task) and system levels. Those are the core properties of all real-time systems. Here they are the primary ones which distinguish the general case of dynamic real-time systems from the conventional subset of static ones—and thus are the basis for this foundation.

Such a foundation is important (and this one has been proven in practice to be invaluable) for generalizing conventional real-time system models and practices from their historical focus on the narrow special case of predominantly static periodic-based (informally termed “hard”) real-time systems.

That generalization encompasses all real-time systems—defined in this preview’s Introduction (temporarily informally) to be systems for which timeliness and predictability of timeliness are part of the logic of the system.

Although the real-time computing field erroneously dismisses non-hard systems as “soft” in various ad hoc and ambiguous ways, this foundation reveals non-hard (soft, as will be precisely defined herein) to be not just the general, but also the most common, case of real-time systems.

Informally, timeliness of a “happening”—an event (e.g., task completion) or information (e.g., data change)—is a measure of the extent to which its occurrence time can be related to the happening’s usefulness in specific circumstances.

Timeliness is orthogonal to relative importance. For example: an action can have a short deadline but be relatively unimportant because missing that deadline can be compensated for (e.g., missed input data can be extrapolated, missed outputs can be idempotent); an action can have a long deadline but be relatively important because missing the deadline is a critical mission failure (e.g., mission management phases for defense from hostile missiles can be computationally intensive for up to O(10^[1,2]) minutes with minimal slack time); the other two cases are often erroneously assumed to be the normal ones.

This also reveals that the practice by some in the real-time computing community of designating actions' (e.g., tasks') criticality by their relative worst case execution times or deadlines is a special case at best and inappropriate in general. The metric for mixed criticality real-time actions and systems is most generally application-specific.

In the conventional real-time computing system special case, timeliness metrics are ordinarily binary: whether a task completion deadline is met; whether the latency of an interrupt response or system call response is less than a least upper bound.

However, in principle (cf. scheduling theory) and real world practice, the logic of a real-time system includes relationships between: happenings’ lateness (earliness and tardiness) with respect to a deadline or least upper bound; and the consequent usefulness of the happening when it is manifest.

The book’s foundation furthermore provides for the frequent cases of even richer expressive relationships evident in an application—e.g., ones in which usefulness is not necessarily linear in lateness, and in which completion time constraints are not deadlines or least upper bounds—using Jensen’s time/utility functions.
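The contrast between a binary deadline metric and a richer time/utility function can be sketched as follows. This is a minimal illustration with hypothetical function shapes and parameter values; it is not a specific function from the book.

```python
# Minimal sketch contrasting a binary "hard deadline" timeliness metric
# with a richer time/utility function (TUF). The shapes and numbers here
# are illustrative assumptions, not functions taken from the book.

def hard_deadline_utility(completion_time, deadline):
    """Binary metric: full utility if the deadline is met, none otherwise."""
    return 1.0 if completion_time <= deadline else 0.0

def soft_tuf_utility(completion_time, deadline, cutoff):
    """A non-binary TUF: full utility through the deadline, then utility
    decays linearly to zero at a later cutoff time (tardiness matters,
    but lateness is not instantly fatal)."""
    if completion_time <= deadline:
        return 1.0
    if completion_time >= cutoff:
        return 0.0
    return (cutoff - completion_time) / (cutoff - deadline)

# A completion 2 time units past a deadline of 10, with a cutoff of 20:
print(hard_deadline_utility(12, 10))  # the binary view: all utility lost
print(soft_tuf_utility(12, 10, 20))   # the TUF view: most utility retained
```

A real TUF need not be linear or even monotonic past the deadline; the point is only that the application's actual time-versus-usefulness relationship becomes expressible.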

Continuing informally, something (notably here, timeliness) is predictable in the manner and to the extent that it can be known a priori. In conventional (i.e., “hard”) real-time computing system models, predictability is static and almost exclusively binary—i.e., it is presumed that all (especially task) completion time constraints will always be met (unless there is a failure). In that field, predictability is often mistakenly thought to mean “deterministic.”

That mistake can be recognized and remedied by considering predictability to be a continuum. In that type of model, the maximum predictability end-point is deterministic a priori knowledge, and the minimum predictability end-point is total absence of a priori knowledge. Everywhere in between those end-points on the predictability continuum lies some degree of non-determinism. Typically, happenings in dynamic real-time systems have predictability between the two end-points of that continuum, according to some formal model- and context-specific metric. A simplified example, using a statistical continuum for predictability, is: the maximum predictability (deterministic) end-point is represented by the constant distribution; the minimum predictability (maximum non-determinism) end-point is represented by the uniform distribution; and the metric is in terms of some model instantiation properties of probability distributions, such as shape, dispersion, etc.
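The simplified statistical continuum above can be made concrete with one possible metric. The sketch below assumes Shannon entropy as the dispersion-style measure—one candidate among the model-specific metrics the text leaves open—applied to a discretized distribution of completion times.

```python
# Sketch of the predictability continuum using Shannon entropy as the
# (model- and context-specific) metric: a constant distribution has zero
# entropy (the deterministic end-point); a uniform distribution has
# maximal entropy (the maximum non-determinism end-point). The choice of
# entropy is an illustrative assumption.
import math

def entropy(probabilities):
    """Shannon entropy in bits of a discrete probability distribution."""
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

outcomes = 8  # e.g., eight possible discretized task completion times

constant = [1.0] + [0.0] * (outcomes - 1)  # deterministic end-point
uniform = [1.0 / outcomes] * outcomes      # maximally non-deterministic
skewed = [0.65, 0.2, 0.05, 0.05, 0.02, 0.01, 0.01, 0.01]  # in between

print(entropy(constant))  # 0.0 bits: fully predictable
print(entropy(uniform))   # 3.0 bits: least predictable (log2 of 8)
print(entropy(skewed))    # strictly between: partial predictability
```

Other instantiations of the continuum could use variance, interquartile range, or distribution shape instead; the end-points are the same.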

Predictability is a vastly deeper and more complex topic than that intuitive continuum model.

Predictability consists of reasoning about kinds and degrees of uncertainty. That usually first brings to mind probability theory, but there are multiple different interpretations and theories of “probability.”

The familiar frequentist interpretation may seem applicable to predicting timeliness (e.g., task completion times) in static periodic-based real-time systems, but even there it can be misleading—and it is not at all applicable to dynamic real-time systems.

The term “dynamic” in the context of the book encompasses two distinct departures from almost all conventional static real-time systems. Both departures are common for timeliness and predictability of timeliness in the real world.

First, the system models’ defaults include that parameters (in particular, those which may be used for scheduling) such as task arrivals, task execution durations, task time constraints, task resource conflicts, etc. are not necessarily static. (Such system models are sometimes called “open” or "open world" ones.)

Instead, parameters may be aleatoric—i.e., statistically uncertain ones which may differ (e.g., stochastically) during system execution and each time the system is run. In most static real-time computing systems, modern processor and computer architectures force task execution times to be a dynamic parameter. Fitting that as “worst-case execution time” into the conventional static orthodoxy is very challenging.

Second, there may be inherent epistemic uncertainties (i.e., ignorance) about aspects of the system and its environment. They are not objectively statistical in nature, in the sense that statistical characterizations do not capture their values and meaning. Reasoning about these uncertainties requires employing an appropriate, often application-specific, theory of uncertainty from the extant ones.

The Bayesian theory of probability and inference for reasoning about uncertainty is popular in many fields. Bayesian probability can be assigned to any statement whatsoever, even when no random process is involved, to represent its subjective plausibility or the degree to which the statement is supported by the available evidence. However, it has major limitations—most notably that it cannot model ignorance, uncertain data, conflicting data, or scarce data. Those notable limitations make it unsuitable for predictability of timeliness in many dynamic real-time systems (and for various other applications). They are overcome by several more expressive theories based on mathematical entities called belief functions.

Very simply: Belief function theories for reasoning about the uncertainty of a hypothesis combine evidence—usually from different sources—to arrive at an updated degree of belief represented by a mathematical object called a belief function. Different rules for combining evidence and updating beliefs have been devised to better accommodate varied uncertainty characterizations (e.g., conflicting and contradictory evidence, number of evidence sources, fuzzy numbers). Belief function theories began with Dempster-Shafer theory, also called the mathematical theory of evidence. Evolved theories include the Theory of Hints, the Transferable Belief Model, and Dezert-Smarandache Theory. Each of these theories also has its own limitations which necessitate application-specific usage. Belief function theories are now well established as a general framework for reasoning with uncertainty, and have well understood connections to other frameworks such as probability, possibility, and imprecise probability theories. The field of application of these theories is very large.
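As a concrete taste of the Dempster-Shafer starting point, the sketch below combines two mass functions with Dempster's classic rule of combination on a tiny frame of discernment. The evidence sources ("sensor", "model") and their mass values are hypothetical, chosen only to show the mechanics.

```python
# Sketch of Dempster's rule of combination (Dempster-Shafer theory) on a
# two-element frame of discernment. Mass functions assign belief mass to
# subsets of the frame (frozensets here); the numbers are hypothetical.

def combine(m1, m2):
    """Dempster's rule: combine two mass functions over the same frame,
    renormalizing to discard conflicting (empty-intersection) mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

ON_TIME, LATE = "on-time", "late"
frame = frozenset({ON_TIME, LATE})

# Two evidence sources about whether a task will complete on time; mass
# assigned to the whole frame expresses each source's residual ignorance.
sensor = {frozenset({ON_TIME}): 0.6, frame: 0.4}
model = {frozenset({ON_TIME}): 0.5, frozenset({LATE}): 0.2, frame: 0.3}

fused = combine(sensor, model)
print(fused[frozenset({ON_TIME})])  # fused mass supporting on-time completion
```

Note how ignorance (mass on the whole frame) is a first-class citizen here, which is exactly what a single Bayesian prior cannot express; the evolved theories mentioned above differ mainly in how they treat the conflicting mass.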

Belief function theories can be natively expensive in computation time and memory space, especially in comparison with some of the (simplest) reasoning based on classical probability theory, and even in comparison with Bayesian inference. Depending on the uncertainties and computational resources used, belief function reasoning time frames may be much longer than those in predominantly static systems (recall that time frame magnitudes are not part of the definition of real-time). For example, very complex belief function reasoning for uncertainty-laden applications (e.g., military defense) takes place in sub-second time frames using supercomputer class machines—in some cases a commercial ground-based product, and in other cases custom-made mobile (e.g., surveillance aircraft) products.

Fortunately, the attractiveness of belief function theories for more comprehensively reasoning about uncertainty has engendered an active field of research and development on efficient algorithms, data structures, and even hardware (GPU, FPGA) implementations, for reducing those costs. Even so, the theories are usually instantiated in application-specific ways for maximal cost-effectiveness.

Any specific application may have kinds and degrees of uncertainties (e.g., of dynamic system model parameters) that outreach all resource-feasible formal reasoning approaches. Ultimately there will always be cases for which the only viable approach is to restructure an application so that uncertainty is manageable—just as is true for simple static real-time systems (e.g., worst-case execution times may be infeasible to ascertain or tolerate).

Belief function theories are increasingly used in a broad range of different fields, despite initial misunderstandings and mistakes in both research and practice.

The foundation in the book is novel in accommodating Bayesian and belief function theories, in addition to classical probability theory, for predicting timeliness (i.e., schedules’ outcomes) of dynamic real-time systems. In particular, it provides for innovative approaches to the historical challenge of predicting non-deterministic individual and collective completion times and consequent utilities for time/utility functions.

Generically, scheduling employs system model parameters which may be numerical constants (in static systems), or numeric variables (whether random or not). The scheduling algorithm produces a schedule (before or during system operation) according to one or more optimality criteria, while respecting constraints such as dependencies (e.g., task precedence, resource access conflict resolution rules). The optimality criteria for real-time scheduling explicitly or implicitly express the desired timeliness and predictability of timeliness of the scheduled entities (e.g., tasks) individually and collectively. In general, sequencing (scheduling, dispatching) is an NP-hard problem, so application-specific heuristics are normally used, which are devised to be sufficiently feasible and acceptably sub-optimal.
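One such heuristic can be sketched as follows: a greedy utility accrual scheduler that orders tasks by potential utility density and accrues each task's time/utility function value at its completion time. This is a deliberately simple illustrative heuristic under assumed linear-decay TUFs, not the book's algorithm, and it ignores dependencies and resource conflicts.

```python
# Sketch of a greedy utility accrual scheduling heuristic: order tasks by
# utility density (potential utility per unit of execution time), then
# accrue each task's time/utility function (TUF) value at its completion
# time. An illustrative heuristic only, not the book's algorithm.

def linear_decay_tuf(deadline, cutoff, max_utility):
    """TUF: full utility through the deadline, linear decay to zero at cutoff."""
    def tuf(t):
        if t <= deadline:
            return max_utility
        if t >= cutoff:
            return 0.0
        return max_utility * (cutoff - t) / (cutoff - deadline)
    return tuf

def greedy_utility_accrual(tasks):
    """tasks: list of (name, exec_time, tuf). Returns (schedule, accrued utility)."""
    # Highest potential utility density first (greedy; not generally optimal).
    ordered = sorted(tasks, key=lambda t: t[2](0.0) / t[1], reverse=True)
    now, schedule, accrued = 0.0, [], 0.0
    for name, exec_time, tuf in ordered:
        now += exec_time           # run the task to completion
        schedule.append(name)
        accrued += tuf(now)        # utility contributed at completion time
    return schedule, accrued

tasks = [
    ("telemetry", 2.0, linear_decay_tuf(deadline=4.0, cutoff=8.0, max_utility=10.0)),
    ("logging", 4.0, linear_decay_tuf(deadline=10.0, cutoff=20.0, max_utility=4.0)),
]
schedule, accrued = greedy_utility_accrual(tasks)
print(schedule, accrued)
```

In a dynamic system, the exec_time and TUF parameters would themselves be uncertain variables rather than constants, which is precisely where the probabilistic and belief function machinery below enters.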

Real-time scheduling with belief functions involves algorithms which formulate mathematical beliefs about timeliness and predictability of timeliness by combining beliefs about uncertain system model parameters with any available certain parameter information. The beliefs about those uncertain scheduling parameters are based on combining evidence from one or more pertinent (application, system) sources. Beliefs are combined and updated according to a (usually application-specific) belief updating rule, and are revised as evidence changes.

When belief functions are used with time/utility functions, the utility functions may be belief functions, and the utility accrual procedure is normally a belief function.

The book includes dynamic real-time application case studies with examples of employing and comparing Bayesian and belief function theories, in addition to classical probability theory, in those applications’ contexts.

For case studies (and in practice), the choice of application contexts should balance their need—e.g., relative (e.g., societal, economic, etc.) importance and timeliness predictability difficulty due to uncertainties—against their time frames’ adequacy for the complexity of reasoning about schedules’ timeliness. Available application subject matter expertise is also essential.

The context of the book’s primary case study is a military combat system, namely battle management (e.g., missile defense). Such systems have a variety of real-time behaviors with dynamic kinds and degrees of uncertainty (plus low-level subsystem static behaviors). Despite that, the missions can be as human safety-critical as imaginable, having constituent real-time functionalities whose individual criticalities vary innately and circumstantially in terms of mission operational effectiveness. The time frames for dynamic real-time scheduling include ones on the order of seconds and minutes. Subject matter expertise is gained from the book author’s and other (collaborator and independent) persons’ work in that application context—including from the successful use of the book’s foundation in that context.


To make this preview readily accessible to an intelligent non-specialist audience, three things reduce its breadth and depth.

First, the preview length is drastically limited from the book length—e.g., from an estimated 350 to approximately 50 pages. That entails judicious selection, and concise re-writing, of only the most fundamental and easily understandable topics.

Second, Chapter 1 of the preview is relatively comprehensive and self-contained, given that it is just the first part of a preview (which may lead readers to a sense of déjà vu in subsequent chapters). The intent is that for some readers, Chapter 1 alone might suffice, at least initially.

Third, the prerequisite mathematical and other knowledge is reduced by using language that is a compromise between informality and accuracy.

The book itself is somewhat more demanding in parts than this preview. It requires a modest amount of cognitive maturity for abilities such as: learning unfamiliar concepts; reasoning about abstractions; problem solving; and making both conceptual and engineering trade-offs between algorithms and efficient implementations of them.

Some of the book’s references provide remedial coverage of formal topics that readers may need, such as for optimization (e.g., non-deterministic scheduling algorithms) and set theory (e.g., probability theories). Other references provide optional additional background outside the main scope of the book. In this preview (but not the book), no-cost on-line references are chosen wherever suitable.

The readers of the preview and the book are expected to have experience with some non-trivial systems which are considered to be real-time ones according to some plausible definition. Examples include: real-time applications such as industrial automation control systems, robotics, military combat platform management, etc.; and real-time operating systems, including substantive ones such as Linux kernel based or UNIX/POSIX based (not just minimal embedded ones), at the level of their internals.

Many years of experience teaching and applying the material in this book have shown that: in some cases, the less that people know about traditional static real-time computing systems, the more natural and easy they find much of this material; while in other cases, the more that people know about traditional static real-time computing systems, the more foreign and difficult it is for them to assimilate this material. Chapter 1 of the book discusses that dichotomy, and the sociology and psychology behind it—but that is not in this preview.

The book’s Chapters 1, 2, and 3 focus on dynamically real-time at the level of individual actions, such as tasks or threads executing in computing systems, and physical behaviors performed by exogenous non-computational (e.g., mechanical) devices.

Chapter 1 is about the general necessity for the concept of real-time actions and their properties, especially dynamic ones, to be based on productive thinking, first principles, mental models, and conceptual frameworks. As noted above, Chapter 1 also deliberately seeks to serve as an extended abstract for the first three Chapters. (Attempted inclusion of Chapter 4 in Chapter 1’s abstract role has thus far made Chapter 1 too long and difficult, but an eventual revision of Chapter 1 may yet accomplish that inclusion.)

Chapter 2 discusses action completion time constraints, such as deadlines and time/utility functions, and expressing those in ubiquitous priority-based execution environments (e.g., operating systems).

Most of the concepts and techniques in Chapters 3 and 4 draw on a vast body of deep mathematical theories and philosophy of probability, and a great deal of their successful use in a variety of applications outside of real-time systems. The point of these chapters is to briefly introduce readers to using those concepts and techniques in dynamically real-time systems. Even though the book's mathematics is avoided, innate complexity, divergent viewpoints among probability theory and uncertainty researchers, and unsolved problems make these chapters somewhat difficult.

Chapter 3 is focused primarily on the very complex topic of predictability, for dynamic resolution of contention for shared resources (e.g., scheduling) and predicting the outcomes, particularly timeliness—in the presence of uncertainties.

The book provides an overview of some important probability interpretations for that purpose, and some predictability theories and metrics—especially for acceptably satisfactory predictions of acceptably satisfactory action and schedule completion times. It begins with the familiar viewpoint of traditional probability theory. Then it re-focuses on alternative predictability theories unfamiliar to the book's presumed readership. These are needed for reasoning about richer (e.g., epistemic) uncertainties. They include mathematical theories of evidence (i.e., belief theories), which use application-specific rules for combining imprecise and potentially contradictory multi-source beliefs about the system and its environment to form new beliefs.

As a side effect, Chapter 3 provides the correct definitions of the essential terms predictable and deterministic that are ubiquitously misunderstood in the real-time computing practitioner and even research communities.

Chapter 4 addresses dynamically real-time systems, including computing systems and operating systems, that are based on dynamically real-time actions under uncertainties.

A system is characterized in large part by its system model. That defines properties such as those of its actions, and of how the system’s resource management agents (such as a scheduler) handle actions according to specified optimality criteria including (but not limited to) actions’ timeliness and predictability of timeliness.

The essence of system resource management (especially by operating systems) is acceptably satisfactory resolution of concurrent contending action requests to access shared hardware and software resources (e.g., scheduling actions’ use of processor cycles and synchronizers) according to application-specific optimality criteria.

In dynamically real-time systems, that resolution must be acceptably satisfactory despite the number, kinds, and degrees of epistemic uncertainties. Uncertainties must be accommodated by the system resource management agent(s) (e.g., scheduler) obtaining and refining credence of beliefs about the current and time-evolving state of the system, using an appropriate mathematical theory. On that basis, resource management decisions (e.g., action schedules) are made.

Because almost no operational environments (e.g., operating systems) have first order abstractions for action time constraints (e.g., deadlines) and timeliness predictability, the scheduled actions must then be mapped onto the operational environment’s priority mechanism.
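Such a mapping can be sketched in a few lines. The sketch below assumes a POSIX-style fixed-priority range (e.g., the 1–99 range commonly used for static real-time priorities, where a larger number means higher priority) and maps a schedule order, computed elsewhere, onto descending priorities so that the priority dispatcher executes actions in the intended order.

```python
# Sketch of mapping a computed schedule order onto an operating system's
# priority mechanism. Assumes a POSIX-style static priority range where a
# larger number means higher priority; the bounds are illustrative.

PRIO_MIN, PRIO_MAX = 1, 99  # e.g., a SCHED_FIFO-like static priority range

def schedule_to_priorities(ordered_actions):
    """Map actions (earliest-scheduled first) to descending priorities, so
    a fixed-priority dispatcher executes them in the intended order."""
    if len(ordered_actions) > PRIO_MAX - PRIO_MIN + 1:
        raise ValueError("more actions than distinct priority levels")
    return {action: PRIO_MAX - i for i, action in enumerate(ordered_actions)}

# A schedule computed elsewhere (e.g., by deadline order or utility accrual):
ordered = ["guidance", "tracking", "telemetry", "logging"]
print(schedule_to_priorities(ordered))
```

In practice the mapping must be redone whenever the scheduler revises the schedule, and priority-level exhaustion or inversion handling complicates it considerably; this sketch shows only the core idea.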

Embedded systems also are briefly described in Chapter 4 because of their close relationship to real-time systems: most real-time computing systems are embedded; and many embedded systems are real-time to some degree.

Chapter 5 summarizes research on the paradigm of time/utility functions and utility accrual by my Ph.D. students, co-advisor colleagues, and me, and identifies some of the future work needed to expand on that. Those research results in their published form may be analytically challenging for some readers, but the chapter summarizes their important contributions.

Chapter 6 illustrates dynamically real-time computing systems and applications with examples inspired by my professional work, using material (re-)introduced in this book. It also shows some examples of how the time/utility function paradigm is being employed outside the field of real-time systems (e.g., high performance computing, clouds, web server farms, transaction systems, etc.).

Chapter 7 has a brief survey of some important open problems to be solved for dynamically real-time systems. It also notes relevant work by researchers inside and outside the field of real-time systems.

Chapter 8 describes the need for software tools appropriate for thinking about, demonstrating, and designing dynamically real-time computing systems. It describes one open source program for doing so using material from this book [ ]. It mentions generic tools, such as MATLAB [ ], R [ ], Stan [ ], JASP [ ], etc., which can be very useful.

Chapter 9 is an annotated list of selected references, primarily by members of my research teams, plus relevant papers and books by other authors.


Real-Time (excerpted and condensed from the book)

Productive Thinking, First Principles, and Mental Models (from Chapter 1)

"Typically, we think reproductively—that is, on the basis of similar problems encountered in the past. When confronted with problems, we fixate on something in our past that has worked before. We ask, "What have I been taught in life, education or work on how to solve the problem?" Then we analytically select the most promising approach based on past experiences, excluding all other approaches, and work within a clearly defined direction towards the solution of the problem. Because of the soundness of the steps based on past experiences, we become arrogantly certain of the correctness of our conclusion.

In contrast, [one should] think productively, not reproductively. When confronted with a problem, [one should] ask "How many different ways can I look at it?", "How can I rethink the way I see it?", and "How many different ways can I solve it?" instead of "What have I been taught by someone else on how to solve this?" [Then people] tend to come up with many different responses, some of which are unconventional and possibly unique.

In order to creatively solve a problem, the thinker must abandon the initial approach that stems from past experience and re-conceptualize the problem. By not settling with one perspective, [people] do not merely solve existing problems, like inventing an environmentally-friendly fuel.

They identify new ones."

-- Michael Michalko, creativitypost.com


(to be continued...copied and revised from the previous article which LinkedIn software problems prevented me from updating)


