Preemptive and non-preemptive event-driven embedded software

#embedded #programming #event-driven #rtos #state-machine #actor-model

In my "Modern Embedded Systems Programming" video course, I've recently explored various ways to execute event-driven active objects with internal hierarchical state machines.

The active object model offers such a wide range of options because of the inherently non-blocking nature of event-driven programming. However, a good question to ask is: Do you really need that much choice?

Well, a simple non-preemptive scheduler is undoubtedly handy. Event-driven systems (state machines) process events in run-to-completion (RTC) steps, and because all waiting for interesting occurrences (events) happens outside the event-driven active objects, the RTC steps can be free of any internal polling or blocking. This makes the RTC steps naturally short, with an easy-to-determine worst-case execution time (WCET). All this, in turn, means that a system of such event-driven components can be deterministic and meet its real-time deadlines.

But do you really need a preemptive scheduler in event-driven software? The concern is valid because preemption opens up a whole new class of concurrency problems, such as race conditions, data inconsistency, and other concurrency hazards. These problems tend to be challenging to test for, reproduce, isolate, and fix: the worst possible kind.

So, while a decision to use a preemptive scheduler should not be taken lightly, it might be the most effective and even the safest solution for certain kinds of event-driven systems.

Let me provide an example from my own experience. In the late 1990s, I worked on GPS receiver firmware that ran several hard real-time control loops for tracking GPS satellite signals. The loops executed every 500 microseconds. In parallel, the 20 MHz ARM CPU had to perform GPS position-fix calculations every 500 milliseconds, which involved inverting floating-point matrices on this fixed-point CPU. There were also other, slower tasks, such as computing the satellite ephemeris.

The point is that these RTC steps were CPU-bound and sometimes took hundreds of milliseconds to complete. It was simply impractical to identify and break up all the long RTC steps into pieces short enough to meet the 500-microsecond deadlines of the control loops. And even if it were possible, it would result in a fragile design, because any change in the long-running RTC steps could impact the timing of the high-priority control tasks.

For this type of problem, a preemptive kernel was the best solution, because such a kernel can ensure that high-priority tasks are virtually insensitive to changes in the low-priority tasks: high-priority RTC steps can preempt any lower-priority RTC steps. If you have a control application like that, a preemptive, priority-based kernel might actually be the simplest, most elegant, and most robust way to design and implement your system.


I fully agree with your perspective, and your GPS example illustrates the point very clearly. When I raised the topic of preemption, I was not referring to arbitrary preemption within an event's run-to-completion step. I was referring to the exact form of preemption you've consistently advocated in your work: preemption between RTC steps (between events), not inside them.

That distinction is essential. Preserving run-to-completion semantics keeps the state machine logic deterministic and analyzable, while priority-based preemption at event boundaries provides the temporal isolation needed when workloads are CPU-bound and not practically fragmentable.

In systems where RTC steps are short and predictable, a non-preemptive kernel is indeed simpler and superior. But as you explained, when long-running computations coexist with hard real-time control loops, cooperative designs become fragile. In that context, preemption at the event level is not only justified, it is often the most robust and safest architectural choice. So we're fully aligned: preemption is not a philosophical preference but a response to concrete timing constraints, and when applied at the right granularity, it complements the event-driven model.
