ES: Scheduling Algorithms

Scheduling algorithms define how the CPU is allocated among tasks to ensure timing correctness, responsiveness, and real-time guarantees in embedded systems. From fixed priority and round robin to RMS and EDF, they combine theoretical models with kernel-level mechanisms like preemption control, resource protection, and WCET analysis to build deterministic systems.

Scheduling Algorithms (From Theory to Implementation)

Scheduling in embedded systems is not merely about choosing the next task to run. It is about guaranteeing timing correctness, determinism, and system stability under worst-case conditions. In real-time systems, a missed deadline may lead to functional failure, safety hazards, or financial loss. Therefore, scheduling must be understood both theoretically and at the kernel implementation level.

The scheduler acts as the decision engine between application requirements and CPU execution.

txt
Application Tasks  -->  Scheduler Policy  -->  CPU Execution

In most RTOS systems, scheduling decisions are triggered by timer interrupts, task blocking/unblocking events, or explicit yields.


What Scheduling Really Means

At the core of any RTOS (like FreeRTOS), the scheduler decides:

  • Which task runs
  • For how long
  • When it is preempted
  • Whether deadlines are met

txt
           +------------------+
Ready ---> |   Scheduler      | ---> Running
 Queue     |  (Policy Logic)  |
           +------------------+
                  ^
                  |
              System Tick

The scheduler is triggered by:

  • Timer interrupt (SysTick)
  • Blocking/unblocking events
  • Interrupt service routines
  • Task yielding

Now let's explore the main scheduling strategies — from simplest to mathematically optimal.


Fixed Priority Scheduling

Fixed Priority Scheduling assigns each task a static priority at design time. The scheduler always selects the highest-priority READY task.

Example:

txt
Priority 3  -> Motor Control
Priority 2  -> Sensor Processing
Priority 1  -> Logging

If Motor Control becomes ready while Logging is running, Logging is preempted immediately.

txt
Before:
[ Logging Running ]

After:
[ Motor Control Running ]
[ Logging Preempted ]

This model is deterministic and simple to implement. It is the default model used in many RTOS kernels such as FreeRTOS.
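The selection rule can be sketched in a few lines of C. The structures below are illustrative, not the FreeRTOS API; a real kernel keeps per-priority ready lists instead of scanning an array.

```c
#include <stdint.h>

/* Illustrative task record: higher number = higher priority. */
typedef struct {
    const char *name;
    uint8_t priority;
    int ready;          /* 1 if READY */
} Task;

/* Return the index of the highest-priority ready task, or -1 if none. */
int pick_next(const Task *tasks, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    }
    return best;
}
```

With the example above, if Motor Control (priority 3) becomes ready while Logging (priority 1) is running, `pick_next` selects Motor Control on the next scheduling point.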


Round Robin Scheduling

When multiple tasks share the same priority, fairness must be ensured. Round Robin scheduling rotates execution among equal-priority tasks.

Example:

txt
Task A
Task B
Task C

Execution order:

txt
| A | B | C | A | B | C |

Each task receives CPU time in cyclic order. This prevents starvation within the same priority level.
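The rotation is just modular arithmetic over the equal-priority set. A minimal sketch (hypothetical helper names, three tasks A/B/C as in the example):

```c
#include <string.h>

/* Advance to the next task in the equal-priority set. */
int rr_next(int current, int count) {
    return (current + 1) % count;
}

/* Simulate `slices` scheduling decisions over tasks A, B, C and
   record the execution order as a string. */
void rr_schedule(char *out, int slices) {
    const char names[] = {'A', 'B', 'C'};
    int cur = -1;                  /* nothing running yet */
    for (int i = 0; i < slices; i++) {
        cur = rr_next(cur, 3);
        out[i] = names[cur];
    }
    out[slices] = '\0';
}
```

Running six slices reproduces the cyclic order | A | B | C | A | B | C | shown above.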


Time Slicing

Time slicing defines how long a task runs before switching to another task of equal priority. It is typically driven by a periodic system tick interrupt.

Example:

txt
Tick = 1 ms
Time Slice = 1 tick

Flow:

txt
    SysTick Interrupt
          |
          v
    Scheduler
          |
          v
    Context Switch

Context switching involves saving and restoring CPU registers:

txt
Save Registers
Switch Stack Pointer
Restore Registers

Time slicing improves fairness but does not guarantee deadline satisfaction.
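The tick-driven bookkeeping can be sketched as follows. This is an illustrative model, not real kernel code: the tick ISR decrements the running task's slice and requests a context switch when it expires.

```c
#include <stdint.h>

/* Illustrative time-slice state for one priority level. */
typedef struct {
    int current;        /* index of the running task        */
    int count;          /* tasks at this priority           */
    uint32_t slice;     /* ticks per time slice             */
    uint32_t remaining; /* ticks left in the current slice  */
} SliceState;

/* Called from the tick interrupt; returns 1 if a context
   switch to the next equal-priority task is required. */
int on_tick(SliceState *s) {
    if (--s->remaining > 0)
        return 0;                       /* slice not yet exhausted */
    s->current = (s->current + 1) % s->count;
    s->remaining = s->slice;            /* refill for the new task */
    return 1;
}
```

With Tick = 1 ms and Time Slice = 1 tick, every tick triggers a switch; larger slices trade context-switch overhead against responsiveness.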


Rate Monotonic Scheduling (RMS)

RMS is a fixed-priority real-time scheduling algorithm. Tasks with shorter periods receive higher priorities.

Example:

txt
Task A: Period = 10 ms
Task B: Period = 50 ms
Task C: Period = 100 ms

Priority assignment:

A > B > C

txt
Time →
|A|A|A|A|A|A|A|A|A|A|
   |   B   |   B   |
         |      C      |

The theoretical schedulability bound for n tasks:

CPU Utilization <= n(2^(1/n) - 1)

As n increases, the utilization limit approaches ln 2, approximately 69%. RMS guarantees deadlines only if total CPU utilization stays within this bound.


Earliest Deadline First (EDF)

EDF is a dynamic scheduling algorithm where the task with the nearest deadline runs first.

Example:

txt
Task A deadline = 20 ms
Task B deadline = 15 ms

Execution:

txt
|---B---|----A----|

EDF can theoretically achieve 100% CPU utilization in single-core preemptive systems. However, it requires runtime deadline tracking and introduces additional kernel complexity.
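The decision rule mirrors fixed-priority selection, but compares absolute deadlines instead of static priorities. A sketch with illustrative structures:

```c
#include <stdint.h>

/* Illustrative EDF task record. */
typedef struct {
    uint32_t deadline;  /* absolute deadline, e.g. in ms */
    int ready;
} EdfTask;

/* Return the index of the ready task with the earliest
   absolute deadline, or -1 if none is ready. */
int edf_pick(const EdfTask *t, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (t[i].ready &&
            (best < 0 || t[i].deadline < t[best].deadline))
            best = i;
    }
    return best;
}
```

With Task A (deadline 20 ms) and Task B (deadline 15 ms) both ready, B is picked first, matching the timeline above. In a real kernel this comparison must be redone at every release and completion, which is the runtime cost EDF pays for its optimality.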


Preemption Threshold

Preemption threshold allows a task to temporarily raise its effective priority, reducing unnecessary preemptions.

Example:

txt
Task A priority = 2
Threshold = 4

Only tasks with priority higher than 4 can preempt it. This reduces jitter and improves timing predictability for short critical sections.
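The preemption test reduces to a single comparison against the running task's threshold rather than its base priority (a sketch, assuming higher number = higher priority):

```c
/* A newcomer preempts the running task only if its priority
   exceeds the running task's preemption threshold (which is
   at least the running task's own base priority). */
int should_preempt(int running_threshold, int newcomer_priority) {
    return newcomer_priority > running_threshold;
}
```

For the example above (base priority 2, threshold 4), a priority-3 task that would normally preempt is held off, while a priority-5 task still gets through.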


Priority Inversion

Priority inversion occurs when a high-priority task waits for a resource

held by a low-priority task, while medium-priority tasks continue

executing.

Example:

txt
Low task locks mutex
High task blocks waiting
Medium task runs

Timeline:

txt
Low:    |----holds mutex----|
High:        | waiting |
Medium:           |----running----|

This can cause unbounded delays for high-priority tasks.

Real example:

The Mars Pathfinder lander (1997) suffered repeated system resets caused by priority inversion; the problem was fixed remotely by enabling priority inheritance on the offending mutex.

Preventing inversion requires protocol support in the kernel.


Priority Inheritance

Priority inheritance temporarily raises the priority of a low-priority task that holds a resource required by a higher-priority task.

When the high-priority task waits on a mutex held by the low-priority task:

Flow:

txt
H blocks on mutex
L priority = H
L runs
L releases mutex
L priority restored
H runs

This limits blocking time and reduces inversion duration.

txt
Before:
L(1), M(2), H(3)

After block:
L becomes priority 3

But priority inheritance still allows chained blocking.

For stronger guarantees, we use ceiling protocols.
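The inherit/restore flow above can be sketched in C. This is an illustrative model, not a real mutex API; a production kernel must also handle nested locks and multiple waiters.

```c
/* Illustrative task and mutex records. */
typedef struct { int base, effective; } PiTask;
typedef struct { PiTask *owner; } PiMutex;

/* H blocks on a mutex held by L: L inherits H's priority
   if it is higher than L's current effective priority. */
void pi_block_on(PiMutex *m, const PiTask *waiter) {
    if (m->owner && waiter->effective > m->owner->effective)
        m->owner->effective = waiter->effective;
}

/* L releases the mutex: its original priority is restored. */
void pi_release(PiMutex *m) {
    if (m->owner) {
        m->owner->effective = m->owner->base;
        m->owner = 0;
    }
}
```

With L(1) holding the mutex and H(3) blocking on it, L runs at effective priority 3 until release, so M(2) can no longer run in between, which is exactly the inversion being prevented.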


Priority Ceiling Protocol

Each resource is assigned a ceiling priority equal to the highest priority of any task that may access it. When a task locks the resource, its priority immediately becomes the ceiling value.

txt
Mutex Ceiling = 5
Low task locks -> priority becomes 5

This prevents chained blocking and guarantees bounded blocking time.

Benefits:

  • Prevents deadlock
  • Prevents chained blocking
  • Predictable blocking time

Used in safety-critical systems (AUTOSAR, avionics).
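The key difference from inheritance is that the priority boost happens eagerly at lock time, not lazily when someone blocks. A minimal sketch (illustrative structures):

```c
/* Illustrative task record. */
typedef struct { int base, effective; } PcTask;

/* Locking immediately raises the holder to the resource's
   ceiling: the highest priority of any task that may use it. */
void ceiling_lock(PcTask *t, int ceiling) {
    if (ceiling > t->effective)
        t->effective = ceiling;
}

/* Unlocking restores the holder's original priority. */
void ceiling_unlock(PcTask *t) {
    t->effective = t->base;
}
```

With Mutex Ceiling = 5, a priority-1 task runs at 5 for the whole critical section, so no task that could ever contend for the resource can preempt it mid-hold; this is what rules out chained blocking.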

Now we reach the most critical analysis concept.


Worst Case Execution Time (WCET)

WCET is the maximum time a task may take under worst-case conditions.

Without WCET, real-time guarantees cannot be validated.

Response time formula:

txt
Response Time =
	WCET
  + Blocking Time
  + Interference from Higher Priority Tasks

Sources affecting WCET:

  • Cache misses
  • Interrupt latency
  • Bus contention
  • Flash wait states
  • DMA interactions

txt
|---Blocking---|---Interference---|---Execution---|

In RMS response-time analysis: Ri = Ci + Bi + Σ over j in hp(i) of ceil(Ri / Tj) * Cj

Where:

  • Ri = response time of task i
  • Ci = WCET of task i
  • Bi = worst-case blocking time of task i
  • Tj, Cj = period and WCET of each higher-priority task j
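Because Ri appears on both sides, the equation is solved by fixed-point iteration: start with Ri = Ci and iterate until the value stabilizes or exceeds the deadline. A sketch (blocking term omitted for brevity; tasks indexed by priority, index 0 highest; deadline assumed equal to period):

```c
#include <stdint.h>

/* Solve Ri = Ci + sum over higher-priority j of ceil(Ri/Tj)*Cj
   by fixed-point iteration. Returns the response time, or 0 if
   it exceeds the period (task i is unschedulable). */
uint32_t response_time(const uint32_t *C, const uint32_t *T, int i) {
    uint32_t R = C[i], prev = 0;
    while (R != prev && R <= T[i]) {
        prev = R;
        R = C[i];
        for (int j = 0; j < i; j++)
            R += ((prev + T[j] - 1) / T[j]) * C[j]; /* ceil(prev/Tj)*Cj */
    }
    return (R <= T[i]) ? R : 0;
}
```

For the illustrative set C = {1, 2, 3}, T = {4, 6, 12}, the iteration converges to R = 1, 3, and 10 ms respectively, all within their periods, so the set is schedulable even though it would fail the simpler 69% utilization bound check by margin alone.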

In practice:

  • Measure using GPIO toggling + oscilloscope
  • Use static analysis tools
  • Use cycle counters (DWT on Cortex-M)

Example measurement approach (Cortex-M with CMSIS headers):

txt
CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable trace block   */
DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;            /* enable cycle counter */

DWT->CYCCNT = 0;
TaskFunction();
uint32_t cycles = DWT->CYCCNT;

Convert to time: time = cycles / CPU_frequency


From Theory to Implementation

A real RTOS scheduling cycle typically follows:

txt
Timer Interrupt
	  ↓
Update Tick Count
	  ↓
Move Expired Tasks to Ready List
	  ↓
Select Highest Priority Ready Task
	  ↓
Context Switch

Effective scheduling requires:

  • Proper priority assignment
  • Resource protection protocols
  • Accurate WCET estimation
  • Response-time analysis

Scheduling is not merely a kernel configuration. It is a system-level architectural decision combining mathematics, hardware behavior, and RTOS internals to guarantee deterministic execution.

Scheduling Algorithms Comparison Table

| Algorithm / Concept | Type | Priority Assignment | Determinism Level | CPU Utilization Bound | Advantages | Limitations | Typical Use Case |
|---|---|---|---|---|---|---|---|
| Fixed Priority Scheduling | Static | Assigned at design time | High | Depends on analysis | Simple, predictable, low overhead | May waste CPU capacity | Most RTOS systems (e.g., FreeRTOS) |
| Round Robin | Static (same priority) | Equal-priority tasks rotate | Medium | Not deadline-aware | Fairness among equal tasks | No deadline guarantee | Cooperative/background tasks |
| Time Slicing | Mechanism | Based on tick interval | Medium | Not analytical | Improves fairness | Context switch overhead | Equal-priority multitasking |
| Rate Monotonic (RMS) | Static real-time | Shorter period → higher priority | High | ~69% theoretical bound | Mathematically analyzable | Limited CPU utilization | Periodic control systems |
| Earliest Deadline First (EDF) | Dynamic real-time | Nearest deadline first | Very high | Up to 100% | Optimal CPU usage | Higher runtime overhead | High-utilization real-time systems |
| Preemption Threshold | Optimization | Temporary priority raise | High | N/A | Reduces jitter | Adds configuration complexity | Short critical sections |
| Priority Inversion | Problem | N/A | Low if unmanaged | N/A | N/A | Can cause unbounded delay | Shared-resource systems |
| Priority Inheritance | Mitigation | Low task inherits high priority | High | Bounded blocking | Simple and effective | Does not prevent deadlock | Mutex-based RTOS systems |
| Priority Ceiling Protocol | Mitigation | Task raised to resource ceiling | Very high | Fully bounded blocking | Prevents deadlock & chaining | More configuration effort | Safety-critical systems |
| WCET Analysis | Timing analysis | N/A | Essential for guarantees | Defines feasibility | Enables schedulability proof | Hard to measure precisely | Hard real-time validation |