ES: Interrupt, Clock, Reset and Event Handling Architecture

The Interrupt, Clock, and Reset subsystems define how an MCU starts, runs, and reacts. The Interrupt Vector Table and ISR mechanism implement deterministic, hardware-driven control transfer while preserving program state through stack-based context saving.

A microcontroller does not just execute instructions sequentially.

It is a real-time reactive machine that must:

  • Start in a deterministic state
  • Run with precise timing
  • React immediately to external and internal events

These behaviors are controlled by three tightly coupled subsystems:

```
                 +------------------+
                 |      CLOCK       |
                 +------------------+
                        |
                        v
+------------------+   CPU   +------------------+
|      RESET       |-------->|    INTERRUPT     |
+------------------+         +------------------+
```

Clock drives execution.

Reset defines initial state.

Interrupts dynamically redirect execution flow.

To understand interrupts deeply, we must first understand how the CPU executes instructions.


Clock Driven Execution and Event Awareness

The CPU executes instructions in a loop:

FETCH → DECODE → EXECUTE → NEXT

Synchronous digital logic means:

Every state change occurs on a clock edge.

```
Clock:   ↑   ↑   ↑   ↑   ↑
         |   |   |   |   |
State:  I1  I2  I3  I4  I5
```

Between two clock edges, the CPU checks whether an interrupt request is pending.

So interrupt handling is not random — it is synchronized with clock cycles.

This leads us to how events are detected.


Polling as an Active Monitoring Mechanism

Before hardware interrupts existed, event handling was done by polling.

Polling means:

The CPU repeatedly checks peripheral status.

```c
while (1)
{
    /* TIMER_FLAG must be a volatile peripheral register read in real
       code, or the compiler may hoist the check out of the loop */
    if (TIMER_FLAG == 1)
    {
        handle_timer();
    }
}
```

Polling cycle:

```
Check flag
   |
   v
Flag set?
  |     |
 No     Yes
  |       |
  v       v
Repeat   Process
           |
           v
        Repeat
```

Polling rate defines responsiveness.

If the flag is polled every 10 ms, the worst-case detection delay is 10 ms.

This wastes CPU cycles and power.

So hardware designers introduced interrupt logic.


Interrupt as Hardware Driven Control Transfer

An interrupt is a hardware signal that causes the CPU to temporarily suspend current execution and jump to a dedicated handler.

Unlike polling:

The CPU does not check continuously.

Instead, hardware asserts an interrupt line:

Peripheral → Interrupt Line → Interrupt Controller → CPU

This introduces a new architectural component:

```
+-----------------------+
|  Interrupt Controller |
+-----------------------+
```

Now we must understand how this controller works internally.


Interrupt Controller and Signal Arbitration

The interrupt controller performs:

  • Source detection
  • Priority resolution
  • Masking
  • Vector selection

Internally:

```
              +---------------------+
External IRQ →|                     |
Timer IRQ    →|                     |
UART IRQ     →|  Interrupt Logic    |→ CPU
ADC IRQ      →|                     |
              +---------------------+
                   |     |     |
                Mask   Priority  Flag
```

If multiple interrupts occur simultaneously:

A priority encoder decides which one is served first.
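That arbitration can be sketched on a host as a minimal sketch, assuming the common convention that a lower line number means higher priority; this models the concept, not any specific controller:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: given a bitmask of pending interrupt lines and a
 * bitmask of enabled lines, return the highest-priority pending source.
 * Convention assumed here: lower line number = higher priority.
 * Returns -1 if nothing is both pending and enabled. */
int highest_pending(uint32_t pending, uint32_t enabled)
{
    uint32_t active = pending & enabled;   /* masked-out lines are ignored */
    for (int line = 0; line < 32; line++) {
        if (active & (1u << line))
            return line;                   /* first (lowest) set bit wins */
    }
    return -1;                             /* no serviceable interrupt */
}
```

Masking happens before arbitration: a pending but disabled line never reaches the encoder.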

Once the CPU accepts the interrupt, it needs to know where to jump.

This is where the Interrupt Vector Table becomes critical.


Interrupt Vector Table Deep Architecture

The Interrupt Vector Table is not just a list.

It is the hardware mapping between interrupt numbers and handler entry addresses.

It is stored in flash at a fixed location (or is relocatable in advanced MCUs).

Conceptually:

Vector Table = Array of function pointers

Memory layout example:

```
FLASH MEMORY

Address         Content
--------------------------------
0x00000000      Initial Stack Pointer
0x00000004      Reset Handler
0x00000008      NMI Handler
0x0000000C      HardFault Handler
0x00000010      Timer Handler
0x00000014      UART Handler
0x00000018      GPIO Handler
```

When interrupt number N occurs:

PC ← Memory[Vector_Table_Base + 4 × N]

(the CPU loads the handler address stored at that table entry; it does not jump to the table itself)
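The "array of function pointers" view can be modeled directly on a host; the handler names below are illustrative, and the indices mirror the flash layout shown above:

```c
#include <assert.h>

/* Host-side sketch of the vector table concept. On real hardware the CPU
 * loads PC from Memory[base + 4*N]; here that load is an array index. */
static int last_served = -1;           /* records which handler ran */

static void timer_handler(void) { last_served = 4; }
static void uart_handler(void)  { last_served = 5; }

typedef void (*isr_t)(void);

/* Slots 0..3 (initial SP, Reset, NMI, HardFault) are left empty for
 * brevity; indices 4 and 5 match the Timer and UART entries above. */
static isr_t vector_table[8] = {
    0, 0, 0, 0, timer_handler, uart_handler, 0, 0
};

void dispatch(int n)
{
    if (vector_table[n])               /* jump through the table entry */
        vector_table[n]();
}
```

On hardware there is no `dispatch` function — the load and jump are performed by the interrupt entry logic itself.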

Important architectural details:

  • Vector table alignment is required
  • It must be in executable memory
  • It is usually defined in startup assembly
  • The linker script places it in .isr_vector section

A conceptual linker script snippet:

```
.isr_vector :
{
    KEEP(*(.isr_vector))
} > FLASH
```

The KEEP directive ensures the vector table is not optimized away.

Now we must examine what happens inside the CPU when jumping to ISR.


Context Saving and Stack Behavior During Interrupt

Interrupt handling requires preserving program state.

If we don't save context:

Returning from the ISR would corrupt execution.

When an interrupt is accepted:

The CPU automatically pushes context onto the stack.

Typical saved registers:

  • Program Counter
  • Status Register
  • General purpose registers (architecture dependent)

Stack operation:

```
Before interrupt:

   Stack Top
       |
       v
   +--------+
   |  Data  |
   +--------+

During interrupt:

Push PC
Push Status

   Stack Top
       |
       v
   +--------+
   | Status |
   +--------+
   |   PC   |
   +--------+
   |  Data  |
   +--------+
```

Then:

PC ← ISR address

After ISR:

```
Pop Status
Pop PC
```

Execution resumes exactly where it stopped.

This mechanism is entirely hardware-driven.
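The push/pop ordering above can be checked with a small host-side simulation. Only PC and the status register are modeled; real cores push more registers, and the stack layout here is deliberately simplified:

```c
#include <assert.h>
#include <stdint.h>

#define STACK_SLOTS 16

static uint32_t stack_mem[STACK_SLOTS];
static int sp = 0;                     /* empty ascending stack, for clarity */

static void push(uint32_t v) { stack_mem[sp++] = v; }
static uint32_t pop(void)    { return stack_mem[--sp]; }

/* Interrupt entry: hardware pushes PC, then status, then vectors. */
void enter_isr(uint32_t *pc, uint32_t *status, uint32_t isr_addr)
{
    push(*pc);
    push(*status);
    *pc = isr_addr;                    /* PC <- ISR address */
}

/* Interrupt exit: hardware pops in the reverse order. */
void exit_isr(uint32_t *pc, uint32_t *status)
{
    *status = pop();
    *pc = pop();                       /* resume exactly where we stopped */
}
```

The symmetry of push and pop order is what makes the resume transparent to the interrupted code.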

Now let's examine the ISR itself.


Interrupt Service Routine Internal Rules

An ISR is not a normal function.

It has special ABI behavior.

Example in C (the signal attribute shown here is AVR-GCC specific):

```c
void __vector_1(void) __attribute__((signal, used));
```

Compiler attributes:

  • signal → generate proper prologue/epilogue for interrupt
  • used → prevent removal by optimizer

ISR constraints:

  • Must be short
  • Must avoid blocking
  • Should avoid heavy computation
  • Should avoid dynamic memory
  • Should avoid printf

Why? Because while an ISR runs:

  • Lower priority interrupts blocked
  • System latency increases
  • Real-time deadlines may be violated

Advanced best practice:

ISR should only:

```
Set flag
Store data in buffer
Exit
```

Processing should occur in main loop or RTOS task.
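The "ISR stores, main loop processes" pattern is commonly implemented with a ring buffer; the sketch below is illustrative, and on real hardware the indices would need to be volatile because the ISR and main loop run concurrently:

```c
#include <assert.h>
#include <stdint.h>

#define BUF_SIZE 8                     /* power of two keeps wrap cheap */

static uint8_t buf[BUF_SIZE];
static unsigned head, tail;            /* volatile on a real MCU */

/* Called from the ISR: store the byte and exit quickly. */
void isr_store(uint8_t byte)
{
    buf[head % BUF_SIZE] = byte;
    head++;
}

/* Called from the main loop: drain one byte if available.
 * Returns 1 if a byte was read, 0 if the buffer was empty. */
int main_loop_read(uint8_t *out)
{
    if (tail == head)
        return 0;                      /* nothing pending */
    *out = buf[tail % BUF_SIZE];
    tail++;
    return 1;
}
```

The ISR only touches head and the main loop only touches tail, which is what keeps this single-producer/single-consumer scheme safe without locks.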

This leads us to interrupt nesting.


Priority, Masking and Nested Interrupts

The interrupt controller allows:

  • Enable/disable per interrupt
  • Global interrupt enable
  • Priority levels

The global enable flag is often called IEN (Global Interrupt Enable).

If IEN = 0 → no interrupts are serviced.
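This two-level gate (global IEN plus a per-interrupt enable bit) can be sketched as follows; the register names are illustrative, not from a specific MCU:

```c
#include <assert.h>
#include <stdint.h>

/* An interrupt line is serviced only if the global enable is set AND
 * its own enable bit in the mask register is set. */
int will_be_serviced(int ien, uint32_t enable_mask, int line)
{
    if (!ien)
        return 0;                      /* IEN = 0 -> nothing is serviced */
    return (enable_mask >> line) & 1u; /* per-interrupt enable bit */
}
```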

Nested interrupts:

If a high-priority interrupt occurs during a lower-priority ISR:

```
Low ISR running
     |
High interrupt arrives
     |
CPU pauses Low ISR
     |
Executes High ISR
     |
Returns to Low ISR
```

Stack grows deeper with nesting.

This requires careful stack size configuration.
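A back-of-envelope sketch of how nesting depth drives the stack budget; the frame size is architecture dependent, and the figures used in the test below are purely illustrative:

```c
#include <assert.h>

/* Each nesting level pays one hardware-stacked frame plus whatever the
 * ISR itself allocates for locals. Real budgeting must also add the
 * worst-case main-thread usage and a safety margin. */
unsigned worst_case_isr_stack(unsigned nesting_levels,
                              unsigned hw_frame_bytes,
                              unsigned isr_local_bytes)
{
    return nesting_levels * (hw_frame_bytes + isr_local_bytes);
}
```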


Interrupt Life Cycle at Cycle Level

Detailed hardware sequence:

```
Fetch instruction
Execute instruction
Check Global IEN
Check pending interrupt lines

If none → continue
If pending → set interrupt flag
Push PC
Push Status

Load PC from IVT
Execute ISR

Pop Status
Pop PC
Clear interrupt flag

Resume main
```

Graphical flow:

```
             +---------------------+
             | Execute Instruction |
             +---------------------+
                       |
                       v
                  Interrupt?
                   /      \
                 No        Yes
                 |          |
                 v          v
              Next Inst   Save Context
                             |
                             v
                        PC ← IVT
                             |
                             v
                         Execute ISR
                             |
                             v
                        Restore Context
```

This is deterministic and clock-synchronized.

Now let's clarify two important timing metrics.


Interrupt Latency and Interrupt Response

Interrupt latency is:

Time from interrupt request to first ISR instruction.

It includes:

  • Completion of current instruction
  • Context save
  • Vector fetch

Interrupt response time is:

Time from interrupt request to ISR completion.

So: Response = Latency + ISR execution time
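The two metrics can be expressed as a small worked example in CPU cycles; the component counts in the test are illustrative, not datasheet values:

```c
#include <assert.h>

/* Latency = finish current instruction + save context + fetch vector. */
unsigned interrupt_latency(unsigned current_instr_cycles,
                           unsigned context_save_cycles,
                           unsigned vector_fetch_cycles)
{
    return current_instr_cycles + context_save_cycles + vector_fetch_cycles;
}

/* Response = Latency + ISR execution time. */
unsigned interrupt_response(unsigned latency_cycles, unsigned isr_cycles)
{
    return latency_cycles + isr_cycles;
}
```

Note that the latency term is bounded by hardware, while the ISR term is entirely up to the programmer — which is why ISR brevity matters.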

In real-time systems:

Latency must be bounded.

ISR must be deterministic.


Connecting Reset, Clock and Interrupt at System Level

Complete system behavior:

```
Power On
   |
Reset Asserted
   |
Clock Stabilizes
   |
Reset Released
   |
CPU Starts
   |
Interrupts Enabled
   |
System Reactive
```

Full architecture:

```
           +------------------+
           |    CLOCK TREE    |
           +--------+---------+
                    |
                    v
                CPU CORE
                    |
        +-----------+-----------+
        |                       |
+-------v-------+       +-------v-------+
| INTERRUPT CTRL|       | RESET CTRL    |
+---------------+       +---------------+
```

These subsystems together define:

  • Determinism
  • Power efficiency
  • Real-time capability
  • Security robustness



Real Microcontroller Implementation Example

Consider the STM32F407.

It includes:

  • NVIC interrupt controller
  • Configurable vector table relocation
  • Multiple priority levels
  • Independent watchdog reset
  • Brown-out detection
  • Multi-domain clock tree

Its interrupt controller supports:

  • Tail chaining
  • Late arrival
  • Preemption

These are hardware optimizations for latency reduction.


Security Perspective on Interrupt System

The interrupt system is an attack surface.

Possible attacks:

  • Clock glitch to skip context save
  • Interrupt masking manipulation
  • Vector table relocation to malicious memory
  • Fault injection during secure boot

Example:

```
Glitch during signature verification
Skip conditional branch
Bypass authentication
```

Therefore secure MCUs protect:

  • Vector table region
  • Reset logic
  • Clock integrity
  • Interrupt configuration registers
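One such protection can be sketched as a boot-time sanity check that every populated vector entry points into the trusted flash region before interrupts are enabled; the addresses and region bounds below are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Returns 1 if all handler entries fall inside [flash_start, flash_end),
 * 0 otherwise. Entry 0 is the initial stack pointer (a RAM address),
 * so checking starts at entry 1. */
int vectors_in_flash(const uint32_t *vectors, unsigned count,
                     uint32_t flash_start, uint32_t flash_end)
{
    for (unsigned i = 1; i < count; i++) {
        uint32_t addr = vectors[i];
        if (addr < flash_start || addr >= flash_end)
            return 0;                  /* handler points outside flash */
    }
    return 1;
}
```

A relocation attack that redirects a vector into attacker-controlled RAM would fail this check.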

Understanding interrupt internals is essential for embedded security engineering.