SE: Time and Space Complexity - Big-O Notation

Time and space complexity give us a language to reason about scalability. They allow us to predict performance before the code runs in production, before the system slows down, and before memory becomes a bottleneck.

When we write code, we usually focus on correctness first: does it work, does it solve the problem, does it pass the tests?

But once correctness is achieved, a deeper question appears:

How does this code behave as the input grows?

What Complexity Really Measures

Complexity does not measure how fast your program runs on your laptop.

It measures how resource usage grows relative to input size.

We describe this growth using Big-O notation, which focuses on the _dominant behavior_ as input size n becomes large.

```
Input size (n)  →  grows
Observed cost   →  grows in a pattern
Big-O           →  describes that pattern
```

Two dimensions matter:

  • Time Complexity → how the number of operations grows
  • Space Complexity → how memory usage grows

These are independent — improving one may worsen the other.


Time Complexity: How Long Does the Work Take?

Time complexity models how many basic steps an algorithm performs relative to input size.

We don’t count CPU cycles or nanoseconds.

We count growth trends.

The Core Idea

```
Time  = f(n)
Big-O = dominant term of f(n)
```

If n doubles, what happens to the work?

That question alone explains most performance behavior.


Constant Time — O(1)

An operation that does not depend on input size.

```python
x = arr[5]   # one access, regardless of len(arr)
```

No matter how large arr is, the work stays the same.

```
n ────────────────────────────>
|████|████|████|████|████|
cost stays flat
```

Mental model:

One action. One step. No growth.
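As a small runnable sketch (illustrative, not part of the original text): dict lookups in Python are average-case O(1), so the cost of a lookup does not depend on how many entries the table holds.

```python
# Average-case O(1): a hash lookup does the same work whether the
# table holds ten entries or a million (hash the key, probe a bucket).
small = {i: i * i for i in range(10)}
large = {i: i * i for i in range(1_000_000)}

print(small[5], large[5])  # 25 25
```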

Linear Time — O(n)

Work grows directly with input size.

```python
for i in range(n):
    process(i)
```

Each extra element adds one extra step.

```
n:    1   2   3   4   5
ops:  █   ██  ███ ████ █████
```

Mental model:

Walk through the data once.

Most real-world code lives here — logging, scanning, validation, streaming.
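A minimal sketch of a linear pass (the helper name `find_max` is hypothetical, chosen for illustration):

```python
def find_max(values):
    """Single pass over the data: O(n) time, O(1) extra space."""
    best = values[0]
    for v in values[1:]:   # each element is examined exactly once
        if v > best:
            best = v
    return best

print(find_max([3, 9, 4, 1]))  # 9
```

Doubling the list doubles the number of comparisons, which is exactly the O(n) pattern above.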


Quadratic Time — O(n²)

Work grows with the square of input size.

```python
for i in range(n):
    for j in range(n):
        compare(i, j)
```

```
n=1:  █
n=2:  ████
n=3:  █████████
n=4:  ████████████████
      (ops = n²)
```

Mental model:

Every element interacts with every other element.

This appears in:

  • Naive sorting
  • Pairwise comparisons
  • Nested loops over the same dataset

This is often the first performance wall developers hit.
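A concrete sketch of the pairwise pattern (hypothetical `has_duplicate_naive` helper, shown only to make the n² comparisons visible):

```python
def has_duplicate_naive(values):
    """Compare every pair: about n*(n-1)/2 comparisons, O(n²) time."""
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):   # inner loop grows with n as well
            if values[i] == values[j]:
                return True
    return False

print(has_duplicate_naive([1, 2, 3, 2]))  # True
```

A set-based check would trade O(n) extra space for O(n) expected time — an instance of the time/space trade-off covered later in this section.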


Logarithmic Time — O(log n)

Each step cuts the problem in half.

```
Binary Search:
n → n/2 → n/4 → n/8 → ...
```

```
n: 1024 → 512 → 256 → 128 → 64 → 32 → 16 → 8 → 4 → 2 → 1
steps: 10
```

```
log growth diagram:
n ────────────────────────────>
|█|██|███|████|█████|
slow, gentle slope
```

Mental model:

Divide and conquer.

This is why sorted data and trees are powerful.
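The halving above can be written out as a standard binary search (a sketch over a sorted list; the input must already be sorted for this to work):

```python
def binary_search(sorted_values, target):
    """Each iteration halves the search range: O(log n) time, O(1) space."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            lo = mid + 1    # discard the lower half
        else:
            hi = mid - 1    # discard the upper half
    return -1               # not found

evens = list(range(0, 100, 2))
print(binary_search(evens, 10))  # 5
```

Fifty elements take at most six probes; a million would take about twenty.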


Linearithmic Time — O(n log n)

A combination of linear work and logarithmic depth.

Classic example: efficient sorting.

```
Split data  → log(n) levels of halving
Each level  → process n elements

Total work: n × log(n)
```

```
Level 1: ██████████
Level 2: ██████████
Level 3: ██████████
          ↑ log(n) levels
```

Mental model:

Touch everything, but in organized layers.

This is the _practical upper limit_ for scalable general-purpose algorithms.
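The split-then-merge shape can be sketched as a small merge sort (illustrative implementation; real code would use the built-in `sorted`, which is also O(n log n)):

```python
def merge_sort(values):
    """O(n log n): log(n) levels of splitting, O(n) merge work per level."""
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])

    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```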


Comparing Time Complexities (Intuition View)

```
O(1)       ─ flat
O(log n)   ─ gentle curve
O(n)       ─ straight line
O(n log n) ─ bending upward
O(n²)      ─ steep wall
```

Once n becomes large, constants stop mattering — growth dominates.
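A short script makes the divergence concrete by printing rough operation counts per class (a sketch; the numbers are growth-rate illustrations, not measured costs):

```python
import math

# Rough operation counts per growth class at increasing input sizes.
for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9}:  log n ≈ {math.log2(n):>5.1f}   "
          f"n log n ≈ {n * math.log2(n):>12.0f}   n² = {n * n}")
```

At n = 1,000,000, the O(n log n) count is around twenty million while O(n²) is a trillion — the gap, not the constants, decides feasibility.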


Space Complexity: How Much Memory Is Used?

Space complexity measures additional memory used as input grows.

We usually ignore:

  • program code
  • fixed constants
  • OS overhead

We focus on extra space allocated by the algorithm.


Constant Space — O(1)

Memory usage does not grow with input size.

```python
total = 0            # one accumulator, regardless of n
for i in range(n):
    total += i
```

Only one variable is used.

```
Memory:
|█|
constant
```


Linear Space — O(n)

Memory grows directly with input size.

```python
new_array = []
for i in range(n):
    new_array.append(i)
```

```
n:    1   2   3   4   5
mem:  █   ██  ███ ████ █████
```

Mental model:

You keep a copy of the data.

Recursive Space and the Call Stack

Recursion consumes space even without allocating arrays.

```python
def f(n):
    if n == 0:
        return
    f(n - 1)
```
    f(n - 1)

Each call stays on the stack until completion.

```
Call stack:
f(3)
 f(2)
  f(1)
   f(0)
```

Space complexity: O(n)

Even though no container is used.
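Rewriting the recursion as a loop eliminates the stack growth (hypothetical `countdown_*` names, shown side by side to make the space difference explicit):

```python
def countdown_recursive(n):
    """O(n) space: every pending call keeps a stack frame alive."""
    if n == 0:
        return 0
    return countdown_recursive(n - 1)

def countdown_iterative(n):
    """Same result, O(1) space: one variable is reused in place."""
    while n > 0:
        n -= 1
    return n
```

The cost is real in practice: with CPython's default recursion limit (about 1000 frames), `countdown_recursive(10_000)` raises `RecursionError`, while the loop version handles any size.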


Time vs Space: The Fundamental Trade-off

Often, you can make code faster by using more memory.

```
Time ↑  Space ↓   (recompute every time)
Time ↓  Space ↑   (cache results)
```

Example: memoization

```
Without cache:
f(n) → recompute → slow, low memory

With cache:
store results → fast, higher memory
```

This trade-off is everywhere:

  • lookup tables
  • indexes
  • caches
  • buffers

There is no free optimization — you choose your cost.
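Memoization can be sketched with the classic Fibonacci example, using the standard-library `functools.lru_cache` (the function names are illustrative):

```python
from functools import lru_cache

def fib_slow(n):
    """No cache: exponential recomputation, minimal extra memory."""
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Cached: O(n) time, but O(n) memory to hold earlier results."""
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(30))  # 832040
```

Same function, same answers — the cache simply pays for time with memory.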


Worst, Average, and Best Case

Big-O usually describes the worst case, because systems must survive bad inputs.

```
Search in array:
best case:  first element → O(1)
worst case: last element  → O(n)
```

Designing for worst-case behavior is what separates robust systems from fragile ones.
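The best/worst split above maps directly onto a plain linear search (illustrative `linear_search` helper):

```python
def linear_search(values, target):
    """Best case O(1) (first element); worst case O(n) (last or absent)."""
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

print(linear_search([7, 1, 2], 7))  # 0  (best case: one comparison)
print(linear_search([1, 2, 7], 7))  # 2  (worst case: n comparisons)
```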


Reading Code Through a Complexity Lens

When you read code, ask:

  • How many loops are there?
  • Do loops depend on each other?
  • Does recursion shrink the problem?
  • Is extra memory allocated per input?

Translate structure → behavior.

```
One loop        → O(n)
Nested loops    → O(n²)
Divide by half  → O(log n)
Copy data       → O(n) space
```

This mental translation becomes automatic with practice.


Summary

```
O(1)        Constant
O(log n)    Logarithmic
O(n)        Linear
O(n log n)  Linearithmic
O(n²)       Quadratic
```

Each class represents a fundamentally different scalability profile.

```
Cost
^
|                          O(n²)
|                        ###
|                    ####
|                ####
|            ####            O(n log n)
|        ####            ####
|    ####           ####
| ####        ####
| ##     ####          O(n)
| #   ####
| #  ##           O(log n)
| # ##
| ####         O(1)
+------------------------------------> n
```

As n grows:

  • slow curves stay manageable
  • steep curves become infeasible