When we write code, we usually focus on correctness first: does it work, does it solve the problem, does it pass the tests?
But once correctness is achieved, a deeper question appears:
How does this code behave as the input grows?
What Complexity Really Measures
Complexity does not measure how fast your program runs on your laptop.
It measures how resource usage grows relative to input size.
We describe this growth using Big-O notation, which focuses on the _dominant behavior_ as input size n becomes large.
Input size (n) → grows
Observed cost → grows in a pattern
Big-O → describes that pattern
Two dimensions matter:
- Time Complexity → how the number of operations grows
- Space Complexity → how memory usage grows
The two are measured separately, and improving one may worsen the other.
Time Complexity: How Long Does the Work Take?
Time complexity models how many basic steps an algorithm performs relative to input size.
We don’t count CPU cycles or nanoseconds.
We count growth trends.
The Core Idea
Time = f(n)
Big-O = dominant term of f(n)
If n doubles, what happens to the work?
That question alone explains most performance behavior.
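One way to make the doubling question concrete is to count steps directly. The sketch below (the helper names are mine, purely illustrative) counts basic operations for a linear and a quadratic routine, then doubles n:

```python
def linear_ops(n):
    """Count steps for a routine that touches each element once."""
    ops = 0
    for _ in range(n):
        ops += 1          # one step per element
    return ops

def quadratic_ops(n):
    """Count steps for a routine that touches each pair of elements."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1      # one step per pair
    return ops

for n in (100, 200):      # n doubles
    print(n, linear_ops(n), quadratic_ops(n))
```

Doubling n doubles the linear count but quadruples the quadratic one. That asymmetry is the whole story of the classes below.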
Constant Time — O(1)
An operation that does not depend on input size.
x = arr[5]
No matter how large arr is, the work stays the same.
n ────────────────────────────>
|████|████|████|████|████|
cost stays flat
Mental model:
One action. One step. No growth.
Linear Time — O(n)
Work grows directly with input size.
for i in range(n):
    process(i)
Each extra element adds one extra step.
n: 1 2 3 4 5
ops: █ ██ ███ ████ █████
Mental model:
Walk through the data once.
Most real-world code lives here — logging, scanning, validation, streaming.
Quadratic Time — O(n²)
Work grows with the square of input size.
for i in range(n):
    for j in range(n):
        compare(i, j)
n: 1 2 3 4
ops: █ ████ █████████ ████████████████
Mental model:
Every element interacts with every other element.
This appears in:
- Naive sorting
- Pairwise comparisons
- Nested loops over the same dataset
This is often the first performance wall developers hit.
Logarithmic Time — O(log n)
Each step cuts the problem in half.
Binary Search:
n → n/2 → n/4 → n/8 → ...
n: 1024 → 512 → 256 → 128 → 64 → 32 → 16 → 8 → 4 → 2 → 1
steps: 10
log growth diagram:
n ────────────────────────────>
|█|██|███|████|█████|
slow, gentle slope
Mental model:
Divide and conquer.
This is why sorted data and trees are powerful.
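Binary search is the canonical example of this halving. Here is a minimal sketch in Python (the function and parameter names are mine, not part of the text above):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Requires sorted_items to be in ascending order.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # inspect the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1
```

Each loop iteration discards half of the remaining range, so a list of 1024 elements needs at most about 10 comparisons.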
Linearithmic Time — O(n log n)
A combination of linear work and logarithmic depth.
Classic example: efficient sorting.
Split data → work log(n) levels
Each level → process n elements
Total work:
n × log(n)
Diagram:
Level 1: ██████████
Level 2: ██████████
Level 3: ██████████
↑ log(n) levels
Mental model:
Touch everything, but in organized layers.
This is the _practical upper limit_ for scalable general-purpose algorithms.
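Merge sort is a classic way to see the n × log(n) structure: log(n) levels of splitting, with linear merge work at each level. A minimal sketch (my own implementation, for illustration):

```python
def merge_sort(items):
    """Sort a list: split in half (log n levels), merge (n work per level)."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # at most one of these two
    merged.extend(right[j:])  # extends is non-empty
    return merged
```

Note that the merge step allocates new lists, so this version also pays O(n) extra space, a preview of the time/space trade-off discussed later.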
Comparing Time Complexities (Intuition View)
O(1) ─ flat
O(log n) ─ gentle curve
O(n) ─ straight line
O(n log n) ─ bending upward
O(n²) ─ steep wall
Once n becomes large, constants stop mattering — growth dominates.
Space Complexity: How Much Memory Is Used?
Space complexity measures additional memory used as input grows.
We usually ignore:
- program code
- fixed constants
- OS overhead
We focus on extra space allocated by the algorithm.
Constant Space — O(1)
Memory usage does not grow with input size.
total = 0
for i in range(n):
    total += i
Only one variable is used.
Memory:
|█|
constant
Linear Space — O(n)
Memory grows directly with input size.
new_array = []
for i in range(n):
    new_array.append(i)
n: 1 2 3 4 5
mem: █ ██ ███ ████ █████
Mental model:
You keep a copy of the data.
Recursive Space and the Call Stack
Recursion consumes space even without allocating arrays.
def f(n):
    if n == 0:
        return
    f(n - 1)
Each call stays on the stack until completion.
Call stack:
f(3)
f(2)
f(1)
f(0)
Space complexity: O(n), even though the code allocates no container at all.
Time vs Space: The Fundamental Trade-off
Often, you can make code faster by using more memory.
Time ↑ Space ↓ (recompute every time)
Time ↓ Space ↑ (cache results)
Example: memoization
Without cache:
f(n) → recompute → slow, low memory
With cache:
store results → fast, higher memory
This trade-off is everywhere:
- lookup tables
- indexes
- caches
- buffers
There is no free optimization — you choose your cost.
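Fibonacci is the standard illustration of this trade-off. A sketch of both sides, using Python's standard `functools.lru_cache` for the cached version:

```python
from functools import lru_cache

def fib_slow(n):
    """No cache: recomputes subproblems, exponential time, low memory."""
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Cached: each subproblem computed once. Linear time,
    but O(n) extra memory held by the cache."""
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)
```

Both return the same values; the cached version buys speed with memory, exactly the `Time ↓ Space ↑` row above.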
Worst, Average, and Best Case
Big-O usually describes the worst case, because systems must survive bad inputs.
Search in array:
Best case: first element → O(1)
Worst case: last element → O(n)
Designing for worst-case behavior is what separates robust systems from fragile ones.
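The search example above can be written out directly. In this sketch (names are mine), the same function exhibits both cases depending on where the target sits:

```python
def linear_search(items, target):
    """Scan left to right; return the first matching index, or -1."""
    for i, value in enumerate(items):
        if value == target:
            return i    # best case: target is first, one comparison, O(1)
    return -1           # worst case: target absent or last, n comparisons, O(n)
```

Big-O's worst-case framing says: budget for the full scan, and treat the early exit as a bonus.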
Reading Code Through a Complexity Lens
When you read code, ask:
- How many loops?
- Do loops depend on each other?
- Does recursion shrink the problem?
- Is extra memory allocated per input?
Translate structure → behavior.
One loop → O(n)
Nested loops → O(n²)
Divide by half → O(log n)
Copy data → O(n) space
This mental translation becomes automatic with practice.
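As a worked example of that structure-to-behavior translation (a sketch with invented names, not a prescribed implementation), here are two ways to detect a duplicate, read through the complexity lens:

```python
def has_duplicate(items):
    # Nested loops over the same data → O(n²) time, O(1) extra space.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_fast(items):
    # One pass plus a growing set → O(n) time, O(n) extra space.
    seen = set()
    for value in items:
        if value in seen:
            return True
        seen.add(value)
    return False
```

Same answer, different cost profile: the second trades memory for time, which is usually the right trade once n gets large.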
Summary
O(1) Constant
O(log n) Logarithmic
O(n) Linear
O(n log n) Linearithmic
O(n²) Quadratic
Each class represents a fundamentally different scalability profile.
Cost
^
| O(n²)
| ###
| ####
| ####
| #### O(n log n)
| #### ####
| #### ####
| #### ####
| ## #### O(n)
| # ####
| # ## O(log n)
| # ##
| #### O(1)
+------------------------------------> n
As n grows:
- slow curves stay manageable
- steep curves become infeasible