DSA: Array Algorithms and Problem Solving

Arrays store data in contiguous memory, which is what makes O(1) indexed access possible. Dynamic arrays move that block to the heap, trading a fixed size for runtime flexibility at the cost of reallocation and more explicit memory management.

How to think, how fast it runs, and why it works

Arrays look simple: a block of contiguous memory, indexed from 0 to n-1.

Yet most performance problems, bugs, and interview failures happen inside arrays — not because arrays are hard, but because the cost model is ignored.

Every array pattern exists to answer two questions simultaneously:

What information do I carry forward?
How does that choice affect time and space?

This article walks through all major array patterns with their Big-O behavior baked into the intuition.


Traversal as the Baseline Mental Model

Traversal is the foundation of all array algorithms.

```md
[ a, b, c, d, e ]
  ^
```

A single pointer moves from left to right, once.

Typical carried state:

  • min / max
  • sum
  • validation flags

Example: finding the maximum element.

```md
[ 3, 1, 7, 2, 5 ]
        ^
```

Each element is inspected exactly once.

Time Complexity: O(n)

Space Complexity: O(1)
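A minimal Python sketch of this pattern, using the values from the diagram above (the function name is illustrative):

```python
def max_element(arr):
    """Single left-to-right pass carrying one piece of state: the max so far."""
    if not arr:
        raise ValueError("empty array")
    best = arr[0]
    for x in arr[1:]:    # each element inspected exactly once -> O(n) time
        if x > best:
            best = x
    return best          # only one scalar of carried state -> O(1) space

print(max_element([3, 1, 7, 2, 5]))  # -> 7
```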

Traversal defines the lower bound for most array problems.

If your solution is slower than O(n), the next patterns exist to fix that.


Two Pointers as Spatial Optimization

Two pointers exist because restarting work is expensive.

```md
[ 1, 2, 4, 7, 11, 15 ]
  L                 R
```

Instead of nested loops (O(n²)), you move pointers intelligently.

Each pointer:

  • moves only forward or backward
  • never resets

Every element is visited at most once per pointer.

Time Complexity: O(n)

Space Complexity: O(1)

Two pointers convert quadratic comparisons into linear movement, but only when:

  • order exists (sorted array)
  • or relative position carries meaning
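A sketch of the classic instance, pair-with-target-sum on a sorted array (array and target chosen to match the diagram above):

```python
def pair_with_sum(arr, target):
    """Two pointers on a sorted array: L and R only move inward, never reset."""
    l, r = 0, len(arr) - 1
    while l < r:
        s = arr[l] + arr[r]
        if s == target:
            return arr[l], arr[r]
        if s < target:
            l += 1          # need a larger sum -> advance the left pointer
        else:
            r -= 1          # need a smaller sum -> retreat the right pointer
    return None             # no pair exists

print(pair_with_sum([1, 2, 4, 7, 11, 15], 15))  # -> (4, 11)
```

Because each pointer moves at most n steps in one direction, the loop is O(n) even though it replaces an O(n²) nested search.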

Sliding Window as Range Preservation

Sliding window improves two pointers by remembering the cost of the range.

```md
[ 2, 1, 5, 1, 3, 2 ]
      |-----|
       window
```

Instead of recomputing subarray sums:

```md
sum(l..r)  →  O(k)
```

You update incrementally:

```md
+ a[r]
- a[l]
```

Each element:

  • enters the window once
  • leaves the window once

Time Complexity: O(n)

Space Complexity: O(1) (a running sum is the only carried state)

This holds for:

  • fixed-size windows
  • variable-size windows (as long as constraints are monotonic)
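A fixed-size sketch in Python: maximum sum over any window of length k, on the array from the diagram above.

```python
def max_window_sum(arr, k):
    """Fixed-size sliding window: add the entering element, drop the leaving one."""
    window = sum(arr[:k])        # initial window, O(k)
    best = window
    for r in range(k, len(arr)):
        window += arr[r]         # element enters the window once
        window -= arr[r - k]     # element leaves the window once
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # -> 9  (window 5, 1, 3)
```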

Sliding window breaks when negative numbers destroy monotonicity — which is why prefix patterns exist.


Prefix Thinking as Trading Space for Time

Prefix patterns precompute history.

```md
Array:   [ 2, 1, 5, 1, 3 ]
Prefix:  [ 0, 2, 3, 8, 9, 12 ]
```

Building the prefix array:

```md
prefix[i] = prefix[i-1] + a[i-1]
```

Build Time: O(n)

Build Space: O(n)

Once built, every range query becomes:

```md
sum(l..r) = prefix[r+1] - prefix[l]
```

Query Time: O(1)

Prefix sums shift work from query time to preprocessing time — a classic space-time tradeoff.
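The build and query steps above, sketched in Python with the example array (helper names are illustrative):

```python
def build_prefix(arr):
    """prefix[i] holds sum(arr[0..i-1]); prefix[0] = 0. Built once in O(n)."""
    prefix = [0] * (len(arr) + 1)
    for i in range(1, len(prefix)):
        prefix[i] = prefix[i - 1] + arr[i - 1]
    return prefix

def range_sum(prefix, l, r):
    """Inclusive sum(l..r) answered in O(1) from the precomputed table."""
    return prefix[r + 1] - prefix[l]

prefix = build_prefix([2, 1, 5, 1, 3])
print(prefix)                    # -> [0, 2, 3, 8, 9, 12]
print(range_sum(prefix, 1, 3))   # 1 + 5 + 1 -> 7
```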


Frequency Thinking as Value Compression

Frequency-based patterns transform arrays into counts.

```md
[ 1, 2, 2, 3, 1, 2 ]
↓
1 → 2
2 → 3
3 → 1
```

Using:

  • array as frequency table → O(1) access
  • hash map → average O(1)

Single pass construction:

Time Complexity: O(n)

Space Complexity: O(n) (worst case: all elements unique)

When combined with prefix sums (e.g., counting subarrays whose sum equals K), the full algorithm remains linear:

Total Time: O(n)

Total Space: O(n)

This is often the fastest possible solution class for many counting problems.
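The prefix-plus-frequency combination can be sketched as the standard "subarray sum equals K" count, here in Python:

```python
from collections import defaultdict

def count_subarrays_with_sum(arr, k):
    """Count subarrays summing to k in one pass.

    A subarray ending at j sums to k exactly when
    prefix[j] - prefix[i] == k for some earlier prefix,
    so at each step we look up how often (running - k) has appeared.
    """
    seen = defaultdict(int)
    seen[0] = 1                      # the empty prefix
    running, count = 0, 0
    for x in arr:
        running += x
        count += seen[running - k]   # earlier prefixes that complete a k-sum
        seen[running] += 1
    return count

print(count_subarrays_with_sum([1, 2, 2, 3, 1, 2], 5))  # -> 2  ([1,2,2] and [2,3])
```

One pass, one hash map: O(n) total time, O(n) space in the worst case, matching the bounds above.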


Sorting as Structural Simplification

Sorting is often the moment where chaos becomes solvable.

```md
Before: [ 7, 1, 5, 3, 6 ]
After:  [ 1, 3, 5, 6, 7 ]
```
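A small sketch of why sorting simplifies structure: once the array is ordered, the closest pair of values must be adjacent, so a quadratic all-pairs comparison collapses into one linear scan (the function is an illustrative example, not from the original):

```python
def min_difference(arr):
    """After sorting, the closest pair is always adjacent -> one linear scan."""
    s = sorted(arr)                  # O(n log n) dominates
    return min(s[i + 1] - s[i] for i in range(len(s) - 1))

print(min_difference([7, 1, 5, 3, 6]))  # -> 1
```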