Big O Notation Explained: A Simple Guide to Algorithm Complexity

[Figure: a staircase-style chart of Big O complexity classes in ascending order, from O(1) (constant time) at the bottom to O(n!) (factorial time) at the top: O(1), O(log n), O(n), O(n log n), O(n²), O(2ⁿ), O(n!).]
Nigel Holder

Understanding Big O Notation: A Simple Comparison

This post is part of my data structures and algorithms refresher series, where I revisit essential concepts that every developer should understand. One of the most important is Big O notation—a way to measure how an algorithm's efficiency scales as input size increases.

Teaching is one of the best ways to learn, so I’m breaking this down in a way that makes sense for both beginners and those, like me, who need a refresher.

What is Big O Notation?

Big O notation is a way of measuring an algorithm’s efficiency—how its runtime or space usage grows as the input size increases. Think of it like a unit of measure for complexity.

To make this more relatable, consider the metric system:

  • Centimeters (cm) measure small distances.
  • Meters (m) are larger.
  • Kilometers (km) are much larger.

Similarly, some algorithms handle large inputs efficiently (like O(1) and O(log n)), while others become impractical as input grows (like O(n²) and O(n!)).

Common Big O Complexities (From Least to Most Complex)

1. O(1) – Constant Time

  • No matter how large the input is, execution time remains the same.
  • Example: Accessing an array element by index → arr[5] (see the sketch after this list).
  • Comparison: Like saying "one meter is always one meter"—it doesn’t change based on distance.
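
To make this concrete, here's a quick sketch (I'll use Python for the examples in this post; the helper names are mine, not from any library):

```python
def sixth_element(arr):
    """O(1): a single index lookup takes the same time no matter how long arr is."""
    return arr[5]

print(sixth_element(list(range(1_000_000))))  # 5
```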

2. O(log n) – Logarithmic Time

  • Runtime grows slowly as input increases.
  • Example: Binary search (dividing the data in half each step; sketched after this list).
  • Comparison: Like finding a word in a dictionary: open to the middle, discard the half that can't contain the word, and repeat.
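
A minimal iterative binary search (it assumes the input list is already sorted):

```python
def binary_search(sorted_arr, target):
    """O(log n): each comparison discards half of the remaining search space."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target is not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```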

3. O(n) – Linear Time

  • Execution time grows proportionally to the input size.
  • Example: Looping through an array once (see the sketch after this list).
  • Comparison: Like walking a straight path—more distance means more steps.
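
A simple single-pass example, finding the largest value in a list:

```python
def find_max(arr):
    """O(n): visits each element exactly once, so work grows linearly with len(arr)."""
    largest = arr[0]
    for value in arr[1:]:
        if value > largest:
            largest = value
    return largest

print(find_max([4, 1, 9, 2]))  # 9
```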

4. O(n log n) – Linearithmic Time

  • Faster than quadratic but slower than linear.
  • Example: Sorting algorithms like Merge Sort and Heap Sort (Quick Sort is O(n log n) on average, though its worst case is O(n²)). A Merge Sort sketch follows this list.
  • Comparison: Like organizing a shuffled deck by dividing it into sections before sorting.
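
Here's a compact Merge Sort sketch: splitting gives about log n levels, and merging does O(n) work per level, hence O(n log n) overall:

```python
def merge_sort(arr):
    """O(n log n): ~log n levels of splitting, O(n) merge work per level."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    # Merge the two sorted halves back together.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9]))  # [1, 2, 5, 8, 9]
```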

5. O(n²) – Quadratic Time

  • Time grows with the square of the input size: doubling the input roughly quadruples the execution time.
  • Example: Nested loops (e.g., checking every pair in a list, as in the sketch after this list).
  • Comparison: Like every person in a group shaking hands with everyone else.
    • If there are 4 people, there are 6 total handshakes.
    • If there are 5 people, there are 10 total handshakes.
    • If there are n people, the number of handshakes is n(n-1)/2, which is O(n²).
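
The handshake count maps directly onto a nested loop (a toy sketch; the pairwise checking is what makes it quadratic):

```python
def count_handshakes(people):
    """O(n²): examines every unique pair, and n people form n * (n - 1) / 2 pairs."""
    shakes = 0
    for i in range(len(people)):
        for j in range(i + 1, len(people)):
            shakes += 1  # person i shakes hands with person j
    return shakes

print(count_handshakes(["A", "B", "C", "D"]))       # 6
print(count_handshakes(["A", "B", "C", "D", "E"]))  # 10
```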

6. O(2ⁿ) – Exponential Time

  • Time roughly doubles with each additional input element.
  • Example: Naive recursive solutions, like computing Fibonacci numbers without memoization (sketched after this list).
  • Comparison: Like a tree branching out—each level has twice as many branches as the last.
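
The classic offender is naive recursive Fibonacci, which recomputes the same subproblems over and over:

```python
def fib(n):
    """Roughly O(2^n): each call spawns two more, recomputing shared subproblems."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55 -- instant; try fib(40) to feel the exponential blowup
```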

7. O(n!) – Factorial Time

  • Growth is explosive and becomes impractical very quickly.
  • Example: Brute-force solutions like the Traveling Salesperson Problem (a toy version is sketched after this list).
  • Comparison: Like trying every possible seating order for dinner guests: each extra guest multiplies the number of arrangements by the new headcount.
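
A toy brute-force Traveling Salesperson sketch: it tries all (n-1)! orderings of the cities, so even a few dozen cities become hopeless. The distance matrix is made up purely for illustration:

```python
from itertools import permutations

def shortest_tour(dist):
    """O(n!): tries every ordering of cities; dist[i][j] is the distance from i to j."""
    cities = range(1, len(dist))  # fix city 0 as the start/end of the tour
    best = None
    for order in permutations(cities):
        route = (0, *order, 0)
        total = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if best is None or total < best:
            best = total
    return best

# Made-up symmetric distances between 4 cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(shortest_tour(dist))  # 18 (the shortest round trip)
```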

Why This Matters

Understanding Big O is key to writing scalable, efficient code. Just like metric units help us compare distances, Big O helps us compare algorithm performance.

This post is just the start of my data structures and algorithms refresher series—I'll be covering more topics like sorting, recursion, and data structures.

What’s an algorithm you’ve worked with before? Drop it in the comments, and let’s break down its complexity together! 🚀