Algorithm Efficiency
Two algorithms can solve the same problem, but one may be vastly faster or use less memory. Algorithm efficiency analysis lets us predict performance without running the code. We use Big-O notation to express how an algorithm's time or space grows as the input size increases. This is essential for choosing the right algorithm for a problem.
Learning Objectives
- 11.5.2.6 Evaluate the efficiency of an algorithm
- 11.5.2.7 Determine the complexity of an algorithm using Big-O notation
Conceptual Anchor
The Route Analogy
Imagine two routes from home to school. Route A takes 10 minutes regardless of traffic. Route B takes 5 minutes with no traffic but 2 hours in rush hour. Route B has a better best case but a terrible worst case. Like routes, algorithms have different performance in different scenarios. Big-O describes the worst case — the maximum time you'd ever need.
Rules & Theory
What is Big-O Notation?
Big-O notation describes the upper bound of an algorithm's growth rate. It tells us how the number of operations scales as the input size n increases. We ignore constants and lower-order terms because they become insignificant for large n.
| Big-O | Name | Example | n=10 | n=1000 | n=1M |
|---|---|---|---|---|---|
| O(1) | Constant | Array access by index | 1 | 1 | 1 |
| O(log n) | Logarithmic | Binary search | 3 | 10 | 20 |
| O(n) | Linear | Linear search, single loop | 10 | 1,000 | 1,000,000 |
| O(n log n) | Linearithmic | Merge sort, quick sort | 33 | 10,000 | 20,000,000 |
| O(n²) | Quadratic | Bubble sort, nested loops | 100 | 1,000,000 | 10¹² |
| O(2ⁿ) | Exponential | Brute-force subsets | 1,024 | impractical | impossible |
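The counts in the table can be reproduced with a short Python sketch (the `growth` helper is ours, added for illustration):

```python
import math

def growth(n):
    """Approximate operation counts for the common growth rates above."""
    return {
        "O(1)": 1,
        "O(log n)": round(math.log2(n)),
        "O(n)": n,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)": n ** 2,
    }

# At n = 1000 the quadratic count is already 1,000,000,
# while the logarithmic count is still only 10.
print(growth(1000))
```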
Rules for Determining Big-O
- Drop constants: O(3n) simplifies to O(n) — constant factors don't change the growth rate.
- Drop lower-order terms: O(n² + n) simplifies to O(n²) — the dominant term wins for large n.
- Consecutive loops add: an O(n) loop followed by an O(n²) loop is O(n + n²) = O(n²).
- Nested loops multiply: n iterations × n iterations per outer pass = O(n²).
Time vs Space Complexity
| Aspect | Time Complexity | Space Complexity |
|---|---|---|
| Measures | Number of operations | Amount of memory used |
| Example: Bubble Sort | O(n²) | O(1) — only uses a temp variable |
| Example: Creating a copy of array | O(n) | O(n) — new array of same size |
| Trade-off | Can spend extra memory to save time (e.g., lookup tables) | Can spend extra time to save memory (e.g., in-place sorts) |
Worked Examples
1 Identify the Big-O of Code Fragments
Fragment A — Single loop:
```
total ← 0
FOR i ← 0 TO n - 1
    total ← total + arr[i]
NEXT i
```

The loop runs n times, and each iteration does O(1) work → O(n).
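A direct Python translation with an explicit operation counter (the `ops` counter is added purely for illustration) confirms the linear count:

```python
def sum_array(arr):
    """Sum the array, counting one unit of work per iteration."""
    total, ops = 0, 0
    for x in arr:
        total += x   # O(1) work per element
        ops += 1
    return total, ops

# For an 8-element array the loop body runs exactly 8 times.
print(sum_array([1, 2, 3, 4, 5, 6, 7, 8]))  # (36, 8)
```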
Fragment B — Nested loops:
```
FOR i ← 0 TO n - 1
    FOR j ← 0 TO n - 1
        OUTPUT arr[i][j]
    NEXT j
NEXT i
```

Outer loop: n times. Inner loop: n times per outer iteration. Total: n × n = O(n²).
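A Python sketch with a counter (`nested_ops` is our illustrative name) shows the multiplicative count:

```python
def nested_ops(n):
    """Count the iterations performed by two n-bound nested loops."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # stands in for OUTPUT arr[i][j]
    return ops

print(nested_ops(10))  # 100 = 10 × 10
```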
Fragment C — Halving loop:
```
i ← n
WHILE i > 1
    i ← i DIV 2
    OUTPUT i
ENDWHILE
```

Each iteration halves i. Starting from n, it takes log₂(n) halvings to reach 1 → O(log n).
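Counting the halvings in Python (helper name `halving_steps` is ours) makes the logarithm concrete:

```python
def halving_steps(n):
    """Count how many integer halvings it takes to go from n down to 1."""
    i, steps = n, 0
    while i > 1:
        i //= 2        # i ← i DIV 2
        steps += 1
    return steps

print(halving_steps(1024))  # 10, since 2^10 = 1024
```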
Fragment D — Consecutive loops:
```
// Loop 1
FOR i ← 0 TO n - 1
    OUTPUT arr[i]
NEXT i

// Loop 2
FOR i ← 0 TO n - 1
    FOR j ← 0 TO n - 1
        IF arr[i] = arr[j] THEN count ← count + 1
    NEXT j
NEXT i
```

Loop 1 is O(n); Loop 2 is O(n²). Total: O(n + n²) = O(n²) (drop the lower-order term).
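A Python version with a counter (`consecutive_ops` is an illustrative name) shows the two counts adding, with the quadratic term dominating:

```python
def consecutive_ops(arr):
    """Count work done by one linear pass followed by a full nested scan."""
    n = len(arr)
    ops = 0
    for x in arr:            # Loop 1: n operations
        ops += 1
    count = 0
    for i in range(n):       # Loop 2: n × n comparisons
        for j in range(n):
            if arr[i] == arr[j]:
                count += 1
            ops += 1
    return ops

print(consecutive_ops(list(range(10))))  # 110 = 10 + 10²
```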
2 Compare Two Algorithms
Algorithm X processes n items in 3n² + 5n + 100 operations.
Algorithm Y processes n items in 200n log₂(n) operations.
| n | Algorithm X (3n² + 5n + 100) | Algorithm Y (200n log₂n) | Winner |
|---|---|---|---|
| 10 | 450 | 6,644 | X |
| 100 | 30,600 | 132,877 | X |
| 1,000 | 3,005,100 | 1,993,157 | Y |
| 1,000,000 | 3 × 10¹² | 3.99 × 10⁹ | Y (750× faster) |
For small n, the constants matter. For large n, the growth rate dominates. X is O(n²), Y is O(n log n). Y wins for large inputs.
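The crossover point can be located with a short Python search (the function names are ours, not from the text):

```python
import math

def x_ops(n):
    """Algorithm X: 3n^2 + 5n + 100 operations."""
    return 3 * n * n + 5 * n + 100

def y_ops(n):
    """Algorithm Y: 200 n log2(n) operations."""
    return 200 * n * math.log2(n)

# Scan upward for the first n where Y needs fewer operations than X.
n = 2
while y_ops(n) >= x_ops(n):
    n += 1
print(n)  # the crossover lands between the n=100 and n=1000 rows above
```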
Pitfalls & Common Errors
Confusing Big-O with Exact Count
Big-O is not the exact number of operations. It describes the growth rate. O(n²) means "grows proportionally to n-squared" — the actual count could be 3n² or 0.5n² + 100.
Thinking O(1) Means "Fast"
O(1) means constant time — it doesn't depend on input size. But it could be a large constant (e.g., 10,000 operations). O(1) just means it's the same regardless of n.
Nested Loops ≠ Always O(n²)
Nested loops are O(n²) only when both loops depend on n. If the inner loop runs a fixed number of times (e.g., always 5), the total work is 5n, and O(5n) = O(n).
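A quick Python sketch (the `ops` counter is illustrative) of nested loops whose inner bound is constant:

```python
def fixed_inner(n):
    """Nested loops where the inner bound is a constant 5, not n."""
    ops = 0
    for i in range(n):
        for j in range(5):   # constant bound — does not scale with n
            ops += 1
    return ops

print(fixed_inner(100))  # 500 = 5n, which is O(n)
```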
Pro-Tips for Exams
Quick Pattern Recognition
- Single loop → likely O(n)
- Nested loops (both depend on n) → likely O(n²)
- Loop variable halved/doubled each iteration → O(log n)
- No loops, no recursion → O(1)
- Loop inside a halving loop → O(n log n)
Know These by Heart
- Linear search → O(n)
- Binary search → O(log n)
- Bubble sort → O(n²)
- Insertion sort → O(n²)
- Array access → O(1)
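As a memory aid, here is a minimal Python binary search, the classic O(log n) entry in the list above (the helper name `binary_search` is ours):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1.
    Each probe halves the remaining range, so about log2(n) probes."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```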
Graded Tasks
1. List the Big-O complexities from fastest to slowest: O(n²), O(1), O(n log n), O(log n), O(n), O(2ⁿ).
2. Explain why we "drop constants" in Big-O. Why is O(3n) simplified to O(n)?
3. Determine the Big-O of the following: a loop from 1 to n containing another loop from 1 to n, where the inner loop contains an IF statement that calls a function running a loop from 1 to n.
4. Algorithm A has complexity O(n²) and processes 100 items in 1 second. Algorithm B has O(n log n) and processes 100 items in 2 seconds. At what input size will B become faster than A? Show your reasoning.
5. Write two different algorithms to find whether an array contains any duplicates. One should be O(n²); the other should sort the array first. State the Big-O of each approach.