Big O complexity · Core data structures · Every LeetCode pattern you need to know
A mathematical notation describing how runtime or space requirements grow relative to input size as n → ∞
We drop constants and lower-order terms: O(2n + 5) → O(n). We care about growth rate, not exact counts. Strictly speaking, Ω (Omega) is a lower bound, Θ (Theta) a tight bound, and O (Big O) an upper bound — interview shorthand often maps them to best, average, and worst case. In interviews, always default to worst case unless asked otherwise.
O(1) — Constant. Runtime does not change with input size. Always executes in the same time or space regardless of n.
O(log n) — Logarithmic. Input is halved at each step. Incredibly efficient — doubling n only adds one more operation.
O(n) — Linear. Runtime scales directly with input size. Every element must be visited at least once.
O(n log n) — Linearithmic. The sweet spot for comparison-based sorting. Each of n elements takes O(log n) work.
O(n²) — Quadratic. Nested loops over the same data. Doubling n means 4× the work. Avoid for n > 10,000.
O(2ⁿ) — Exponential. Doubles with each addition to the input. Each element has 2 choices: include or exclude.
O(n!) — Factorial. Generating all permutations. Completely impractical beyond n = 12. The worst growth rate.
O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)
The building blocks. Know these deeply — every algorithm lives on top of one of these.
Contiguous block of memory storing elements of the same type. O(1) access by index, but insertion/deletion mid-array is costly due to shifting.
| Operation | Avg | Worst |
|---|---|---|
| Access | O(1) | O(1) |
| Search | O(n) | O(n) |
| Insert (end) | O(1) | O(n) |
| Insert (mid) | O(n) | O(n) |
| Delete | O(n) | O(n) |
Key-value store using a hash function to compute array index. O(1) average for all operations. Handle collisions via chaining or open addressing.
| Operation | Avg | Worst |
|---|---|---|
| Access | O(1) | O(n) |
| Search | O(1) | O(n) |
| Insert | O(1) | O(n) |
| Delete | O(1) | O(n) |
Nodes linked by pointers. No contiguous memory. O(1) insert/delete at known position, O(n) search. Doubly-linked lists allow O(1) deletion given node reference.
| Operation | Avg | Worst |
|---|---|---|
| Access | O(n) | O(n) |
| Search | O(n) | O(n) |
| Insert (head) | O(1) | O(1) |
| Delete (known) | O(1) | O(1) |
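The O(1) delete-given-node-reference claim can be sketched in a few lines of Python (class and function names here are illustrative, not from the original):

```python
class Node:
    """Minimal doubly-linked list node (illustrative)."""
    def __init__(self, val):
        self.val = val
        self.prev = None
        self.next = None

def delete(node):
    """Unlink `node` in O(1): no traversal needed, because the node
    itself holds references to both neighbors."""
    if node.prev:
        node.prev.next = node.next
    if node.next:
        node.next.prev = node.prev

# build a <-> b <-> c, then delete b
a, b, c = Node('a'), Node('b'), Node('c')
a.next, b.prev = b, a
b.next, c.prev = c, b
delete(b)
```

Contrast with an array, where removing a middle element forces an O(n) shift.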
LIFO — Last In, First Out. Only access the top element. Implemented with array or linked list. Core to DFS, backtracking, expression parsing.
| Operation | Avg | Worst |
|---|---|---|
| Push | O(1) | O(1) |
| Pop | O(1) | O(1) |
| Peek/Top | O(1) | O(1) |
| Search | O(n) | O(n) |
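A minimal sketch of the expression-parsing use case mentioned above — balanced brackets via a stack (function name is illustrative):

```python
def is_balanced(s):
    """Match every closer against the most recently pushed opener."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)                      # push opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                      # wrong or missing opener
    return not stack                              # leftover openers = unbalanced
```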
FIFO — First In, First Out. A deque (double-ended queue) allows O(1) push/pop from both ends. Used in BFS, sliding window maximum.
| Operation | Avg | Worst |
|---|---|---|
| Enqueue | O(1) | O(1) |
| Dequeue | O(1) | O(1) |
| Peek | O(1) | O(1) |
| Search | O(n) | O(n) |
Complete binary tree satisfying heap property. Min-heap: parent ≤ children. O(log n) insert/delete, O(1) peek at min/max. Built on array — children of i are at 2i+1 and 2i+2.
| Operation | Avg | Worst |
|---|---|---|
| Peek Min/Max | O(1) | O(1) |
| Insert | O(log n) | O(log n) |
| Delete Min/Max | O(log n) | O(log n) |
| Heapify | O(n) | O(n) |
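Python's `heapq` is a min-heap on a plain list, which makes the array layout above easy to verify directly:

```python
import heapq

data = [5, 1, 8, 3, 2]
heapq.heapify(data)            # O(n) bottom-up heapify
assert data[0] == min(data)    # root (index 0) is the minimum

# heap property via the index formulas: children of i are 2i+1 and 2i+2
for i in range(len(data)):
    for child in (2 * i + 1, 2 * i + 2):
        if child < len(data):
            assert data[i] <= data[child]

heapq.heappush(data, 0)        # O(log n) insert
smallest = heapq.heappop(data) # O(log n) delete-min
```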
BST: left < parent < right. Enables O(log n) search/insert on balanced trees. Unbalanced degrades to O(n). DFS traversals: inorder, preorder, postorder.
| Operation | Avg (balanced) | Worst |
|---|---|---|
| Access | O(log n) | O(n) |
| Search | O(log n) | O(n) |
| Insert | O(log n) | O(n) |
| Delete | O(log n) | O(n) |
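The "inorder yields sorted order" property is the standard way to validate a BST; a small sketch (node class is illustrative):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node):
    """Left -> node -> right. For a valid BST this yields values in sorted order."""
    if node is None:
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)

#       4
#      / \
#     2   6
#    / \
#   1   3
root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(6))
```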
Set of vertices (nodes) connected by edges. Directed or undirected, weighted or unweighted. Represented as adjacency list O(V+E) or adjacency matrix O(V²).
| Operation | Adj. List | Adj. Matrix |
|---|---|---|
| Add vertex | O(1) | O(V²) |
| Add edge | O(1) | O(1) |
| DFS/BFS | O(V+E) | O(V²) |
| Check edge | O(deg u) | O(1) |
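An adjacency list is just a map from vertex to neighbor list; a quick sketch of the edge-check tradeoff (graph and names are illustrative):

```python
# Undirected graph as an adjacency list: O(V + E) space.
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A'],
    'D': ['B'],
}

def has_edge(u, v):
    """Scan u's neighbor list: O(deg u), versus O(1) lookup in a matrix."""
    return v in graph[u]
```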
Tree where each node represents a character. Enables O(L) search/insert where L = word length. Perfect for prefix searches, autocomplete, word dictionaries.
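A minimal trie with the O(L) insert and prefix search described above (class design is one common sketch, not the only one):

```python
class Trie:
    """One child per character; `end` marks where a complete word stops."""
    def __init__(self):
        self.children = {}
        self.end = False

    def insert(self, word):          # O(L), L = len(word)
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.end = True

    def starts_with(self, prefix):   # O(L) — independent of word count
        node = self
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

t = Trie()
t.insert("apple")
```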
Disjoint Set Union — tracks connected components. With path compression + union by rank, both union and find run in O(α(n)) ≈ O(1) amortized.
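Both optimizations mentioned above fit in a short class (a standard sketch; names are illustrative):

```python
class DSU:
    """Union-Find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already in the same component
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra               # attach shorter tree under taller
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

d = DSU(5)
d.union(0, 1)
d.union(1, 2)
```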
Quick-reference for time and space complexity of the most common algorithms
| Algorithm | Best | Average | Worst | Space |
|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Counting Sort | O(n+k) | O(n+k) | O(n+k) | O(k) |
| Radix Sort | O(nk) | O(nk) | O(nk) | O(n+k) |
| Linear Search | O(1) | O(n) | O(n) | O(1) |
| Binary Search | O(1) | O(log n) | O(log n) | O(1) |
| BFS | O(1) | O(V+E) | O(V+E) | O(V) |
| DFS | O(1) | O(V+E) | O(V+E) | O(V) |
| Dijkstra | O(E) | O((V+E) log V) | O((V+E) log V) | O(V) |
| Bellman-Ford | O(VE) | O(VE) | O(VE) | O(V) |
| Floyd-Warshall | O(V³) | O(V³) | O(V³) | O(V²) |
| Kruskal's MST | O(E log E) | O(E log E) | O(E log E) | O(V) |
Master these 18 patterns and you can solve 90%+ of LeetCode problems. Recognize the pattern → apply the template.
Two indices traversing the data, often from opposite ends or at different speeds. Reduces O(n²) brute force to O(n).
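The opposite-ends variant on a sorted array, sketched for the classic pair-sum problem (function name is illustrative):

```python
def two_sum_sorted(nums, target):
    """O(n) instead of the O(n²) all-pairs check: the sorted order tells
    us which pointer to move."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1    # need a bigger sum
        else:
            hi -= 1    # need a smaller sum
    return None
```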
Maintain a window of elements that expands/shrinks as you iterate. Converts O(n²) nested loops into a single O(n) pass.
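A fixed-size window sketch — each step adds the entering element and drops the leaving one, so the whole scan is one O(n) pass:

```python
def max_window_sum(nums, k):
    """Maximum sum over any window of k consecutive elements."""
    window = sum(nums[:k])          # first window, computed once
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]   # slide: add new, drop old
        best = max(best, window)
    return best
```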
Halve the search space each iteration. Applies not just to sorted arrays but to any monotonic search space — including on the answer itself.
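The "binary search on the answer" idea reduces to finding the first value where a monotonic predicate flips to true; a generic sketch (helper name is illustrative):

```python
def first_true(lo, hi, pred):
    """Smallest x in [lo, hi] with pred(x) True, assuming pred is
    monotonic (False...False True...True). Returns hi+1 if none."""
    while lo <= hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid - 1   # mid works; try to go smaller
        else:
            lo = mid + 1   # mid fails; must go bigger
    return lo

# e.g. integer ceil-sqrt: smallest x with x*x >= 17
```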
Floyd's cycle detection. Two pointers moving at different speeds — if there's a cycle, fast will eventually catch slow. Also finds middle of linked list in O(n).
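The find-the-middle use case in a few lines (node class is illustrative): when fast has covered the whole list, slow has covered half.

```python
class ListNode:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def middle(head):
    """Slow moves 1 step, fast moves 2: slow ends at the middle."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow

head = ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5)))))
```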
Precompute cumulative sums so that any range sum [i, j] can be answered in O(1): prefix[j+1] - prefix[i]. Classic tradeoff of O(n) precompute for O(1) queries.
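The prefix[j+1] - prefix[i] formula in code (helper names are illustrative):

```python
def build_prefix(nums):
    """prefix[i] = sum of nums[:i]; O(n) once, then O(1) per query."""
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)
    return prefix

def range_sum(prefix, i, j):
    """Sum of nums[i..j] inclusive."""
    return prefix[j + 1] - prefix[i]

p = build_prefix([3, 1, 4, 1, 5])
```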
Explore as deep as possible along each branch before backtracking. Implemented recursively (call stack) or iteratively (explicit stack). Key for tree traversal and graph exploration.
Explore all neighbors at current depth before going deeper. Uses a queue. Guarantees shortest path in unweighted graphs. Level-order traversal of trees.
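The shortest-path guarantee in a small sketch: because BFS finishes each distance level before starting the next, the first arrival at the goal uses the fewest edges (graph here is illustrative).

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    """Edge count of a shortest path in an unweighted graph; -1 if unreachable."""
    queue = deque([(start, 0)])
    seen = {start}                      # mark on enqueue, not dequeue
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```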
Systematically build candidates and abandon ("backtrack") a candidate as soon as it can't lead to a valid solution. Prune the search space early.
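The choose / explore / un-choose skeleton, shown for generating all subsets (a standard template sketch):

```python
def subsets(nums):
    """All 2^n subsets via backtracking."""
    result, path = [], []

    def backtrack(start):
        result.append(path[:])            # record the current candidate
        for i in range(start, len(nums)):
            path.append(nums[i])          # choose
            backtrack(i + 1)              # explore
            path.pop()                    # un-choose (backtrack)

    backtrack(0)
    return result
```

For problems with a validity constraint, the pruning happens before the recursive call: skip the choice entirely if it can't lead to a solution.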
Break problem into overlapping subproblems; cache results (memoization = top-down, tabulation = bottom-up). Optimal substructure + overlapping subproblems = DP.
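Both directions on the classic climbing-stairs recurrence (1 or 2 steps at a time), as a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb(n):
    """Top-down (memoization): cache kills the overlapping subproblems."""
    if n <= 1:
        return 1
    return climb(n - 1) + climb(n - 2)

def climb_tab(n):
    """Bottom-up (tabulation): same recurrence, table rolled into two vars."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b
```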
A stack that maintains a monotonically increasing or decreasing order. Pop elements that violate the order — those pops reveal the "next greater/smaller" element.
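The next-greater-element template as a sketch — each index is pushed and popped at most once, so the whole pass is O(n):

```python
def next_greater(nums):
    """result[i] = the next element to the right greater than nums[i], else -1."""
    result = [-1] * len(nums)
    stack = []                            # indices with no answer yet (decreasing values)
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x       # the pop reveals the answer
        stack.append(i)
    return result
```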
Use a min-heap of size K to find the K largest elements in O(n log k). Avoids sorting the entire array. Also used for merging K sorted lists.
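The size-k min-heap trick with `heapq` — the root is the smallest of the current top k, so any larger element replaces it:

```python
import heapq

def k_largest(nums, k):
    """K largest elements in O(n log k); avoids sorting all n."""
    heap = nums[:k]
    heapq.heapify(heap)
    for x in nums[k:]:
        if x > heap[0]:
            heapq.heapreplace(heap, x)   # pop root, push x in one O(log k) op
    return sorted(heap, reverse=True)
```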
Sort intervals by start time. Merge overlapping ones. Two intervals [a,b] and [c,d] overlap when c ≤ b. Works for scheduling, meeting rooms, calendar conflicts.
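The sort-then-extend approach in a few lines, using the c ≤ b overlap test from above:

```python
def merge_intervals(intervals):
    """Merge all overlapping intervals; output is sorted and disjoint."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)   # overlap: extend
        else:
            merged.append([start, end])               # gap: start fresh
    return merged
```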
Linear ordering of vertices in a DAG such that for every edge u→v, u comes before v. Kahn's algorithm (BFS, in-degree) or DFS with a visited stack.
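Kahn's algorithm as a sketch — repeatedly emit vertices of in-degree 0; if the order comes up short, the graph has a cycle:

```python
from collections import deque

def topo_sort(graph):
    """Topological order of a DAG (adjacency-list dict), or None on a cycle."""
    indeg = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indeg[w] += 1
    queue = deque(v for v in graph if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            indeg[w] -= 1
            if indeg[w] == 0:            # all prerequisites emitted
                queue.append(w)
    return order if len(order) == len(graph) else None   # None => cycle

dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
```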
Know all four traversals cold: inorder (sorted for BST), preorder (serialize), postorder (delete), level-order (BFS). BST: validate, kth smallest, LCA.
Make the locally optimal choice at each step, hoping it leads to the global optimum. Faster than DP when it works, but requires proving correctness (exchange argument).
Efficiently track connected components. With path compression and union by rank, find/union are nearly O(1). Ideal for dynamic connectivity problems.
Insert words into a trie and exploit prefix sharing for fast lookups. Each path from root to a marked node spells a word. Use for searching all words with a given prefix.
Direct operations on binary representations. XOR tricks, bit shifting, masks. Often O(1) solutions to problems that seem to require O(n) or more.
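The canonical XOR trick as a sketch: since x ^ x == 0 and x ^ 0 == x, XOR-ing the whole array cancels every pair and leaves the lone element.

```python
def single_number(nums):
    """Every element appears twice except one; find it in O(n) time, O(1) space."""
    acc = 0
    for x in nums:
        acc ^= x        # pairs cancel, the unique element survives
    return acc
```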