Understanding data structures is not just about knowing how they work, but knowing when to use them to optimize time and space complexity.
1. Static & Dynamic Arrays
The foundation of data storage.
- Concept: Elements stored in contiguous memory locations.
- Key Fact: Static arrays have a fixed size; Dynamic arrays (like Python's list or Java's ArrayList) resize automatically.
- Complexity: Access $O(1)$, Search $O(n)$, Insertion/Deletion in the middle $O(n)$; appending to a dynamic array is amortized $O(1)$.
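A quick sketch of these costs using Python's built-in list, which is a dynamic array:

```python
nums = [10, 20, 30]    # elements stored contiguously under the hood

x = nums[1]            # O(1): access by index is direct address arithmetic
nums.append(40)        # amortized O(1): occasionally triggers a resize-and-copy
nums.insert(0, 5)      # O(n): every existing element must shift one slot right

print(nums)            # [5, 10, 20, 30, 40]
```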
2. Singly & Doubly Linked Lists
- Concept: A sequence of nodes where each node holds data and a pointer to the next node; in a doubly linked list, each node also points to the previous one.
- Interview Favorite: "Reverse a Linked List" or "Detect a Cycle (Floyd’s Cycle-Finding Algorithm)."
- Complexity: Insertion/Deletion at the head is $O(1)$; access by index is $O(n)$.
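Both interview favorites fit in a few lines. A minimal sketch (the node class and function names here are illustrative):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def reverse(head):
    """Iteratively flip each pointer; O(n) time, O(1) extra space."""
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev

def has_cycle(head):
    """Floyd's tortoise-and-hare: the fast pointer meets the slow one iff a cycle exists."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False
```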
3. Stacks
- Concept: Last-In, First-Out (LIFO). Imagine a stack of plates.
- Use Cases: Function calls (the "Call Stack"), undo mechanisms, and balancing parentheses problems.
- Complexity: Push/Pop $O(1)$.
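The balanced-parentheses problem is the natural stack exercise; a sketch using a Python list as the stack:

```python
def is_balanced(s):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)                           # push: O(1)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:  # pop: O(1)
                return False
    return not stack     # balanced only if nothing is left open
```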
4. Queues
- Concept: First-In, First-Out (FIFO). Imagine a line at a bank.
- Use Cases: Breadth-First Search (BFS), task scheduling, and handling shared resources (like printers).
- Complexity: Enqueue/Dequeue $O(1)$.
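In Python, collections.deque gives $O(1)$ operations at both ends, whereas popping from the front of a plain list costs $O(n)$:

```python
from collections import deque

jobs = deque()
jobs.append("print report")    # enqueue at the back: O(1)
jobs.append("print invoice")

first = jobs.popleft()         # dequeue from the front: O(1), FIFO order
print(first)                   # print report
```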
5. Hash Tables / Hash Maps
- Concept: Stores data in key-value pairs using a hash function.
- Why it's Crucial: It provides near-instant lookup, which makes it the default answer to most "can you do this faster?" follow-ups.
- Complexity: Search/Insert/Delete $O(1)$ on average, degrading to $O(n)$ in the worst case when many keys collide.
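The canonical demonstration is Two Sum: a hash map turns the naive $O(n^2)$ pair search into a single $O(n)$ pass. A sketch:

```python
def two_sum(nums, target):
    seen = {}                          # value -> index where we saw it
    for i, n in enumerate(nums):
        if target - n in seen:         # O(1) average-case lookup
            return seen[target - n], i
        seen[n] = i
    return None
```

For example, two_sum([2, 7, 11, 15], 9) returns (0, 1).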
6. Binary Search Trees (BST)
- Concept: A tree where, for every node, all keys in the left subtree are smaller than the node's key and all keys in the right subtree are larger.
- Key Skill: Mastering In-order, Pre-order, and Post-order traversals.
- Complexity: Search/Insert $O(\log n)$ if balanced.
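A minimal recursive sketch; note that an in-order traversal of a BST yields the keys in sorted order, a property interviewers frequently test:

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """O(log n) when the tree is balanced, O(n) when it degenerates."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """Left subtree, node, right subtree: visits keys in ascending order."""
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)
```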
7. Binary Heaps (Min/Max)
- Concept: A complete binary tree used to implement Priority Queues.
- Use Cases: Finding the "K-th smallest/largest element" and Dijkstra's algorithm.
- Complexity: Get Min/Max $O(1)$, Insert/Delete $O(\log n)$.
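Python's heapq module implements a binary min-heap on top of a list; a k-th smallest sketch:

```python
import heapq

def kth_smallest(nums, k):
    heap = list(nums)
    heapq.heapify(heap)          # builds the heap in O(n)
    for _ in range(k - 1):
        heapq.heappop(heap)      # each pop is O(log n)
    return heap[0]               # peeking at the minimum is O(1)
```

For example, kth_smallest([7, 2, 9, 4], 2) returns 4.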
8. Graphs
- Concept: A collection of nodes (vertices) and connections (edges).
- Required Algorithms: Depth-First Search (DFS) and Breadth-First Search (BFS).
- Representations: Adjacency Matrix and Adjacency List.
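A sketch tying the two together: an adjacency list (here a dict of lists) traversed with BFS:

```python
from collections import deque

def bfs(adj, start):
    """Visit nodes level by level; O(V + E) with an adjacency list."""
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in adj[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order
```

For example, bfs({'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}, 'A') returns ['A', 'B', 'C', 'D']. Swapping the deque for a list used as a stack (pop from the end) turns the same skeleton into an iterative DFS.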
9. Tries (Prefix Trees)
- Concept: A specialized tree used to store strings character by character.
- Use Cases: Autocomplete, spell checkers, and IP routing.
- Key Advantage: Searching for a prefix takes $O(L)$ where $L$ is the length of the word, independent of how many words are in the Trie.
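A compact sketch using nested dicts as nodes (the '$' end-of-word marker is an arbitrary convention, not part of any standard API):

```python
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})   # one dict level per character
        node['$'] = True                     # marks a complete word

    def starts_with(self, prefix):
        """O(L) in the prefix length, regardless of how many words are stored."""
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True
```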
10. Disjoint Set Union (Union-Find)
- Concept: Tracks elements partitioned into several disjoint sets.
- Use Cases: Detecting cycles in undirected graphs and Kruskal's Minimum Spanning Tree algorithm.
- Complexity: Near-constant amortized time per operation with path compression and union by rank.
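A sketch with path compression; having union return False for an already-connected pair is exactly the cycle check used in Kruskal's algorithm:

```python
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False    # already in the same set: this edge would close a cycle
        self.parent[ra] = rb
        return True
```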
11. AVL / Red-Black Trees (Self-Balancing)
- Concept: BSTs that automatically stay balanced after insertions or deletions.
- Interview Context: You rarely have to code these from scratch, but you must know why they prevent the $O(n)$ "worst-case" scenario of a standard BST.
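To see the scenario they prevent, feed sorted keys into a plain BST: it degenerates into a chain, so every operation costs $O(n)$. A sketch:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    return -1 if root is None else 1 + max(height(root.left), height(root.right))

root = None
for k in range(1, 8):      # sorted input: 1, 2, ..., 7
    root = insert(root, k)

print(height(root))        # 6: a chain of 7 nodes, not the height 2 a balanced tree gives
```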
12. Segment Trees / Fenwick Trees
- Concept: Tree structures that store precomputed aggregates (sums, minimums) over intervals or segments of an array.
- Use Cases: Range Sum Queries or Range Minimum Queries in $O(\log n)$ time when the underlying data is constantly changing.
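A Fenwick (Binary Indexed) tree sketch for prefix sums with $O(\log n)$ point updates, using 1-based indexing:

```python
class Fenwick:
    def __init__(self, n):
        self.tree = [0] * (n + 1)      # 1-indexed internally

    def update(self, i, delta):
        """Add delta to element i in O(log n)."""
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i                # jump to the next responsible node

    def prefix_sum(self, i):
        """Sum of elements 1..i in O(log n)."""
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & -i                # strip the lowest set bit
        return total
```

A range sum over [l, r] is then prefix_sum(r) - prefix_sum(l - 1), and updates stay cheap as the underlying data changes.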