LRU Cache - Low-Level Design

Problem Statement

Design and implement a Least Recently Used (LRU) Cache with O(1) time complexity for both `get` and `put` operations.

Requirements

Functional Requirements

  1. `get(key)`: Return the value for the key if it exists; otherwise return -1 (the generic implementation below returns null instead)
  2. `put(key, value)`: Insert the value if the key doesn't exist, or update it if it does. When the cache reaches capacity, evict the least recently used item
  3. Fixed capacity defined at initialization
  4. Both operations must be O(1)

Non-Functional Requirements

  1. Thread-safe implementation
  2. Generic key-value support
  3. Configurable eviction policy
  4. Memory efficient

Data Structure Choice

Why HashMap + Doubly Linked List?

  • HashMap: O(1) lookup
  • Doubly Linked List: O(1) insertion/deletion at both ends
  • Combination: O(1) for both get and put operations

Structure

```text
HashMap: key -> Node
Doubly Linked List: Head <-> Node1 <-> Node2 <-> ... <-> Tail

Most Recently Used (MRU) -----> Head
Least Recently Used (LRU) -----> Tail
```

Class Diagram

(Rendered class diagram omitted. Per the implementation below: LRUCache<K, V> is composed of Node<K, V> entries, and ThreadSafeLRUCache<K, V> wraps LRUCache<K, V> as a decorator.)

Implementation

Node Class

```java
class Node<K, V> {
    K key;
    V value;
    Node<K, V> prev;
    Node<K, V> next;

    public Node(K key, V value) {
        this.key = key;
        this.value = value;
    }
}
```

LRU Cache Implementation

```java
import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    private final int capacity;
    private final Map<K, Node<K, V>> cache;
    private final Node<K, V> head; // Most recently used
    private final Node<K, V> tail; // Least recently used

    public LRUCache(int capacity) {
        if (capacity <= 0) {
            throw new IllegalArgumentException("Capacity must be positive");
        }

        this.capacity = capacity;
        this.cache = new HashMap<>();

        // Initialize dummy head and tail
        this.head = new Node<>(null, null);
        this.tail = new Node<>(null, null);
        head.next = tail;
        tail.prev = head;
    }

    /**
     * Get value for the given key.
     * Moves the accessed node to the head (most recently used).
     * Time Complexity: O(1)
     */
    public V get(K key) {
        Node<K, V> node = cache.get(key);

        if (node == null) {
            return null;
        }

        // Move to head (most recently used)
        moveToHead(node);

        return node.value;
    }

    /**
     * Put key-value pair in cache.
     * If the key exists, update the value and move it to the head.
     * If the key doesn't exist, add a new node at the head.
     * If capacity is exceeded, remove the tail (least recently used).
     * Time Complexity: O(1)
     */
    public void put(K key, V value) {
        Node<K, V> node = cache.get(key);

        if (node != null) {
            // Update existing node
            node.value = value;
            moveToHead(node);
        } else {
            // Create new node
            Node<K, V> newNode = new Node<>(key, value);
            cache.put(key, newNode);
            addToHead(newNode);

            // Check capacity
            if (cache.size() > capacity) {
                // Remove least recently used (tail)
                Node<K, V> lru = removeTail();
                cache.remove(lru.key);
            }
        }
    }

    /** Remove node from its current position and re-insert it at the head. */
    private void moveToHead(Node<K, V> node) {
        removeNode(node);
        addToHead(node);
    }

    /** Add node right after the dummy head. */
    private void addToHead(Node<K, V> node) {
        node.prev = head;
        node.next = head.next;

        head.next.prev = node;
        head.next = node;
    }

    /** Unlink node from the linked list. */
    private void removeNode(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    /** Remove and return the tail node (LRU). */
    private Node<K, V> removeTail() {
        Node<K, V> lru = tail.prev;
        removeNode(lru);
        return lru;
    }

    /** Current number of entries. */
    public int size() {
        return cache.size();
    }

    /** Clear all entries. */
    public void clear() {
        cache.clear();
        head.next = tail;
        tail.prev = head;
    }

    /** Check whether the key exists. */
    public boolean containsKey(K key) {
        return cache.containsKey(key);
    }
}
```

Thread-Safe Implementation

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ThreadSafeLRUCache<K, V> {
    private final LRUCache<K, V> cache;
    private final ReadWriteLock lock;

    public ThreadSafeLRUCache(int capacity) {
        this.cache = new LRUCache<>(capacity);
        this.lock = new ReentrantReadWriteLock();
    }

    public V get(K key) {
        // get() mutates the recency list (moveToHead), so it needs the
        // write lock; a read lock would let concurrent gets corrupt the list.
        lock.writeLock().lock();
        try {
            return cache.get(key);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public void put(K key, V value) {
        lock.writeLock().lock();
        try {
            cache.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public int size() {
        // size() is a pure read, so the shared read lock is sufficient
        lock.readLock().lock();
        try {
            return cache.size();
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

Alternative: Using LinkedHashMap

Java's `LinkedHashMap` provides built-in LRU functionality:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheLinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCacheLinkedHashMap(int capacity) {
        // accessOrder = true gives LRU order; false would give insertion order
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    // get() and put() are inherited; in access-order mode, LinkedHashMap
    // already moves accessed entries to the most-recently-used position.
}
```
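A quick sanity check of the access-order behavior, written against `LinkedHashMap` directly so it stands alone (the demo class name is just for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Standalone demo of LinkedHashMap's access-order LRU mode.
class LinkedHashMapLRUDemo {
    public static void main(String[] args) {
        final int capacity = 2;
        Map<Integer, String> cache = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > capacity;
            }
        };

        cache.put(1, "One");
        cache.put(2, "Two");
        cache.get(1);           // touch 1 so it becomes most recently used
        cache.put(3, "Three");  // evicts 2, the least recently used

        System.out.println(cache.get(2)); // null
        System.out.println(cache.get(1)); // One
    }
}
```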

Usage Example

```java
public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(3);

        // Add entries
        cache.put(1, "One");
        cache.put(2, "Two");
        cache.put(3, "Three");

        System.out.println(cache.get(1)); // "One" (moves to head)

        // Cache is full; 2 is now least recently used
        // Adding 4 evicts 2
        cache.put(4, "Four");

        System.out.println(cache.get(2)); // null (evicted)
        System.out.println(cache.get(3)); // "Three"
        System.out.println(cache.get(4)); // "Four"

        // Current order (MRU -> LRU): [4, 3, 1]
        cache.put(5, "Five"); // Evicts 1

        System.out.println(cache.get(1)); // null (evicted)
        System.out.println(cache.size()); // 3
    }
}
```

Complexity Analysis

| Operation | Time Complexity | Space Complexity |
| --- | --- | --- |
| `get(key)` | O(1) | O(1) |
| `put(key, value)` | O(1) | O(1) |
| Overall | - | O(capacity) |

Why O(1)?

  • HashMap lookup: O(1)
  • Add/Remove from Doubly Linked List: O(1)
  • Combined: O(1)

Design Patterns Used

  1. Data Structure Pattern: HashMap + Doubly Linked List
  2. Encapsulation: Private helper methods
  3. Generics: Type-safe implementation
  4. Decorator Pattern: ThreadSafeLRUCache wraps LRUCache

Variations

1. LFU Cache (Least Frequently Used)

```java
// Track access frequency instead of recency
// Evict the item with the lowest frequency
```
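The comments above can be fleshed out. Below is a minimal sketch (the `MiniLFUCache` name is illustrative, not from the original design) using frequency buckets, which keeps eviction at O(1) rather than the O(log n) a min-heap would give:

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// Hypothetical sketch: O(1) LFU using frequency buckets.
class MiniLFUCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Integer> freq = new HashMap<>();
    // Each frequency maps to its keys in insertion order (LRU tie-break).
    private final Map<Integer, LinkedHashSet<K>> buckets = new HashMap<>();
    private int minFreq = 0;

    MiniLFUCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        if (!values.containsKey(key)) return null;
        touch(key);
        return values.get(key);
    }

    public void put(K key, V value) {
        if (capacity <= 0) return;
        if (values.containsKey(key)) {
            values.put(key, value);
            touch(key);
            return;
        }
        if (values.size() >= capacity) {
            // Evict the LRU key within the lowest-frequency bucket
            K evict = buckets.get(minFreq).iterator().next();
            buckets.get(minFreq).remove(evict);
            values.remove(evict);
            freq.remove(evict);
        }
        values.put(key, value);
        freq.put(key, 1);
        buckets.computeIfAbsent(1, f -> new LinkedHashSet<>()).add(key);
        minFreq = 1;
    }

    // Bump a key's frequency and move it to the next bucket.
    private void touch(K key) {
        int f = freq.get(key);
        buckets.get(f).remove(key);
        if (buckets.get(f).isEmpty() && f == minFreq) minFreq = f + 1;
        freq.put(key, f + 1);
        buckets.computeIfAbsent(f + 1, x -> new LinkedHashSet<>()).add(key);
    }
}
```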

2. TTL Cache (Time To Live)

```java
class TTLNode<K, V> extends Node<K, V> {
    long expiryTime;

    public TTLNode(K key, V value, long ttlMillis) {
        super(key, value);
        this.expiryTime = System.currentTimeMillis() + ttlMillis;
    }

    public boolean isExpired() {
        return System.currentTimeMillis() > expiryTime;
    }
}
```
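One way to use `TTLNode`-style expiry is lazy expiration: check `isExpired()` on every read and treat expired entries as misses. A minimal standalone sketch, assuming a plain `HashMap` backing rather than the full LRU list (the `SimpleTTLCache` name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: lazy TTL expiry, checked on read.
class SimpleTTLCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiryTime;
        Entry(V value, long ttlMillis) {
            this.value = value;
            this.expiryTime = System.currentTimeMillis() + ttlMillis;
        }
        boolean isExpired() { return System.currentTimeMillis() > expiryTime; }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();

    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Entry<>(value, ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (e.isExpired()) {
            map.remove(key);   // clean up lazily; expired entries read as misses
            return null;
        }
        return e.value;
    }
}
```

A background sweeper thread can complement this, so entries that are never read again still get reclaimed.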

3. Size-based Eviction

```java
// Evict based on memory size rather than count
// Track the cumulative size of entries
```
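The idea in these comments can be sketched on top of `LinkedHashMap`'s access order. The `weigher` function and class name below are assumptions for illustration:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.ToIntFunction;

// Hypothetical sketch: evict by cumulative "weight" instead of entry count.
class WeightedLRUCache<K, V> {
    private final long maxWeight;
    private final ToIntFunction<V> weigher;          // e.g. String::length
    private long totalWeight = 0;
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

    WeightedLRUCache(long maxWeight, ToIntFunction<V> weigher) {
        this.maxWeight = maxWeight;
        this.weigher = weigher;
    }

    public V get(K key) { return map.get(key); }     // access-order touch

    public void put(K key, V value) {
        V old = map.put(key, value);
        if (old != null) totalWeight -= weigher.applyAsInt(old);
        totalWeight += weigher.applyAsInt(value);
        // Evict least recently used entries until under the weight budget.
        // Note: a single entry heavier than the budget evicts everything, itself included.
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (totalWeight > maxWeight && it.hasNext()) {
            Map.Entry<K, V> eldest = it.next();
            totalWeight -= weigher.applyAsInt(eldest.getValue());
            it.remove();
        }
    }

    public int size() { return map.size(); }
}
```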

Testing

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

public class LRUCacheTest {

    @Test
    public void testBasicOperations() {
        LRUCache<Integer, String> cache = new LRUCache<>(2);

        cache.put(1, "One");
        cache.put(2, "Two");

        assertEquals("One", cache.get(1));
        assertEquals("Two", cache.get(2));
        assertNull(cache.get(3));
    }

    @Test
    public void testEviction() {
        LRUCache<Integer, String> cache = new LRUCache<>(2);

        cache.put(1, "One");
        cache.put(2, "Two");
        cache.put(3, "Three"); // Evicts 1

        assertNull(cache.get(1));
        assertEquals("Two", cache.get(2));
        assertEquals("Three", cache.get(3));
    }

    @Test
    public void testUpdate() {
        LRUCache<Integer, String> cache = new LRUCache<>(2);

        cache.put(1, "One");
        cache.put(1, "ONE"); // Update

        assertEquals("ONE", cache.get(1));
        assertEquals(1, cache.size());
    }

    @Test
    public void testAccessOrder() {
        LRUCache<Integer, String> cache = new LRUCache<>(2);

        cache.put(1, "One");
        cache.put(2, "Two");
        cache.get(1); // Access 1 (moves to head)
        cache.put(3, "Three"); // Evicts 2 (not 1)

        assertEquals("One", cache.get(1));
        assertNull(cache.get(2));
        assertEquals("Three", cache.get(3));
    }
}
```

Real-world Applications

  1. Database Query Cache: Cache frequently accessed queries
  2. Web Browser Cache: Store recently visited pages
  3. CDN: Cache popular content closer to users
  4. Operating Systems: Page replacement algorithms
  5. API Rate Limiting: Track recent API calls per user

Trade-offs

| Aspect | HashMap + DLL | LinkedHashMap |
| --- | --- | --- |
| Control | Full control over implementation | Built-in, less control |
| Performance | Slightly faster | Slightly slower |
| Code | More code | Concise |
| Learning | Better for interviews | Production-ready |

💡 Interview Tips & Out-of-the-Box Thinking

Common Pitfalls

  • Forgetting to remove from HashMap: When evicting tail, must also remove from HashMap (memory leak)
  • Not updating on get(): LRU requires moving accessed node to head, not just on put()
  • Thread safety: Plain HashMap + LinkedList not thread-safe - need synchronization
  • Dummy nodes complexity: Forgetting dummy head/tail leads to null checks everywhere

Why This Data Structure?

  • HashMap: O(1) key lookup - can't use just doubly linked list (O(n) search)
  • Doubly Linked List: O(1) add/remove - can't use HashMap alone (no eviction order)
  • Combination: Best of both worlds - O(1) get, O(1) put, O(1) eviction

Advanced Considerations

  • TTL support: Add timestamp to nodes, background thread for expiry
  • Size-based eviction: Track memory size, not just count (for variable-size objects)
  • Write-through vs Write-back: Update DB immediately vs Batch updates for performance
  • Cache stampede: Multiple threads miss cache, all hit DB - use locks or probabilistic early expiration

Creative Solutions

  • 2Q Algorithm: Two queues (recent + frequent) to avoid one-time access pollution
  • Adaptive Replacement Cache (ARC): Balance between recency and frequency dynamically
  • Segmented LRU: Hot/warm/cold segments to reduce lock contention
  • Probabilistic eviction: Randomly sample K items, evict LRU among them (simpler than global LRU)
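The probabilistic-eviction idea can be made concrete. This is an illustrative sketch in the spirit of Redis's sampled (approximate) LRU; the class name, fields, and sampling scheme are assumptions, not a production design:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch: approximate LRU via random sampling.
// No global recency list; evict the stalest of K sampled keys.
class SampledLRUCache<K, V> {
    private static class Entry<V> { V value; long lastAccess; }

    private final int capacity;
    private final int sampleSize;
    private final Map<K, Entry<V>> map = new HashMap<>();
    private final List<K> keys = new ArrayList<>(); // enables O(1) random picks
    private final Random random = new Random();
    private long clock = 0;                         // logical access clock

    SampledLRUCache(int capacity, int sampleSize) {
        this.capacity = capacity;
        this.sampleSize = sampleSize;
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        e.lastAccess = ++clock;
        return e.value;
    }

    public void put(K key, V value) {
        Entry<V> e = map.get(key);
        if (e == null) {
            if (map.size() >= capacity) evictSampled();
            e = new Entry<>();
            map.put(key, e);
            keys.add(key);
        }
        e.value = value;
        e.lastAccess = ++clock;
    }

    private void evictSampled() {
        K victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < sampleSize; i++) {      // sample with replacement
            K candidate = keys.get(random.nextInt(keys.size()));
            long t = map.get(candidate).lastAccess;
            if (t < oldest) { oldest = t; victim = candidate; }
        }
        map.remove(victim);
        keys.remove(victim); // O(n) here; a real impl would swap-remove by index
    }
}
```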

Trade-off Discussions

  • LRU vs LFU: Recency (temporal locality) vs Frequency (popular items) - LRU simpler, LFU better for skewed access
  • Eviction on put() vs background: Immediate (blocking) vs Async (complexity + memory overhead)
  • Synchronization granularity: Entire cache (simple, contention) vs Per-bucket locks (complex, higher throughput)

Edge Cases to Mention

  • Capacity = 0: Should throw exception or handle gracefully?
  • Negative capacity: Input validation needed
  • put() same key twice: Update value and move to head (don't create duplicate)
  • get() non-existent key: Return null (documented behavior)
  • Concurrent put() of same key: Last write wins with proper locking
  • Eviction during iteration: ConcurrentModificationException if not using thread-safe iterator

Follow-up Questions

  1. How would you implement an LFU (Least Frequently Used) cache?

    • Add a frequency counter to each node
    • Use a min-heap to track the lowest frequency (O(log n) eviction)
    • Frequency-bucket lists bring eviction down to O(1)
  2. How to handle concurrent access?

    • Use `ConcurrentHashMap` + synchronization
    • ReadWriteLock for better throughput
    • Lock-free data structures
  3. How to make it distributed?

    • Use Redis with TTL
    • Consistent hashing for partitioning
    • Handle cache invalidation
  4. What if values are large objects?

    • Store references, not full objects
    • Implement size-based eviction
    • Use weak references for memory management

Common Interview Follow-ups

  1. Implement LRU cache with expiration time
  2. Design a cache that supports multiple eviction policies
  3. Make the cache persistent (survive restarts)
  4. Implement a distributed cache system
  5. Add metrics (hit rate, eviction count)
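For follow-up 5, a hedged sketch of a metrics wrapper: it counts hits and misses around a plain `HashMap` standing in for the cache (the `MeteredCache` name and fields are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: hit/miss counters around any backing cache map.
class MeteredCache<K, V> {
    private final Map<K, V> backing = new HashMap<>(); // stand-in for the LRU cache
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    public V get(K key) {
        V v = backing.get(key);
        // Note: a stored null value would be counted as a miss in this sketch.
        if (v == null) misses.increment(); else hits.increment();
        return v;
    }

    public void put(K key, V value) { backing.put(key, value); }

    public double hitRate() {
        long h = hits.sum(), m = misses.sum();
        return (h + m) == 0 ? 0.0 : (double) h / (h + m);
    }
}
```

An eviction counter would hook into the eviction path the same way; `LongAdder` keeps the counters cheap under concurrent updates.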
