
Java Collections — Complete Reference for DSA & Daily Use

A categorized guide to Java's Collection Framework covering the most-used methods in competitive programming, DSA problem solving, and everyday development.


Table of Contents

  1. Collection Framework Overview
  2. List Interface
  3. Stack
  4. Queue & Deque
  5. Set Interface
  6. Map Interface
  7. Utility Classes
  8. DSA Quick-Pick Guide
  9. Complexity Cheat Sheet

Collection Framework Overview

java.util
│
├── Collection (Interface)
│   ├── List          → ArrayList, LinkedList, Stack, Vector
│   ├── Set           → HashSet, LinkedHashSet, TreeSet
│   └── Queue         → PriorityQueue, ArrayDeque, LinkedList
│
└── Map (Interface)   → HashMap, LinkedHashMap, TreeMap, Hashtable

Key Properties at a glance:

| Class | Order | Duplicates | Null | Thread-safe |
|---|---|---|---|---|
| ArrayList | Insertion | ✅ | ✅ | ❌ |
| LinkedList | Insertion | ✅ | ✅ | ❌ |
| HashSet | None | ❌ | 1 null | ❌ |
| LinkedHashSet | Insertion | ❌ | 1 null | ❌ |
| TreeSet | Sorted | ❌ | ❌ | ❌ |
| HashMap | None | Keys: ❌ | 1 null key | ❌ |
| LinkedHashMap | Insertion | Keys: ❌ | 1 null key | ❌ |
| TreeMap | Sorted (key) | Keys: ❌ | ❌ | ❌ |
| PriorityQueue | Natural/Comparator | ✅ | ❌ | ❌ |
| ArrayDeque | Insertion | ✅ | ❌ | ❌ |
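The Null column is a frequent interview gotcha; a quick check (assuming Java 8+) of how HashMap and TreeMap differ on null keys:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, Integer> hm = new HashMap<>();
        hm.put(null, 1);                 // HashMap allows exactly one null key
        System.out.println(hm.get(null)); // 1

        Map<String, Integer> tm = new TreeMap<>();
        try {
            tm.put(null, 1);             // TreeMap must compare keys
        } catch (NullPointerException e) {
            System.out.println("TreeMap rejects null keys");
        }
    }
}
```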

List Interface

ArrayList

Best for: Random access, frequent reads, index-based operations.

import java.util.ArrayList;

ArrayList<Integer> list = new ArrayList<>();
ArrayList<Integer> list = new ArrayList<>(initialCapacity); // pre-allocate

Most Used Methods

// ─── Add ───────────────────────────────────────────────
list.add(10);               // Append to end         → O(1) amortized
list.add(0, 99);            // Insert at index        → O(n)
list.addAll(otherList);     // Append all elements    → O(k)
list.addAll(2, otherList);  // Insert all at index    → O(n+k)

// ─── Access ────────────────────────────────────────────
list.get(2);                // Get by index           → O(1)
list.size();                // Number of elements     → O(1)
list.isEmpty();             // true if size == 0      → O(1)
list.contains(5);           // Linear search          → O(n)
list.indexOf(5);            // First index of value   → O(n)
list.lastIndexOf(5);        // Last index of value    → O(n)

// ─── Update ────────────────────────────────────────────
list.set(1, 42);            // Replace at index       → O(1)

// ─── Remove ────────────────────────────────────────────
list.remove(Integer.valueOf(5)); // Remove by value   → O(n)
list.remove(2);             // Remove by index        → O(n)
list.removeAll(otherList);  // Remove all matches     → O(n*k)
list.clear();               // Empty the list         → O(n)

// ─── Sublist & Conversion ──────────────────────────────
list.subList(1, 4);         // View [1, 4)            → O(1)
list.toArray();             // Convert to Object[]    → O(n)
Collections.sort(list);     // Sort ascending         → O(n log n)
Collections.reverse(list);  // Reverse in-place       → O(n)

// ─── DSA Patterns ──────────────────────────────────────
// Two-pointer swap
Collections.swap(list, i, j);

// Frequency count
Collections.frequency(list, 5);

// Binary search (only on sorted list)
Collections.binarySearch(list, 5);   // O(log n)

// Fill all elements with value
Collections.fill(list, 0);

Daily Usage Examples

// Building result list in DSA
List<List<Integer>> result = new ArrayList<>();
List<Integer> row = new ArrayList<>(Arrays.asList(1, 2, 3));
result.add(new ArrayList<>(row)); // Always copy when reusing!

// String split to list
String[] arr = s.split(",");
List<String> words = new ArrayList<>(Arrays.asList(arr));

LinkedList

Best for: Frequent insertions/deletions at head/tail, implementing Deque.

import java.util.LinkedList;

LinkedList<Integer> ll = new LinkedList<>();

Most Used Methods

// ─── Add ───────────────────────────────────────────────
ll.add(5);              // Add at tail                → O(1)
ll.addFirst(1);         // Add at head                → O(1)
ll.addLast(9);          // Add at tail                → O(1)
ll.add(2, 7);           // Add at index               → O(n)

// ─── Access ────────────────────────────────────────────
ll.get(0);              // By index (slow)            → O(n)
ll.getFirst();          // Peek head                  → O(1)
ll.getLast();           // Peek tail                  → O(1)
ll.peek();              // Peek head (null-safe)       → O(1)
ll.peekFirst();         // Same as peek               → O(1)
ll.peekLast();          // Peek tail (null-safe)       → O(1)

// ─── Remove ────────────────────────────────────────────
ll.remove();            // Remove head (throws if empty) → O(1)
ll.removeFirst();       // Remove head                → O(1)
ll.removeLast();        // Remove tail                → O(1)
ll.poll();              // Remove head (null-safe)    → O(1)
ll.pollFirst();         // Same as poll               → O(1)
ll.pollLast();          // Remove tail (null-safe)    → O(1)

// ─── Stack-like ────────────────────────────────────────
ll.push(x);             // Push to front (addFirst)   → O(1)
ll.pop();               // Pop from front (removeFirst)→ O(1)

Stack

Best for: DFS, backtracking, expression evaluation, undo operations.

⚠️ Prefer Deque<Integer> stack = new ArrayDeque<>() over Stack<> in DSA — it's faster (no synchronization overhead).

import java.util.Stack;
Stack<Integer> stack = new Stack<>();

// Preferred in DSA:
Deque<Integer> stack = new ArrayDeque<>();

Most Used Methods

// Using Stack class
stack.push(10);         // Push                       → O(1)
stack.pop();            // Pop & return top           → O(1)
stack.peek();           // View top (no remove)       → O(1)
stack.isEmpty();        // Check empty                → O(1)
stack.size();           // Size                       → O(1)
stack.search(5);        // 1-based position from top  → O(n)

// Using ArrayDeque as Stack (preferred)
Deque<Integer> stack = new ArrayDeque<>();
stack.push(10);         // addFirst()                 → O(1)
stack.pop();            // removeFirst()              → O(1)
stack.peek();           // peekFirst()                → O(1)

DSA Patterns

// Monotonic Stack (Next Greater Element)
Deque<Integer> mono = new ArrayDeque<>();
for (int num : nums) {
    while (!mono.isEmpty() && mono.peek() < num)
        mono.pop();
    mono.push(num);
}

// Valid Parentheses
Deque<Character> st = new ArrayDeque<>();
for (char c : s.toCharArray()) {
    if (c == '(') st.push(c);
    else if (!st.isEmpty()) st.pop();
    else return false;
}
return st.isEmpty();

Queue & Deque

PriorityQueue (Heap)

Best for: Dijkstra's, Top-K problems, scheduling, greedy algorithms.

import java.util.PriorityQueue;

PriorityQueue<Integer> minHeap = new PriorityQueue<>();               // Min-heap
PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder()); // Max-heap
PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0])); // Custom comparator (avoid a[0] - b[0]: subtraction can overflow)

Most Used Methods

pq.offer(5);            // Insert (preferred)         → O(log n)
pq.add(5);              // Insert (throws on fail)    → O(log n)
pq.peek();              // View min (no remove)       → O(1)
pq.poll();              // Remove & return min        → O(log n)
pq.remove();            // Same as poll (throws)      → O(log n)
pq.contains(5);         // Linear search              → O(n)
pq.size();              // Size                       → O(1)
pq.isEmpty();           // Check empty                → O(1)
pq.clear();             // Empty heap                 → O(n)

DSA Patterns

// Top K Largest elements
PriorityQueue<Integer> minH = new PriorityQueue<>();
for (int n : nums) {
    minH.offer(n);
    if (minH.size() > k) minH.poll(); // keep only k largest
}

// Dijkstra's shortest path
PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
pq.offer(new int[]{src, 0}); // {node, dist}
while (!pq.isEmpty()) {
    int[] curr = pq.poll();
    // ... relax edges
}

// Merge K sorted lists
PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));

ArrayDeque

Best for: BFS, sliding window, palindrome checks, double-ended queue operations.

import java.util.ArrayDeque;

Deque<Integer> dq = new ArrayDeque<>();
Queue<Integer> q = new ArrayDeque<>(); // Use as Queue in BFS

Most Used Methods

// ─── Add ───────────────────────────────────────────────
dq.offerFirst(1);       // Add to front               → O(1)
dq.offerLast(9);        // Add to back                → O(1)
dq.offer(5);            // Add to back (Queue mode)   → O(1)
dq.addFirst(1);         // Add to front (throws)      → O(1)
dq.addLast(9);          // Add to back (throws)       → O(1)

// ─── Peek ──────────────────────────────────────────────
dq.peekFirst();         // View front                 → O(1)
dq.peekLast();          // View back                  → O(1)
dq.peek();              // View front (Queue mode)    → O(1)

// ─── Remove ────────────────────────────────────────────
dq.pollFirst();         // Remove from front          → O(1)
dq.pollLast();          // Remove from back           → O(1)
dq.poll();              // Remove front (Queue mode)  → O(1)

// ─── Size ──────────────────────────────────────────────
dq.size();              // Size                       → O(1)
dq.isEmpty();           // Check empty                → O(1)

DSA Patterns

// BFS (Tree/Graph); note: ArrayDeque rejects null, so guard children
Queue<TreeNode> bfs = new ArrayDeque<>();
bfs.offer(root);
while (!bfs.isEmpty()) {
    TreeNode node = bfs.poll();
    // process node
    if (node.left != null) bfs.offer(node.left);
    if (node.right != null) bfs.offer(node.right);
}

// Sliding Window Maximum (Monotonic Deque)
Deque<Integer> dq = new ArrayDeque<>(); // stores indices
for (int i = 0; i < nums.length; i++) {
    while (!dq.isEmpty() && dq.peekFirst() < i - k + 1)
        dq.pollFirst();
    while (!dq.isEmpty() && nums[dq.peekLast()] < nums[i])
        dq.pollLast();
    dq.offerLast(i);
    if (i >= k - 1) result[i - k + 1] = nums[dq.peekFirst()];
}

LinkedList as Queue/Deque

Queue<Integer> queue = new LinkedList<>();
queue.offer(1); queue.offer(2); queue.offer(3);
while (!queue.isEmpty()) {
    System.out.print(queue.poll() + " "); // 1 2 3
}

Prefer ArrayDeque over LinkedList as a queue — no node allocation overhead.


Set Interface

HashSet

Best for: Deduplication, O(1) lookup, visited tracking in BFS/DFS.

import java.util.HashSet;

HashSet<Integer> set = new HashSet<>();
Set<Integer> set = new HashSet<>(Arrays.asList(1, 2, 3)); // Init from array

Most Used Methods

set.add(5);             // Add element                → O(1) avg
set.remove(5);          // Remove element             → O(1) avg
set.contains(5);        // Check presence             → O(1) avg
set.size();             // Count elements             → O(1)
set.isEmpty();          // Check empty                → O(1)
set.clear();            // Remove all                 → O(n)
set.addAll(list);       // Add collection (union)     → O(k)

// Set operations
Set<Integer> a = new HashSet<>(set1);
a.retainAll(set2);      // Intersection               → O(n)
a.removeAll(set2);      // Difference                 → O(n)
a.addAll(set2);         // Union                      → O(n)

DSA Patterns

// Two Sum (O(n))
Set<Integer> seen = new HashSet<>();
for (int n : nums) {
    if (seen.contains(target - n)) return true;
    seen.add(n);
}

// Longest Consecutive Sequence (O(n))
Set<Integer> s = new HashSet<>();
for (int n : nums) s.add(n); // Arrays.asList(nums) on an int[] yields List<int[]>, not List<Integer>
for (int n : s) {
    if (!s.contains(n - 1)) { // start of sequence
        int len = 1;
        while (s.contains(n + len)) len++;
        max = Math.max(max, len);
    }
}

// Detect Cycle (Floyd or HashSet)
Set<ListNode> visited = new HashSet<>();
while (node != null) {
    if (!visited.add(node)) return true; // add returns false if duplicate
    node = node.next;
}

LinkedHashSet

Best for: Deduplication while preserving insertion order.

LinkedHashSet<String> lhs = new LinkedHashSet<>();
lhs.add("banana"); lhs.add("apple"); lhs.add("cherry");
// Iteration: banana → apple → cherry (insertion order)

Same methods as HashSet. Only difference: iteration order is insertion order.
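The classic pattern this enables is deduplicating a list while keeping first-seen order; a minimal sketch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupOrdered {
    public static void main(String[] args) {
        List<Integer> input = Arrays.asList(3, 1, 3, 2, 1);
        // LinkedHashSet drops duplicates but remembers insertion order
        List<Integer> deduped = new ArrayList<>(new LinkedHashSet<>(input));
        System.out.println(deduped); // [3, 1, 2]
    }
}
```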


TreeSet

Best for: Sorted unique elements, range queries, floor/ceiling operations.

import java.util.TreeSet;

TreeSet<Integer> ts = new TreeSet<>();
TreeSet<Integer> desc = new TreeSet<>(Collections.reverseOrder());

Most Used Methods

ts.add(5);              // Insert                     → O(log n)
ts.remove(5);           // Delete                     → O(log n)
ts.contains(5);         // Search                     → O(log n)
ts.size();              // Count                      → O(1)

// ─── Navigation (unique to TreeSet) ────────────────────
ts.first();             // Smallest element           → O(log n)
ts.last();              // Largest element            → O(log n)
ts.floor(5);            // Largest ≤ 5                → O(log n)
ts.ceiling(5);          // Smallest ≥ 5               → O(log n)
ts.lower(5);            // Largest < 5 (strict)       → O(log n)
ts.higher(5);           // Smallest > 5 (strict)      → O(log n)
ts.pollFirst();         // Remove & return smallest   → O(log n)
ts.pollLast();          // Remove & return largest    → O(log n)
ts.headSet(5);          // All elements < 5           → O(log n)
ts.tailSet(5);          // All elements ≥ 5           → O(log n)
ts.subSet(3, 8);        // Elements in [3, 8)         → O(log n)
ts.descendingSet();     // Reverse view               → O(1)

DSA Patterns

// Contains Duplicate III (sliding window + TreeSet)
TreeSet<Long> window = new TreeSet<>();
for (int i = 0; i < nums.length; i++) {
    Long floor = window.floor((long)nums[i] + t);
    if (floor != null && floor >= (long)nums[i] - t) return true;
    window.add((long)nums[i]);
    if (i >= k) window.remove((long)nums[i - k]);
}

// Closest value in BST set
ts.floor(target);   // ≤ target
ts.ceiling(target); // ≥ target

Map Interface

HashMap

Best for: Frequency counting, memoization, graph adjacency, grouping.

import java.util.HashMap;

HashMap<String, Integer> map = new HashMap<>();
Map<Integer, List<Integer>> adj = new HashMap<>(); // adjacency list

Most Used Methods

// ─── Put / Update ──────────────────────────────────────
map.put("a", 1);            // Insert/overwrite       → O(1) avg
map.putIfAbsent("a", 0);    // Insert only if missing → O(1) avg

// ─── Frequency Pattern ─────────────────────────────────
map.getOrDefault("a", 0);   // Get or default value   → O(1)
map.merge("a", 1, Integer::sum);  // key++ elegantly  → O(1)

// Java 8+ compute
map.compute("a", (k, v) -> v == null ? 1 : v + 1); // manual merge
adj.computeIfAbsent(1, k -> new ArrayList<>()).add(2); // init list values (on the adjacency map above)

// ─── Access ────────────────────────────────────────────
map.get("a");               // Get value (null if absent) → O(1)
map.containsKey("a");       // Check key              → O(1)
map.containsValue(5);       // Check value (slow)     → O(n)
map.size();                 // Number of entries      → O(1)
map.isEmpty();              // Check empty            → O(1)

// ─── Remove ────────────────────────────────────────────
map.remove("a");            // Remove by key          → O(1)
map.remove("a", 5);         // Remove only if value = 5→ O(1)

// ─── Iterate ───────────────────────────────────────────
for (Map.Entry<String, Integer> e : map.entrySet()) {
    System.out.println(e.getKey() + " → " + e.getValue());
}
for (String key : map.keySet()) { ... }
for (int val : map.values()) { ... }

// ─── Replace ───────────────────────────────────────────
map.replace("a", 99);       // Replace value for key  → O(1)
map.replaceAll((k, v) -> v * 2); // Transform all values → O(n)

DSA Patterns

// Frequency Count
Map<Integer, Integer> freq = new HashMap<>();
for (int n : nums)
    freq.merge(n, 1, Integer::sum);

// Group Anagrams
Map<String, List<String>> groups = new HashMap<>();
for (String s : strs) {
    char[] ch = s.toCharArray();
    Arrays.sort(ch);
    groups.computeIfAbsent(new String(ch), k -> new ArrayList<>()).add(s);
}

// Memoization (top-down DP)
Map<Integer, Long> memo = new HashMap<>();
long fib(int n) {
    if (n <= 1) return n;
    Long cached = memo.get(n);
    if (cached != null) return cached;
    long val = fib(n - 1) + fib(n - 2);
    memo.put(n, val); // recursive computeIfAbsent on HashMap throws ConcurrentModificationException (Java 9+)
    return val;
}

// Two Sum
Map<Integer, Integer> idx = new HashMap<>();
for (int i = 0; i < nums.length; i++) {
    if (idx.containsKey(target - nums[i]))
        return new int[]{idx.get(target - nums[i]), i};
    idx.put(nums[i], i);
}

// Subarray Sum = K (prefix sum)
Map<Integer, Integer> prefixCount = new HashMap<>();
prefixCount.put(0, 1);
int sum = 0, count = 0;
for (int n : nums) {
    sum += n;
    count += prefixCount.getOrDefault(sum - k, 0);
    prefixCount.merge(sum, 1, Integer::sum);
}

LinkedHashMap

Best for: LRU Cache, ordered maps, preserving insertion order.

LinkedHashMap<Integer, Integer> lru = new LinkedHashMap<>(capacity, 0.75f, true) {
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }
};
// accessOrder=true → access order (LRU); false → insertion order

Same methods as HashMap. Key extras: removeEldestEntry, access-order mode.
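To watch the eviction happen, a short demo of the access-order map above (capacity 2; keys and values are arbitrary):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    static final int CAPACITY = 2;

    public static void main(String[] args) {
        Map<Integer, Integer> lru = new LinkedHashMap<>(CAPACITY, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > CAPACITY;
            }
        };
        lru.put(1, 10);
        lru.put(2, 20);
        lru.get(1);     // touch key 1, so key 2 becomes least-recently-used
        lru.put(3, 30); // over capacity: evicts key 2
        System.out.println(lru.keySet()); // [1, 3]
    }
}
```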


TreeMap

Best for: Sorted maps, range queries, scheduling, time-series data.

import java.util.TreeMap;

TreeMap<Integer, String> tm = new TreeMap<>();
TreeMap<Integer, Integer> desc = new TreeMap<>(Collections.reverseOrder());

Most Used Methods

tm.put(3, "c");             // Insert                 → O(log n)
tm.get(3);                  // Get by key             → O(log n)
tm.remove(3);               // Delete                 → O(log n)
tm.containsKey(3);          // Key exists             → O(log n)
tm.size();                  // Count                  → O(1)

// ─── Navigation (unique to TreeMap) ────────────────────
tm.firstKey();              // Smallest key           → O(log n)
tm.lastKey();               // Largest key            → O(log n)
tm.floorKey(5);             // Largest key ≤ 5        → O(log n)
tm.ceilingKey(5);           // Smallest key ≥ 5       → O(log n)
tm.lowerKey(5);             // Largest key < 5        → O(log n)
tm.higherKey(5);            // Smallest key > 5       → O(log n)
tm.pollFirstEntry();        // Remove & return min    → O(log n)
tm.pollLastEntry();         // Remove & return max    → O(log n)
tm.headMap(5);              // Keys < 5               → O(log n)
tm.tailMap(5);              // Keys ≥ 5               → O(log n)
tm.subMap(3, 8);            // Keys in [3, 8)         → O(log n)
tm.firstEntry();            // Min key-value pair     → O(log n)
tm.lastEntry();             // Max key-value pair     → O(log n)

DSA Patterns

// Meeting Rooms / Interval Scheduling
TreeMap<Integer, Integer> timeline = new TreeMap<>();
for (int[] interval : intervals) {
    timeline.merge(interval[0], 1, Integer::sum);   // +1 at start
    timeline.merge(interval[1], -1, Integer::sum);  // -1 at end
}

// Car Fleet / Sweep Line
TreeMap<Integer, String> events = new TreeMap<>();
for (int[] ev : schedule)
    events.put(ev[0], "start");

// Count of Smaller Numbers After Self
// (use TreeMap<value, count> or BIT/Merge Sort)

// Stock price range queries
TreeMap<Integer, Integer> prices = new TreeMap<>();
int maxInRange = prices.subMap(lo, hi).values().stream().max(Integer::compare).orElse(0);

Utility Classes

Collections Utility

import java.util.Collections;

List<Integer> list = new ArrayList<>(Arrays.asList(3, 1, 4, 1, 5));

// ─── Sorting ───────────────────────────────────────────
Collections.sort(list);                         // Ascending       O(n log n)
Collections.sort(list, Collections.reverseOrder()); // Descending  O(n log n)
Collections.sort(list, (a, b) -> b - a);        // Custom         O(n log n)

// ─── Search ────────────────────────────────────────────
Collections.binarySearch(list, 4);              // Must be sorted  O(log n)

// ─── Min / Max ─────────────────────────────────────────
Collections.min(list);                          //                 O(n)
Collections.max(list);                          //                 O(n)
Collections.min(list, comparator);              // Custom          O(n)

// ─── Shuffle & Reverse ─────────────────────────────────
Collections.shuffle(list);                      // Random order    O(n)
Collections.reverse(list);                      // In-place        O(n)

// ─── Copy & Fill ───────────────────────────────────────
Collections.fill(list, 0);                      // Set all to 0    O(n)
List<Integer> copy = new ArrayList<>(list);     // Preferred copy
Collections.copy(dest, src);                    // dest.size ≥ src O(n)

// ─── Frequency & Disjoint ──────────────────────────────
Collections.frequency(list, 1);                 // Count of value  O(n)
Collections.disjoint(list1, list2);             // No common elems O(n)

// ─── Swap & Rotate ─────────────────────────────────────
Collections.swap(list, i, j);                   //                 O(1)
Collections.rotate(list, k);                    // Shift right k   O(n)

// ─── Immutable Wrappers ─────────────────────────────────
List<Integer> immutable = Collections.unmodifiableList(list);
Set<Integer> singleton = Collections.singleton(42);
List<Integer> nCopies = Collections.nCopies(5, 0); // [0,0,0,0,0]
List<Integer> empty = Collections.emptyList();

Arrays Utility

import java.util.Arrays;

int[] arr = {5, 3, 1, 4, 2};

// ─── Sort ──────────────────────────────────────────────
Arrays.sort(arr);                               // Primitive       O(n log n)
Arrays.sort(arr, 2, 5);                         // Range sort      O(k log k)
Integer[] boxed = {3,1,2};
Arrays.sort(boxed, Collections.reverseOrder()); // Reverse (boxed) O(n log n)
Arrays.sort(boxed, (a, b) -> b - a);            // Custom         O(n log n)

// ─── Search ────────────────────────────────────────────
Arrays.binarySearch(arr, 3);                    // Sorted only     O(log n)

// ─── Copy ──────────────────────────────────────────────
int[] copy = Arrays.copyOf(arr, arr.length);    // Full copy       O(n)
int[] part = Arrays.copyOfRange(arr, 1, 4);     // [1, 4)          O(k)

// ─── Fill ──────────────────────────────────────────────
Arrays.fill(arr, 0);                            // Fill all        O(n)
Arrays.fill(arr, 2, 5, -1);                     // Fill range      O(k)

// ─── Compare ───────────────────────────────────────────
Arrays.equals(arr1, arr2);                      // Element-wise    O(n)
Arrays.deepEquals(matrix1, matrix2);            // 2D arrays       O(n*m)

// ─── Convert ───────────────────────────────────────────
List<Integer> list = Arrays.asList(1, 2, 3);    // Fixed-size list O(1)
String s = Arrays.toString(arr);                // "[5, 3, 1...]"  O(n)
String s2 = Arrays.deepToString(matrix);        // 2D print        O(n*m)

// ─── Stream ────────────────────────────────────────────
int sum = Arrays.stream(arr).sum();             //                 O(n)
int max = Arrays.stream(arr).max().getAsInt();  //                 O(n)
int[] filtered = Arrays.stream(arr).filter(x -> x > 2).toArray();

DSA Quick-Pick Guide

| Problem Type | Best Collection | Key Method |
|---|---|---|
| Sliding Window Max | ArrayDeque (monotonic) | peekFirst(), pollLast() |
| Top-K Elements | PriorityQueue (min-heap) | offer(), poll(), peek() |
| BFS / Level Order | ArrayDeque as Queue | offer(), poll() |
| DFS / Backtracking | ArrayDeque as Stack | push(), pop() |
| Frequency Count | HashMap | merge(), getOrDefault() |
| Two Sum / Fast Lookup | HashSet / HashMap | contains(), get() |
| Sorted Unique Values | TreeSet | floor(), ceiling() |
| Range Queries (map) | TreeMap | floorKey(), subMap() |
| LRU Cache | LinkedHashMap | removeEldestEntry() |
| Anagram Grouping | HashMap<String, List> | computeIfAbsent() |
| Graph Adjacency | HashMap<Node, List<Node>> | computeIfAbsent() |
| Cycle Detection | HashSet | add() returns false if dup |
| Merge K Sorted Lists | PriorityQueue | offer(), poll() |
| Valid Parentheses | ArrayDeque as Stack | push(), pop(), isEmpty() |
| Next Greater Element | ArrayDeque (mono stack) | peek(), pop(), push() |
| Memoization | HashMap | computeIfAbsent() |
| Intervals / Sweep Line | TreeMap | merge(), entrySet() |
| Deduplicate + Order | LinkedHashSet | add(), iteration |

Complexity Cheat Sheet

Time Complexity

| Operation | ArrayList | LinkedList | HashSet | TreeSet | HashMap | TreeMap | PriorityQueue |
|---|---|---|---|---|---|---|---|
| Add | O(1)* | O(1) | O(1)* | O(log n) | O(1)* | O(log n) | O(log n) |
| Remove | O(n) | O(1) | O(1)* | O(log n) | O(1)* | O(log n) | O(log n) |
| Get/Search | O(1) | O(n) | O(1)* | O(log n) | O(1)* | O(log n) | O(n) |
| Peek Min/Max | O(n) | O(1)† | O(n) | O(log n) | O(n) | O(log n) | O(1) |
| Iteration | O(n) | O(n) | O(n) | O(n) | O(n) | O(n) | O(n) |

*O(1) average, O(n) worst case for hash collisions
†Only for first/last elements

Space Complexity

All collections: O(n) where n = number of elements.


💡 Pro Tips:

  • Always compare boxed Integers with .equals(), never == — identity comparison only works inside the [-128, 127] Integer cache.
  • Use new ArrayList<>(existingCollection) to copy — never assign directly.
  • Prefer ArrayDeque over Stack and over LinkedList as a queue in DSA.
  • getOrDefault() + merge() are the two most powerful HashMap methods in DSA.
  • For 2D DP tables, int[][] dp = new int[m][n] (primitives) is faster than ArrayList<ArrayList<Integer>>.
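The boxed-Integer tip deserves a concrete look; a minimal demonstration of the Integer cache boundary:

```java
public class BoxedCompare {
    public static void main(String[] args) {
        Integer a = 127, b = 127;
        Integer c = 128, d = 128;
        System.out.println(a == b);      // true:  values in [-128, 127] are cached, same object
        System.out.println(c == d);      // false: distinct boxed objects outside the cache
        System.out.println(c.equals(d)); // true:  always compare values with equals()
    }
}
```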

Java Thread-Safe Collections — Complete Reference

A categorized guide to Java's thread-safe collection equivalents — covering java.util.concurrent, synchronized wrappers, and atomic types. Includes most-used methods, concurrency patterns, and when to prefer one over another.


Table of Contents

  1. Concurrency Overview
  2. Thread-Safe List
  3. Thread-Safe Stack
  4. Thread-Safe Queue & Deque
  5. Thread-Safe Set
  6. Thread-Safe Map
  7. Atomic Variables
  8. Non-Thread-Safe → Thread-Safe Migration
  9. Concurrency Quick-Pick Guide
  10. Complexity & Locking Cheat Sheet

Concurrency Overview

Two Approaches to Thread Safety

Thread-Safe Collections
│
├── 1. java.util.concurrent (preferred)
│   ├── Lock-free / CAS-based   → ConcurrentHashMap, ConcurrentLinkedQueue
│   ├── Blocking / bounded      → ArrayBlockingQueue, LinkedBlockingQueue
│   ├── Copy-on-write           → CopyOnWriteArrayList, CopyOnWriteArraySet
│   └── Skip-list (sorted)      → ConcurrentSkipListMap, ConcurrentSkipListSet
│
└── 2. Synchronized Wrappers (legacy / simple use)
    └── Collections.synchronized*()  → wraps any collection with a mutex

Non-Thread-Safe → Thread-Safe Equivalents

| Non-Thread-Safe | Thread-Safe Equivalent | Notes |
|---|---|---|
| ArrayList | CopyOnWriteArrayList | Best for read-heavy |
| ArrayList | Collections.synchronizedList() | General purpose |
| HashSet | CopyOnWriteArraySet | Read-heavy sets |
| HashSet | Collections.synchronizedSet() | General purpose |
| TreeSet | ConcurrentSkipListSet | Sorted + concurrent |
| HashMap | ConcurrentHashMap | ⭐ Most used |
| TreeMap | ConcurrentSkipListMap | Sorted + concurrent |
| HashMap | Collections.synchronizedMap() | Full-lock, avoid |
| LinkedList (queue) | ConcurrentLinkedQueue | Lock-free FIFO |
| ArrayDeque | ConcurrentLinkedDeque | Lock-free deque |
| PriorityQueue | PriorityBlockingQueue | Blocking heap |
| Stack | ConcurrentLinkedDeque | Preferred stack |

Locking Strategies

| Strategy | How it Works | Example |
|---|---|---|
| Intrinsic lock | synchronized on entire object | Vector, Hashtable |
| Segment lock (old) | Lock per bucket segment | ConcurrentHashMap < Java 8 |
| CAS (lock-free) | Compare-And-Swap CPU instruction | ConcurrentHashMap ≥ Java 8, AtomicInteger |
| Copy-on-write | Write makes full copy; reads never block | CopyOnWriteArrayList |
| ReentrantLock | Explicit lock, supports tryLock / timeout | LinkedBlockingQueue |
| Skip list | Probabilistic sorted structure | ConcurrentSkipListMap/Set |
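The CAS row in the table is exactly what AtomicInteger exposes directly; a quick sketch of compare-and-swap semantics:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // CAS succeeds only when the current value matches the expected value
        boolean ok = counter.compareAndSet(0, 5);    // expected 0, so update to 5 succeeds
        System.out.println(ok + " " + counter.get());    // true 5

        boolean stale = counter.compareAndSet(0, 9); // expected 0, but value is now 5
        System.out.println(stale + " " + counter.get()); // false 5

        counter.incrementAndGet(); // lock-free increment built on a CAS retry loop
        System.out.println(counter.get());               // 6
    }
}
```

Under contention a thread whose CAS fails simply re-reads and retries, which is why these structures scale better than a single mutex.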

Thread-Safe List

CopyOnWriteArrayList

Thread-safe counterpart of ArrayList. Every write creates a fresh copy of the underlying array. Reads are always lock-free.

Best for: Read-heavy workloads (event listeners, observer lists, config snapshots).
Avoid when: Writes are frequent — each write is O(n).

import java.util.concurrent.CopyOnWriteArrayList;

CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();

// Init from existing
CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>(Arrays.asList(1, 2, 3));

Most Used Methods

// ─── Add ────────────────────────────────────────────────
list.add("a");                  // Append (copies array)       → O(n)
list.add(0, "x");               // Insert at index             → O(n)
list.addIfAbsent("b");          // Add only if not present     → O(n)
list.addAllAbsent(otherList);   // Add missing elements        → O(n*k)
list.addAll(otherList);         // Append all                  → O(n+k)

// ─── Access ─────────────────────────────────────────────
list.get(2);                    // Random access               → O(1)
list.size();                    // Count                       → O(1)
list.isEmpty();                 // Check empty                 → O(1)
list.contains("a");             // Linear search               → O(n)
list.indexOf("a");              // First index                 → O(n)

// ─── Remove ─────────────────────────────────────────────
list.remove("a");               // Remove by value             → O(n)
list.remove(2);                 // Remove by index             → O(n)
list.clear();                   // Empty (copies empty array)  → O(1)

// ─── Iteration (SAFE — snapshot of array at call time) ──
for (String s : list) {         // No ConcurrentModificationException
    System.out.println(s);
}
// Iterator reflects array state AT time iterator was created
// Mutations during iteration are invisible to the iterator — by design

// ─── Snapshot trick ─────────────────────────────────────
Object[] snapshot = list.toArray(); // Safe copy for processing

Concurrency Patterns

// Event listener registry (classic CopyOnWriteArrayList use case)
CopyOnWriteArrayList<EventListener> listeners = new CopyOnWriteArrayList<>();

// Thread A — registers listener (rare)
listeners.add(myListener);

// Threads B, C, D — fire events concurrently (frequent, lock-free)
for (EventListener l : listeners) l.onEvent(event);

// Safe removal during traversal
for (EventListener l : listeners) {
    if (l.isExpired()) listeners.remove(l); // safe but expensive
}

// ─── Compound operation (still needs external sync) ─────
// check-then-act is NOT atomic even with CopyOnWriteArrayList
synchronized (list) {
    if (!list.contains(x)) list.add(x); // use addIfAbsent() instead!
}
// Better:
list.addIfAbsent(x); // atomic check-then-add

Vector (Legacy)

Synchronized version of ArrayList — every method acquires the intrinsic lock. Largely replaced by CopyOnWriteArrayList and Collections.synchronizedList().

⚠️ Avoid in new code. All methods lock on the entire object — very low throughput under contention.

Vector<Integer> v = new Vector<>();
v.add(1); v.add(2); v.add(3);

// Same API as ArrayList — all methods are synchronized
v.get(0);       // O(1) but locks
v.remove(0);    // O(n) + locks
v.size();       // O(1) + locks

// Extra methods (not in ArrayList)
v.firstElement();   // Like get(0)
v.lastElement();    // Like get(size-1)
v.elementAt(2);     // Same as get(2)
v.addElement(5);    // Same as add(5)
v.removeElement(5); // Same as remove(Integer.valueOf(5))
v.capacity();       // Current internal array capacity

synchronizedList Wrapper

Wraps any List with a mutex. Simpler than CopyOnWriteArrayList for write-heavy lists.

import java.util.Collections;

List<Integer> safe = Collections.synchronizedList(new ArrayList<>());
List<Integer> safe = Collections.synchronizedList(new LinkedList<>());
// All single operations are automatically synchronized
safe.add(5);
safe.get(2);
safe.remove(Integer.valueOf(5));
safe.size();
safe.contains(3);

// ⚠️ CRITICAL: Manual sync required for iteration
synchronized (safe) {
    for (Integer i : safe) {
        System.out.println(i);     // Must hold lock during iteration
    }
}

// ⚠️ CRITICAL: Compound operations also need manual sync
synchronized (safe) {
    if (!safe.contains(x)) safe.add(x);
}

Thread-Safe Stack

ConcurrentLinkedDeque as Stack

There is no dedicated thread-safe Stack in java.util.concurrent. Use ConcurrentLinkedDeque.

import java.util.concurrent.ConcurrentLinkedDeque;

Deque<Integer> stack = new ConcurrentLinkedDeque<>();
stack.push(10);         // addFirst() — lock-free       → O(1)
stack.pop();            // removeFirst() — lock-free    → O(1)
stack.peek();           // peekFirst() — lock-free      → O(1)
stack.isEmpty();        //                              → O(1)
stack.size();           // ⚠️ O(n) — traverses all nodes

LinkedBlockingDeque is the blocking alternative — use when you need push to block when full, or pop to block when empty.
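A runnable sanity check of the deque-as-stack usage above: several threads push concurrently, then everything is popped back off. The class and method names (`ConcurrentStackDemo`, `pushAndCount`) are ours, not JDK API.

```java
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;

public class ConcurrentStackDemo {
    // Push perThread values from each of several threads, then count what arrived
    static int pushAndCount(int threads, int perThread) throws InterruptedException {
        Deque<Integer> stack = new ConcurrentLinkedDeque<>();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            ts[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) stack.push(base + i); // lock-free addFirst
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        int count = 0;
        while (stack.poll() != null) count++; // pop everything back off the head
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(pushAndCount(4, 1000)); // 4000 — no pushes lost
    }
}
```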


Thread-Safe Queue & Deque

ConcurrentLinkedQueue

Lock-free, unbounded FIFO queue based on the Michael-Scott non-blocking algorithm. The best general-purpose concurrent queue.

Best for: Producer-consumer with multiple threads, task queues, work-stealing.

import java.util.concurrent.ConcurrentLinkedQueue;

Queue<Task> queue = new ConcurrentLinkedQueue<>();

Most Used Methods

queue.offer(task);      // Enqueue (always true)       → O(1) lock-free
queue.poll();           // Dequeue head (null if empty)→ O(1) lock-free
queue.peek();           // View head (null if empty)   → O(1) lock-free
queue.isEmpty();        // May be inaccurate mid-op    → O(1)
queue.size();           // ⚠️ O(n) — not O(1)!
queue.contains(task);   // Linear scan                 → O(n)
queue.remove(task);     // Remove specific element     → O(n)

// ─── Drain (bulk consume) ───────────────────────────────
// ConcurrentLinkedQueue has no drainTo() — drain with a poll loop:
List<Task> batch = new ArrayList<>();
Task t;
while ((t = queue.poll()) != null) batch.add(t);

Concurrency Patterns

// Multi-producer, multi-consumer
ConcurrentLinkedQueue<Runnable> taskQueue = new ConcurrentLinkedQueue<>();

// Producer threads
executor.submit(() -> taskQueue.offer(new MyTask()));

// Consumer threads
executor.submit(() -> {
    while (running) {
        Runnable t = taskQueue.poll();
        if (t != null) t.run();
        else {
            try { Thread.sleep(1); } // back-off when queue is empty
            catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
        }
    }
});

ConcurrentLinkedDeque

Lock-free, unbounded double-ended queue.

import java.util.concurrent.ConcurrentLinkedDeque;

Deque<Integer> dq = new ConcurrentLinkedDeque<>();

dq.offerFirst(1);       // Add to front                → O(1)
dq.offerLast(9);        // Add to back                 → O(1)
dq.peekFirst();         // View front                  → O(1)
dq.peekLast();          // View back                   → O(1)
dq.pollFirst();         // Remove from front           → O(1)
dq.pollLast();          // Remove from back            → O(1)
dq.size();              // ⚠️ O(n) — traverse all nodes

LinkedBlockingQueue

Optionally-bounded blocking queue backed by linked nodes. Separate head/tail locks = higher throughput than ArrayBlockingQueue for asymmetric producer-consumer loads.

Best for: Classic producer-consumer with blocking behavior, ThreadPoolExecutor work queues.

import java.util.concurrent.LinkedBlockingQueue;

BlockingQueue<Task> bq = new LinkedBlockingQueue<>();          // Unbounded
BlockingQueue<Task> bq = new LinkedBlockingQueue<>(100);       // Capacity 100

Most Used Methods

// ─── Non-blocking ───────────────────────────────────────
bq.offer(task);             // Enqueue, returns false if full  → O(1)
bq.poll();                  // Dequeue head, null if empty     → O(1)
bq.peek();                  // View head, null if empty        → O(1)

// ─── Blocking (KEY advantage over ConcurrentLinkedQueue) ─
bq.put(task);               // Block until space available     → O(1)
bq.take();                  // Block until element available   → O(1)

// ─── Timed (try with timeout) ───────────────────────────
bq.offer(task, 200, TimeUnit.MILLISECONDS); // Wait up to 200ms
bq.poll(200, TimeUnit.MILLISECONDS);        // Wait up to 200ms

// ─── Bulk ───────────────────────────────────────────────
bq.drainTo(list);           // Move all to list               → O(n)
bq.drainTo(list, 10);       // Move up to 10 elements         → O(k)

// ─── Info ───────────────────────────────────────────────
bq.size();                  // Current size                   → O(1)
bq.remainingCapacity();     // Space left                     → O(1)
bq.isEmpty();               //                                → O(1)
bq.contains(task);          // Linear scan                    → O(n)
bq.remove(task);            // Remove specific element        → O(n)

Concurrency Patterns

// Classic Producer-Consumer
BlockingQueue<String> queue = new LinkedBlockingQueue<>(50);

// Producer
new Thread(() -> {
    try {
        while (true) queue.put(produceItem()); // blocks when full
    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}).start();

// Consumer
new Thread(() -> {
    try {
        while (true) process(queue.take()); // blocks when empty
    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}).start();

// Drain and batch-process
List<Task> batch = new ArrayList<>();
queue.drainTo(batch, 20); // grab up to 20 tasks at once
processBatch(batch);

// Thread pool with bounded queue
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    4, 8, 60L, TimeUnit.SECONDS,
    new LinkedBlockingQueue<>(200) // work queue
);
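The producer-consumer pattern above can be exercised end-to-end with a sentinel ("poison pill") to terminate the consumer. `BoundedPipelineDemo` and its tiny capacity of 8 are our choices for illustration, not anything prescribed by the API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BoundedPipelineDemo {
    static final int POISON = -1; // sentinel marking end of stream

    // Producer puts 1..n through a tiny bounded queue; consumer sums until POISON
    static long produceConsume(int n) throws InterruptedException {
        BlockingQueue<Integer> q = new LinkedBlockingQueue<>(8); // small capacity forces blocking
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) q.put(i); // blocks whenever the 8 slots are full
                q.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        long sum = 0;
        for (int v; (v = q.take()) != POISON; ) sum += v; // take() blocks when empty
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(produceConsume(100)); // 5050
    }
}
```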

ArrayBlockingQueue

Bounded blocking queue backed by a fixed array. Single lock for both head and tail — simpler but lower throughput than LinkedBlockingQueue under high concurrency.

Best for: Fixed-size bounded buffers, rate-limiting, backpressure.

import java.util.concurrent.ArrayBlockingQueue;

BlockingQueue<Integer> abq = new ArrayBlockingQueue<>(100);       // capacity required
BlockingQueue<Integer> abq = new ArrayBlockingQueue<>(100, true); // fair ordering (FIFO for waiting threads)
// Same API as LinkedBlockingQueue
abq.put(5);                 // Block until space              → O(1)
abq.take();                 // Block until element            → O(1)
abq.offer(5);               // Non-blocking enqueue           → O(1)
abq.poll();                 // Non-blocking dequeue           → O(1)
abq.offer(5, 100, TimeUnit.MILLISECONDS);  // Timed
abq.poll(100, TimeUnit.MILLISECONDS);      // Timed
abq.remainingCapacity();    // Remaining slots                → O(1)
abq.drainTo(list);          // Bulk consume                   → O(n)

LinkedBlockingQueue vs ArrayBlockingQueue:

  • LinkedBlockingQueue — separate take/put (head/tail) locks, better throughput; optionally unbounded
  • ArrayBlockingQueue — single lock, lower memory (no node objects), supports fair ordering
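The backpressure behavior is easy to see in a runnable sketch: once a capacity-3 buffer is full, non-blocking `offer()` is rejected instead of growing the queue. `BackpressureDemo` is our own illustration class.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureDemo {
    // Fill a capacity-3 buffer; the 4th non-blocking offer is rejected
    static boolean[] offerFour() {
        BlockingQueue<Integer> buf = new ArrayBlockingQueue<>(3);
        boolean[] results = new boolean[4];
        for (int i = 0; i < 4; i++) results[i] = buf.offer(i); // never blocks, returns false when full
        return results;
    }

    public static void main(String[] args) {
        for (boolean r : offerFour()) System.out.println(r); // true true true false
    }
}
```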

PriorityBlockingQueue

Unbounded blocking priority queue. Blocks only on take() when empty. Never blocks on put()/offer().

Best for: Priority-based task scheduling, Dijkstra's in multi-threaded context.

import java.util.concurrent.PriorityBlockingQueue;

PriorityBlockingQueue<Integer> pbq = new PriorityBlockingQueue<>();             // Min-heap
PriorityBlockingQueue<Task> pbq = new PriorityBlockingQueue<>(11, comparator); // Custom order
pbq.offer(5);               // Always succeeds (unbounded)    → O(log n)
pbq.put(5);                 // Same as offer (never blocks)   → O(log n)
pbq.take();                 // Blocks when empty              → O(log n)
pbq.poll();                 // Null if empty                  → O(log n)
pbq.peek();                 // View min (null if empty)       → O(1)
pbq.size();                 //                                → O(1)
pbq.drainTo(list);          // Bulk consume in priority order → O(n log n)

// Custom priority task
class Task implements Comparable<Task> {
    int priority;
    public int compareTo(Task o) { return Integer.compare(this.priority, o.priority); } // avoids subtraction overflow
}
PriorityBlockingQueue<Task> scheduler = new PriorityBlockingQueue<>();
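A runnable sketch of the drain behavior: elements offered out of order come back smallest-first. `PrioritySchedulerDemo` is our name, not JDK API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.PriorityBlockingQueue;

public class PrioritySchedulerDemo {
    // Offer out of order; drainTo removes in priority (heap) order
    static List<Integer> drainInOrder() {
        PriorityBlockingQueue<Integer> pbq = new PriorityBlockingQueue<>();
        pbq.offer(5);
        pbq.offer(1);
        pbq.offer(3);
        List<Integer> out = new ArrayList<>();
        pbq.drainTo(out); // repeatedly dequeues the min — sorted result
        return out;
    }

    public static void main(String[] args) {
        System.out.println(drainInOrder()); // [1, 3, 5]
    }
}
```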

LinkedBlockingDeque

Optionally-bounded blocking double-ended queue. Supports blocking operations on both ends.

import java.util.concurrent.LinkedBlockingDeque;

BlockingDeque<Integer> bdq = new LinkedBlockingDeque<>();
BlockingDeque<Integer> bdq = new LinkedBlockingDeque<>(100); // bounded

bdq.putFirst(1);            // Block until space at front     → O(1)
bdq.putLast(9);             // Block until space at back      → O(1)
bdq.takeFirst();            // Block until element at front   → O(1)
bdq.takeLast();             // Block until element at back    → O(1)
bdq.offerFirst(1, 100, TimeUnit.MILLISECONDS); // Timed
bdq.peekFirst();            // View front                     → O(1)
bdq.peekLast();             // View back                      → O(1)
bdq.pollFirst();            // Non-blocking remove front      → O(1)
bdq.pollLast();             // Non-blocking remove back       → O(1)

SynchronousQueue

Zero-capacity handoff queue. Each put() blocks until another thread calls take(), and vice versa. No buffering — direct thread-to-thread transfer.

Best for: Direct handoff patterns, Executors.newCachedThreadPool() work queue.

import java.util.concurrent.SynchronousQueue;

SynchronousQueue<Runnable> sq = new SynchronousQueue<>();
SynchronousQueue<Runnable> sq = new SynchronousQueue<>(true); // fair

sq.put(task);               // Block until consumer ready     → O(1)
sq.take();                  // Block until producer ready     → O(1)
sq.offer(task);             // Return false if no consumer    → O(1)
sq.poll();                  // Return null if no producer     → O(1)
sq.offer(task, 100, TimeUnit.MILLISECONDS);  // Timed handoff
sq.size();                  // Always 0
sq.isEmpty();               // Always true
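The rendezvous semantics can be demonstrated with two threads: the producer's `put()` parks until the main thread arrives at `take()`. `HandoffDemo` is our illustration class.

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    // put() blocks the producer until a consumer calls take() — zero buffering
    static String handoff() throws InterruptedException {
        SynchronousQueue<String> sq = new SynchronousQueue<>();
        Thread producer = new Thread(() -> {
            try { sq.put("payload"); } // parks here until the take() below
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        String received = sq.take(); // rendezvous: unblocks the producer
        producer.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handoff()); // payload
    }
}
```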

DelayQueue

Unbounded blocking queue where elements become available only after their delay has expired. Elements must implement Delayed.

Best for: Scheduled task execution, cache expiration, retry-after mechanisms.

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayedTask implements Delayed {
    long triggerTime; // System.nanoTime() + delay
    String name;

    DelayedTask(String name, long delayMs) {
        this.name = name;
        this.triggerTime = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
    }

    public long getDelay(TimeUnit unit) {
        return unit.convert(triggerTime - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    public int compareTo(Delayed o) {
        return Long.compare(this.triggerTime, ((DelayedTask) o).triggerTime);
    }
}

DelayQueue<DelayedTask> dq = new DelayQueue<>();

dq.offer(new DelayedTask("job1", 1000)); // available after 1s
dq.offer(new DelayedTask("job2", 500));  // available after 500ms

DelayedTask t = dq.take();     // Blocks until soonest task is ready
DelayedTask t = dq.poll();     // Null if no task is ready yet
dq.peek();                     // View soonest task (may not be ready)
dq.size();                     // All tasks (including not-yet-ready)
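The ordering guarantee is easy to verify: offer two elements out of order and `take()` returns the soonest-due one first. `DelayDemo` and its `Timed` helper are our own minimal `Delayed` implementation, along the same lines as `DelayedTask` above.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayDemo {
    // Minimal Delayed element (our own helper, not part of the JDK)
    static class Timed implements Delayed {
        final String name;
        final long due; // absolute deadline in nanos
        Timed(String name, long delayMs) {
            this.name = name;
            this.due = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
        }
        public long getDelay(TimeUnit unit) {
            return unit.convert(due - System.nanoTime(), TimeUnit.NANOSECONDS);
        }
        public int compareTo(Delayed o) {
            return Long.compare(due, ((Timed) o).due);
        }
    }

    // Offer out of order; take() hands back the soonest-due element first
    static String firstReady() throws InterruptedException {
        DelayQueue<Timed> dq = new DelayQueue<>();
        dq.offer(new Timed("slow", 150));
        dq.offer(new Timed("fast", 30));
        return dq.take().name; // blocks until the 30ms delay expires
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(firstReady()); // fast
    }
}
```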

Thread-Safe Set

CopyOnWriteArraySet

Thread-safe Set backed by a CopyOnWriteArrayList. Every write copies the whole array.

Best for: Small sets with rare writes (subscriber lists, whitelist/allowlist).

import java.util.concurrent.CopyOnWriteArraySet;

CopyOnWriteArraySet<String> set = new CopyOnWriteArraySet<>();
CopyOnWriteArraySet<String> set = new CopyOnWriteArraySet<>(existingCollection);
set.add("x");               // Copies array on write          → O(n)
set.remove("x");            // Copies array on write          → O(n)
set.contains("x");          // Lock-free read (linear scan)   → O(n)
set.size();                 //                                → O(1)
set.isEmpty();              //                                → O(1)
set.addAll(collection);     //                                → O(n*k)

// Iteration is always safe — snapshot
for (String s : set) System.out.println(s); // No CME ever

⚠️ No O(1) contains — backed by list, not hash table. Use only for small sets.
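The snapshot-iterator guarantee can be shown directly: mutating the set mid-iteration is legal, and the iterator only ever sees the elements present when it was created. `SnapshotIterationDemo` is our illustration class.

```java
import java.util.concurrent.CopyOnWriteArraySet;

public class SnapshotIterationDemo {
    // Adding during iteration is safe: the iterator walks the old snapshot
    static int iterateWhileAdding() {
        CopyOnWriteArraySet<Integer> set = new CopyOnWriteArraySet<>();
        set.add(1);
        set.add(2);
        set.add(3);
        int seen = 0;
        for (Integer i : set) {  // iterator pinned to the snapshot taken here
            set.add(i + 100);    // no ConcurrentModificationException
            seen++;
        }
        return seen; // only the original 3 elements are visited
    }

    public static void main(String[] args) {
        System.out.println(iterateWhileAdding()); // 3
    }
}
```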


ConcurrentSkipListSet

Thread-safe, sorted Set equivalent of TreeSet. Based on a skip list (probabilistic balanced structure). Lock-free for most operations.

Best for: Sorted sets with concurrent access, range queries in multi-threaded code.

import java.util.concurrent.ConcurrentSkipListSet;

ConcurrentSkipListSet<Integer> css = new ConcurrentSkipListSet<>();
ConcurrentSkipListSet<Integer> css = new ConcurrentSkipListSet<>(Collections.reverseOrder());
ConcurrentSkipListSet<Integer> css = new ConcurrentSkipListSet<>(existingCollection);

Most Used Methods

css.add(5);                 // Insert                         → O(log n)
css.remove(5);              // Delete                         → O(log n)
css.contains(5);            // Search                         → O(log n)
css.size();                 // ⚠️ O(n) — traverse all nodes
css.isEmpty();              //                                → O(1)

// ─── Navigation (mirrors TreeSet) ───────────────────────
css.first();                // Smallest element               → O(log n)
css.last();                 // Largest element                → O(log n)
css.floor(5);               // Largest ≤ 5                    → O(log n)
css.ceiling(5);             // Smallest ≥ 5                   → O(log n)
css.lower(5);               // Largest < 5                    → O(log n)
css.higher(5);              // Smallest > 5                   → O(log n)
css.pollFirst();            // Remove & return smallest       → O(log n)
css.pollLast();             // Remove & return largest        → O(log n)
css.headSet(5);             // View of elements < 5           → O(log n)
css.tailSet(5);             // View of elements ≥ 5           → O(log n)
css.subSet(3, 8);           // View of elements [3, 8)        → O(log n)
css.descendingSet();        // Reverse-order view             → O(1)

Concurrency Pattern

// Online leaderboard (sorted, concurrent updates)
ConcurrentSkipListSet<Long> scores = new ConcurrentSkipListSet<>(Collections.reverseOrder());
scores.add(9500L);
scores.add(8800L);
// Top 3 at any moment (lock-free)
scores.stream().limit(3).forEach(System.out::println);

synchronizedSet Wrapper

Set<Integer> safe = Collections.synchronizedSet(new HashSet<>());
Set<Integer> sortedSafe = Collections.synchronizedSortedSet(new TreeSet<>());

safe.add(5);        // Synchronized
safe.remove(5);     // Synchronized
safe.contains(5);   // Synchronized

// ⚠️ Manual sync required for iteration
synchronized (safe) {
    for (Integer i : safe) System.out.println(i);
}

Thread-Safe Map

ConcurrentHashMap

⭐ The most important concurrent collection in Java. Reads are lock-free (volatile); writes CAS into empty buckets and otherwise lock only the head node of a single bin. Replaced Hashtable completely in modern Java.

Best for: Concurrent caches, frequency counters, shared state maps in any multi-threaded app.

import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>(capacity, loadFactor, concurrencyLevel);

Most Used Methods

// ─── Basic (same as HashMap) ────────────────────────────
map.put("a", 1);                        // Insert/overwrite         → O(1)
map.get("a");                           // Read (lock-free)         → O(1)
map.remove("a");                        // Delete                   → O(1)
map.containsKey("a");                   // Check key (lock-free)    → O(1)
map.size();                             // ⚠️ Approximate under concurrent mods
map.isEmpty();                          //                          → O(1)

// ─── Atomic Compound Operations (KEY advantage) ─────────
map.putIfAbsent("a", 0);                // Insert only if missing   → O(1) atomic
map.replace("a", 1, 2);                 // CAS replace              → O(1) atomic
map.remove("a", 1);                     // Remove if value matches  → O(1) atomic

// ─── Atomic Compute (most powerful) ─────────────────────
map.merge("a", 1, Integer::sum);        // Atomic increment         → O(1)
map.computeIfAbsent("a", k -> 0);       // Init if absent           → O(1)
map.computeIfPresent("a", (k, v) -> v + 1); // Update if present    → O(1)
map.compute("a", (k, v) -> v == null ? 1 : v + 1); // Atomic upsert→ O(1)
map.getOrDefault("a", 0);              // Read with default         → O(1)

// ─── Bulk Operations (Java 8+) ──────────────────────────
map.forEach(1, (k, v) -> System.out.println(k + "=" + v)); // parallelism threshold=1
map.replaceAll((k, v) -> v * 2);        // Transform all values     → O(n)

// Parallel reduce/search (use with large maps)
int total = map.reduceValues(1, Integer::sum);          // sum all values
String found = map.searchKeys(1, k -> k.startsWith("a") ? k : null);

// ─── Views ──────────────────────────────────────────────
// All views are live, weakly-consistent (reflect concurrent mods)
for (Map.Entry<String, Integer> e : map.entrySet()) { ... }
for (String key : map.keySet()) { ... }
for (int val : map.values()) { ... }

// Thread-safe Set view (backed by ConcurrentHashMap)
Set<String> safeSet = ConcurrentHashMap.newKeySet();
Set<String> safeSet = ConcurrentHashMap.newKeySet(expectedSize);

Concurrency Patterns

// ─── Concurrent Frequency Counter ───────────────────────
ConcurrentHashMap<String, Integer> freq = new ConcurrentHashMap<>();
words.parallelStream().forEach(w -> freq.merge(w, 1, Integer::sum));

// ─── Thread-safe cache with lazy init ───────────────────
ConcurrentHashMap<Integer, Result> cache = new ConcurrentHashMap<>();
Result r = cache.computeIfAbsent(key, k -> expensiveCompute(k));

// ─── CAS-based counter (no AtomicInteger needed) ────────
map.merge("count", 1, Integer::sum);

// ─── Avoid double-check locking antipattern ─────────────
// ❌ Not atomic:
if (!map.containsKey(k)) map.put(k, v);
// ✅ Atomic:
map.putIfAbsent(k, v);

// ─── Concurrent graph adjacency list ────────────────────
ConcurrentHashMap<Integer, Set<Integer>> graph = new ConcurrentHashMap<>();
graph.computeIfAbsent(node, k -> ConcurrentHashMap.newKeySet()).add(neighbor);

// ─── Remove entry when value reaches zero ───────────────
map.computeIfPresent("a", (k, v) -> v <= 1 ? null : v - 1); // null removes key
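The frequency-counter pattern above can be verified under real contention: many threads `merge()` the same key and no update is lost. `MergeCounterDemo` is our own test class.

```java
import java.util.concurrent.ConcurrentHashMap;

public class MergeCounterDemo {
    // N threads each merge() the same key perThread times — no lost updates
    static int count(int threads, int perThread) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++)
                    map.merge("hits", 1, Integer::sum); // atomic read-modify-write
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        return map.get("hits");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count(8, 1000)); // 8000 — a plain HashMap would lose updates
    }
}
```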

ConcurrentSkipListMap

Thread-safe sorted Map equivalent of TreeMap. Lock-free, based on skip lists.

Best for: Sorted concurrent maps, time-series data, range queries across threads.

import java.util.concurrent.ConcurrentSkipListMap;

ConcurrentSkipListMap<Integer, String> csm = new ConcurrentSkipListMap<>();
ConcurrentSkipListMap<Integer, String> csm = new ConcurrentSkipListMap<>(Collections.reverseOrder());

Most Used Methods

csm.put(3, "c");                // Insert                    → O(log n)
csm.get(3);                     // Get by key                → O(log n)
csm.remove(3);                  // Delete                    → O(log n)
csm.putIfAbsent(3, "c");        // Atomic put if absent      → O(log n)
csm.replace(3, "c", "d");       // CAS replace               → O(log n)
csm.containsKey(3);             //                           → O(log n)

// ─── Navigation (same as TreeMap) ───────────────────────
csm.firstKey();                 // Smallest key              → O(log n)
csm.lastKey();                  // Largest key               → O(log n)
csm.floorKey(5);                // Largest key ≤ 5           → O(log n)
csm.ceilingKey(5);              // Smallest key ≥ 5          → O(log n)
csm.lowerKey(5);                // Largest key < 5           → O(log n)
csm.higherKey(5);               // Smallest key > 5          → O(log n)
csm.pollFirstEntry();           // Remove & return min entry → O(log n)
csm.pollLastEntry();            // Remove & return max entry → O(log n)
csm.headMap(5);                 // Keys < 5 (live view)      → O(log n)
csm.tailMap(5);                 // Keys ≥ 5 (live view)      → O(log n)
csm.subMap(3, 8);               // Keys [3, 8) (live view)   → O(log n)
csm.firstEntry();               // Min key-value pair        → O(log n)
csm.lastEntry();                // Max key-value pair        → O(log n)

Concurrency Pattern

// Timestamp-based event log (sorted, concurrent writes)
ConcurrentSkipListMap<Long, String> eventLog = new ConcurrentSkipListMap<>();
eventLog.put(System.nanoTime(), "User login");

// Range query: events in last 5 seconds
long now = System.nanoTime();
eventLog.subMap(now - 5_000_000_000L, now).forEach((t, e) -> System.out.println(e));

Hashtable (Legacy)

The original thread-safe map — every method locks on the entire object. Entirely superseded by ConcurrentHashMap.

⚠️ Never use in new code. No null keys or values. Global lock kills throughput.

Hashtable<String, Integer> ht = new Hashtable<>();
ht.put("a", 1);         // Locks entire table
ht.get("a");            // Locks entire table
ht.remove("a");         // Locks entire table
ht.containsKey("a");    // Locks entire table
ht.containsValue(1);    // O(n) + global lock
ht.keys();              // Returns Enumeration (old API)
ht.elements();          // Returns Enumeration of values

synchronizedMap Wrapper

Map<String, Integer> safe = Collections.synchronizedMap(new HashMap<>());
Map<String, Integer> sorted = Collections.synchronizedSortedMap(new TreeMap<>());

// All single ops are synchronized
safe.put("a", 1);
safe.get("a");
safe.remove("a");
safe.putIfAbsent("a", 0); // Atomic — the wrapper holds its lock for the whole call, but every op serializes

// ⚠️ Manual sync required for iteration
synchronized (safe) {
    for (Map.Entry<String, Integer> e : safe.entrySet()) { ... }
}

// ⚠️ Manual sync for compound ops
synchronized (safe) {
    if (!safe.containsKey("a")) safe.put("a", 0);
}
// Prefer ConcurrentHashMap.putIfAbsent() instead!

Atomic Variables

Atomic types use CPU-level Compare-And-Swap (CAS) — no locks, no blocking, very fast.

AtomicInteger / AtomicLong

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

AtomicInteger counter = new AtomicInteger(0);
AtomicLong bigCounter = new AtomicLong(0L);

Most Used Methods

counter.get();                      // Read current value             → O(1)
counter.set(5);                     // Write value                    → O(1)
counter.getAndSet(10);              // Swap, return old               → O(1)

// ─── Increment / Decrement ──────────────────────────────
counter.incrementAndGet();          // ++counter (returns new)        → O(1)
counter.getAndIncrement();          // counter++ (returns old)        → O(1)
counter.decrementAndGet();          // --counter (returns new)        → O(1)
counter.getAndDecrement();          // counter-- (returns old)        → O(1)

// ─── Add ────────────────────────────────────────────────
counter.addAndGet(5);               // counter += 5, return new       → O(1)
counter.getAndAdd(5);               // return old, counter += 5       → O(1)

// ─── CAS (Compare-And-Swap) ─────────────────────────────
counter.compareAndSet(0, 1);        // If current==0, set to 1        → O(1) atomic
counter.compareAndExchange(0, 1);   // CAS, returns witness value (Java 9+) → O(1)

// ─── Functional update (Java 8+) ────────────────────────
counter.updateAndGet(v -> v * 2);   // Atomic transform               → O(1)
counter.getAndUpdate(v -> v * 2);   // Return old, then transform     → O(1)
counter.accumulateAndGet(5, Integer::sum); // Atomic merge             → O(1)

Concurrency Patterns

// Thread-safe counter (no synchronized needed)
AtomicInteger count = new AtomicInteger(0);
for (int i = 0; i < 10; i++) pool.execute(count::incrementAndGet); // pool = some ExecutorService

// Non-blocking try-acquire (lock-free mutex pattern)
AtomicInteger lock = new AtomicInteger(0);
if (lock.compareAndSet(0, 1)) {
    try { criticalSection(); }
    finally { lock.set(0); }
}

// Request ID generator
AtomicLong requestId = new AtomicLong(0);
long id = requestId.getAndIncrement(); // unique IDs across threads

// LongAdder for high-contention counters (better than AtomicLong)
import java.util.concurrent.atomic.LongAdder;
LongAdder adder = new LongAdder();
adder.increment();      // Striped — less contention than AtomicLong
adder.add(5);
adder.sum();            // Total across all stripes
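A runnable sketch of the striped counter under contention: many threads hammer `increment()`, and `sum()` still folds the stripes into an exact total. `StripedCounterDemo` is our own test class.

```java
import java.util.concurrent.atomic.LongAdder;

public class StripedCounterDemo {
    // Heavy concurrent increments: LongAdder stripes cells to dodge CAS contention
    static long hammer(int threads, int perThread) throws InterruptedException {
        LongAdder adder = new LongAdder();
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) adder.increment();
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        return adder.sum(); // exact total across all stripes (at quiescence)
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(hammer(8, 10_000)); // 80000
    }
}
```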

AtomicReference

Atomic reference to any object. Enables lock-free updates to shared objects.

import java.util.concurrent.atomic.AtomicReference;

AtomicReference<String> ref = new AtomicReference<>("initial");

ref.get();                          // Read current reference
ref.set("new");                     // Write reference
ref.getAndSet("newer");             // Swap, return old
ref.compareAndSet("new", "newest"); // CAS on reference identity
ref.updateAndGet(s -> s.toUpperCase()); // Functional update

Concurrency Pattern

// Lock-free stack (Treiber stack) — Node<T> is a simple {val, next} holder
AtomicReference<Node<T>> top = new AtomicReference<>(null);

void push(T val) {
    Node<T> newNode = new Node<>(val);
    do {
        newNode.next = top.get();
    } while (!top.compareAndSet(newNode.next, newNode));
}

T pop() {
    Node<T> curr;
    do {
        curr = top.get();
        if (curr == null) return null;
    } while (!top.compareAndSet(curr, curr.next));
    return curr.val;
}
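The sketch above can be completed into a self-contained class, with the `Node` holder defined. This is our own runnable version of the standard Treiber stack, not a JDK class.

```java
import java.util.concurrent.atomic.AtomicReference;

public class TreiberStack<T> {
    private static class Node<T> {
        final T val;
        Node<T> next;
        Node(T val) { this.val = val; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T val) {
        Node<T> newNode = new Node<>(val);
        do {
            newNode.next = top.get();                        // read current top
        } while (!top.compareAndSet(newNode.next, newNode)); // retry on contention
    }

    public T pop() {
        Node<T> curr;
        do {
            curr = top.get();
            if (curr == null) return null;                   // empty stack
        } while (!top.compareAndSet(curr, curr.next));
        return curr.val;
    }

    public static void main(String[] args) {
        TreiberStack<Integer> s = new TreiberStack<>();
        s.push(1); s.push(2); s.push(3);
        System.out.println(s.pop()); // 3 — LIFO order
    }
}
```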

AtomicBoolean

import java.util.concurrent.atomic.AtomicBoolean;

AtomicBoolean flag = new AtomicBoolean(false);

flag.get();                         // Read
flag.set(true);                     // Write
flag.getAndSet(true);               // Swap
flag.compareAndSet(false, true);    // CAS — safe one-time init

// One-time initialization pattern
AtomicBoolean initialized = new AtomicBoolean(false);
if (initialized.compareAndSet(false, true)) {
    init(); // only one thread ever runs this
}
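The one-time-init guarantee can be checked by racing many threads at the `compareAndSet`: exactly one wins. `OnceDemo` is our own test class.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class OnceDemo {
    // Many threads race to initialize; compareAndSet lets exactly one through
    static int raceInit(int threads) throws InterruptedException {
        AtomicBoolean initialized = new AtomicBoolean(false);
        AtomicInteger initCount = new AtomicInteger(0);
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                if (initialized.compareAndSet(false, true))
                    initCount.incrementAndGet(); // stands in for the real init() work
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        return initCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(raceInit(16)); // always 1
    }
}
```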

Non-Thread-Safe → Thread-Safe Migration

Drop-in Replacements

// ─── List ────────────────────────────────────────────────
// Before (unsafe):
List<String> list = new ArrayList<>();

// After (read-heavy):
List<String> list = new CopyOnWriteArrayList<>();

// After (write-heavy / general):
List<String> list = Collections.synchronizedList(new ArrayList<>());

// ─── Set ────────────────────────────────────────────────
// Before:
Set<Integer> set = new HashSet<>();
// After:
Set<Integer> set = ConcurrentHashMap.newKeySet(); // ⭐ preferred
Set<Integer> set = new CopyOnWriteArraySet<>();   // small, read-heavy

// Before (sorted):
Set<Integer> set = new TreeSet<>();
// After:
Set<Integer> set = new ConcurrentSkipListSet<>();

// ─── Map ────────────────────────────────────────────────
// Before:
Map<String, Integer> map = new HashMap<>();
// After:
Map<String, Integer> map = new ConcurrentHashMap<>(); // ⭐ always preferred

// Before (sorted):
Map<Integer, String> map = new TreeMap<>();
// After:
Map<Integer, String> map = new ConcurrentSkipListMap<>();

// ─── Queue ──────────────────────────────────────────────
// Before:
Queue<Task> q = new ArrayDeque<>();
// After (non-blocking):
Queue<Task> q = new ConcurrentLinkedQueue<>();
// After (blocking):
BlockingQueue<Task> q = new LinkedBlockingQueue<>();

// ─── Priority Queue ──────────────────────────────────────
// Before:
PriorityQueue<Task> pq = new PriorityQueue<>();
// After:
PriorityBlockingQueue<Task> pq = new PriorityBlockingQueue<>();

Common Mistakes to Avoid

// ❌ WRONG: ConcurrentHashMap but non-atomic compound op
if (!map.containsKey(k)) map.put(k, v);           // Race condition!
// ✅ RIGHT:
map.putIfAbsent(k, v);

// ❌ WRONG: ConcurrentHashMap increment with get+put
map.put(k, map.getOrDefault(k, 0) + 1);           // Race condition!
// ✅ RIGHT:
map.merge(k, 1, Integer::sum);

// ❌ WRONG: Iterating synchronizedList without lock
for (String s : synchronizedList) { ... }         // ConcurrentModificationException!
// ✅ RIGHT:
synchronized (synchronizedList) {
    for (String s : synchronizedList) { ... }
}

// ❌ WRONG: Using size() to check ConcurrentLinkedQueue
if (queue.size() == 0) { ... }  // O(n) + may be stale
// ✅ RIGHT:
if (queue.isEmpty()) { ... }   // O(1) and consistent

// ❌ WRONG: Sharing a non-thread-safe collection across threads
List<Integer> shared = new ArrayList<>();          // Data corruption!
// ✅ RIGHT:
List<Integer> shared = new CopyOnWriteArrayList<>();

Concurrency Quick-Pick Guide

| Scenario | Best Choice |
|---|---|
| Concurrent read-heavy map | ConcurrentHashMap |
| Concurrent frequency counter | ConcurrentHashMap.merge() |
| Concurrent cache | ConcurrentHashMap.computeIfAbsent() |
| Sorted concurrent map | ConcurrentSkipListMap |
| Thread-safe set (general) | ConcurrentHashMap.newKeySet() |
| Thread-safe sorted set | ConcurrentSkipListSet |
| Read-heavy list (rare writes) | CopyOnWriteArrayList |
| Concurrent FIFO queue | ConcurrentLinkedQueue |
| Blocking producer-consumer | LinkedBlockingQueue |
| Bounded buffer / backpressure | ArrayBlockingQueue |
| Priority-based task scheduling | PriorityBlockingQueue |
| Scheduled / delayed tasks | DelayQueue |
| Direct thread-to-thread handoff | SynchronousQueue |
| Thread-safe deque (stack/queue) | ConcurrentLinkedDeque |
| Blocking deque | LinkedBlockingDeque |
| High-throughput counter | LongAdder (striped) |
| Low-contention counter | AtomicInteger / AtomicLong |
| One-time initialization flag | AtomicBoolean.compareAndSet() |
| Lock-free object reference swap | AtomicReference |
| LRU Cache | LinkedHashMap + synchronized, or ConcurrentHashMap + custom eviction |
| Simple sync (legacy/simple apps) | Collections.synchronized*() + manual lock for iteration |

Complexity & Locking Cheat Sheet

Time Complexity

| Operation | ConcurrentHashMap | ConcurrentSkipListMap | CopyOnWriteArrayList | LinkedBlockingQueue | PriorityBlockingQueue |
|---|---|---|---|---|---|
| Add / Put | O(1) avg | O(log n) | O(n) | O(1) | O(log n) |
| Get / Peek | O(1) avg | O(log n) | O(1) | O(1) | O(1) |
| Remove | O(1) avg | O(log n) | O(n) | O(1) | O(log n) |
| Contains | O(1) avg | O(log n) | O(n) | O(n) | O(n) |
| Size | O(1)* | O(n) | O(1) | O(1) | O(1) |
| Iteration | O(n) | O(n) | O(n) snapshot | O(n) | O(n) |

*ConcurrentHashMap.size() is an approximation under concurrent modification.

Locking Behavior

| Collection | Read Lock | Write Lock | Blocking? |
|---|---|---|---|
| ConcurrentHashMap | None (volatile) | CAS per bucket | No |
| ConcurrentSkipListMap/Set | None (CAS) | CAS per node | No |
| CopyOnWriteArrayList/Set | None (snapshot) | Full object lock | No |
| ConcurrentLinkedQueue/Deque | None (CAS) | CAS per node | No |
| LinkedBlockingQueue | Head (take) lock | Tail (put) lock | Yes (put/take) |
| ArrayBlockingQueue | Single lock | Single lock | Yes (put/take) |
| PriorityBlockingQueue | Single lock | Single lock | Yes (take only) |
| SynchronousQueue | N/A (no storage) | N/A | Yes (always) |
| Hashtable | Full object lock | Full object lock | No |
| Vector | Full object lock | Full object lock | No |
| Collections.synchronized* | Full object lock | Full object lock | No |
| AtomicInteger/Long/Reference | None (volatile) | CAS | No |

💡 Golden Rules for Concurrent Collections:

  • Default to ConcurrentHashMap — it's almost always the right map choice.
  • Never use Hashtable or Collections.synchronizedMap() in new code.
  • merge() and computeIfAbsent() are the two atomic operations you'll use most.
  • Iteration on synchronized* wrappers always needs an external synchronized block.
  • Prefer LinkedBlockingQueue over ArrayBlockingQueue unless you need a fixed-size buffer or fair ordering.
  • Use LongAdder instead of AtomicLong when many threads increment a counter simultaneously.
  • CopyOnWriteArrayList is only efficient when reads >> writes — every write is O(n).
  • size() on ConcurrentLinkedQueue/ConcurrentLinkedDeque/ConcurrentSkipListMap is O(n) — use isEmpty() when possible.

@carefree-ladka
Author

Java Thread-Safe Collections — Complete Reference

A categorized guide to Java's thread-safe collection equivalents — covering java.util.concurrent, synchronized wrappers, and atomic types. Includes most-used methods, concurrency patterns, and when to prefer one over another.


Table of Contents

  1. Concurrency Overview
  2. Thread-Safe List
  3. Thread-Safe Stack
  4. Thread-Safe Queue & Deque
  5. Thread-Safe Set
  6. Thread-Safe Map
  7. Atomic Variables
  8. Non-Thread-Safe → Thread-Safe Migration
  9. Concurrency Quick-Pick Guide
  10. Complexity & Locking Cheat Sheet

Concurrency Overview

Two Approaches to Thread Safety

Thread-Safe Collections
│
├── 1. java.util.concurrent (preferred)
│   ├── Lock-free / CAS-based   → ConcurrentHashMap, ConcurrentLinkedQueue
│   ├── Blocking / bounded      → ArrayBlockingQueue, LinkedBlockingQueue
│   ├── Copy-on-write           → CopyOnWriteArrayList, CopyOnWriteArraySet
│   └── Skip-list (sorted)      → ConcurrentSkipListMap, ConcurrentSkipListSet
│
└── 2. Synchronized Wrappers (legacy / simple use)
    └── Collections.synchronized*()  → wraps any collection with a mutex

Non-Thread-Safe → Thread-Safe Equivalents

| Non-Thread-Safe | Thread-Safe Equivalent | Notes |
|---|---|---|
| ArrayList | CopyOnWriteArrayList | Best for read-heavy |
| ArrayList | Collections.synchronizedList() | General purpose |
| HashSet | CopyOnWriteArraySet | Read-heavy sets |
| HashSet | Collections.synchronizedSet() | General purpose |
| TreeSet | ConcurrentSkipListSet | Sorted + concurrent |
| HashMap | ConcurrentHashMap | ⭐ Most used |
| TreeMap | ConcurrentSkipListMap | Sorted + concurrent |
| HashMap | Collections.synchronizedMap() | Full-lock, avoid |
| LinkedList (queue) | ConcurrentLinkedQueue | Lock-free FIFO |
| ArrayDeque | ConcurrentLinkedDeque | Lock-free deque |
| PriorityQueue | PriorityBlockingQueue | Blocking heap |
| Stack | ConcurrentLinkedDeque | Preferred stack |

Locking Strategies

| Strategy | How it Works | Example |
|---|---|---|
| Intrinsic lock | synchronized on entire object | Vector, Hashtable |
| Segment lock (old) | Lock per bucket segment | ConcurrentHashMap < Java 8 |
| CAS (lock-free) | Compare-And-Swap CPU instruction | ConcurrentHashMap ≥ Java 8, AtomicInteger |
| Copy-on-write | Write makes full copy; reads never block | CopyOnWriteArrayList |
| ReentrantLock | Explicit lock, supports tryLock / timeout | LinkedBlockingQueue |
| Skip list | Probabilistic sorted structure | ConcurrentSkipListMap/Set |

Thread-Safe List

CopyOnWriteArrayList

Thread-safe counterpart of ArrayList. Every write creates a fresh copy of the underlying array. Reads are always lock-free.

Best for: Read-heavy workloads (event listeners, observer lists, config snapshots).
Avoid when: Writes are frequent — each write is O(n).

import java.util.concurrent.CopyOnWriteArrayList;

CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();

// Init from existing
CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>(Arrays.asList(1, 2, 3));

Most Used Methods

// ─── Add ────────────────────────────────────────────────
list.add("a");                  // Append (copies array)       → O(n)
list.add(0, "x");               // Insert at index             → O(n)
list.addIfAbsent("b");          // Add only if not present     → O(n)
list.addAllAbsent(otherList);   // Add missing elements        → O(n*k)
list.addAll(otherList);         // Append all                  → O(n+k)

// ─── Access ─────────────────────────────────────────────
list.get(2);                    // Random access               → O(1)
list.size();                    // Count                       → O(1)
list.isEmpty();                 // Check empty                 → O(1)
list.contains("a");             // Linear search               → O(n)
list.indexOf("a");              // First index                 → O(n)

// ─── Remove ─────────────────────────────────────────────
list.remove("a");               // Remove by value             → O(n)
list.remove(2);                 // Remove by index             → O(n)
list.clear();                   // Empty (copies empty array)  → O(1)

// ─── Iteration (SAFE — snapshot of array at call time) ──
for (String s : list) {         // No ConcurrentModificationException
    System.out.println(s);
}
// Iterator reflects array state AT time iterator was created
// Mutations during iteration are invisible to the iterator — by design

// ─── Snapshot trick ─────────────────────────────────────
Object[] snapshot = list.toArray(); // Safe copy for processing

Concurrency Patterns

// Event listener registry (classic CopyOnWriteArrayList use case)
CopyOnWriteArrayList<EventListener> listeners = new CopyOnWriteArrayList<>();

// Thread A — registers listener (rare)
listeners.add(myListener);

// Threads B, C, D — fire events concurrently (frequent, lock-free)
for (EventListener l : listeners) l.onEvent(event);

// Safe removal during traversal
for (EventListener l : listeners) {
    if (l.isExpired()) listeners.remove(l); // safe but expensive
}

// ─── Compound operation (still needs external sync) ─────
// check-then-act is NOT atomic even with CopyOnWriteArrayList
synchronized (list) {
    if (!list.contains(x)) list.add(x); // use addIfAbsent() instead!
}
// Better:
list.addIfAbsent(x); // atomic check-then-add

Vector (Legacy)

Synchronized version of ArrayList — every method acquires the intrinsic lock. Largely replaced by CopyOnWriteArrayList and Collections.synchronizedList().

⚠️ Avoid in new code. All methods lock on the entire object — very low throughput under contention.

Vector<Integer> v = new Vector<>();
v.add(1); v.add(2); v.add(3);

// Same API as ArrayList — all methods are synchronized
v.get(0);       // O(1) but locks
v.remove(0);    // O(n) + locks
v.size();       // O(1) + locks

// Extra methods (not in ArrayList)
v.firstElement();   // Like get(0)
v.lastElement();    // Like get(size-1)
v.elementAt(2);     // Same as get(2)
v.addElement(5);    // Same as add(5)
v.removeElement(5); // Same as remove(Integer.valueOf(5))
v.capacity();       // Current internal array capacity

synchronizedList Wrapper

Wraps any List with a mutex. A better fit than CopyOnWriteArrayList when writes are frequent.

import java.util.Collections;

List<Integer> safe = Collections.synchronizedList(new ArrayList<>());
List<Integer> safe = Collections.synchronizedList(new LinkedList<>());
// All single operations are automatically synchronized
safe.add(5);
safe.get(2);
safe.remove(Integer.valueOf(5));
safe.size();
safe.contains(3);

// ⚠️ CRITICAL: Manual sync required for iteration
synchronized (safe) {
    for (Integer i : safe) {
        System.out.println(i);     // Must hold lock during iteration
    }
}

// ⚠️ CRITICAL: Compound operations also need manual sync
synchronized (safe) {
    if (!safe.contains(x)) safe.add(x);
}

Thread-Safe Stack

ConcurrentLinkedDeque as Stack

There is no dedicated thread-safe Stack in java.util.concurrent. Use ConcurrentLinkedDeque.

import java.util.concurrent.ConcurrentLinkedDeque;

Deque<Integer> stack = new ConcurrentLinkedDeque<>();
stack.push(10);         // addFirst() — lock-free       → O(1)
stack.pop();            // removeFirst() — lock-free    → O(1)
stack.peek();           // peekFirst() — lock-free      → O(1)
stack.isEmpty();        //                              → O(1)
stack.size();           // ⚠️ O(n) — traverses all nodes

LinkedBlockingDeque is the blocking alternative — use when you need push to block when full, or pop to block when empty.
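A minimal sketch of that blocking, bounded stack behavior (capacity, values, and class name are illustrative): the deque fills up, a timed push is refused, and pop still returns the most recent element.

```java
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;

public class BlockingStackDemo {
    // Bounded LIFO: putFirst blocks when full, takeFirst blocks when empty
    static int demo() throws InterruptedException {
        BlockingDeque<Integer> stack = new LinkedBlockingDeque<>(2); // capacity 2
        stack.putFirst(1);                                           // push
        stack.putFirst(2);                                           // push; stack is now full
        boolean accepted = stack.offerFirst(3, 50, TimeUnit.MILLISECONDS); // times out: false
        int top = stack.takeFirst();                                 // pop returns 2 (LIFO)
        return accepted ? -1 : top;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 2
    }
}
```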


Thread-Safe Queue & Deque

ConcurrentLinkedQueue

Lock-free, unbounded FIFO queue based on the Michael-Scott non-blocking algorithm. The best general-purpose concurrent queue.

Best for: Producer-consumer with multiple threads, task queues, work-stealing.

import java.util.concurrent.ConcurrentLinkedQueue;

Queue<Task> queue = new ConcurrentLinkedQueue<>();

Most Used Methods

queue.offer(task);      // Enqueue (always true)       → O(1) lock-free
queue.poll();           // Dequeue head (null if empty)→ O(1) lock-free
queue.peek();           // View head (null if empty)   → O(1) lock-free
queue.isEmpty();        // May be inaccurate mid-op    → O(1)
queue.size();           // ⚠️ O(n) — not O(1)!
queue.contains(task);   // Linear scan                 → O(n)
queue.remove(task);     // Remove specific element     → O(n)

// ─── Drain (bulk consume) ───────────────────────────────
// No drainTo() here: drainTo() is a BlockingQueue method. Poll in a loop instead:
List<Task> batch = new ArrayList<>();
Task t;
while ((t = queue.poll()) != null) batch.add(t);

Concurrency Patterns

// Multi-producer, multi-consumer
ConcurrentLinkedQueue<Runnable> taskQueue = new ConcurrentLinkedQueue<>();

// Producer threads
executor.submit(() -> taskQueue.offer(new MyTask()));

// Consumer threads
executor.submit(() -> {
    while (running) {
        Runnable t = taskQueue.poll();
        if (t != null) {
            t.run();
        } else {
            try { Thread.sleep(1); }                      // back-off when idle
            catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
        }
    }
});

ConcurrentLinkedDeque

Lock-free, unbounded double-ended queue.

import java.util.concurrent.ConcurrentLinkedDeque;

Deque<Integer> dq = new ConcurrentLinkedDeque<>();

dq.offerFirst(1);       // Add to front                → O(1)
dq.offerLast(9);        // Add to back                 → O(1)
dq.peekFirst();         // View front                  → O(1)
dq.peekLast();          // View back                   → O(1)
dq.pollFirst();         // Remove from front           → O(1)
dq.pollLast();          // Remove from back            → O(1)
dq.size();              // ⚠️ O(n) — traverse all nodes

LinkedBlockingQueue

Optionally-bounded blocking queue backed by linked nodes. Separate head/tail locks = higher throughput than ArrayBlockingQueue for asymmetric producer-consumer loads.

Best for: Classic producer-consumer with blocking behavior, ThreadPoolExecutor work queues.

import java.util.concurrent.LinkedBlockingQueue;

BlockingQueue<Task> bq = new LinkedBlockingQueue<>();          // Unbounded
BlockingQueue<Task> bq = new LinkedBlockingQueue<>(100);       // Capacity 100

Most Used Methods

// ─── Non-blocking ───────────────────────────────────────
bq.offer(task);             // Enqueue, returns false if full  → O(1)
bq.poll();                  // Dequeue head, null if empty     → O(1)
bq.peek();                  // View head, null if empty        → O(1)

// ─── Blocking (KEY advantage over ConcurrentLinkedQueue) ─
bq.put(task);               // Block until space available     → O(1)
bq.take();                  // Block until element available   → O(1)

// ─── Timed (try with timeout) ───────────────────────────
bq.offer(task, 200, TimeUnit.MILLISECONDS); // Wait up to 200ms
bq.poll(200, TimeUnit.MILLISECONDS);        // Wait up to 200ms

// ─── Bulk ───────────────────────────────────────────────
bq.drainTo(list);           // Move all to list               → O(n)
bq.drainTo(list, 10);       // Move up to 10 elements         → O(k)

// ─── Info ───────────────────────────────────────────────
bq.size();                  // Current size                   → O(1)
bq.remainingCapacity();     // Space left                     → O(1)
bq.isEmpty();               //                                → O(1)
bq.contains(task);          // Linear scan                    → O(n)
bq.remove(task);            // Remove specific element        → O(n)

Concurrency Patterns

// Classic Producer-Consumer
BlockingQueue<String> queue = new LinkedBlockingQueue<>(50);

// Producer
new Thread(() -> {
    try {
        while (true) {
            queue.put(produceItem()); // blocks when full
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();

// Consumer
new Thread(() -> {
    try {
        while (true) {
            String item = queue.take(); // blocks when empty
            process(item);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();

// Drain and batch-process
List<Task> batch = new ArrayList<>();
queue.drainTo(batch, 20); // grab up to 20 tasks at once
processBatch(batch);

// Thread pool with bounded queue
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    4, 8, 60L, TimeUnit.SECONDS,
    new LinkedBlockingQueue<>(200) // work queue
);

ArrayBlockingQueue

Bounded blocking queue backed by a fixed array. Single lock for both head and tail — simpler but lower throughput than LinkedBlockingQueue under high concurrency.

Best for: Fixed-size bounded buffers, rate-limiting, backpressure.

import java.util.concurrent.ArrayBlockingQueue;

BlockingQueue<Integer> abq = new ArrayBlockingQueue<>(100);       // capacity required
BlockingQueue<Integer> abq = new ArrayBlockingQueue<>(100, true); // fair ordering (FIFO for waiting threads)
// Same API as LinkedBlockingQueue
abq.put(5);                 // Block until space              → O(1)
abq.take();                 // Block until element            → O(1)
abq.offer(5);               // Non-blocking enqueue           → O(1)
abq.poll();                 // Non-blocking dequeue           → O(1)
abq.offer(5, 100, TimeUnit.MILLISECONDS);  // Timed
abq.poll(100, TimeUnit.MILLISECONDS);      // Timed
abq.remainingCapacity();    // Remaining slots                → O(1)
abq.drainTo(list);          // Bulk consume                   → O(n)

LinkedBlockingQueue vs ArrayBlockingQueue:

  • LinkedBlockingQueue — separate read/write locks, better throughput; unbounded option
  • ArrayBlockingQueue — single lock, lower memory (no node objects), supports fair ordering
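A small sketch of the boundedness difference (capacity and class name are illustrative): a full ArrayBlockingQueue rejects a non-blocking offer, which is exactly the signal used for backpressure.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundednessDemo {
    // Fill a bounded queue, then try one more non-blocking offer
    static boolean offerWhenFull() {
        BlockingQueue<Integer> bounded = new ArrayBlockingQueue<>(2);
        bounded.offer(1);
        bounded.offer(2);               // queue is now full
        return bounded.offer(3);        // false: rejected instead of blocking
    }

    public static void main(String[] args) {
        System.out.println(offerWhenFull()); // prints false
    }
}
```

A caller seeing `false` can drop, retry later, or switch to the timed `offer(e, timeout, unit)` variant.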

PriorityBlockingQueue

Unbounded blocking priority queue. Blocks only on take() when empty. Never blocks on put()/offer().

Best for: Priority-based task scheduling, Dijkstra's in multi-threaded context.

import java.util.concurrent.PriorityBlockingQueue;

PriorityBlockingQueue<Integer> pbq = new PriorityBlockingQueue<>();             // Min-heap
PriorityBlockingQueue<Task> pbq = new PriorityBlockingQueue<>(11, comparator); // Custom order
pbq.offer(5);               // Always succeeds (unbounded)    → O(log n)
pbq.put(5);                 // Same as offer (never blocks)   → O(log n)
pbq.take();                 // Blocks when empty              → O(log n)
pbq.poll();                 // Null if empty                  → O(log n)
pbq.peek();                 // View min (null if empty)       → O(1)
pbq.size();                 //                                → O(1)
pbq.drainTo(list);          // Bulk consume in priority order → O(n log n)

// Custom priority task
class Task implements Comparable<Task> {
    int priority;
    public int compareTo(Task o) { return Integer.compare(this.priority, o.priority); } // overflow-safe
}
PriorityBlockingQueue<Task> scheduler = new PriorityBlockingQueue<>();

LinkedBlockingDeque

Optionally-bounded blocking double-ended queue. Supports blocking operations on both ends.

import java.util.concurrent.LinkedBlockingDeque;

BlockingDeque<Integer> bdq = new LinkedBlockingDeque<>();
BlockingDeque<Integer> bdq = new LinkedBlockingDeque<>(100); // bounded

bdq.putFirst(1);            // Block until space at front     → O(1)
bdq.putLast(9);             // Block until space at back      → O(1)
bdq.takeFirst();            // Block until element at front   → O(1)
bdq.takeLast();             // Block until element at back    → O(1)
bdq.offerFirst(1, 100, TimeUnit.MILLISECONDS); // Timed
bdq.peekFirst();            // View front                     → O(1)
bdq.peekLast();             // View back                      → O(1)
bdq.pollFirst();            // Non-blocking remove front      → O(1)
bdq.pollLast();             // Non-blocking remove back       → O(1)

SynchronousQueue

Zero-capacity handoff queue. Each put() blocks until another thread calls take(), and vice versa. No buffering — direct thread-to-thread transfer.

Best for: Direct handoff patterns, Executors.newCachedThreadPool() work queue.

import java.util.concurrent.SynchronousQueue;

SynchronousQueue<Runnable> sq = new SynchronousQueue<>();
SynchronousQueue<Runnable> sq = new SynchronousQueue<>(true); // fair

sq.put(task);               // Block until consumer ready     → O(1)
sq.take();                  // Block until producer ready     → O(1)
sq.offer(task);             // Return false if no consumer    → O(1)
sq.poll();                  // Return null if no producer     → O(1)
sq.offer(task, 100, TimeUnit.MILLISECONDS);  // Timed handoff
sq.size();                  // Always 0
sq.isEmpty();               // Always true
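A minimal handoff sketch (class name and payload are illustrative): the producer's put() and the consumer's take() rendezvous directly, with no buffered state in between.

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    // Producer hands a value directly to a waiting consumer; nothing is buffered
    static String handoff() throws InterruptedException {
        SynchronousQueue<String> sq = new SynchronousQueue<>();
        Thread producer = new Thread(() -> {
            try {
                sq.put("payload"); // blocks until a consumer takes it
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String received = sq.take(); // rendezvous with the producer
        producer.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handoff()); // prints payload
    }
}
```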

DelayQueue

Unbounded blocking queue where elements become available only after their delay has expired. Elements must implement Delayed.

Best for: Scheduled task execution, cache expiration, retry-after mechanisms.

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayedTask implements Delayed {
    long triggerTime; // System.nanoTime() + delay
    String name;

    DelayedTask(String name, long delayMs) {
        this.name = name;
        this.triggerTime = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
    }

    public long getDelay(TimeUnit unit) {
        return unit.convert(triggerTime - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    public int compareTo(Delayed o) {
        return Long.compare(this.triggerTime, ((DelayedTask) o).triggerTime);
    }
}

DelayQueue<DelayedTask> dq = new DelayQueue<>();

dq.offer(new DelayedTask("job1", 1000)); // available after 1s
dq.offer(new DelayedTask("job2", 500));  // available after 500ms

DelayedTask t = dq.take();     // Blocks until soonest task is ready
DelayedTask t = dq.poll();     // Null if no task is ready yet
dq.peek();                     // View soonest task (may not be ready)
dq.size();                     // All tasks (including not-yet-ready)

Thread-Safe Set

CopyOnWriteArraySet

Thread-safe Set backed by a CopyOnWriteArrayList. Every write copies the whole array.

Best for: Small sets with rare writes (subscriber lists, whitelist/allowlist).

import java.util.concurrent.CopyOnWriteArraySet;

CopyOnWriteArraySet<String> set = new CopyOnWriteArraySet<>();
CopyOnWriteArraySet<String> set = new CopyOnWriteArraySet<>(existingCollection);
set.add("x");               // Copies array on write          → O(n)
set.remove("x");            // Copies array on write          → O(n)
set.contains("x");          // Lock-free read (linear scan)   → O(n)
set.size();                 //                                → O(1)
set.isEmpty();              //                                → O(1)
set.addAll(collection);     //                                → O(n*k)

// Iteration is always safe — snapshot
for (String s : set) System.out.println(s); // No CME ever

⚠️ No O(1) contains — backed by list, not hash table. Use only for small sets.


ConcurrentSkipListSet

Thread-safe, sorted Set equivalent of TreeSet. Based on a skip list (probabilistic balanced structure). Lock-free for most operations.

Best for: Sorted sets with concurrent access, range queries in multi-threaded code.

import java.util.concurrent.ConcurrentSkipListSet;

ConcurrentSkipListSet<Integer> css = new ConcurrentSkipListSet<>();
ConcurrentSkipListSet<Integer> css = new ConcurrentSkipListSet<>(Collections.reverseOrder());
ConcurrentSkipListSet<Integer> css = new ConcurrentSkipListSet<>(existingCollection);

Most Used Methods

css.add(5);                 // Insert                         → O(log n)
css.remove(5);              // Delete                         → O(log n)
css.contains(5);            // Search                         → O(log n)
css.size();                 // ⚠️ O(n) — traverse all nodes
css.isEmpty();              //                                → O(1)

// ─── Navigation (mirrors TreeSet) ───────────────────────
css.first();                // Smallest element               → O(log n)
css.last();                 // Largest element                → O(log n)
css.floor(5);               // Largest ≤ 5                    → O(log n)
css.ceiling(5);             // Smallest ≥ 5                   → O(log n)
css.lower(5);               // Largest < 5                    → O(log n)
css.higher(5);              // Smallest > 5                   → O(log n)
css.pollFirst();            // Remove & return smallest       → O(log n)
css.pollLast();             // Remove & return largest        → O(log n)
css.headSet(5);             // View of elements < 5           → O(log n)
css.tailSet(5);             // View of elements ≥ 5           → O(log n)
css.subSet(3, 8);           // View of elements [3, 8)        → O(log n)
css.descendingSet();        // Reverse-order view             → O(1)

Concurrency Pattern

// Online leaderboard (sorted, concurrent updates)
ConcurrentSkipListSet<Long> scores = new ConcurrentSkipListSet<>(Collections.reverseOrder());
scores.add(9500L);
scores.add(8800L);
// Top 3 at any moment (lock-free)
scores.stream().limit(3).forEach(System.out::println);

synchronizedSet Wrapper

Set<Integer> safe = Collections.synchronizedSet(new HashSet<>());
Set<Integer> sortedSafe = Collections.synchronizedSortedSet(new TreeSet<>());

safe.add(5);        // Synchronized
safe.remove(5);     // Synchronized
safe.contains(5);   // Synchronized

// ⚠️ Manual sync required for iteration
synchronized (safe) {
    for (Integer i : safe) System.out.println(i);
}

Thread-Safe Map

ConcurrentHashMap

⭐ The most important concurrent collection in Java. Lock-free reads; writes use CAS for empty bins plus fine-grained per-bin locking. Replaced Hashtable completely in modern Java.

Best for: Concurrent caches, frequency counters, shared state maps in any multi-threaded app.

import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>(capacity, loadFactor, concurrencyLevel);

Most Used Methods

// ─── Basic (same as HashMap) ────────────────────────────
map.put("a", 1);                        // Insert/overwrite         → O(1)
map.get("a");                           // Read (lock-free)         → O(1)
map.remove("a");                        // Delete                   → O(1)
map.containsKey("a");                   // Check key (lock-free)    → O(1)
map.size();                             // ⚠️ Approximate under concurrent mods
map.isEmpty();                          //                          → O(1)

// ─── Atomic Compound Operations (KEY advantage) ─────────
map.putIfAbsent("a", 0);                // Insert only if missing   → O(1) atomic
map.replace("a", 1, 2);                 // CAS replace              → O(1) atomic
map.remove("a", 1);                     // Remove if value matches  → O(1) atomic

// ─── Atomic Compute (most powerful) ─────────────────────
map.merge("a", 1, Integer::sum);        // Atomic increment         → O(1)
map.computeIfAbsent("a", k -> 0);       // Init if absent           → O(1)
map.computeIfPresent("a", (k, v) -> v + 1); // Update if present    → O(1)
map.compute("a", (k, v) -> v == null ? 1 : v + 1); // Atomic upsert→ O(1)
map.getOrDefault("a", 0);              // Read with default         → O(1)

// ─── Bulk Operations (Java 8+) ──────────────────────────
map.forEach(1, (k, v) -> System.out.println(k + "=" + v)); // parallelism threshold=1
map.replaceAll((k, v) -> v * 2);        // Transform all values     → O(n)

// Parallel reduce/search (use with large maps)
int total = map.reduceValues(1, Integer::sum);          // sum all values
String found = map.searchKeys(1, k -> k.startsWith("a") ? k : null);

// ─── Views ──────────────────────────────────────────────
// All views are live, weakly-consistent (reflect concurrent mods)
for (Map.Entry<String, Integer> e : map.entrySet()) { ... }
for (String key : map.keySet()) { ... }
for (int val : map.values()) { ... }

// Thread-safe Set view (backed by ConcurrentHashMap)
Set<String> safeSet = ConcurrentHashMap.newKeySet();
Set<String> safeSet = ConcurrentHashMap.newKeySet(expectedSize);

Concurrency Patterns

// ─── Concurrent Frequency Counter ───────────────────────
ConcurrentHashMap<String, Integer> freq = new ConcurrentHashMap<>();
words.parallelStream().forEach(w -> freq.merge(w, 1, Integer::sum));

// ─── Thread-safe cache with lazy init ───────────────────
ConcurrentHashMap<Integer, Result> cache = new ConcurrentHashMap<>();
Result r = cache.computeIfAbsent(key, k -> expensiveCompute(k));

// ─── CAS-based counter (no AtomicInteger needed) ────────
map.merge("count", 1, Integer::sum);

// ─── Avoid double-check locking antipattern ─────────────
// ❌ Not atomic:
if (!map.containsKey(k)) map.put(k, v);
// ✅ Atomic:
map.putIfAbsent(k, v);

// ─── Concurrent graph adjacency list ────────────────────
ConcurrentHashMap<Integer, Set<Integer>> graph = new ConcurrentHashMap<>();
graph.computeIfAbsent(node, k -> ConcurrentHashMap.newKeySet()).add(neighbor);

// ─── Remove entry when value reaches zero ───────────────
map.computeIfPresent("a", (k, v) -> v <= 1 ? null : v - 1); // null removes key

ConcurrentSkipListMap

Thread-safe sorted Map equivalent of TreeMap. Lock-free, based on skip lists.

Best for: Sorted concurrent maps, time-series data, range queries across threads.

import java.util.concurrent.ConcurrentSkipListMap;

ConcurrentSkipListMap<Integer, String> csm = new ConcurrentSkipListMap<>();
ConcurrentSkipListMap<Integer, String> csm = new ConcurrentSkipListMap<>(Collections.reverseOrder());

Most Used Methods

csm.put(3, "c");                // Insert                    → O(log n)
csm.get(3);                     // Get by key                → O(log n)
csm.remove(3);                  // Delete                    → O(log n)
csm.putIfAbsent(3, "c");        // Atomic put if absent      → O(log n)
csm.replace(3, "c", "d");       // CAS replace               → O(log n)
csm.containsKey(3);             //                           → O(log n)

// ─── Navigation (same as TreeMap) ───────────────────────
csm.firstKey();                 // Smallest key              → O(log n)
csm.lastKey();                  // Largest key               → O(log n)
csm.floorKey(5);                // Largest key ≤ 5           → O(log n)
csm.ceilingKey(5);              // Smallest key ≥ 5          → O(log n)
csm.lowerKey(5);                // Largest key < 5           → O(log n)
csm.higherKey(5);               // Smallest key > 5          → O(log n)
csm.pollFirstEntry();           // Remove & return min entry → O(log n)
csm.pollLastEntry();            // Remove & return max entry → O(log n)
csm.headMap(5);                 // Keys < 5 (live view)      → O(log n)
csm.tailMap(5);                 // Keys ≥ 5 (live view)      → O(log n)
csm.subMap(3, 8);               // Keys [3, 8) (live view)   → O(log n)
csm.firstEntry();               // Min key-value pair        → O(log n)
csm.lastEntry();                // Max key-value pair        → O(log n)

Concurrency Pattern

// Timestamp-based event log (sorted, concurrent writes)
ConcurrentSkipListMap<Long, String> eventLog = new ConcurrentSkipListMap<>();
eventLog.put(System.nanoTime(), "User login");

// Range query: events in last 5 seconds
long now = System.nanoTime();
eventLog.subMap(now - 5_000_000_000L, now).forEach((t, e) -> System.out.println(e));

Hashtable (Legacy)

The original thread-safe map — every method locks on the entire object. Entirely superseded by ConcurrentHashMap.

⚠️ Never use in new code. No null keys or values. Global lock kills throughput.

Hashtable<String, Integer> ht = new Hashtable<>();
ht.put("a", 1);         // Locks entire table
ht.get("a");            // Locks entire table
ht.remove("a");         // Locks entire table
ht.containsKey("a");    // Locks entire table
ht.containsValue(1);    // O(n) + global lock
ht.keys();              // Returns Enumeration (old API)
ht.elements();          // Returns Enumeration of values

synchronizedMap Wrapper

Map<String, Integer> safe = Collections.synchronizedMap(new HashMap<>());
Map<String, Integer> sorted = Collections.synchronizedSortedMap(new TreeMap<>());

// All single ops are synchronized
safe.put("a", 1);
safe.get("a");
safe.remove("a");
safe.putIfAbsent("a", 0); // Atomic here: the wrapper runs the whole call under its mutex

// ⚠️ Manual sync required for iteration
synchronized (safe) {
    for (Map.Entry<String, Integer> e : safe.entrySet()) { ... }
}

// ⚠️ Manual sync for compound ops
synchronized (safe) {
    if (!safe.containsKey("a")) safe.put("a", 0);
}
// Prefer ConcurrentHashMap.putIfAbsent() instead!

Atomic Variables

Atomic types use CPU-level Compare-And-Swap (CAS) — no locks, no blocking, very fast.

AtomicInteger / AtomicLong

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

AtomicInteger counter = new AtomicInteger(0);
AtomicLong bigCounter = new AtomicLong(0L);

Most Used Methods

counter.get();                      // Read current value             → O(1)
counter.set(5);                     // Write value                    → O(1)
counter.getAndSet(10);              // Swap, return old               → O(1)

// ─── Increment / Decrement ──────────────────────────────
counter.incrementAndGet();          // ++counter (returns new)        → O(1)
counter.getAndIncrement();          // counter++ (returns old)        → O(1)
counter.decrementAndGet();          // --counter (returns new)        → O(1)
counter.getAndDecrement();          // counter-- (returns old)        → O(1)

// ─── Add ────────────────────────────────────────────────
counter.addAndGet(5);               // counter += 5, return new       → O(1)
counter.getAndAdd(5);               // return old, counter += 5       → O(1)

// ─── CAS (Compare-And-Swap) ─────────────────────────────
counter.compareAndSet(0, 1);        // If current==0, set to 1        → O(1) atomic
counter.compareAndExchange(0, 1);   // Same, returns witness value (Java 9+) → O(1)

// ─── Functional update (Java 8+) ────────────────────────
counter.updateAndGet(v -> v * 2);   // Atomic transform               → O(1)
counter.getAndUpdate(v -> v * 2);   // Return old, then transform     → O(1)
counter.accumulateAndGet(5, Integer::sum); // Atomic merge             → O(1)

Concurrency Patterns

// Thread-safe counter (no synchronized needed)
AtomicInteger count = new AtomicInteger(0);
for (int i = 0; i < nThreads; i++) executor.submit(count::incrementAndGet);

// Non-blocking try-acquire (lock-free mutex pattern)
AtomicInteger lock = new AtomicInteger(0);
if (lock.compareAndSet(0, 1)) {
    try { criticalSection(); }
    finally { lock.set(0); }
}

// Request ID generator
AtomicLong requestId = new AtomicLong(0);
long id = requestId.getAndIncrement(); // unique IDs across threads

// LongAdder for high-contention counters (better than AtomicLong)
import java.util.concurrent.atomic.LongAdder;
LongAdder adder = new LongAdder();
adder.increment();      // Striped — less contention than AtomicLong
adder.add(5);
adder.sum();            // Total across all stripes
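A quick sketch showing LongAdder staying exact under contention (thread/iteration counts and class name are illustrative): the internal stripes absorb contention, but sum() still reports every increment.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    // Many threads hammer one counter; striping keeps contention low, sum() stays exact
    static long count(int threads, int perThread) throws InterruptedException {
        LongAdder adder = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                for (int j = 0; j < perThread; j++) adder.increment();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return adder.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count(16, 10_000)); // 160000: no lost updates
    }
}
```

Note that sum() is only a stable total once the writers have finished; while increments are in flight it is a weakly consistent estimate.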

AtomicReference

Atomic reference to any object. Enables lock-free updates to shared objects.

import java.util.concurrent.atomic.AtomicReference;

AtomicReference<String> ref = new AtomicReference<>("initial");

ref.get();                          // Read current reference
ref.set("new");                     // Write reference
ref.getAndSet("newer");             // Swap, return old
ref.compareAndSet("new", "newest"); // CAS on reference identity
ref.updateAndGet(s -> s.toUpperCase()); // Functional update

Concurrency Pattern

// Lock-free stack (Treiber stack)
// Assumes a minimal node type: class Node<T> { T val; Node<T> next; Node(T v) { val = v; } }
AtomicReference<Node<T>> top = new AtomicReference<>(null);

void push(T val) {
    Node<T> newNode = new Node<>(val);
    do {
        newNode.next = top.get();
    } while (!top.compareAndSet(newNode.next, newNode));
}

T pop() {
    Node<T> curr;
    do {
        curr = top.get();
        if (curr == null) return null;
    } while (!top.compareAndSet(curr, curr.next));
    return curr.val;
}

AtomicBoolean

import java.util.concurrent.atomic.AtomicBoolean;

AtomicBoolean flag = new AtomicBoolean(false);

flag.get();                         // Read
flag.set(true);                     // Write
flag.getAndSet(true);               // Swap
flag.compareAndSet(false, true);    // CAS — safe one-time init

// One-time initialization pattern
AtomicBoolean initialized = new AtomicBoolean(false);
if (initialized.compareAndSet(false, true)) {
    init(); // only one thread ever runs this
}
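A quick sketch of the race (thread count and class name are illustrative): all threads call compareAndSet on the same flag, and exactly one wins, so the guarded block runs once.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class OnceDemo {
    // Many threads race to initialize; compareAndSet lets exactly one win
    static int initCount(int threads) throws InterruptedException {
        AtomicBoolean initialized = new AtomicBoolean(false);
        AtomicInteger initRuns = new AtomicInteger(0);
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                if (initialized.compareAndSet(false, true)) {
                    initRuns.incrementAndGet(); // stands in for init()
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return initRuns.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(initCount(32)); // always 1
    }
}
```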

Non-Thread-Safe → Thread-Safe Migration

Drop-in Replacements

// ─── List ────────────────────────────────────────────────
// Before (unsafe):
List<String> list = new ArrayList<>();

// After (read-heavy):
List<String> list = new CopyOnWriteArrayList<>();

// After (write-heavy / general):
List<String> list = Collections.synchronizedList(new ArrayList<>());

// ─── Set ────────────────────────────────────────────────
// Before:
Set<Integer> set = new HashSet<>();
// After:
Set<Integer> set = ConcurrentHashMap.newKeySet(); // ⭐ preferred
Set<Integer> set = new CopyOnWriteArraySet<>();   // small, read-heavy

// Before (sorted):
Set<Integer> set = new TreeSet<>();
// After:
Set<Integer> set = new ConcurrentSkipListSet<>();

// ─── Map ────────────────────────────────────────────────
// Before:
Map<String, Integer> map = new HashMap<>();
// After:
Map<String, Integer> map = new ConcurrentHashMap<>(); // ⭐ always preferred

// Before (sorted):
Map<Integer, String> map = new TreeMap<>();
// After:
Map<Integer, String> map = new ConcurrentSkipListMap<>();

// ─── Queue ──────────────────────────────────────────────
// Before:
Queue<Task> q = new ArrayDeque<>();
// After (non-blocking):
Queue<Task> q = new ConcurrentLinkedQueue<>();
// After (blocking):
BlockingQueue<Task> q = new LinkedBlockingQueue<>();

// ─── Priority Queue ──────────────────────────────────────
// Before:
PriorityQueue<Task> pq = new PriorityQueue<>();
// After:
PriorityBlockingQueue<Task> pq = new PriorityBlockingQueue<>();
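As a quick sanity check of the blocking replacements, here is a minimal producer-consumer sketch using LinkedBlockingQueue. The poison-pill value (-1) is an assumption of this example, not a library convention:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PipelineDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> q = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) q.put(i); // blocks only if bounded & full
                q.put(-1);                             // poison pill: signals end of stream
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();

        int sum = 0;
        while (true) {
            int v = q.take();  // blocks until an element is available
            if (v == -1) break;
            sum += v;
        }
        producer.join();
        System.out.println(sum); // 15 — consumer saw every produced element
    }
}
```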

Common Mistakes to Avoid

// ❌ WRONG: ConcurrentHashMap but non-atomic compound op
if (!map.containsKey(k)) map.put(k, v);           // Race condition!
// ✅ RIGHT:
map.putIfAbsent(k, v);

// ❌ WRONG: ConcurrentHashMap increment with get+put
map.put(k, map.getOrDefault(k, 0) + 1);           // Race condition!
// ✅ RIGHT:
map.merge(k, 1, Integer::sum);

// ❌ WRONG: Iterating synchronizedList without lock
for (String s : synchronizedList) { ... }         // ConcurrentModificationException!
// ✅ RIGHT:
synchronized (synchronizedList) {
    for (String s : synchronizedList) { ... }
}

// ❌ WRONG: Using size() to check ConcurrentLinkedQueue
if (queue.size() == 0) { ... }  // O(n) + may be stale
// ✅ RIGHT:
if (queue.isEmpty()) { ... }   // O(1) and consistent

// ❌ WRONG: Sharing a non-thread-safe collection across threads
List<Integer> shared = new ArrayList<>();          // Data corruption!
// ✅ RIGHT:
List<Integer> shared = new CopyOnWriteArrayList<>();
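To see why the atomic forms matter, this small sketch hammers one key from several threads using merge(); with the get+put version above, some of these increments would silently be lost:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MergeCounter {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[8];
        for (int i = 0; i < 8; i++) {
            ts[i] = new Thread(() -> {
                // merge() is atomic per key: read-modify-write cannot interleave
                for (int j = 0; j < 1000; j++) map.merge("hits", 1, Integer::sum);
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(map.get("hits")); // 8000 — every increment counted
    }
}
```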

Concurrency Quick-Pick Guide

| Scenario | Best Choice |
|----------|-------------|
| Concurrent read-heavy map | ConcurrentHashMap |
| Concurrent frequency counter | ConcurrentHashMap.merge() |
| Concurrent cache | ConcurrentHashMap.computeIfAbsent() |
| Sorted concurrent map | ConcurrentSkipListMap |
| Thread-safe set (general) | ConcurrentHashMap.newKeySet() |
| Thread-safe sorted set | ConcurrentSkipListSet |
| Read-heavy list (rarely writes) | CopyOnWriteArrayList |
| Concurrent FIFO queue | ConcurrentLinkedQueue |
| Blocking producer-consumer | LinkedBlockingQueue |
| Bounded buffer / backpressure | ArrayBlockingQueue |
| Priority-based task scheduling | PriorityBlockingQueue |
| Scheduled / delayed tasks | DelayQueue |
| Direct thread-to-thread handoff | SynchronousQueue |
| Thread-safe deque (stack/queue) | ConcurrentLinkedDeque |
| Blocking deque | LinkedBlockingDeque |
| High-throughput counter | LongAdder (striped) |
| Low-contention counter | AtomicInteger / AtomicLong |
| One-time initialization flag | AtomicBoolean.compareAndSet() |
| Lock-free object reference swap | AtomicReference |
| LRU Cache | LinkedHashMap + synchronized, or ConcurrentHashMap + custom eviction |
| Simple sync (legacy/simple apps) | Collections.synchronized*() + manual lock for iteration |
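The LRU Cache row above can be sketched with an access-order LinkedHashMap (eviction via removeEldestEntry) wrapped in Collections.synchronizedMap(). The capacity and names here are illustrative:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    static <K, V> Map<K, V> lruCache(int capacity) {
        // accessOrder=true: get() moves an entry to most-recently-used position
        return Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity; // evict LRU entry once over capacity
            }
        });
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = lruCache(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);          // touch 1 → 2 becomes least recently used
        cache.put(3, "c");     // evicts 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```

Remember the wrapper rule: iterating this cache still requires an external synchronized block on the map.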

Complexity & Locking Cheat Sheet

Time Complexity

| Operation | ConcurrentHashMap | ConcurrentSkipListMap | CopyOnWriteArrayList | LinkedBlockingQueue | PriorityBlockingQueue |
|-----------|-------------------|-----------------------|----------------------|---------------------|-----------------------|
| Add / Put | O(1) avg | O(log n) | O(n) | O(1) | O(log n) |
| Get / Peek | O(1) avg | O(log n) | O(1) | O(1) | O(1) |
| Remove | O(1) avg | O(log n) | O(n) | O(1) | O(log n) |
| Contains | O(1) avg | O(log n) | O(n) | O(n) | O(n) |
| Size | O(1)* | O(n) | O(1) | O(1) | O(1) |
| Iteration | O(n) | O(n) | O(n) snapshot | O(n) | O(n) |

*ConcurrentHashMap.size() is an approximation under concurrent modification.

Locking Behavior

| Collection | Read Lock | Write Lock | Blocking? |
|------------|-----------|------------|-----------|
| ConcurrentHashMap | None (volatile reads) | CAS / per-bin lock | No |
| ConcurrentSkipListMap/Set | None (CAS) | CAS per node | No |
| CopyOnWriteArrayList/Set | None (snapshot) | Full object lock | No |
| ConcurrentLinkedQueue/Deque | None (CAS) | CAS per node | No |
| LinkedBlockingQueue | Head (take) lock | Tail (put) lock | Yes (put/take) |
| ArrayBlockingQueue | Single lock | Single lock | Yes (put/take) |
| PriorityBlockingQueue | Single lock | Single lock | Yes (take only) |
| SynchronousQueue | N/A (no storage) | N/A | Yes (always) |
| Hashtable | Full object lock | Full object lock | No |
| Vector | Full object lock | Full object lock | No |
| Collections.synchronized* | Full object lock | Full object lock | No |
| AtomicInteger/Long/Reference | None (volatile) | CAS | No |

💡 Golden Rules for Concurrent Collections:

  • Default to ConcurrentHashMap — it's almost always the right map choice.
  • Never use Hashtable or Collections.synchronizedMap() in new code.
  • merge() and computeIfAbsent() are the two atomic operations you'll use most.
  • Iteration on synchronized* wrappers always needs an external synchronized block.
  • Prefer LinkedBlockingQueue over ArrayBlockingQueue unless you need a fixed-capacity buffer or fair ordering (only ArrayBlockingQueue offers a fairness option).
  • Use LongAdder instead of AtomicLong when many threads increment a counter simultaneously.
  • CopyOnWriteArrayList is only efficient when reads >> writes — every write is O(n).
  • size() on ConcurrentLinkedQueue/ConcurrentLinkedDeque/ConcurrentSkipListMap is O(n) — use isEmpty() when possible.
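The LongAdder rule above can be sketched directly: striped internal cells absorb concurrent increments with little contention, and sum() folds them into the exact total once the writers finish. Thread and iteration counts are arbitrary:

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder hits = new LongAdder();
        Thread[] ts = new Thread[8];
        for (int i = 0; i < 8; i++) {
            // Contended increments land on separate cells instead of one CAS loop
            ts[i] = new Thread(() -> { for (int j = 0; j < 10_000; j++) hits.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(hits.sum()); // 80000 — exact once all writers are done
    }
}
```

Note that sum() taken while writers are still running is only a moving estimate, which is the trade-off for the higher throughput.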
