In our previous article, we explored the historical evolution of Object-Oriented Programming and touched on the challenges it faces in concurrent environments. Today, we'll dive deep into the specific headaches that emerge when multiple threads interact with shared mutable state – problems that have plagued developers for decades.
The Fundamental Problem: Shared Mutable State
Imagine Arthur and Maria both trying to purchase the last available ticket for a concert through a web application. Both click "Buy Now" at exactly the same time. What happens next illustrates the core challenges of concurrent programming:
public class TicketService {
    private int availableTickets = 1; // Only one ticket left

    public boolean purchaseTicket(String customerName) {
        if (availableTickets > 0) {
            // Danger zone: Another thread could execute here!
            simulateProcessingTime(); // Credit card processing, etc.
            availableTickets--;
            System.out.println(customerName + " successfully purchased a ticket!");
            return true;
        }
        System.out.println("Sorry " + customerName + ", no tickets available.");
        return false;
    }

    private void simulateProcessingTime() {
        try {
            Thread.sleep(100); // Simulated latency widens the race window
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Without proper synchronization, both Arthur and Maria might see availableTickets > 0, leading to both "successfully" purchasing the same ticket.
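Because the bad interleaving depends on timing, the bug is easy to miss in testing. To make it reproducible, a small demo (a hypothetical class, not part of the service above) can force the unlucky schedule with a CountDownLatch: both buyers are guaranteed to pass the availability check before either one decrements.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int availableTickets = 1;

    // Check-then-act with a barrier in the middle: both threads must
    // pass the check before either is allowed to decrement.
    static boolean buy(CountDownLatch bothChecked) throws InterruptedException {
        if (availableTickets > 0) {       // both threads read 1 here
            bothChecked.countDown();
            bothChecked.await();          // wait until the other buyer has also checked
            availableTickets--;
            return true;
        }
        return false;
    }

    // Runs two concurrent buyers and returns how many "succeeded".
    static int runScenario() {
        availableTickets = 1;
        CountDownLatch bothChecked = new CountDownLatch(2);
        AtomicInteger successes = new AtomicInteger();
        Runnable buyer = () -> {
            try {
                if (buy(bothChecked)) {
                    successes.incrementAndGet();
                }
            } catch (InterruptedException ignored) {
            }
        };
        Thread arthur = new Thread(buyer);
        Thread maria = new Thread(buyer);
        arthur.start();
        maria.start();
        try {
            arthur.join();
            maria.join();
        } catch (InterruptedException ignored) {
        }
        return successes.get(); // 2 -- both bought the single ticket
    }
}
```

Both threads read `availableTickets` before any write happens (the latch guarantees it), so the demo deterministically sells one ticket twice.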

A visual metaphor for concurrent programming: multiple threads (represented by the characters) fighting over shared resources (the CPU/chip). This illustrates the core challenge of shared mutable state where multiple threads attempt to access and modify the same resource simultaneously, leading to conflicts and race conditions.
The Four Horsemen of Concurrent Programming
When multiple threads interact with shared state, four primary problems can arise, each more insidious than the last.
State diagram showing the four primary concurrency failure modes and how they lead to system problems. The Actor Model eliminates these issues through message passing and sequential processing within actors.
1. Race Conditions
Race conditions occur when the outcome of a program depends on the timing and interleaving of threads. The ticket example above is a classic race condition.
// Thread 1: Arthur's purchase
if (availableTickets > 0) {   // Reads 1
    // Thread 2 executes here and also reads 1
    availableTickets--;       // Sets to 0
}

// Thread 2: Maria's purchase
if (availableTickets > 0) {   // This was 1 when checked!
    availableTickets--;       // Sets to -1 (!!)
}
The result? The system thinks it sold two tickets when only one was available, leading to data corruption and unhappy customers.
This sequence diagram visualizes the exact timing issue in the race condition. Both Arthur and Maria check availability before either decrements, resulting in both "successfully" purchasing the same ticket.

This sequence diagram shows a classic race condition where Arthur and Maria both check ticket availability simultaneously, both see "1 ticket available", and both attempt to purchase. The system ends up in an inconsistent state where both purchases succeed but only one ticket was actually available.
2. Deadlocks
Deadlocks happen when two or more threads are blocked forever, waiting for each other to release resources:
public class DeadlockExample {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void method1() {
        synchronized (lock1) {
            System.out.println("Thread 1: Holding lock1...");
            synchronized (lock2) {
                System.out.println("Thread 1: Holding lock1 & lock2...");
            }
        }
    }

    public void method2() {
        synchronized (lock2) {
            System.out.println("Thread 2: Holding lock2...");
            synchronized (lock1) { // Deadlock here!
                System.out.println("Thread 2: Holding lock2 & lock1...");
            }
        }
    }
}
If Thread 1 acquires lock1 while Thread 2 acquires lock2, they'll wait forever for each other.
This sequence diagram shows the circular dependency that creates a deadlock. Thread 1 holds lock1 and waits for lock2, while Thread 2 holds lock2 and waits for lock1. Neither can proceed.

This sequence diagram illustrates a real-world deadlock scenario where two users (Arthur and Maria) attempt to purchase a ticket simultaneously, with the payment system and reservation system acquiring locks in different orders, resulting in a deadlock.
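The standard cure for this circular wait is a global lock ordering: if every thread acquires lock1 before lock2, a thread holding lock2 can never be waiting for lock1. A sketch (class and method names are illustrative, not from the article):

```java
public class OrderedLocking {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void method1() {
        synchronized (lock1) {          // always lock1 first
            synchronized (lock2) {
                // critical section
            }
        }
    }

    public void method2() {
        synchronized (lock1) {          // same order as method1: no circular wait
            synchronized (lock2) {
                // critical section
            }
        }
    }

    // Drives both methods from two threads; returns true if both finish,
    // which a deadlocked version could not guarantee.
    public static boolean runConcurrently() {
        OrderedLocking ol = new OrderedLocking();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1_000; i++) ol.method1(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1_000; i++) ol.method2(); });
        t1.start();
        t2.start();
        try {
            t1.join(5_000);
            t2.join(5_000);
        } catch (InterruptedException e) {
            return false;
        }
        return !t1.isAlive() && !t2.isAlive();
    }
}
```

The trade-off: someone has to define and enforce the ordering across the whole codebase, which gets harder as the object graph grows.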
3. Livelocks
Livelocks are similar to deadlocks, but threads aren't blocked – they're actively trying to resolve the conflict, creating an infinite loop of politeness:
// Two people in a narrow corridor, both stepping aside continuously
public void avoidCollision(Person other) {
    while (isBlocking(other)) {
        moveAside();     // Both people keep moving aside
        Thread.yield();  // Being "polite"
    }
}
The threads remain active but make no progress, like two people in a hallway both stepping left and right in sync.
flowchart TD
    Start([Thread 1 & 2 Start]) --> Check1{Is Other<br/>Blocking?}
    Check1 -->|Yes| Move1[Move Aside]
    Check1 -->|No| Complete([Complete])
    Move1 --> Yield1[Thread.yield]
    Yield1 --> Check2{Is Other<br/>Still Blocking?}
    Check2 -->|Yes| Move2[Move Aside Again]
    Check2 -->|No| Complete
    Move2 --> Yield2[Thread.yield]
    Yield2 --> Check3{Is Other<br/>Still Blocking?}
    Check3 -->|Yes| Move3[Move Aside Again]
    Check3 -->|No| Complete
    Move3 --> Loop[...]
    Loop --> Check1

    style Check1 fill:#fff3cd
    style Check2 fill:#fff3cd
    style Check3 fill:#fff3cd
    style Loop fill:#ffebee
    style Complete fill:#e1f5e1

    Note1[Both threads repeatedly<br/>detect collision and yield]
    Note2[Threads remain active<br/>consuming CPU cycles]
    Note3[No progress made<br/>infinite politeness loop]
Livelock flow: threads continuously detect conflicts and yield to each other, but both make the same decision simultaneously, resulting in an infinite loop of mutual yielding without progress.

This diagram illustrates a livelock scenario where Arthur and Maria are both being polite, constantly saying "After you" in an endless loop. Unlike a deadlock where threads are blocked, in a livelock both threads remain active but make no progress, continuously attempting to yield to each other.
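The usual way out of a livelock is to break the symmetry: instead of both parties yielding in lockstep, each waits a random interval before retrying, so one of them eventually gets through. A minimal sketch (class and parameter names are illustrative, not from the article):

```java
import java.util.Random;
import java.util.function.BooleanSupplier;

public class BackoffExample {
    private static final Random random = new Random();

    // Retry loop with a random pause between attempts. Returns the
    // number of times we had to step aside before the path cleared.
    public static int avoidCollisionWithBackoff(BooleanSupplier isBlocked, Runnable moveAside) {
        int attempts = 0;
        while (isBlocked.getAsBoolean()) {
            moveAside.run();
            attempts++;
            try {
                Thread.sleep(1 + random.nextInt(10)); // random delay breaks the symmetry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // give up politely on interrupt
                break;
            }
        }
        return attempts;
    }
}
```

The same idea appears in Ethernet's exponential backoff and in optimistic-retry loops in databases: randomness ensures the two parties stop mirroring each other's decisions.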
4. Starvation
Starvation occurs when a thread is perpetually denied access to resources it needs:
public class PriorityQueue {
    public synchronized void processHighPriority() {
        // High-priority tasks keep getting processed
        while (hasHighPriorityTasks()) {
            processNext();
        }
    }

    public synchronized void processLowPriority() {
        // This might never execute if high-priority tasks keep coming!
        processNext();
    }
}

This diagram demonstrates thread starvation where VIP customers continuously get priority access to resources, while regular users (Arthur) may never get a chance to complete their transactions, despite being active in the system.
Low-priority threads might wait indefinitely while high-priority threads continuously grab resources.
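One direct mitigation in Java is a fair lock: `ReentrantLock` constructed with `true` grants access in roughly arrival order, so waiting threads cannot be passed over indefinitely by newcomers. A minimal sketch (the counter class is illustrative, not from the article):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairCounter {
    // 'true' requests fairness: threads acquire the lock in roughly
    // FIFO order, trading some throughput for freedom from starvation.
    private final ReentrantLock lock = new ReentrantLock(true);
    private int count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();  // always release in finally
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

Fairness is not free: the JDK documentation notes that fair locks have noticeably lower throughput, which is why unfair ("barging") locks are the default.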
Traditional Solutions and Their Limitations
Java provides several mechanisms to handle these issues, but each comes with trade-offs:
Synchronized Keywords
public synchronized boolean purchaseTicket(String customerName) {
    // Thread-safe, but creates a bottleneck:
    // only one thread can execute this method at a time
    if (availableTickets > 0) {
        availableTickets--;
        return true;
    }
    return false;
}
Pros: Simple, prevents race conditions
Cons: Poor scalability, potential for deadlocks
Volatile Fields
private volatile boolean isAvailable = true;
Pros: Ensures visibility of changes across threads
Cons: Only works for single reads and writes, doesn't prevent race conditions in compound check-then-act operations
Atomic Classes
private final AtomicInteger availableTickets = new AtomicInteger(1);

public boolean purchaseTicket(String customerName) {
    if (availableTickets.getAndDecrement() > 0) {
        return true;
    }
    availableTickets.incrementAndGet(); // Rollback
    return false;
}
Pros: Better performance than synchronized
Cons: Limited to simple operations, doesn't solve complex coordination
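The decrement-then-rollback pattern above works, but it briefly exposes a negative count. An alternative that never goes negative is a compareAndSet retry loop: read the value, validate it, and attempt an atomic swap, retrying if another thread won the race. A sketch (class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasTicketCounter {
    private final AtomicInteger availableTickets;

    public CasTicketCounter(int initial) {
        this.availableTickets = new AtomicInteger(initial);
    }

    // Atomically claim a ticket without ever letting the count go negative.
    public boolean tryPurchase() {
        while (true) {
            int current = availableTickets.get();
            if (current <= 0) {
                return false;            // sold out; no rollback needed
            }
            if (availableTickets.compareAndSet(current, current - 1)) {
                return true;             // we claimed one ticket
            }
            // Another thread changed the value between get() and
            // compareAndSet(); loop and re-read.
        }
    }

    public int remaining() {
        return availableTickets.get();
    }
}
```

This optimistic style scales well under low contention, but it is still limited to a single variable: coordinating the counter with, say, a payment record brings back all the locking problems above.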
The Caching Conundrum
Modern applications often add caching layers for performance, which introduces additional complexity:
public class CachedTicketService {
    private final Map<String, Integer> cache = new ConcurrentHashMap<>();
    private final Database database;

    public boolean purchaseTicket(String event) {
        Integer cached = cache.get(event);
        if (cached != null && cached > 0) {
            // Cache hit, but is this data still valid?
            // Another instance might have sold tickets!
            return processPurchase(event);
        }
        // Cache miss, check database
        return checkDatabaseAndPurchase(event);
    }
}
Now we have to worry about:
- Cache invalidation across multiple instances
- Consistency between cache and database
- Distributed locking in clustered environments
Why OOP Struggles with Concurrency
Object-Oriented Programming's core principle of encapsulation assumes that objects can protect their internal state. But when multiple threads access the same object, this protection breaks down:
- Encapsulation isn't enough: Private fields don't protect against concurrent access
- Method-level synchronization is too coarse: It creates unnecessary bottlenecks
- Complex object graphs require complex locking: Leading to deadlock risks
- Inheritance complicates thread safety: Subclasses might break parent class assumptions
The Mental Model Problem
Perhaps the biggest challenge is that shared mutable state requires developers to think about all possible interleavings of thread execution. This quickly becomes mentally overwhelming:
- With 2 threads and 3 operations each, there are 20 possible execution orders
- With 3 threads and 4 operations each, there are 34,650 possible execution orders
- With realistic applications having hundreds of threads... the complexity explodes
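These counts follow from a standard combinatorial identity: the number of interleavings of n threads, each executing m sequential operations, is the multinomial coefficient (n·m)! / (m!)^n. A small helper (illustrative, not from the article) makes the explosion concrete:

```java
public class Interleavings {
    // Number of interleavings of 'threads' threads with 'ops' ordered
    // operations each: (threads*ops)! / (ops!)^threads, computed as a
    // product of binomial coefficients to avoid overflowing factorials.
    public static long count(int threads, int ops) {
        long result = 1;
        int remaining = threads * ops;
        for (int t = 0; t < threads; t++) {
            // Choose which of the remaining slots this thread's ops occupy.
            result *= binomial(remaining, ops);
            remaining -= ops;
        }
        return result;
    }

    private static long binomial(int n, int k) {
        long r = 1;
        for (int i = 1; i <= k; i++) {
            r = r * (n - k + i) / i;  // exact at every step
        }
        return r;
    }
}
```

For 2 threads with 3 operations each this gives 6!/(3!·3!) = 20; for 3 threads with 4 operations each, 12!/(4!)³ = 34,650. Add a fourth thread and the count runs into the billions.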
A Different Path Forward
The Actor Model addresses these problems by eliminating shared mutable state entirely. Instead of multiple threads accessing the same data:
- Each actor owns its state completely
- Communication happens only through messages
- No locks, no race conditions, no deadlocks
- Natural fault isolation and supervision
Consider how our ticket service might look with actors:
// Conceptual Actor-based approach
public class TicketActor extends AbstractActor {
    private int availableTickets = 1;

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(PurchaseRequest.class, this::handlePurchase)
            .build();
    }

    private void handlePurchase(PurchaseRequest request) {
        if (availableTickets > 0) {
            availableTickets--;
            getSender().tell(new PurchaseSuccess(), getSelf());
        } else {
            getSender().tell(new PurchaseFailure("No tickets available"), getSelf());
        }
    }
}
No synchronization keywords, no locks, no race conditions – just simple, sequential processing of messages.
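The core idea can even be sketched in plain Java, with no framework at all: a single worker thread drains a mailbox queue, so the actor's state is only ever touched sequentially. This is a toy sketch of the pattern, not the Akka/Pekko API, and all names are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

public class MiniTicketActor {
    // The mailbox: each message carries a future the actor completes as its reply.
    private final BlockingQueue<CompletableFuture<Boolean>> mailbox = new ArrayBlockingQueue<>(64);
    private int availableTickets = 1;   // owned exclusively by the worker thread

    public MiniTicketActor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    // Messages are processed strictly one at a time,
                    // so no locks are needed around the state.
                    CompletableFuture<Boolean> reply = mailbox.take();
                    if (availableTickets > 0) {
                        availableTickets--;
                        reply.complete(true);
                    } else {
                        reply.complete(false);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shutdown signal
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Callers never touch availableTickets directly; they enqueue a
    // request and receive the reply asynchronously.
    public CompletableFuture<Boolean> purchase() {
        CompletableFuture<Boolean> reply = new CompletableFuture<>();
        mailbox.add(reply);
        return reply;
    }
}
```

Frameworks like Akka and Pekko add what this toy lacks: efficient dispatchers that share threads across thousands of actors, supervision hierarchies, and location transparency for distributed deployment.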
Looking Ahead
The problems we've explored today – race conditions, deadlocks, livelocks, and starvation – have plagued concurrent programming for decades. Traditional OOP solutions, while functional, often create more complexity than they solve.
In our next article, we'll explore how the Actor Model provides elegant solutions to these challenges and dive into practical implementation patterns using Akka and Apache Pekko on the JVM.
The journey from shared mutable state to message-passing architectures isn't just about avoiding bugs – it's about building systems that are inherently more scalable, maintainable, and resilient to failure.
Next up in Part 3: We'll implement a complete Actor-based system, explore supervision strategies, and see how message-passing eliminates the concurrency pitfalls we've discussed today.
