CodeWithSQL.com

Managing Transactions & Concurrency

In-depth guide to ACID properties, transaction isolation levels (Read Uncommitted to Serializable), concurrency management and deadlock resolution.

3 Topics · Chapter 11 of 12 · Advanced

1 ACID Properties In-Depth


Building on the ACID overview, let's examine how databases actually implement each guarantee.

Atomicity: Write-Ahead Logging (WAL)

Before modifying data pages on disk, the database first records the change in a transaction log (also called the redo log). If a crash occurs mid-transaction, the database replays the log on restart: committed changes are reapplied, and incomplete transactions are rolled back. This is called Write-Ahead Logging.
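The rollback half of this guarantee is easy to observe from application code. Here is a minimal sketch using Python's built-in sqlite3 module as a stand-in engine; the table, amounts, and the simulated failure are illustrative:

```python
import sqlite3

# In-memory database as a stand-in; the all-or-nothing rollback shown here is
# the same guarantee WAL-based engines implement with their transaction logs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts VALUES (1, 1000), (2, 1000)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The debit was rolled back: both balances are unchanged.
balances = [row[0] for row in conn.execute("SELECT balance FROM accounts ORDER BY id")]
print(balances)  # [1000, 1000]
```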

Consistency: Constraint Enforcement

The database checks all constraints (PRIMARY KEY, FOREIGN KEY, CHECK, UNIQUE, NOT NULL) at the end of each statement (immediate mode) or at COMMIT (deferred mode). Any violation aborts the transaction.
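Deferred checking can be tried directly in SQLite, which supports DEFERRABLE INITIALLY DEFERRED foreign keys. A minimal sketch (the table names are illustrative):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so we can
# issue BEGIN/COMMIT explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
    CREATE TABLE teams (id INTEGER PRIMARY KEY);
    CREATE TABLE players (
        id INTEGER PRIMARY KEY,
        team_id INTEGER REFERENCES teams(id) DEFERRABLE INITIALLY DEFERRED
    );
""")

conn.execute("BEGIN")
# Deferred mode: inserting a child before its parent is allowed mid-transaction...
conn.execute("INSERT INTO players VALUES (1, 42)")
conn.execute("INSERT INTO teams VALUES (42)")  # ...as long as the parent exists by COMMIT
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM players").fetchone()[0]
print(count)  # 1
```

In immediate mode (the default), the first INSERT would fail at the statement itself rather than waiting for COMMIT.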

Isolation: MVCC

Multi-Version Concurrency Control (MVCC) is how InnoDB and PostgreSQL handle isolation without excessive locking. Instead of blocking readers, the database keeps multiple versions of each row. Each transaction sees a consistent snapshot of the data as it was when the transaction started, even if other transactions are modifying it concurrently.

MVCC in practice: When Transaction A updates a row, InnoDB doesn't overwrite it. It creates a new version and keeps the old one in an undo log. Transaction B (which started earlier) still sees the old version. When both commit, the old version is garbage-collected. This is why readers don't block writers in InnoDB.
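A similar snapshot effect can be observed with SQLite in WAL mode, used here as a rough stand-in for InnoDB's undo-log versioning (note one difference: SQLite takes the snapshot at the first read rather than at BEGIN):

```python
import os
import sqlite3
import tempfile

# Two connections to the same file; in WAL mode a reader keeps its snapshot
# even while a writer commits new data.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
reader = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode = WAL")
writer.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO accounts VALUES (1, 1000)")

reader.execute("BEGIN")
before = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

# Writer updates and commits (autocommit) while the reader's transaction is open.
writer.execute("UPDATE accounts SET balance = 900 WHERE id = 1")

# The reader's open transaction still sees its original snapshot.
during = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
reader.execute("COMMIT")
after = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(before, during, after)  # 1000 1000 900
```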

Durability: Double-Write Buffer

InnoDB uses a double-write buffer: before writing a data page to its final location on disk, it first writes it to a contiguous area. If a crash happens during the write, the data can be recovered from the double-write buffer. Combined with WAL, this guarantees committed data survives any crash.

Key Takeaways

  • Atomicity is implemented via Write-Ahead Logging (WAL / redo log)
  • Consistency is enforced by constraints checked at statement or commit time
  • Isolation uses MVCC — multiple versions of rows so readers don't block writers
  • Durability uses the double-write buffer + WAL to survive crashes

2 Isolation Levels


Building on the isolation overview, let's explore each level with practical examples.

Level              Dirty Read   Non-Repeatable Read   Phantom Read   Default In
READ UNCOMMITTED   Yes          Yes                   Yes            Rarely used
READ COMMITTED     No           Yes                   Yes            PostgreSQL, Oracle, SQL Server
REPEATABLE READ    No           No                    Yes*           MySQL/InnoDB
SERIALIZABLE       No           No                    No             Maximum safety

*InnoDB's REPEATABLE READ actually prevents phantom reads too via gap locking, making it closer to SERIALIZABLE than the standard requires.

SQL
-- Set isolation level for current session
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Set for all new connections (MySQL)
SET GLOBAL transaction_isolation = 'READ-COMMITTED';

-- Check current level
SELECT @@transaction_isolation;
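The "No" in the Dirty Read column can be demonstrated with two connections; this sketch uses SQLite as a rough stand-in (MySQL and PostgreSQL behave the same way at READ COMMITTED and above):

```python
import os
import sqlite3
import tempfile

# Uncommitted changes on one connection are invisible to another connection.
path = os.path.join(tempfile.mkdtemp(), "iso.db")
a = sqlite3.connect(path, isolation_level=None)
b = sqlite3.connect(path, isolation_level=None)
a.execute("PRAGMA journal_mode = WAL")
a.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
a.execute("INSERT INTO accounts VALUES (1, 1000)")

a.execute("BEGIN")
a.execute("UPDATE accounts SET balance = 0 WHERE id = 1")  # not yet committed

# Connection b still sees the last committed value, not the dirty one.
dirty = b.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
a.execute("COMMIT")
clean = b.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(dirty, clean)  # 1000 0
```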

When to Change the Default

  • Financial applications: Consider SERIALIZABLE for critical balance calculations
  • High-concurrency web apps: READ COMMITTED reduces lock contention
  • Reporting queries: READ UNCOMMITTED (dirty reads OK) for approximate real-time stats
  • Most applications: Stick with the default — it's been chosen for a reason

Changing isolation levels affects performance and correctness. Higher isolation = more locking = slower concurrency. Lower isolation = less locking = risk of reading stale data. Only change it if you understand the specific problem you're solving.

Key Takeaways

  • READ COMMITTED is a solid general-purpose level (the PostgreSQL, Oracle, and SQL Server default)
  • REPEATABLE READ is MySQL's default and prevents most concurrency issues
  • SERIALIZABLE offers maximum safety but lowest concurrency
  • Only change the default when you have a specific, measured need

3 Handling Deadlocks


Expanding on deadlock prevention, let's implement a production-grade retry pattern.

Application-Level Retry Pattern

PHP (PDO)
$maxRetries = 3;
$attempt = 0;

while ($attempt < $maxRetries) {
    try {
        $pdo->beginTransaction();
        $pdo->exec("UPDATE accounts SET balance = balance - 500 WHERE id = 1");
        $pdo->exec("UPDATE accounts SET balance = balance + 500 WHERE id = 2");
        $pdo->commit();
        break; // Success: exit the loop
    } catch (PDOException $e) {
        $pdo->rollBack();
        if (strpos($e->getMessage(), 'Deadlock') !== false && $attempt < $maxRetries - 1) {
            $attempt++;
            usleep(100000 * (1 << ($attempt - 1))); // Exponential backoff: 100ms, 200ms
        } else {
            throw $e; // Not a deadlock, or max retries exceeded
        }
    }
}

Monitoring Deadlocks

MySQL
-- Enable deadlock logging to the error log
SET GLOBAL innodb_print_all_deadlocks = 1;

-- View deadlock metrics
SHOW STATUS LIKE 'Innodb_deadlocks';

Production pattern: (1) Catch deadlock errors specifically (error 1213 in MySQL). (2) Retry the entire transaction 2-3 times with exponential backoff. (3) Log deadlocks for analysis. (4) If the same tables deadlock frequently, review your lock ordering.
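The same pattern can be sketched language-agnostically in Python. The DeadlockError class and the transfer function below are stand-ins: against a real driver you would match the specific error instead (MySQL's 1213, or sqlite3's "database is locked"):

```python
import time

class DeadlockError(Exception):
    """Stand-in for a driver-specific deadlock error (e.g. MySQL error 1213)."""

def with_deadlock_retry(run_transaction, max_retries=3, base_delay=0.1):
    """Run a transaction, retrying on deadlock with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return run_transaction()
        except DeadlockError:
            if attempt == max_retries - 1:
                raise  # max retries exceeded: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 100ms, 200ms, ...

# Simulated workload: deadlocks twice, then succeeds on the third attempt.
attempts = {"n": 0}
def transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("simulated deadlock")
    return "committed"

result = with_deadlock_retry(transfer)
print(result, attempts["n"])  # committed 3
```

Note that run_transaction must re-execute the entire transaction from the start; a deadlocked transaction has already been rolled back by the server.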

Key Takeaways

  • Deadlocks are expected in busy systems — always implement retry logic
  • Retry 2-3 times with exponential backoff (100ms, 200ms, 400ms)
  • Log deadlocks and monitor frequency to identify systemic issues
  • Enable innodb_print_all_deadlocks for automatic logging

What's next? The final chapter: Working with Large Databases — partitioning strategies, indexing at scale, and backup/restore strategies.