Distributed systems engineers routinely invoke consistency as if it were a monolithic property. In practice, the term obscures a critical distinction that shapes correctness guarantees in fundamentally different ways. Serializability and linearizability are both consistency models, but they govern different abstractions, answer different questions, and fail in different ways when misapplied.

The confusion stems from superficial similarity. Both models constrain the order in which operations appear to execute. Both provide some notion of equivalence to sequential execution. But serializability reasons about transactions—composite operations spanning multiple objects—while linearizability reasons about individual operations on single objects with respect to real time. Conflating them produces systems that satisfy neither guarantee when both are required.

This distinction matters because many systems claim one property while users assume another. A database advertising serializable isolation may permit behaviors that violate linearizability. A linearizable key-value store provides no transaction semantics whatsoever. Understanding precisely what each model promises—and what it explicitly does not promise—is essential for reasoning about correctness in distributed architectures. The formal differences determine whether your system behaves as intended under concurrent access.

Different Domains: Transactions vs Real-Time Operations

Serializability is a property of transaction schedules. Given a set of transactions, each comprising multiple read and write operations, a schedule is serializable if its effects are equivalent to some serial execution of those transactions. The key insight is that serializability makes no claims about which serial order—only that one exists. Two concurrent transactions T₁ and T₂ might appear to execute as T₁→T₂ or T₂→T₁, and either outcome satisfies serializability.

This definition reveals serializability's essential character: it quantifies existentially over serial orders rather than naming one. The model cares about conflict equivalence and view equivalence, not about when operations actually occurred in wall-clock time. A serializable schedule can legally reorder operations in ways that contradict their real-time precedence, provided the final state matches some serial execution.
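This existential flavor can be made concrete with a brute-force check: a schedule is explained if some permutation of its transactions, run serially, reproduces both the values its reads observed and its final state. A minimal sketch, assuming a simplified schedule format of `(txn_id, op, key, value)` tuples (this checks a view-style notion of equivalence by enumeration; it is exponential and purely illustrative):

```python
from itertools import permutations

# A schedule is a list of (txn_id, op, key, value) steps. op is "r"
# (a read that observed `value`) or "w" (a write of `value`).

def run_serial(txns, order):
    """Execute whole transactions one after another; return the final
    state, or None if some read would not observe the value it recorded."""
    state = {}
    for tid in order:
        for op, key, value in txns[tid]:
            if op == "w":
                state[key] = value
            elif state.get(key, 0) != value:  # reads default to 0
                return None
    return state

def is_serializable(schedule):
    """True if some serial order of the transactions reproduces both the
    schedule's reads and its final state (brute-force enumeration)."""
    txns = {}
    for tid, op, key, value in schedule:
        txns.setdefault(tid, []).append((op, key, value))
    final = {k: v for _, op, k, v in schedule if op == "w"}
    return any(run_serial(txns, order) == final
               for order in permutations(txns))

# T2 reads T1's write: the serial order T1 -> T2 explains the schedule.
good = [("T1", "r", "x", 0), ("T1", "w", "x", 1),
        ("T2", "r", "x", 1), ("T2", "w", "x", 2)]
# Lost update: both read 0, both write 1; no serial order explains it.
lost = [("T1", "r", "x", 0), ("T2", "r", "x", 0),
        ("T1", "w", "x", 1), ("T2", "w", "x", 1)]
print(is_serializable(good), is_serializable(lost))  # True False
```

Note that neither example consults timestamps: the checker accepts any serial order, which is exactly the point.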

Linearizability operates in an entirely different domain. It concerns individual operations on shared objects and requires that operations appear to take effect instantaneously at some point between their invocation and response. This linearization point must respect real-time ordering: if operation A completes before operation B begins, A's linearization point must precede B's.

The real-time constraint is what distinguishes linearizability. Where serializability permits arbitrary reordering of non-conflicting transactions, linearizability demands that concurrent operations be consistent with their actual temporal relationships. A read that starts after a write completes must observe that write's effects. This property—sometimes called external consistency—provides the strong intuition that operations happen in the order clients observe them.
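The real-time rule can be checked mechanically for small histories of a single register. A brute-force sketch under assumed conventions (each operation is an `(invoke, respond, kind, value)` tuple with numeric timestamps; none of this comes from a particular tool):

```python
from itertools import permutations

# One register. Each op is (start, end, kind, value):
# kind "w" writes `value`; kind "r" is a read that returned `value`.

def is_linearizable(history, initial=0):
    """Search for a linearization: a total order of operations that
    respects real-time precedence and in which every read returns the
    most recently written value."""
    n = len(history)
    for order in permutations(range(n)):
        # Real-time constraint: if op a ends before op b starts,
        # a must appear before b in the linearization.
        respects_time = all(
            not (history[order[j]][1] < history[order[i]][0])
            for i in range(n) for j in range(i + 1, n))
        if not respects_time:
            continue
        value, legal = initial, True
        for idx in order:
            _, _, kind, v = history[idx]
            if kind == "w":
                value = v
            elif v != value:
                legal = False
                break
        if legal:
            return True
    return False

# A write completes at t=2; a read starting at t=3 must observe it.
ok_history  = [(0, 2, "w", 1), (3, 4, "r", 1)]
bad_history = [(0, 2, "w", 1), (3, 4, "r", 0)]
print(is_linearizable(ok_history), is_linearizable(bad_history))  # True False
```

The second history is rejected precisely because of the real-time filter: the order that would explain the stale read (read before write) is pruned before the value check ever runs.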

Consider the practical difference: a serializable database might commit transaction T₂ before T₁ even though T₁ completed first in real time, as long as the result is equivalent to some serial order. A linearizable register cannot exhibit such behavior—every operation respects the partial order induced by real-time precedence. These are fundamentally different guarantees serving fundamentally different purposes.

Takeaway

Serializability constrains transaction equivalence classes; linearizability constrains operation timing with respect to real time. One permits temporal reordering; the other forbids it.

Strict Serializability: The Composition That Many Systems Lack

The natural question arises: what if we want both guarantees? Strict serializability (sometimes called linearizable transactions or external serializability) combines both models. A schedule is strictly serializable if it is equivalent to some serial order of transactions and that serial order respects the real-time precedence of non-overlapping transactions.

This composition is strictly stronger than either model alone. Strict serializability inherits serializability's multi-object transaction semantics while adding linearizability's real-time ordering constraint. If transaction T₁ commits before transaction T₂ begins, then in the equivalent serial execution, T₁ must precede T₂. No temporal reordering is permitted across transaction boundaries.
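The gap between the two guarantees shows up in the classic stale-read anomaly. The sketch below (hypothetical two-transaction scenario, not drawn from any particular database) shows a schedule that plain serializability accepts but strict serializability rejects:

```python
# Register x starts at 0. Real-time spans:
#   T1: [0, 2]  writes x = 1 and commits at t=2
#   T2: [3, 4]  reads  x and observes 0 (a stale snapshot)

def explains(order, read_value):
    """Would T2's read make sense if transactions ran serially in `order`?"""
    x = 0
    observed = None
    for txn in order:
        if txn == "T1":
            x = 1             # T1's write
        else:
            observed = x      # T2's read
    return observed == read_value

# Serializability: any serial order may explain the schedule.
serializable = any(explains(order, 0)
                   for order in [("T1", "T2"), ("T2", "T1")])
# Strict serializability: T1 finished before T2 began, so only the
# real-time order T1 -> T2 is admissible, and it forces T2 to read 1.
strictly_serializable = explains(("T1", "T2"), 0)
print(serializable, strictly_serializable)  # True False
```

The order T2 → T1 legally "explains" the stale read under serializability; strict serializability strikes that order from consideration because it contradicts real-time precedence.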

Here is where assumptions often fail: most databases providing serializable isolation do not provide strict serializability. PostgreSQL's serializable isolation level uses serializable snapshot isolation (SSI), which guarantees serializability by detecting dangerous anti-dependency structures but makes no real-time guarantees: a read-only transaction may be serialized before a transaction that committed earlier in real time. MySQL's serializable mode is weaker still in practice. Google's Spanner is notable precisely because it does provide strict serializability through TrueTime and commit-wait protocols, a property so unusual it warranted a new term: external consistency.

Conversely, many linearizable systems provide no general transaction semantics. ZooKeeper offers linearizable writes and sequentially consistent reads, but atomic multi-key transactions require additional coordination. etcd provides linearizable key-value operations and a single-shot conditional Txn primitive, not general interactive transactions. A system can be linearizable without being serializable, and serializable without being linearizable.

The gap between claims and guarantees is where systems fail. Engineers assume that a serializable database provides strong real-time ordering, or that a linearizable store supports atomic multi-object operations. Verifying the actual model requires reading specifications carefully and understanding the implementation. Strict serializability is expensive—it typically requires global coordination—and most systems optimize by sacrificing one property or the other.

Takeaway

Strict serializability requires both transaction equivalence and real-time ordering. Few systems actually provide it, and assuming one implies the other leads to incorrect designs.

Choosing Correctly: Decision Criteria and Verification

Selecting the appropriate consistency model requires understanding your system's actual requirements. Serializability is necessary when correctness depends on multi-object invariants maintained across transactions. Bank transfers, inventory systems, and any workflow involving related updates to multiple data items require transaction semantics. If your correctness condition references multiple objects jointly, you need serializability or stronger.

Linearizability is necessary when correctness depends on real-time ordering of individual operations. Leader election, distributed locks, and sequence number generation require that concurrent operations respect temporal precedence. If clients observe operation A complete before invoking operation B, and correctness requires A's effects to be visible to B, you need linearizability.
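To see why a lock depends on linearizability, consider acquisition built on compare-and-swap. The sketch below is a toy single-process stand-in for a linearizable store (the names `LinearizableRegister` and `try_acquire` are hypothetical; a threading lock plays the role of the store's atomicity):

```python
import threading

class LinearizableRegister:
    """Toy stand-in for a linearizable register: the internal mutex makes
    each compare_and_swap atomic, so every operation appears to take
    effect at a single point between invocation and response."""
    def __init__(self, value=None):
        self._value = value
        self._mu = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._mu:
            if self._value == expected:
                self._value = new
                return True
            return False

def try_acquire(register, owner):
    """Correct only because CAS is linearizable: two concurrent
    acquirers cannot both observe the register as empty."""
    return register.compare_and_swap(None, owner)

reg = LinearizableRegister()
print(try_acquire(reg, "node-a"))  # True: lock acquired
print(try_acquire(reg, "node-b"))  # False: already held
```

If the register were merely eventually consistent, both nodes could read `None`, both CAS operations could "succeed" against stale state, and mutual exclusion would be violated.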

When both conditions hold—multi-object invariants with real-time ordering requirements—you need strict serializability. This is common in financial systems, distributed coordination services, and anywhere that transactions must reflect the actual order of business events.

Verification requires formal analysis, not trust in documentation. For serializability, construct conflict graphs from concurrent schedules and verify acyclicity. For linearizability, use linearization point analysis: identify where each operation takes effect and verify that the induced total order is consistent with real-time precedence. Tools like Jepsen provide empirical testing, but they demonstrate violations rather than prove correctness.
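The conflict-graph procedure can be sketched directly. Assuming a simplified schedule of `(txn_id, op, key)` tuples, the code below builds the precedence edges and tests for a cycle (an acyclic graph means the schedule is conflict-serializable):

```python
def conflict_graph(schedule):
    """schedule: list of (txn_id, op, key) with op 'r' or 'w'.
    Two operations conflict if they touch the same key, belong to
    different transactions, and at least one is a write; each conflict
    adds an edge from the earlier transaction to the later one."""
    edges = set()
    for i, (t1, op1, k1) in enumerate(schedule):
        for t2, op2, k2 in schedule[i + 1:]:
            if t1 != t2 and k1 == k2 and "w" in (op1, op2):
                edges.add((t1, t2))
    return edges

def has_cycle(edges):
    """Depth-first search with three colors to detect a back edge."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GRAY or (c == WHITE and visit(nxt)):
                return True
        color[node] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and visit(n) for n in graph)

# Classic lost-update interleaving: both read x, then both write x.
schedule = [("T1", "r", "x"), ("T2", "r", "x"),
            ("T1", "w", "x"), ("T2", "w", "x")]
edges = conflict_graph(schedule)
print(has_cycle(edges))  # True: not conflict-serializable
```

Here the reads force edges in both directions between T1 and T2, producing the cycle that witnesses non-serializability; running the transactions back to back instead yields a one-directional, acyclic graph.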

The critical discipline is stating requirements formally before selecting implementations. Write down the invariants your system must maintain. Identify which operations must observe which others' effects. Determine whether multi-object atomicity is required. Only then evaluate whether a candidate system's consistency model satisfies those requirements. The cost of mismatch is subtle correctness bugs that manifest only under specific concurrency conditions—exactly the failures that escape testing and appear in production.

Takeaway

Decide based on formal requirements: multi-object invariants demand serializability, real-time ordering demands linearizability, and both together demand strict serializability with explicit verification.

Serializability and linearizability are not interchangeable terms for strong consistency. They formalize different properties at different abstraction levels: transaction equivalence versus real-time operation ordering. The distinction determines what correctness guarantees your system actually provides.

Most systems provide one or the other, rarely both. Assuming that a serializable database respects real-time ordering, or that a linearizable store supports transactions, leads to designs that fail under concurrency. Strict serializability exists but carries coordination costs that most systems avoid.

The remedy is formal precision. State your consistency requirements explicitly. Verify that candidate systems satisfy them. Treat consistency models as specifications to be proven, not labels to be trusted. The difference between serializability and linearizability is not academic—it determines whether your system is correct.