

Distributed Systems

Eventual Consistency

A consistency model where, given enough time and no new updates, all replicas of a piece of data will converge to the same value.

In depth

Eventual consistency is a relaxation of strong consistency. Instead of guaranteeing that every read returns the most recent write, the system promises only that, in the absence of new writes, all replicas will eventually converge. The convergence window is typically milliseconds in healthy systems but can stretch to seconds or longer during failures.
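The convergence described above can be sketched with two replicas that exchange state in the background. This is a minimal last-write-wins illustration, not any particular database's protocol; the Replica class and its merge method are illustrative names, not a real API.

```python
import time

class Replica:
    """One replica storing (value, timestamp) pairs per key."""
    def __init__(self):
        self.store = {}  # key -> (value, timestamp)

    def write(self, key, value):
        self.store[key] = (value, time.time())

    def read(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None

    def merge(self, other):
        """Anti-entropy: adopt the newer write for each key (last-write-wins)."""
        for key, (value, ts) in other.store.items():
            if key not in self.store or ts > self.store[key][1]:
                self.store[key] = (value, ts)

a, b = Replica(), Replica()
a.write("user:42", "alice")   # write lands on replica A only
print(b.read("user:42"))      # None: B is still stale
b.merge(a)                    # background sync
print(b.read("user:42"))      # "alice": replicas have converged
```

The window between the write to A and the merge into B is the inconsistency window the paragraph above describes.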

Many large-scale systems are built on eventual consistency because it provides much better availability and lower latency than strong consistency. Amazon DynamoDB, Cassandra, DNS, and most CDN caches are eventually consistent by default. The tradeoff is that clients must tolerate occasionally seeing stale data — a refresh might show the previous value before the new one appears.

There are stricter variants: read-your-writes consistency (a client always sees its own writes), monotonic reads (a client never sees data move backward in time), and causal consistency (operations that are causally related are seen in order by everyone).
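Read-your-writes and monotonic reads can both be enforced client-side by tracking the highest version a session has seen. The sketch below assumes a hypothetical versioned Replica store and a Session wrapper; real databases implement these guarantees differently, so treat the names and structure as illustrative only.

```python
class Replica:
    """Versioned store: each write bumps a per-key version number."""
    def __init__(self):
        self.store = {}  # key -> (value, version)

    def write(self, key, value):
        _, version = self.store.get(key, (None, 0))
        self.store[key] = (value, version + 1)
        return version + 1

    def read(self, key):
        return self.store.get(key, (None, 0))

class Session:
    """Enforces read-your-writes and monotonic reads by refusing to
    accept any version older than one this session has already seen."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas
        self.min_version = {}  # key -> lowest acceptable version

    def write(self, key, value):
        self.min_version[key] = self.primary.write(key, value)

    def read(self, key):
        for replica in self.replicas:
            value, version = replica.read(key)
            if version >= self.min_version.get(key, 0):
                self.min_version[key] = version  # never move backward
                return value
        return self.primary.read(key)[0]  # fall back to the fresh copy

primary, lagging = Replica(), Replica()
session = Session(primary, replicas=[lagging, primary])
session.write("profile", "new bio")
print(session.read("profile"))  # "new bio": the stale replica is skipped
```

The lagging replica still holds version 0, so the session rejects it and reads from a node at least as fresh as its own write.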

When to use

Use eventual consistency for caches, view counters, social-network feeds, product catalogs, recommendations, and any data where slight staleness is harmless.
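A view counter is a good concrete case: a grow-only counter CRDT lets every node count independently and still converge to the exact total. A minimal sketch, assuming each node has a unique id (the GCounter class is illustrative, not a specific library's API):

```python
class GCounter:
    """Grow-only counter CRDT: each node increments only its own slot;
    merging takes the per-node maximum, so replicas converge regardless
    of how often or in what order their states are exchanged."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count

    def increment(self, amount=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + amount

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

# Two nodes record views independently, then gossip.
a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()   # 2 views on node a
b.increment()                  # 1 view on node b
a.merge(b); b.merge(a)         # sync in either order
print(a.value(), b.value())    # both report 3
```

Because merge is idempotent and commutative, repeated or reordered syncs never double-count, which is exactly why this data type suits eventually consistent counters.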

Tradeoffs

Eventually consistent systems are easier to scale and remain available during partitions, but force application code to handle stale reads, conflicts, and reconciliation logic.
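One common tool for the reconciliation work mentioned above is a version vector, which tells conflicting concurrent writes apart from writes that are simply ordered. A hedged sketch, with compare as an illustrative helper function:

```python
def compare(vv_a, vv_b):
    """Compare two version vectors (dicts of node -> counter).
    Returns 'after', 'before', 'equal', or 'concurrent'. Concurrent
    means neither write observed the other: a true conflict that the
    application (or a rule like last-write-wins) must reconcile."""
    nodes = set(vv_a) | set(vv_b)
    a_ahead = any(vv_a.get(n, 0) > vv_b.get(n, 0) for n in nodes)
    b_ahead = any(vv_b.get(n, 0) > vv_a.get(n, 0) for n in nodes)
    if a_ahead and b_ahead:
        return "concurrent"
    if a_ahead:
        return "after"
    if b_ahead:
        return "before"
    return "equal"

print(compare({"a": 2, "b": 1}, {"a": 1, "b": 1}))  # after: a saw all of b's writes
print(compare({"a": 2}, {"b": 1}))                  # concurrent: a real conflict
```

Stale reads correspond to the "before" case; only the "concurrent" case forces application-level conflict handling.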

Related terms

CAP Theorem

A principle stating that a distributed data store cannot simultaneously guarantee Consistency, Availability, and Partition tolerance; during a network partition it must sacrifice either consistency or availability.

Replication

Maintaining multiple copies of the same data across different nodes for fault tolerance, read scalability, and lower latency.

Consensus

The process by which a group of distributed nodes agree on a single value or sequence of values, even in the presence of failures.

Leader Election

A protocol by which a group of nodes selects one node as the coordinator or "leader" responsible for a given task.

Quorum

The minimum number of nodes that must agree on an operation for it to be considered successful in a distributed system.

Sharding

Splitting a large dataset across multiple machines so that each shard holds a subset of the data and handles a subset of the load.

Practice this concept

Hard · Infrastructure

Design a Key-Value Store

Medium · Infrastructure

Design a Distributed Counter

Hard · Infrastructure

Design a Distributed File System

Hard · Infrastructure

Design a Distributed Linked List