Modern applications live in a messy world.
Users go offline. Networks lie. Devices crash. Two people edit the same record at the same time. Files are uploaded halfway and then abandoned. Yet expectations are brutal: the app must feel instant, never lose data, and always recover.
This post describes TS‑LWW Sync (Timestamped Last‑Writer‑Wins Sync), a synchronization algorithm designed for real-world CRUD applications — especially offline‑first systems like a dental clinic CRM (Apexo) — where correctness, speed, and simplicity matter more than academic perfection.
The goal of TS‑LWW Sync is not to be clever.
The goal is to be reliable, fast, cheap, and understandable.
The Core Idea
TS‑LWW Sync is built on one simple observation:
If every change has a timestamp, then synchronization can be reduced to “what changed after time X”.
Instead of tracking per‑record versions, diffs, or complex merge trees, the system tracks:
- A single version number per store (the latest known timestamp)
- A local deferred queue for offline changes
- A deterministic conflict rule (Last Writer Wins)
Everything else falls out naturally.
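The three pieces above can be sketched in a few lines. The names here (StoreState, DeferredChange) are illustrative, not a real API:

```typescript
// What TS-LWW Sync tracks per store: one version number, a deferred
// queue, and a deterministic last-writer-wins rule. Names are illustrative.

interface Row {
  id: string;
  data: Record<string, unknown>;
  updated: number; // timestamp of the last modification
}

interface DeferredChange {
  id: string;
  data: Record<string, unknown>;
  timestamp: number;
}

interface StoreState {
  version: number;            // max timestamp of all known records
  deferred: DeferredChange[]; // offline queue, persisted locally
}

// The deterministic conflict rule: the newer timestamp wins.
function resolve(local: Row, remote: Row): Row {
  return remote.updated > local.updated ? remote : local;
}
```

Everything the rest of this post describes is built out of these three pieces.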
Naming the Algorithm
TS‑LWW Sync stands for:
- TS — Timestamp‑based
- LWW — Last Writer Wins
It is best described as a:
Store‑versioned, timestamp‑driven, offline‑first synchronization algorithm with deferred writes.
How It Works (Conceptually)
1. Store‑Level Versioning
Instead of versioning every record individually, the system keeps one version number for the entire dataset:
- The version is the maximum timestamp of all known records
- This timestamp is updated only after a successful sync
This allows the client to ask a very cheap question:
“Give me everything that changed after version X.”
No scans. No diffs. No reconciliation passes.
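As a sketch, the "cheap question" is a single comparison against the store version. A real backend would answer it with an index on the timestamp column; a plain filter is shown here for clarity:

```typescript
// Hypothetical server-side answer to "give me everything after version X".
interface Row { id: string; updated: number; data: unknown }

function changesSince(all: Row[], version: number): Row[] {
  // O(changes) with an index on `updated`; a filter here for clarity.
  return all.filter(r => r.updated > version);
}

function storeVersion(all: Row[]): number {
  // The store version is simply the maximum known timestamp.
  return all.reduce((max, r) => Math.max(max, r.updated), 0);
}
```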
2. Local‑First Writes
All user actions are applied locally first:
- UI updates instantly
- Data is persisted locally
- Changes are marked as pending
If the network is available, the system attempts to push immediately.
If not, the change is safely deferred.
From the user’s perspective:
The app never blocks. Ever.
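A minimal sketch of that write path, with the persistence and network hooks (persistLocally, isOnline, pushToServer, defer) left as assumptions:

```typescript
// Local-first write: persist immediately, push if possible, defer if not.
type Doc = { id: string; data: unknown; updated: number };

interface WriteDeps {
  persistLocally: (d: Doc) => void;       // assumed local storage hook
  isOnline: () => boolean;                // assumed connectivity check
  pushToServer: (d: Doc) => Promise<void>;
  defer: (d: Doc) => void;                // enqueue for later replay
}

async function write(doc: Doc, deps: WriteDeps): Promise<void> {
  doc.updated = Date.now();   // stamp the change
  deps.persistLocally(doc);   // UI reads local state, so this feels instant
  if (!deps.isOnline()) {
    deps.defer(doc);          // safely deferred for later replay
    return;
  }
  try {
    await deps.pushToServer(doc);
  } catch {
    deps.defer(doc);          // the network lied; fall back to the queue
  }
}
```

Note that nothing in this path waits on the network before the UI can update.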
3. Deferred Queue (Offline Engine)
When offline, changes are placed into a deferred queue:
- Each deferred item is stored with its local timestamp
- Deferred operations survive restarts
- Multiple edits collapse into the latest value
This queue acts as a lightweight write‑ahead log.
When connectivity returns, deferred changes are replayed deterministically.
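The "multiple edits collapse" behavior falls out naturally if the queue is keyed by record id. A sketch, with the queue held as a Map for illustration:

```typescript
// Deferred queue sketch: one pending entry per record, newest value wins.
interface Deferred { id: string; data: unknown; timestamp: number }

function enqueue(queue: Map<string, Deferred>, change: Deferred): void {
  const existing = queue.get(change.id);
  // Keep only the newest pending value per record; older edits collapse.
  if (!existing || change.timestamp >= existing.timestamp) {
    queue.set(change.id, change);
  }
}
```

In practice the Map would also be persisted (e.g. to local storage) so the queue survives restarts, as described above.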
4. Pull‑Then‑Resolve Sync
During synchronization:
- The client fetches all remote updates since its last version
- Deferred local changes are compared against remote updates
- Conflicts are detected only where necessary
Conflict rule:
Whichever side has the newer timestamp wins
No heuristics. No guessing. No silent corruption.
Conflicts are counted, observable, and predictable.
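The pull-then-resolve step can be sketched as a single pass over the remote batch. Everything here is illustrative, but the shape is the whole algorithm: compare timestamps where both sides changed, count the conflicts, and split the result into "apply locally" and "push to server":

```typescript
// Pull-then-resolve sketch: remote changes vs. the deferred queue.
interface Change { id: string; data: unknown; timestamp: number }

function resolveSync(
  remote: Change[],
  deferred: Map<string, Change>
): { apply: Change[]; push: Change[]; conflicts: number } {
  const apply: Change[] = [];
  let conflicts = 0;
  for (const r of remote) {
    const local = deferred.get(r.id);
    if (!local) { apply.push(r); continue; } // no conflict
    conflicts++;                             // counted, never hidden
    if (r.timestamp > local.timestamp) {
      deferred.delete(r.id);                 // remote wins
      apply.push(r);
    }
    // otherwise the local edit wins and stays queued for push
  }
  return { apply, push: [...deferred.values()], conflicts };
}
```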
5. Files Are First‑Class Citizens
Binary files (images, documents) are not treated as records.
Instead, they are handled as:
- Explicit operations (upload / delete)
- Independently retryable
- Idempotent
This avoids the most common sync bug in business apps:
“The data synced, but the images didn’t.”
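A sketch of what "independently retryable and idempotent" means in code. Real storage calls would be asynchronous; synchronous callbacks are used here so the shape stays obvious. The key property is that each operation is keyed by a stable path, so retrying a half-finished upload simply overwrites the same key:

```typescript
// File operations are explicit, keyed by a stable path, and retryable.
interface FileOp {
  kind: "upload" | "delete";
  path: string;     // stable key: retries are idempotent
  attempts: number;
}

interface Storage {
  put: (path: string) => void;    // assumed storage hooks (sync for clarity)
  remove: (path: string) => void;
}

function runFileOp(op: FileOp, storage: Storage): boolean {
  try {
    if (op.kind === "upload") storage.put(op.path);
    else storage.remove(op.path);
    return true;      // done; drop from the retry queue
  } catch {
    op.attempts++;    // stays queued; retried independently of data sync
    return false;
  }
}
```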
6. Realtime Is a Signal, Not a Transport
Realtime updates do not push data directly.
They simply say:
“Something changed — you might want to sync.”
This keeps the system robust even when realtime messages are dropped, duplicated, or delayed.
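Because the signal carries no payload, the client-side handler can be as small as a dirty flag that the sync loop checks. A sketch (makeSignal is a hypothetical name):

```typescript
// Realtime messages only set a flag; the sync loop decides when to pull.
// Dropped messages cost nothing (the next sync catches up) and duplicated
// messages are harmless (the flag is already set).
function makeSignal() {
  let dirty = false;
  return {
    onMessage: () => { dirty = true; },  // called by the realtime channel
    shouldSync: () => {                  // polled by the sync loop
      const d = dirty;
      dirty = false;
      return d;
    },
  };
}
```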
Why This Is Fast
CPU Usage
- No diff computation
- No tree merging
- No per‑record version checks
- No background reconciliation loops
Sync work is O(changes), not O(total records).
In idle state, CPU usage is effectively zero.
Network Usage
TS‑LWW Sync is extremely network‑efficient:
- Pulls only records updated since last sync
- Pushes only modified records
- Batches writes
- Avoids redundant retries
Typical sync payloads are tiny, even with large datasets.
This makes it ideal for:
- Mobile networks
- Unreliable Wi‑Fi
- Clinics with weak infrastructure
Memory Footprint
- No full history retained
- No shadow copies
- No conflict trees
Memory usage scales linearly with current data size, not edit history.
Real‑World Scenarios
Scenario 1: Dentist Goes Offline
A dentist edits multiple patient records while offline.
- UI responds instantly
- Changes are saved locally
- No sync attempts block the workflow
When connectivity returns, everything syncs automatically.
No lost work. No manual recovery.
Scenario 2: Two Assistants Edit the Same Patient
- Assistant A edits the record at 10:01
- Assistant B edits the same record at 10:03
Result:
- Assistant B’s change wins
- Assistant A’s client pulls the update
- Conflict is counted, not hidden
Predictable. Transparent. Debuggable.
Scenario 3: Image Upload Fails Midway
- Data sync succeeds
- Image upload fails
The image operation is retried independently.
No corrupted state.
No dangling references.
Why This Fits a Dental Clinic CRM (Apexo)
Dental clinic systems have unique requirements:
- Intermittent connectivity
- Multiple devices per clinic
- Small teams editing shared data
- Heavy use of images (X‑rays, photos)
- Zero tolerance for data loss
TS‑LWW Sync is a natural fit because:
- Conflicts are rare and acceptable
- Data volume is moderate
- Latency matters more than perfect merges
- Staff should never think about syncing
In practice, the system feels:
Instant when offline, invisible when online.
What This Algorithm Is (and Isn’t)
It Is:
- Deterministic
- Fast
- Cheap to operate
- Easy to reason about
- Easy to debug
It Is Not:
- A CRDT system
- A collaborative editor
- A full event‑sourcing engine
And that’s a feature, not a limitation.
Final Thoughts
TS‑LWW Sync embraces a simple truth:
Most business apps do not need perfect merges.
They need reliability, speed, and clarity.
By choosing store‑level versioning, timestamp‑based pulls, and deferred writes, this algorithm delivers exactly that — without complexity tax.
For systems like Apexo, where real people rely on real data every day, this approach keeps the focus where it belongs:
on the work — not on the sync.
If you’re building an offline‑first business application and want a sync system that behaves like a calm adult instead of a fragile genius, TS‑LWW Sync is worth considering.
How the synchronization algorithm works (conceptual overview)
This synchronization algorithm is designed for offline-first applications where multiple clients may modify the same data independently and then reconnect later. The core goals are:
- No central lock or coordination
- Deterministic conflict resolution
- Minimal metadata
- Idempotent and repeatable sync
At a high level, each record carries versioning metadata that allows the system to decide which change should win when conflicts occur.
1. Logical versioning (timestamp-based)
Each record carries an updated field that represents the logical time of its last modification. This is not meant to be perfect wall-clock time; it is simply a monotonic value that increases whenever the record is modified.
This timestamp is the primary comparison tool during synchronization.
2. Store identity (tie-breaker)
Every client or store has a unique storeId. When two records have the same logical timestamp, the storeId is used as a deterministic tie-breaker.
This guarantees that:
- Conflicts are resolved the same way on all devices
- No additional coordination is needed
3. Last-write-wins with deterministic ordering
When syncing two versions of the same record:
- Compare updated: the record with the higher value wins
- If the values are equal, compare storeId: the record with the higher storeId wins
This creates a total ordering across all updates, which is crucial for convergence.
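That total ordering is a two-field comparison. A sketch of the comparator:

```typescript
// Total ordering for LWW: compare `updated`, break ties with `storeId`.
interface Versioned { updated: number; storeId: string }

function wins(a: Versioned, b: Versioned): boolean {
  // true if `a` should win over `b` on every replica, in any sync order
  if (a.updated !== b.updated) return a.updated > b.updated;
  return a.storeId > b.storeId; // deterministic tie-breaker
}
```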
4. Soft deletes (tombstones)
Instead of physically deleting records, the algorithm uses a deleted flag.
- Deletions are treated as normal updates
- A delete can override an older update
- Tombstones prevent deleted data from reappearing during sync
This makes deletion sync-safe and reversible.
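A sketch of the tombstone approach. Because a delete is just an update with a flag, it carries a timestamp and wins or loses by the same LWW rule, so deleted data cannot silently reappear:

```typescript
// Soft deletes: deletion is a normal update with a `deleted` flag.
interface Row { id: string; deleted: boolean; updated: number }

function markDeleted(r: Row, now: number): Row {
  // Participates in the same LWW comparison as any other update.
  return { ...r, deleted: true, updated: now };
}

function visible(rows: Row[]): Row[] {
  return rows.filter(r => !r.deleted); // tombstones stay stored, hidden from the UI
}
```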
5. Incremental synchronization
Rather than syncing the entire dataset every time, each client tracks the last known sync point per store.
During sync:
- Only records updated after the last sync point are exchanged
- This keeps bandwidth usage low
- The process is idempotent: syncing twice produces the same result
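The idempotence claim is worth seeing concretely: merging the same incoming batch twice leaves the local state unchanged, because the LWW comparison only ever moves records forward in the total ordering. A sketch:

```typescript
// Idempotent merge: replaying the same batch produces the same state.
interface Rec { id: string; updated: number; storeId: string; data: unknown }

function merge(local: Map<string, Rec>, incoming: Rec[]): void {
  for (const r of incoming) {
    const cur = local.get(r.id);
    const incomingWins =
      !cur ||
      r.updated > cur.updated ||
      (r.updated === cur.updated && r.storeId > cur.storeId);
    if (incomingWins) local.set(r.id, r);
  }
}
```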
6. Convergence guarantee
Because conflict resolution is:
- Deterministic
- Based on immutable metadata
- Applied symmetrically on all peers
All replicas will eventually converge to the same state, regardless of sync order or frequency.