0.0 · Offline POS & mesh

The POS that runs when Wi-Fi dies.

Mesh-first architecture, not a cloud POS with an offline tab. Every terminal is a peer. CRDTs keep state consistent across the venue. Multi-uplink fallback authorizes cards live through any device with signal.

1.0 · Why Wi-Fi fails

Crowds collapse networks. Physics, not bad luck.

The minute thousands of phones try to associate with the same access points, DHCP pools exhaust and airtime saturates. Every cloud-tied POS terminal slows or drops. The merchant eats the risk.

Wi-Fi collapse

Access points die at peak.

Thousands of phones associating with shared APs exhausts DHCP pools and saturates 2.4 and 5 GHz airtime. Cloud POS terminals drop with the rest.

Cellular saturation

LTE doesn’t save you.

The same crowd that killed the Wi-Fi is on the same cell towers. Bonding multiple LTE links does nothing when the towers themselves are saturated.

Store-and-forward

Risk shifts to the merchant.

Cloud POS queues transactions and authorizes when the network returns. Every declined card during the queue is the merchant’s loss, not the processor’s.

2.0 · The mesh

Every device is a peer. The venue is the network.

Zerobeat doesn’t have an offline mode. It has an offline-first architecture. There’s a difference.

Traditional cloud POS runs a star: every terminal calls a single cloud server. If the server is unreachable, the terminal queues locally and hopes. The hub is the bottleneck and the single point of failure in the same breath.

Zerobeat is a mesh. Every iOS device on the floor is a full peer. Devices discover each other automatically over Wi-Fi, peer Wi-Fi Direct, and Bluetooth. They share a consistent view of menus, inventory, and orders. No one has to be the master.

When a terminal needs to authorize a card, it doesn’t have to use its own uplink. The mesh routes through any peer with connectivity. We call it multi-uplink fallback. In our pilot deployments it cuts store-and-forward exposure by roughly 60% versus a flat cellular fallback, with no extra hardware.
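Zerobeat's routing layer is in-house Swift on iOS, but the core selection logic of multi-uplink fallback can be sketched in a few lines. This is an illustrative Python sketch, not the shipped implementation; the names (`Peer`, `choose_auth_route`) and the lowest-latency tiebreak are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Peer:
    name: str
    uplink_healthy: bool   # does this device currently have WAN connectivity?
    rtt_ms: float          # measured round-trip time to this peer over the mesh

def choose_auth_route(self_uplink_ok: bool, peers: list[Peer]) -> Optional[str]:
    """Pick where to send the next authorization request.

    Prefer our own uplink; otherwise route through the lowest-latency
    peer that still has one; otherwise tell the caller to queue.
    """
    if self_uplink_ok:
        return "local"
    candidates = [p for p in peers if p.uplink_healthy]
    if not candidates:
        return None  # venue dark: fall back to store-and-forward
    return min(candidates, key=lambda p: p.rtt_ms).name

# Terminal at the back of the bar: own uplink is down,
# the gate terminal still sees LTE.
peers = [Peer("bar-2", False, 12.0), Peer("gate-1", True, 38.0)]
print(choose_auth_route(False, peers))  # → gate-1
```

The point is that "queue locally" becomes the last resort rather than the first response to a dead uplink.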

When the venue is fully dark — no Wi-Fi, no cellular for any device — orders, menus, and inventory still stay consistent. Reconciliation runs automatically when connectivity returns. No lost sales. No manual cleanup.

3.0 · CRDTs in plain English

Concurrent edits that never conflict.

A CRDT (conflict-free replicated data type) is a math trick that lets multiple devices edit the same piece of data at the same time without coordinating, and still end up with the same final answer.

Cashier A rings a beer. Cashier B rings a beer. Beer inventory was 100. Both terminals are offline from each other. With a normal database, you’d either lose one of those decrements or have to lock the row before each ring. With a CRDT, the count converges: 100 minus 1 minus 1 equals 98, no matter which terminal reconciled first.

Same logic for menu price changes, comp toggles, sponsor activation flips. Every state mutation is commutative, associative, and idempotent. Mesh peers can sync in any order, any number of times, and arrive at the same answer.
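The beer example above maps directly onto a classic CRDT, the PN-counter: each replica keeps its own increment and decrement tallies, and merging takes the elementwise maximum. This Python sketch is illustrative only (Zerobeat's CRDT layer is its own Swift implementation); it shows the convergence property, not the production code.

```python
class PNCounter:
    """Minimal PN-counter: per-replica increment/decrement tallies.

    merge() takes the elementwise max, so syncing is commutative,
    associative, and idempotent -- peers can gossip in any order,
    any number of times, and land on the same value.
    """
    def __init__(self, replica: str):
        self.replica = replica
        self.incs: dict[str, int] = {}
        self.decs: dict[str, int] = {}

    def decrement(self, n: int = 1):
        self.decs[self.replica] = self.decs.get(self.replica, 0) + n

    def merge(self, other: "PNCounter"):
        for src, tgt in ((other.incs, self.incs), (other.decs, self.decs)):
            for k, v in src.items():
                tgt[k] = max(tgt.get(k, 0), v)

    def value(self, base: int = 0) -> int:
        return base + sum(self.incs.values()) - sum(self.decs.values())

# Cashier A and cashier B each ring one beer while offline from each other.
a, b = PNCounter("A"), PNCounter("B")
a.decrement(); b.decrement()
a.merge(b); b.merge(a); b.merge(a)   # any order, repeated syncs are harmless
print(a.value(base=100), b.value(base=100))  # → 98 98
```

Neither decrement is lost, no row lock was taken, and the repeated merge changed nothing: that is idempotence doing the work.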

4.0 · An outage, hour by hour

What happens when the network gives up.

Six moments from a real venue’s outage timeline. No drama. No paper backup. The line keeps moving.

  1. T+0 · All clear
     Every terminal authorizing on its own uplink. Mesh exchanging order and inventory state in the background.

  2. T+0:30 · Wi-Fi degrades
     AP saturation. Some terminals drop. The mesh notices and starts routing through peers with healthy uplinks.

  3. T+1:00 · Multi-uplink fallback
     A terminal at the back of the bar can't see the venue AP. Its next order routes through a peer at the gate that still has signal. Cleared live.

  4. T+2:00 · Cellular saturates
     LTE bonding gives up. The mesh consolidates whoever still has a packet path. Cards clear over whatever wire we can find.

  5. T+2:30 · Venue dark
     No uplink anywhere. Terminals stop sending auths but keep taking orders. CRDTs hold menu, inventory, and order state consistent across every device.

  6. T+3:15 · Reconnect
     A peer reconnects to LTE. The mesh syncs. Queued auths submit in order. Operator dashboard catches up. No manual reconciliation.
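The reconnect step, "queued auths submit in order," is essentially a FIFO replay with idempotency keys so that a retry after a dropped response can't double-charge a card. A hedged Python sketch of that replay (the function and field names are hypothetical, not Zerobeat's API):

```python
import uuid

def flush_queue(queue, submit, already_seen):
    """Replay queued authorizations in FIFO order after reconnect.

    Each entry carries an idempotency key so re-running the flush,
    or retrying after a lost response, never submits a charge twice.
    """
    while queue:
        auth = queue[0]
        if auth["key"] not in already_seen:
            submit(auth)
            already_seen.add(auth["key"])
        queue.pop(0)

queue = [{"key": str(uuid.uuid4()), "amount_cents": 900},
         {"key": str(uuid.uuid4()), "amount_cents": 1250}]
sent = []
flush_queue(queue, sent.append, already_seen=set())
print([a["amount_cents"] for a in sent])  # → [900, 1250]
```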

5.0 · Mesh vs store-and-forward

Why architecture matters.

Store-and-forward
  • Terminal queues the transaction locally.
  • Card is not authorized until the network returns.
  • Declined cards during the queue are the merchant’s loss.
  • Inventory and menu diverge across terminals.
  • Manual reconciliation after every outage.
Zerobeat mesh
  • Every terminal is a peer.
  • Cards authorize live through any peer with signal.
  • Merchant exposure stays minimal even mid-outage.
  • CRDTs keep menu, inventory, and orders consistent.
  • Reconciliation is automatic when connectivity returns.

6.0 · FAQ

What architects ask first.

How does a terminal authorize a card when it has no uplink?

The mesh routes the authorization request through any peer terminal that does have an uplink — Wi-Fi, cellular, or a wired drop. The card processor sees a live authorization request, not a queued one, so the decline-vs-approval decision happens in real time. The terminal that originated the request gets the response back through the mesh over the same path.
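The relay step on the peer that holds the uplink is a simple forward-and-reply. As an illustrative Python sketch with stubbed-in processor and mesh channels (all names here are hypothetical, not Zerobeat's internal API):

```python
def relay_authorization(request, processor, reply_to):
    """Run on the peer that has the uplink: forward the auth to the
    processor live, then route the answer back through the mesh to
    the terminal that originated it."""
    response = processor(request)            # a live decision, not a queued one
    reply_to(request["origin"], response)    # same mesh path, reversed
    return response

# Stubs standing in for the card processor and the mesh reply channel.
def processor(req):
    return {"id": req["id"], "approved": req["amount_cents"] <= 50_000}

inbox = {}
def reply_to(origin, response):
    inbox[origin] = response

relay_authorization(
    {"id": "auth-7", "origin": "bar-2", "amount_cents": 1250},
    processor, reply_to)
print(inbox["bar-2"]["approved"])  # → True
```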

What if NO device on the venue has connectivity?

Cards stop authorizing live; that's true of any system. What's different on Zerobeat: orders still ring, menus stay consistent, inventory keeps decrementing across every terminal, comps and tickets still redeem. When any device reconnects, the queue submits in order and the operator dashboard catches up. No manual reconciliation.

Is this Bluetooth mesh or Wi-Fi mesh?

Both, depending on what's reachable. Peer discovery uses Wi-Fi Direct primarily and falls back to Bluetooth LE for low-bandwidth state sync when needed. Most peer traffic at a venue runs over the local Wi-Fi, where AP saturation usually isn't a problem for low-rate traffic.

How big can the mesh scale?

Hundreds of peers per venue, validated in our pilot deployments. CRDT state stays bounded because each entity (menu, order, inventory line) has a logical clock and gossip is targeted, not flooded. The architectural limit is well past any current customer's terminal count.
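One reason per-entity state stays bounded is that an entity like a menu price only needs a single value plus a single logical timestamp, whatever the sync count. A common CRDT for this is the last-writer-wins register; the Python below is an illustrative sketch of that idea, not Zerobeat's implementation, and the `(counter, replica)` tiebreak is one conventional choice.

```python
from dataclasses import dataclass

@dataclass
class LWWRegister:
    """Per-entity last-writer-wins register: one value plus one logical
    timestamp, so state stays O(1) per entity no matter how many syncs."""
    value: object = None
    stamp: tuple = (0, "")   # (logical counter, replica id): a total order

    def set(self, value, replica):
        self.stamp = (self.stamp[0] + 1, replica)
        self.value = value

    def merge(self, other):
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp

price_a = LWWRegister(); price_b = LWWRegister()
price_a.set(900, "A")    # stamp (1, "A")
price_b.set(950, "B")    # concurrent edit; replica id breaks the tie
price_a.merge(price_b); price_b.merge(price_a)
print(price_a.value, price_b.value)  # → 950 950
```

Both replicas converge on one price deterministically, and the register never grows, which is what keeps gossip payloads small enough to target rather than flood.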

What's the latency of a peer-routed authorization?

Typically 80–250 ms end-to-end versus 50–150 ms for a direct uplink — comfortably under the threshold a cashier or customer notices. The mesh adds one hop; the path through the card network is unchanged.

Do you use a third-party mesh networking library?

Some pieces, yes — Apple's MultipeerConnectivity for local peer transport on iOS, a CRDT implementation we built on top of standard primitives, and our own routing layer. The integration is in-house, but we didn't reinvent the radio.

How does this compare to Square's offline mode?

Square's offline mode is store-and-forward per device. Every terminal queues independently. When the network returns, the queue flushes. The merchant carries the risk of every declined card. Zerobeat clears authorizations live through the mesh wherever possible and reconciles state automatically when connectivity returns.

Where can I read the architecture in detail?

The /live-event-pos page has the long-form deep-dive — outage anatomy, buyer's criteria, a full vendor comparison matrix, and the implementation timeline.