2) Concurrency and Parallelism

This is the key architectural choice:

  • Matching engine core stays single-threaded (single task).
  • Parallelism happens around it.

Parallel edges include:

  • inbound parsing
  • auth/risk checks
  • persistence (append-only event log)
  • outbound market-data fanout

Implementation pattern

  • One engine_task owns all in-memory book state.
  • Everyone else sends commands through one bounded mpsc queue.
  • gRPC edge gets tower middleware (timeout, rate_limit, load_shed).
use tokio::sync::mpsc;

pub enum EngineCmd {
    Submit(NewOrder),
    Cancel { order_id: u64 },
    Replace { order_id: u64, new_px: u64, new_qty: u64 },
}

/// Single owner of all in-memory book state. Commands arrive on one
/// bounded channel, so matching is serialized and deterministic.
pub async fn engine_task(mut rx: mpsc::Receiver<EngineCmd>) -> anyhow::Result<()> {
    let mut book = OrderBook::new();

    // Loop ends cleanly when every sender has been dropped.
    while let Some(cmd) = rx.recv().await {
        match cmd {
            EngineCmd::Submit(order) => {
                let trades = book.submit(order);
                publish_trades(trades).await?;
            }
            EngineCmd::Cancel { order_id } => {
                book.cancel(order_id);
            }
            EngineCmd::Replace { order_id, new_px, new_qty } => {
                book.replace(order_id, new_px, new_qty);
            }
        }
    }

    Ok(())
}

Why this works

Deterministic matching comes from having one serialization point: if the order of commands entering the engine is fixed, the resulting sequence of trades and book states is reproducible.

Deliverable

A minimal engine that:

  • matches crossing orders,
  • emits trade events,
  • updates top-of-book deterministically.