2) Concurrency and Parallelism
This is the key architectural choice:
- Matching engine core stays single-threaded (single task).
- Parallelism happens around it.
Parallel edges include:
- inbound parsing
- auth/risk checks
- persistence (append-only event log)
- outbound market-data fanout
Implementation pattern
- One `engine_task` owns all in-memory book state.
- Everyone else sends commands through one bounded `mpsc` queue.
- The gRPC edge gets `tower` middleware (`timeout`, `rate_limit`, `load_shed`).
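To see why the bounded queue matters, here is a minimal sketch using std's `sync_channel` as a stand-in for the bounded `tokio::sync::mpsc` channel at the real edge (the function name `drain_in_order` is hypothetical): a full queue blocks the producer instead of buffering unbounded work, and the consumer still observes commands in submission order.

```rust
use std::sync::mpsc;
use std::thread;

// Backpressure demo: capacity-2 bounded queue. A fast producer blocks on
// `send` while the queue is full; the consumer drains in submission order.
fn drain_in_order(n: u64) -> Vec<u64> {
    let (tx, rx) = mpsc::sync_channel::<u64>(2);
    let producer = thread::spawn(move || {
        for order_id in 0..n {
            tx.send(order_id).unwrap(); // blocks while the queue is full
        }
        // `tx` dropped here, which closes the channel and ends the drain loop.
    });
    let drained: Vec<u64> = rx.iter().collect();
    producer.join().unwrap();
    drained
}

fn main() {
    assert_eq!(drain_in_order(5), vec![0, 1, 2, 3, 4]);
    println!("drained in order");
}
```

The same shape applies at the gRPC edge: when the engine falls behind, senders feel it immediately rather than growing an unbounded in-memory backlog.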
```rust
use tokio::sync::mpsc;

// `NewOrder`, `OrderBook`, and `publish_trades` are defined elsewhere.
pub enum EngineCmd {
    Submit(NewOrder),
    Cancel { order_id: u64 },
    Replace { order_id: u64, new_px: u64, new_qty: u64 },
}

// Sole owner of the book: every mutation flows through this one task.
pub async fn engine_task(mut rx: mpsc::Receiver<EngineCmd>) -> anyhow::Result<()> {
    let mut book = OrderBook::new();
    while let Some(cmd) = rx.recv().await {
        match cmd {
            EngineCmd::Submit(order) => {
                let trades = book.submit(order);
                publish_trades(trades).await?;
            }
            EngineCmd::Cancel { order_id } => {
                book.cancel(order_id);
            }
            EngineCmd::Replace { order_id, new_px, new_qty } => {
                book.replace(order_id, new_px, new_qty);
            }
        }
    }
    Ok(())
}
```
Why this works
Deterministic matching comes from a single serialization point: the engine task is the only writer of book state, so a fixed command order always reproduces the same book and the same trade stream.
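That reproducibility claim can be sketched with a deliberately simplified command set and book (hypothetical `Cmd` and `replay`, not the real engine types): replaying the same command log through a fresh state always yields the same top-of-book.

```rust
use std::collections::{BTreeMap, HashMap};

// Toy commands: bids only, no matching, just price-level bookkeeping.
#[derive(Clone, Copy)]
enum Cmd {
    Submit { order_id: u64, px: u64, qty: u64 },
    Cancel { order_id: u64 },
}

// Apply a command log to a fresh book; return best bid (px, aggregate qty).
fn replay(cmds: &[Cmd]) -> Option<(u64, u64)> {
    let mut book: BTreeMap<u64, u64> = BTreeMap::new(); // px -> total qty
    let mut orders: HashMap<u64, (u64, u64)> = HashMap::new();
    for cmd in cmds {
        match *cmd {
            Cmd::Submit { order_id, px, qty } => {
                orders.insert(order_id, (px, qty));
                *book.entry(px).or_insert(0) += qty;
            }
            Cmd::Cancel { order_id } => {
                if let Some((px, qty)) = orders.remove(&order_id) {
                    let level = book.get_mut(&px).unwrap();
                    *level -= qty;
                    if *level == 0 { book.remove(&px); }
                }
            }
        }
    }
    book.iter().next_back().map(|(&px, &qty)| (px, qty))
}

fn main() {
    let log = vec![
        Cmd::Submit { order_id: 1, px: 100, qty: 5 },
        Cmd::Submit { order_id: 2, px: 101, qty: 3 },
        Cmd::Cancel { order_id: 2 },
    ];
    // Same log, fresh state, same outcome: the engine's determinism property.
    assert_eq!(replay(&log), replay(&log));
    assert_eq!(replay(&log), Some((100, 5)));
}
```

This is also what makes an append-only event log sufficient for recovery: re-feed the log to a fresh engine task and you end up in the same state.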
Deliverable
A minimal engine that:
- matches crossing orders,
- emits trade events,
- updates top-of-book deterministically.
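A minimal sketch of such an engine is below, assuming simple definitions for `NewOrder`, `Trade`, and `OrderBook` (the document does not show them, so these are illustrative): price-time priority, fills at the resting maker's price, and top-of-book derived from the price-level maps.

```rust
use std::collections::{BTreeMap, VecDeque};

#[derive(Clone, Copy, Debug, PartialEq)]
pub enum Side { Buy, Sell }

#[derive(Clone, Copy, Debug)]
pub struct NewOrder { pub id: u64, pub side: Side, pub px: u64, pub qty: u64 }

#[derive(Debug, PartialEq)]
pub struct Trade { pub maker_id: u64, pub taker_id: u64, pub px: u64, pub qty: u64 }

#[derive(Default)]
pub struct OrderBook {
    bids: BTreeMap<u64, VecDeque<(u64, u64)>>, // px -> FIFO of (order_id, qty)
    asks: BTreeMap<u64, VecDeque<(u64, u64)>>,
}

impl OrderBook {
    pub fn submit(&mut self, mut o: NewOrder) -> Vec<Trade> {
        let mut trades = Vec::new();
        loop {
            // Best opposite price: lowest ask for a buy, highest bid for a sell.
            let best = match o.side {
                Side::Buy => self.asks.keys().next().copied(),
                Side::Sell => self.bids.keys().next_back().copied(),
            };
            let crosses = match (o.side, best) {
                (Side::Buy, Some(px)) => o.px >= px,
                (Side::Sell, Some(px)) => o.px <= px,
                _ => false,
            };
            if o.qty == 0 || !crosses { break; }
            let px = best.unwrap();
            let book = match o.side { Side::Buy => &mut self.asks, Side::Sell => &mut self.bids };
            let level = book.get_mut(&px).unwrap();
            let &(maker_id, maker_qty) = level.front().unwrap();
            let fill = o.qty.min(maker_qty);
            // Fill at the resting maker's price, oldest order first.
            trades.push(Trade { maker_id, taker_id: o.id, px, qty: fill });
            o.qty -= fill;
            if maker_qty == fill { level.pop_front(); } else { level.front_mut().unwrap().1 -= fill; }
            if level.is_empty() { book.remove(&px); }
        }
        if o.qty > 0 {
            // Remainder rests on its own side of the book.
            let book = match o.side { Side::Buy => &mut self.bids, Side::Sell => &mut self.asks };
            book.entry(o.px).or_default().push_back((o.id, o.qty));
        }
        trades
    }

    pub fn top_of_book(&self) -> (Option<u64>, Option<u64>) {
        (self.bids.keys().next_back().copied(), self.asks.keys().next().copied())
    }
}

fn main() {
    let mut book = OrderBook::default();
    assert!(book.submit(NewOrder { id: 1, side: Side::Buy, px: 100, qty: 5 }).is_empty());
    let trades = book.submit(NewOrder { id: 2, side: Side::Sell, px: 99, qty: 3 });
    // The sell crosses the resting bid: one trade at the maker's price, 100.
    assert_eq!(trades, vec![Trade { maker_id: 1, taker_id: 2, px: 100, qty: 3 }]);
    assert_eq!(book.top_of_book(), (Some(100), None));
}
```

Wrapping this `OrderBook` in the `engine_task` loop above gives the full deliverable: one writer, reproducible matching, and trades emitted as events.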