The Blanc LOB Engine (BQL) is a high-performance, open-source limit order book engine designed for financial market simulations and trading systems. It provides robust features for order matching, market data replay, and telemetry, making it ideal for students, hobbyists, and professionals exploring algorithmic trading and market microstructure.
For inquiries related to trading applications or professional use cases, please feel free to reach out.
- Deterministic replay: Byte-for-byte golden-state checks over ITCH binaries and synthetic bursts.
- Patent-pending Dynamic Execution Gates (DEG): Breaker-style gate policies wrap the datapath with explicit safety and tail-latency controls. (Open-source release includes the core breaker state machine; some advanced DEG features remain proprietary.)
- Tail SLO enforcement: `scripts/verify_bench.py` treats p50/p95/p99 budgets as release gates, not suggestions.
- Structured observability: Every run emits JSONL and Prometheus-compatible text files for diffing, dashboards, and CI.
If you care about “can we replay this exactly, under load, and prove it didn’t get slower or weirder at the tails?”—this engine is the answer.
- Guarantees byte-for-byte identical results across runs.
- FNV-1a digest verification: Every replay produces a deterministic (non-cryptographic) fingerprint of the final order book state.
- Automated dual-run CI: GitHub Actions runs the same input twice and fails if digests differ—catching non-determinism instantly.
- Environment normalization: Fixed timezone, locale, and compiler ensure reproducibility.
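The FNV-1a fingerprint idea above is small enough to sketch. This is an illustrative Python model of a 64-bit FNV-1a digest over a serialized book state, not the engine's actual C++ `digest_fnv` implementation; the `serialize` helper and its layout are assumptions for the example:

```python
# Illustrative 64-bit FNV-1a (the hash family named above).
# The engine hashes its own binary book serialization; serialize()
# here is a stand-in to show why fixed ordering matters.
FNV64_OFFSET = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3
MASK64 = (1 << 64) - 1

def fnv1a_64(data: bytes) -> int:
    h = FNV64_OFFSET
    for byte in data:
        h ^= byte
        h = (h * FNV64_PRIME) & MASK64
    return h

def serialize(book: dict[int, int]) -> bytes:
    # Fixed iteration order (sorted by price) keeps the digest
    # independent of insertion sequence.
    return b"".join(
        price.to_bytes(8, "little") + qty.to_bytes(8, "little")
        for price, qty in sorted(book.items())
    )

book_a = {10050: 300, 10049: 120}
book_b = {10049: 120, 10050: 300}   # same state, different insertion order
assert fnv1a_64(serialize(book_a)) == fnv1a_64(serialize(book_b))
```

Two runs that reach the same book state therefore emit the same 16-hex-digit digest, which is what the dual-run CI compares.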
- Same workflow proves determinism and measures p50/p95/p99 tail latency.
- Release gates enforce SLO budgets: if p99 regresses, CI fails.
- Structured artifacts (`bench.jsonl`, `metrics.prom`) enable historical tracking and automated dashboards.
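The release-gate idea is easy to model: compute p50/p95/p99 over per-message latencies and fail the run if any exceeds its budget. A minimal sketch in that spirit; the budget numbers and function names are hypothetical, not `scripts/verify_bench.py`'s actual interface:

```python
import math

# Hypothetical tail-latency release gate in the spirit of
# scripts/verify_bench.py: percentile budgets are pass/fail, not advisory.
def percentile(sorted_samples: list[float], p: float) -> float:
    # Nearest-rank percentile over a pre-sorted, non-empty sample list.
    idx = max(0, math.ceil(p / 100 * len(sorted_samples)) - 1)
    return sorted_samples[idx]

def check_gates(latencies_ns: list[float], budgets_ns: dict[str, float]) -> bool:
    s = sorted(latencies_ns)
    observed = {k: percentile(s, float(k[1:])) for k in ("p50", "p95", "p99")}
    for k, v in observed.items():
        print(f"{k}: {v:.0f} ns (budget {budgets_ns[k]:.0f} ns)")
    return all(observed[k] <= budgets_ns[k] for k in budgets_ns)

# 100 samples with a tail spike: p50/p95 look fine, p99 blows the budget.
lat = [100.0] * 97 + [900.0, 950.0, 5000.0]
assert not check_gates(lat, {"p50": 200.0, "p95": 1000.0, "p99": 900.0})
```

In CI, a `False` result maps to a non-zero exit code, so a p99 regression fails the job even when median throughput looks healthy.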
- Breaker-style state machine (Fuse → Local → Feeder → Main → Kill).
- Preserves deterministic replay while containing pathological scenarios.
- Explicit publish control: corrupted runs are flagged, not silently trusted.
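The breaker ladder above (Fuse → Local → Feeder → Main → Kill) can be modeled as a tiny escalating state machine. The escalation and reset semantics in this sketch are assumptions for illustration; the engine's actual DEG policies carry more state (thresholds, timers) and are partly proprietary:

```python
# Sketch of a breaker-style gate ladder: Fuse -> Local -> Feeder -> Main -> Kill.
# Trip/reset semantics here are illustrative assumptions only.
LEVELS = ["Fuse", "Local", "Feeder", "Main", "Kill"]

class BreakerLadder:
    def __init__(self) -> None:
        self.level = -1  # -1 means all gates closed (normal flow)

    @property
    def state(self) -> str:
        return "Closed" if self.level < 0 else LEVELS[self.level]

    def trip(self) -> str:
        # Each fault escalates one rung; Kill is terminal.
        self.level = min(self.level + 1, len(LEVELS) - 1)
        return self.state

    def reset(self) -> str:
        # Recovery walks back one rung at a time, so a flapping input
        # cannot bounce straight back to normal flow.
        if self.state != "Kill":  # Kill requires manual intervention
            self.level = max(self.level - 1, -1)
        return self.state

b = BreakerLadder()
assert [b.trip() for _ in range(5)] == LEVELS
assert b.reset() == "Kill"  # terminal: stays killed until operator action
```

Because the ladder is deterministic (same faults, same transitions), gated runs still replay byte-for-byte.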
- Every run produces machine-readable, CI-auditable artifacts.
- Structured outputs: JSONL event logs + Prometheus textfiles.
- Release gates as code: `scripts/verify_bench.py` treats performance budgets as pass/fail gates.
- Artifact packaging: Automated artifact creation with provenance metadata.
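Both artifact formats are plain text, which is what makes them diff- and CI-friendly. A sketch of what one run record and the matching Prometheus textfile lines might look like; the field and metric names here are made up for illustration, not the engine's documented schema:

```python
import json

# Illustrative bench record and Prometheus textfile emission.
# Field/metric names are assumptions; consult the real bench.jsonl /
# metrics.prom for the engine's actual schema.
run = {"run_id": 1, "msgs_per_s": 12500000, "p99_ns": 840, "digest": "af63dc4c8601ec8c"}

jsonl_line = json.dumps(run, sort_keys=True)  # one record per line -> diff-friendly

prom_lines = [
    "# TYPE lob_throughput_msgs gauge",
    f"lob_throughput_msgs {run['msgs_per_s']}",
    "# TYPE lob_latency_p99_ns gauge",
    f"lob_latency_p99_ns {run['p99_ns']}",
]

print(jsonl_line)
print("\n".join(prom_lines))
```

Sorted keys keep the JSONL byte-stable across runs, and the textfile lines can be dropped straight into a node_exporter textfile collector directory.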
- Structure-of-Arrays (SoA) layout for cache efficiency.
- Fixed iteration order regardless of insertion sequence.
- FNV-1a rolling hash captures exact state, not approximations.
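The three bullets above can be illustrated together. This Python sketch mimics one Structure-of-Arrays book side with parallel lists and sorted (hence insertion-order-independent) iteration; the real engine does this in C++ for cache locality, so treat the class and field names as illustrative only:

```python
import bisect

# Structure-of-Arrays sketch: one parallel array per field, rather than
# one object per price level. In the C++ engine, scanning a single field
# (e.g. qty for a depth sum) then touches contiguous memory.
class BookSideSoA:
    def __init__(self) -> None:
        self.prices: list[int] = []   # kept sorted -> fixed iteration order
        self.qtys:   list[int] = []
        self.orders: list[int] = []

    def add(self, price: int, qty: int) -> None:
        i = bisect.bisect_left(self.prices, price)
        if i < len(self.prices) and self.prices[i] == price:
            self.qtys[i] += qty       # existing level: aggregate in place
            self.orders[i] += 1
        else:                          # new level: insert into every array
            self.prices.insert(i, price)
            self.qtys.insert(i, qty)
            self.orders.insert(i, 1)

    def depth(self) -> int:
        return sum(self.qtys)          # single-field scan: the SoA win

side = BookSideSoA()
for px, q in [(10050, 300), (10049, 120), (10050, 50)]:
    side.add(px, q)
assert side.prices == [10049, 10050]   # order independent of insertion sequence
assert side.qtys == [120, 350]
```

Because iteration order is fixed, hashing the arrays in sequence yields the same digest no matter how the levels were built up, which is exactly what the golden-state checks rely on.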
```
┌──────────────────────────────────────────────────────────────────────────────┐
│ QUANT LOB ENGINE — Deterministic Replay & Benchmark Harness                  │
├──────────────────────────────────────────────────────────────────────────────┤
│ Inputs                       Core                            Outputs         │
│ ┌─────────────────┐        ┌──────────────────────────────┐   ┌───────────┐  │
│ │ trace_loader    │───▲───▶│ Deterministic Replay         │───┬─▶│ Stdout  │  │
│ │ (ITCH bin; CSV/ │   │    │ Scheduler (ST; MT optional)  │   │  │ summary │  │
│ │ PCAP→bin bridge)│   │    └──────────────┬───────────────┘   │  └─────────┘  │
│ └─────────────────┘   │                   │                   │               │
│ ┌─────────────────┐   │                   │                   │  ┌───────────┐│
│ │ gen_synth       │───┘  Fault Injection / Gates (DEG‑compatible;            ││
│ │ (synthetic)     │       breaker‑style, optional)        └─▶│ Artifacts  ││
│ └─────────────────┘              ▲                            │ bench.jsonl│
│                                  │                            │ metrics.prom│
│                   ┌──────────────────────────────┐            └───────────┘
│                   │ Golden-state Checker         │◀──────┘
│                   │ (byte-for-byte digest_fnv)   │
│                   └──────────────────────────────┘
│                                  │
│                                  ▼
│ ┌──────────────────────────────┐   ┌──────────────────────────────┐
│ │ Benchmark Harness            │   │ Structured Observability     │
│ │ • msgs/s throughput          │   │ • JSONL event logs           │
│ │ • p50/p95/p99 latency        │   │ • Prometheus textfile        │
│ │ • config matrix sweeps       │   │ • CI artifacts (goldens)     │
│ └──────────────────────────────┘   └──────────────────────────────┘
└──────────────────────────────────────────────────────────────────────────────┘
```
- ITCH binaries and synthetic `gen_synth` bursts feed a deterministic scheduler that enforces DEG-compatible gate policies before emitting telemetry.
- Golden digest checks ensure byte-for-byte stability, while the bench harness sweeps configs to publish `bench.jsonl`, Prometheus textfiles, and CI-ready artifacts.
- Structured observability (JSONL + textfile) makes it easy to diff runs, enforce SLOs, and root-cause tail spikes.
- Dynamic Execution Gates (DEG) model tail behavior as first-class policy, making “breaker-style” protections and SLO checks part of the engine instead of bolted-on monitoring.
```
┌────────────────────────────────────────────────────────────────────┐
│                   QUANT LOB ENGINE (HFT SYSTEM)                    │
├────────────────────────────────────────────────────────────────────┤
│ ITCH 5.0 parser ──▶ L2/L3 order book (SoA) ──▶ Price levels        │
│        │                      │                                    │
│        ▼                      ▼                                    │
│ Dynamic Execution Gates (DEG) ──▶ Telemetry exporter               │
│        │                      │                                    │
│        ▼                      ▼                                    │
│ gen_synth fixtures        Golden determinism tests                 │
└────────────────────────────────────────────────────────────────────┘
```
Gate policy details live in `docs/gates.md`; CI wiring is under `.github/workflows/verify-bench.yml`.
- Golden digest + explicit tail budgets so regressions fail CI early.
- Observability-first artifacts: `bench.jsonl` + `metrics.prom` for diffing, dashboards, and automated SLO checks.
- Conformance + bench scripts are wired for cron / CI, not just local runs.
- CI-ready: determinism, bench, and CodeQL workflows pinned to SHAs.
- Designed to slot into HFT / research pipelines as a replay + guardrail module rather than a one-off benchmark toy.
Prereqs: CMake ≥ 3.20, Ninja, a modern C++20 compiler, Boost, and `nlohmann-json`.

```
cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
ls build/bin/replay
```

Notes:
- `build/compile_commands.json` aids IDEs.
- Release builds add stack protector, FORTIFY, and PIE when supported.
- Enable sanitizers via `-DENABLE_SANITIZERS=ON` on Debug builds.
```
# Default run
build/bin/replay

# Custom input and limits
build/bin/replay --input path/to/input.bin \
  --gap-ppm 0 --corrupt-ppm 0 --skew-ppm 0 --burst-ms 0
```

Artifacts land in `artifacts/bench.jsonl`, `artifacts/metrics.prom`, and the HTML analytics dashboard at `artifacts/report/index.html`. Deterministic fixtures live under `data/golden/`; regenerate with `gen_synth` as needed.
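Because `bench.jsonl` is line-oriented JSON, comparing two runs takes only a few lines of scripting. A sketch with assumed field names (`digest`, `p99_ns`), not the engine's documented schema:

```python
import json

# Sketch: compare two bench.jsonl runs record-by-record and flag digest
# mismatches or p99 regressions. Field names are illustrative assumptions.
def diff_runs(jsonl_a: str, jsonl_b: str, p99_tolerance: float = 0.10) -> list[str]:
    issues = []
    for n, (la, lb) in enumerate(zip(jsonl_a.splitlines(), jsonl_b.splitlines()), 1):
        a, b = json.loads(la), json.loads(lb)
        if a.get("digest") != b.get("digest"):
            issues.append(f"line {n}: digest mismatch (non-determinism?)")
        if b.get("p99_ns", 0) > a.get("p99_ns", 0) * (1 + p99_tolerance):
            issues.append(f"line {n}: p99 regressed {a['p99_ns']} -> {b['p99_ns']}")
    return issues

run_a = '{"digest": "abc", "p99_ns": 800}'
run_b = '{"digest": "abc", "p99_ns": 1200}'
assert diff_runs(run_a, run_b) == ["line 1: p99 regressed 800 -> 1200"]
```

The same pattern works for historical tracking: append each CI run's record and diff against the previous accepted baseline.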
Build the image and run the containerized replay:

```
# Build (from repo root)
docker build -t blanc-quant-lob-engine:local .

# Run default golden replay inside the container
docker run --rm blanc-quant-lob-engine:local /app/replay --input /app/data/golden/itch_1m.bin

# Pass a custom file mounted from host
docker run --rm -v "$PWD/data:/data" blanc-quant-lob-engine:local \
  /app/replay --input /data/your_trace.bin
```

```
scripts/verify_golden.sh       # digest determinism check
scripts/bench.sh 9             # multi-run benchmark harness
scripts/prom_textfile.sh ...   # emit metrics.prom schema
scripts/verify_bench.py        # release gate enforcement
scripts/bench_report.py        # render HTML latency/digest dashboard
```

- The golden digest resides at `data/golden/itch_1m.fnv`; `ctest -R golden_state` plus `scripts/verify_golden.sh` ensure reproducibility.
- Use `cmake --build build -t golden_sample` (or `make golden`) to refresh fixtures after new traces are accepted.
Ubuntu:

```
sudo apt-get update
sudo apt-get install -y cmake ninja-build libboost-all-dev \
  libnlohmann-json3-dev jq
```

macOS:

```
brew update
brew install cmake ninja jq nlohmann-json
```

Enable tests with `-DBUILD_TESTING=ON` and run `ctest --output-on-failure -R book_snapshot` from `build/`. Tests expect `./bin/replay` within the working directory.
`./scripts/release_package.sh` creates rights-marked zips plus manifests.

```
cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
./scripts/release_package.sh --build-dir build --art-dir artifacts \
  --out-dir artifacts/release --git-sha "$(git rev-parse --short HEAD)"
```

Add `--sign` for optional detached GPG signatures. The snapshot-nightly workflow runs this and uploads the bundle automatically.
- `scripts/pin_actions_by_shas.sh` keeps workflow `uses:` entries pinned.
- `.github/workflows/verify-bench.yml` exposes a manual/cron gate run.
- `.github/workflows/determinism.yml` surfaces p50/p95/p99 in the job summary and emits notices for easy viewing.
- `.github/workflows/ci.yml` mirrors bench summary surfacing in the job summary.
- `.github/workflows/container-scan.yml` pins Trivy to v0.67.2, runs fs & image scans non-blocking, and uploads SARIF to the Security tab.
- `docs/technology_transition.md` + `docs/deliverable_marking_checklist.md` cover gov delivery and rights-marking guidance.
```
build/bin/replay --input data/golden/itch_1m.bin --cpu-pin 3
# or
CPU_PIN=3 make bench
```

Pinning reduces tail variance on some hosts; measure on your hardware.
```
include/   # headers
src/       # replay engine, detectors, telemetry
scripts/   # bench, verify, release, pin helpers
artifacts/ # generated outputs (gitignored)
```
SECURITY.md documents coordinated disclosure. CI integrates detect-secrets
and CodeQL. Signing helpers live under scripts/ if you need to stamp
artifacts. Blanc LOB Engine is opinionated toward safety-by-default: determinism,
repeatable benches, and explicit tail SLOs are non-negotiable controls rather
than after-the-fact monitoring.
See CONTRIBUTING.md for workflow expectations. Pull requests should pin new
dependencies, ship matching tests, and update docs for externally visible
changes.
Distributed under the Business Source License 1.1 (LICENSE.txt). Research and
non-commercial evaluation are permitted; production use requires a commercial
license until the change date defined in COMMERCIAL_LICENSE.md.
Research users can clone and run the engine today; commercial or production
deployment should follow the terms in COMMERCIAL_LICENSE.md.
This release includes the prebuilt binaries and necessary artifacts for version 1.00 of the Blanc LOB Engine; students and hobbyists are free to explore and use them. If you are interested in accessing the full source code, please reach out directly for further details.
This section documents the HTML analytics report generated by
scripts/bench_report.py and visitor tracking integration.
Run the benchmark report generator after completing benchmark runs:

```
python3 scripts/bench_report.py --bench-file artifacts/bench.jsonl \
  --metrics-file artifacts/metrics.prom --output-dir artifacts/report
```

The report will be generated at `artifacts/report/index.html`.
The repository uses visitor badges to track page views. Badge format:
Project badge:
Issue-specific badge:
Replace <issue_id> with the GitHub issue number.