Blanc Quant LOB Engine (Original) is a C++20 synthetic microbenchmark focused on loop and memory performance—not a full market replay/matching engine. It uses synthetic data, computes FNV-based digests for basic validation, and emits limited telemetry. BQL 2.0 (Patent-Pending) is the production system, adding real ITCH market-data replay and more.

Blanc Quant LOB Engine (BQL Engine)


About Blanc LOB Engine

The Blanc LOB Engine (BQL) is a high-performance, open-source limit order book engine designed for financial market simulations and trading systems. It provides robust features for order matching, market data replay, and telemetry, making it ideal for students, hobbyists, and professionals exploring algorithmic trading and market microstructure.

For inquiries related to trading applications or professional use cases, please feel free to reach out.

Deterministic C++20 Limit Order Book (LOB) Replay & Benchmarking Engine

For access to the FULL proprietary feature set, please inquire directly (600+ unique clones as of 12/07/25).

  • Deterministic replay: Byte-for-byte golden-state checks over ITCH binaries and synthetic bursts.
  • Patent-pending Dynamic Execution Gates (DEG): Breaker-style gate policies wrap the datapath with explicit safety and tail-latency controls. (Open-source release includes the core breaker state machine; some advanced DEG features remain proprietary.)
  • Tail SLO enforcement: scripts/verify_bench.py treats p50/p95/p99 budgets as release gates, not suggestions.
  • Structured observability: Every run emits JSONL and Prometheus-compatible text files for diffing, dashboards, and CI.

If you care about “can we replay this exactly, under load, and prove it didn’t get slower or weirder at the tails?”—this engine is the answer.

What Makes This Innovative

1. Golden-State Deterministic Replay

  • Guarantees byte-for-byte identical results across runs.
  • FNV-1a digest verification: Every replay produces a deterministic fingerprint of the final order book state. (FNV-1a is a fast non-cryptographic hash, suited to regression detection rather than tamper resistance.)
  • Automated dual-run CI: GitHub Actions runs the same input twice and fails if digests differ—catching non-determinism instantly.
  • Environment normalization: Fixed timezone, locale, and compiler ensure reproducibility.
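The dual-run digest idea above can be sketched in a few lines. This is a minimal illustration, not the engine's wire format: the FNV-1a constants are the standard 64-bit parameters, but the toy `replay` function and its `(side, price, qty)` event tuples are hypothetical.

```python
# Standard 64-bit FNV-1a parameters (these are the published constants).
FNV_OFFSET = 0xcbf29ce484222325
FNV_PRIME = 0x100000001b3

def fnv1a_64(data: bytes) -> int:
    """64-bit FNV-1a over a byte string."""
    h = FNV_OFFSET
    for b in data:
        h ^= b
        h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

def replay(events):
    """Toy replay: apply (side, price, qty) deltas to a book dict,
    then hash a canonical serialization of the final state."""
    book = {}
    for side, price, qty in events:
        book[(side, price)] = book.get((side, price), 0) + qty
    # Canonical serialization: fixed sort order, independent of event order.
    blob = b"".join(
        f"{s}:{p}:{q};".encode() for (s, p), q in sorted(book.items())
    )
    return fnv1a_64(blob)

events = [("B", 100, 5), ("S", 101, 3), ("B", 100, -2)]
# Dual-run check: the same input must always produce the same digest.
assert replay(events) == replay(list(events))
```

The CI version of this runs the real binary twice and diffs the emitted digest files; the principle is the same.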

2. Integrated Determinism + Performance Testing

  • Same workflow proves determinism and measures p50/p95/p99 tail latency.
  • Release gates enforce SLO budgets: if p99 regresses, CI fails.
  • Structured artifacts (bench.jsonl, metrics.prom) enable historical tracking and automated dashboards.
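The shape of such a release gate can be sketched as follows. Field names (`latency_us`) and the budget structure are assumptions for illustration; the actual schema of bench.jsonl and the flags of scripts/verify_bench.py may differ.

```python
import json

def percentile(samples, pct):
    """Nearest-rank percentile over a list of latency samples."""
    xs = sorted(samples)
    k = max(0, min(len(xs) - 1, round(pct / 100 * (len(xs) - 1))))
    return xs[k]

def gate(jsonl_lines, budgets_us):
    """Parse one latency sample per JSONL line; pass only if every
    budgeted percentile is within its limit."""
    lats = [json.loads(line)["latency_us"] for line in jsonl_lines]
    report = {p: percentile(lats, p) for p in (50, 95, 99)}
    ok = all(report[p] <= budgets_us[p] for p in budgets_us)
    return ok, report

lines = [json.dumps({"latency_us": v}) for v in [5, 6, 7, 8, 9, 10, 40]]
ok, report = gate(lines, {50: 10, 95: 50, 99: 50})
assert ok  # p50 = 8, p99 = 40: all within budget, so CI would pass
```

A CI job would exit non-zero when `ok` is false, turning a p99 regression into a failed build rather than a dashboard footnote.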

3. Dynamic Execution Gates (Patent-Pending)

  • Breaker-style state machine (Fuse → Local → Feeder → Main → Kill).
  • Preserves deterministic replay while containing pathological scenarios.
  • Explicit publish control: corrupted runs are flagged, not silently trusted.
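As a rough intuition for the breaker ladder, consider the escalation sketch below. This is illustrative only: the real DEG semantics (what each level guards and when it trips) live in docs/gates.md and are partly proprietary, and the `trip_threshold` logic here is invented.

```python
LEVELS = ["Fuse", "Local", "Feeder", "Main", "Kill"]

class Breaker:
    """Deterministic escalation ladder: repeated faults push the gate one
    level up the chain; reaching Kill flags the run as unpublishable."""

    def __init__(self, trip_threshold=3):
        self.level = 0            # index into LEVELS
        self.faults = 0
        self.trip_threshold = trip_threshold

    def record(self, fault: bool) -> str:
        if fault:
            self.faults += 1
            if self.faults >= self.trip_threshold and self.level < len(LEVELS) - 1:
                self.level += 1   # escalate: Fuse -> Local -> ... -> Kill
                self.faults = 0
        return LEVELS[self.level]

    @property
    def publishable(self) -> bool:
        # Explicit publish control: a run tripped to Kill is flagged, not trusted.
        return LEVELS[self.level] != "Kill"

b = Breaker(trip_threshold=2)
for _ in range(8):
    b.record(fault=True)
assert LEVELS[b.level] == "Kill" and not b.publishable
```

Because escalation depends only on the fault sequence, the gate's decisions replay deterministically along with the rest of the run.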

4. Telemetry-Driven Golden-State Validation

  • Every run produces machine-readable, CI-auditable artifacts.
  • Structured outputs: JSONL event logs + Prometheus textfiles.
  • Release gates as code: scripts/verify_bench.py treats performance budgets as pass/fail gates.
  • Artifact packaging: Automated artifact creation with provenance metadata.
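The Prometheus textfile side of this is just the standard text exposition format. The metric name below is an assumption, not the engine's actual schema:

```python
import os
import tempfile

def write_textfile(path, stats):
    """Write latency percentiles in Prometheus text exposition format,
    as a node_exporter textfile collector would scrape it."""
    lines = [
        "# HELP replay_latency_us Replay latency percentiles in microseconds.",
        "# TYPE replay_latency_us gauge",
    ]
    for pct, val in sorted(stats.items()):
        lines.append(f'replay_latency_us{{quantile="{pct}"}} {val}')
    text = "\n".join(lines) + "\n"
    with open(path, "w") as f:
        f.write(text)
    return text

path = os.path.join(tempfile.gettempdir(), "metrics.prom")
out = write_textfile(path, {"0.5": 8, "0.95": 21, "0.99": 40})
assert 'replay_latency_us{quantile="0.99"} 40' in out
```

Because the format is line-oriented plain text, two runs' metrics.prom files diff cleanly in CI.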

5. Canonical Serialization for Order Books

  • Structure-of-Arrays (SoA) layout for cache efficiency.
  • Fixed iteration order regardless of insertion sequence.
  • FNV-1a rolling hash captures exact state, not approximations.
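The fixed-iteration-order property is what makes the digest meaningful. A minimal SoA-style sketch (parallel sorted columns; names illustrative, the real layout is more involved):

```python
import bisect

class Levels:
    """Structure-of-Arrays price-level store: two parallel columns kept
    sorted by price, so iteration order never depends on insertion order."""

    def __init__(self):
        self.prices = []   # sorted column of price keys
        self.qtys = []     # parallel column of aggregate quantities

    def add(self, price, qty):
        i = bisect.bisect_left(self.prices, price)
        if i < len(self.prices) and self.prices[i] == price:
            self.qtys[i] += qty        # aggregate into existing level
        else:
            self.prices.insert(i, price)
            self.qtys.insert(i, qty)

    def snapshot(self):
        # Canonical view: ascending price, whatever the insert order was.
        return list(zip(self.prices, self.qtys))

a, b = Levels(), Levels()
for p, q in [(101, 3), (99, 5), (100, 2)]:
    a.add(p, q)
for p, q in [(100, 2), (101, 3), (99, 5)]:
    b.add(p, q)
assert a.snapshot() == b.snapshot() == [(99, 5), (100, 2), (101, 3)]
```

Hashing `snapshot()` therefore captures exact book state: two books built from reordered but equivalent inputs serialize to identical bytes.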

System architecture

┌──────────────────────────────────────────────────────────────────────┐
│   QUANT LOB ENGINE — Deterministic Replay & Benchmark Harness        │
└──────────────────────────────────────────────────────────────────────┘

  Inputs                     Core                            Outputs

  ┌─────────────────┐   ┌──────────────────────────────┐   ┌──────────────┐
  │ trace_loader    │──▶│ Deterministic Replay         │──▶│ Stdout       │
  │ (ITCH bin; CSV/ │   │ Scheduler (ST; MT optional)  │   │ summary      │
  │ PCAP→bin bridge)│   │ + Fault Injection / Gates    │   └──────────────┘
  └─────────────────┘   │   (DEG-compatible,           │   ┌──────────────┐
  ┌─────────────────┐   │    breaker-style, optional)  │──▶│ Artifacts    │
  │ gen_synth       │──▶│                              │   │ bench.jsonl  │
  │ (synthetic)     │   └──────────────┬───────────────┘   │ metrics.prom │
  └─────────────────┘                  │                   └──────────────┘
                                       ▼
                        ┌──────────────────────────────┐
                        │ Golden-state Checker         │
                        │ (byte-for-byte digest_fnv)   │
                        └──────────────┬───────────────┘
                                       │
                                       ▼
  ┌──────────────────────────────┐   ┌──────────────────────────────┐
  │ Benchmark Harness            │   │ Structured Observability     │
  │ • msgs/s throughput          │   │ • JSONL event logs           │
  │ • p50/p95/p99 latency        │   │ • Prometheus textfile        │
  │ • config matrix sweeps       │   │ • CI artifacts (goldens)     │
  └──────────────────────────────┘   └──────────────────────────────┘

Flow summary

  • ITCH binaries and synthetic gen_synth bursts feed a deterministic scheduler that enforces DEG-compatible gate policies before emitting telemetry.
  • Golden digest checks ensure byte-for-byte stability, while the bench harness sweeps configs to publish bench.jsonl, Prometheus textfiles, and CI-ready artifacts.
  • Structured observability (JSONL + textfile) makes it easy to diff runs, enforce SLOs, and root-cause tail spikes.
  • Dynamic Execution Gates (DEG) model tail behavior as first-class policy, making “breaker-style” protections and SLO checks part of the engine instead of bolted-on monitoring.

Classic HFT datapath

┌────────────────────────────────────────────────────────────────────┐
│                    QUANT LOB ENGINE (HFT SYSTEM)                   │
├────────────────────────────────────────────────────────────────────┤
│  ITCH 5.0 parser  ──▶  L2/L3 order book (SoA) ──▶  Price levels    │
│            │                             │                        │
│            ▼                             ▼                        │
│      Dynamic Execution Gates (DEG) ──▶ Telemetry exporter          │
│            │                             │                        │
│            ▼                             ▼                        │
│     gen_synth fixtures          Golden determinism tests          │
└────────────────────────────────────────────────────────────────────┘

Gate policy details live in docs/gates.md; CI wiring is under .github/workflows/verify-bench.yml.

Highlights

  • Golden digest + explicit tail budgets so regressions fail CI early.
  • Observability-first artifacts: bench.jsonl + metrics.prom for diffing, dashboards, and automated SLO checks.
  • Conformance + bench scripts are wired for cron / CI, not just local runs.
  • CI-ready: determinism, bench, and CodeQL workflows pinned to SHAs.
  • Designed to slot into HFT / research pipelines as a replay + guardrail module rather than a one-off benchmark toy.

Build

Prereqs: CMake ≥ 3.20, Ninja, modern C++20 compiler, Boost, and nlohmann-json.

cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
ls build/bin/replay

Notes:

  • build/compile_commands.json aids IDEs.
  • Release builds add stack protector, FORTIFY, PIE when supported.
  • Enable sanitizers via -DENABLE_SANITIZERS=ON on Debug builds.

Run

# Default run
build/bin/replay

# Custom input and limits
build/bin/replay --input path/to/input.bin \
  --gap-ppm 0 --corrupt-ppm 0 --skew-ppm 0 --burst-ms 0

Artifacts land in artifacts/bench.jsonl, artifacts/metrics.prom, and the new HTML analytics dashboard at artifacts/report/index.html. Deterministic fixtures live under data/golden/; regenerate with gen_synth as needed.

Run in Docker

Build the image and run the containerized replay:

# Build (from repo root)
docker build -t blanc-quant-lob-engine:local .

# Run default golden replay inside the container
docker run --rm blanc-quant-lob-engine:local /app/replay --input /app/data/golden/itch_1m.bin

# Pass a custom file mounted from host
docker run --rm -v "$PWD/data:/data" blanc-quant-lob-engine:local \
  /app/replay --input /data/your_trace.bin

Scripts

scripts/verify_golden.sh     # digest determinism check
scripts/bench.sh 9           # multi-run benchmark harness
scripts/prom_textfile.sh ... # emit metrics.prom schema
scripts/verify_bench.py      # release gate enforcement
scripts/bench_report.py      # render HTML latency/digest dashboard

Golden-state validation

  • Golden digest resides at data/golden/itch_1m.fnv.
  • ctest -R golden_state plus scripts/verify_golden.sh ensure reproducibility.
  • Use cmake --build build -t golden_sample (or make golden) to refresh fixtures after new traces are accepted.
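The golden check itself reduces to a file comparison. A hypothetical sketch mirroring what scripts/verify_golden.sh does (the assumption here is that the .fnv file holds a single hex digest string; the script's actual format may differ):

```python
import os
import tempfile
from pathlib import Path

def check_golden(golden_path: Path, computed_digest: int) -> bool:
    """Compare a freshly computed digest against the committed golden file."""
    expected = int(golden_path.read_text().strip(), 16)
    return expected == computed_digest

# Simulate a committed golden file and verify a matching replay digest.
golden = Path(tempfile.gettempdir()) / "itch_1m.fnv"
golden.write_text(f"{0xdeadbeef:x}\n")
assert check_golden(golden, 0xdeadbeef)
assert not check_golden(golden, 0xcafebabe)
```

Any drift in the replayed book state changes the digest and fails this comparison, which is what makes regenerating goldens an explicit, reviewed step.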

Developer setup

Ubuntu:

sudo apt-get update
sudo apt-get install -y cmake ninja-build libboost-all-dev \
  libnlohmann-json3-dev jq

macOS:

brew update
brew install cmake ninja jq nlohmann-json

Enable tests with -DBUILD_TESTING=ON and run ctest --output-on-failure -R book_snapshot from build/. Tests expect ./bin/replay within the working directory.

Release packaging

./scripts/release_package.sh creates rights-marked zips plus manifests.

cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
./scripts/release_package.sh --build-dir build --art-dir artifacts \
  --out-dir artifacts/release --git-sha "$(git rev-parse --short HEAD)"

Add --sign for optional detached GPG signatures. The snapshot-nightly workflow runs this and uploads the bundle automatically.

Tooling helpers

  • scripts/pin_actions_by_shas.sh keeps workflow uses: entries pinned.
  • .github/workflows/verify-bench.yml exposes a manual/cron gate run.
  • .github/workflows/determinism.yml surfaces p50/p95/p99 in the job summary and emits notices for easy viewing.
  • .github/workflows/ci.yml mirrors bench summary surfacing in the job summary.
  • .github/workflows/container-scan.yml pins Trivy to v0.67.2, runs fs & image scans non-blocking, and uploads SARIF to the Security tab.
  • docs/technology_transition.md + docs/deliverable_marking_checklist.md cover gov delivery and rights-marking guidance.

CPU pinning (Linux)

build/bin/replay --input data/golden/itch_1m.bin --cpu-pin 3
# or
CPU_PIN=3 make bench

Pinning reduces tail variance on some hosts; measure on your hardware.

Repository layout

include/        # headers
src/            # replay engine, detectors, telemetry
scripts/        # bench, verify, release, pin helpers
artifacts/      # generated outputs (gitignored)

Security & safety

SECURITY.md documents coordinated disclosure. CI integrates detect-secrets and CodeQL. Signing helpers live under scripts/ if you need to stamp artifacts. Blanc LOB Engine is opinionated toward safety-by-default: determinism, repeatable benches, and explicit tail SLOs are non-negotiable controls rather than after-the-fact monitoring.

Contributing

See CONTRIBUTING.md for workflow expectations. Pull requests should pin new dependencies, ship matching tests, and update docs for externally visible changes.

License

Distributed under the Business Source License 1.1 (LICENSE.txt). Research and non-commercial evaluation are permitted; production use requires a commercial license until the change date defined in COMMERCIAL_LICENSE.md.

Research users can clone and run the engine today; commercial or production deployment should follow the terms in COMMERCIAL_LICENSE.md.

Release Information

This release includes the prebuilt binaries and necessary artifacts for version 1.00 of the Blanc LOB Engine. If you are interested in accessing the full source code, please reach out directly for further details. The project is fully open and available for students and hobbyists to explore and use.

Analytics Report Output

This section documents the HTML analytics report generated by scripts/bench_report.py and visitor tracking integration.

Generating the Report

Run the benchmark report generator after completing benchmark runs:

python3 scripts/bench_report.py --bench-file artifacts/bench.jsonl \
  --metrics-file artifacts/metrics.prom --output-dir artifacts/report

The report will be generated at artifacts/report/index.html.

Visitor Badge Integration

The repository uses visitor badges to track page views. Badge format:

Project badge:

![Visitors](https://visitor-badge.laobi.icu/badge?page_id=jblanc86-maker.blanc-quant-lob-engine)

Issue-specific badge:

![Issue Visitors](https://visitor-badge.laobi.icu/badge?page_id=jblanc86-maker.blanc-quant-lob-engine.issue.<issue_id>)

Replace <issue_id> with the GitHub issue number.
