A high-performance bi-temporal graph database in Rust, designed for LLM integration and temporal reasoning.
AletheiaDB tracks both valid time (when facts were true in reality) and transaction time (when facts were recorded in the database). This enables powerful time-traveling queries and historical analysis, making it ideal for LLM applications that need to understand how knowledge evolves over time.
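The two timelines are easiest to see in a toy model (illustrative only — AletheiaDB's real API appears in the Quick Start below). Each fact version carries a valid-time interval and a transaction-time interval, and an `as_of` query asks "what did we believe at transaction time `tx` about reality at valid time `vt`?":

```rust
// Toy sketch of bi-temporal bookkeeping (NOT the AletheiaDB API):
// each fact version carries a valid-time interval and a transaction-time interval.
#[derive(Clone, Debug, PartialEq)]
struct FactVersion {
    value: &'static str,
    valid_from: u64, // when the fact became true in reality
    valid_to: u64,   // exclusive; u64::MAX means "still true"
    tx_from: u64,    // when this version was recorded in the database
    tx_to: u64,      // exclusive; u64::MAX means "still current in the DB"
}

/// "What did we believe at transaction time `tx` about reality at valid time `vt`?"
fn as_of(history: &[FactVersion], vt: u64, tx: u64) -> Option<&FactVersion> {
    history.iter().find(|f| {
        f.valid_from <= vt && vt < f.valid_to && f.tx_from <= tx && tx < f.tx_to
    })
}

fn main() {
    // At tx=10 we recorded that Alice's city was "Paris".
    // At tx=20 we learned she actually moved to "Berlin" at vt=15, so the old
    // record is closed out (tx_to = 20) and two corrected rows are added.
    let history = vec![
        FactVersion { value: "Paris",  valid_from: 0,  valid_to: u64::MAX, tx_from: 10, tx_to: 20 },
        FactVersion { value: "Paris",  valid_from: 0,  valid_to: 15,       tx_from: 20, tx_to: u64::MAX },
        FactVersion { value: "Berlin", valid_from: 15, valid_to: u64::MAX, tx_from: 20, tx_to: u64::MAX },
    ];

    // As known at tx=12, Alice was in Paris at vt=16 (we hadn't learned of the move yet).
    assert_eq!(as_of(&history, 16, 12).unwrap().value, "Paris");
    // As known at tx=25, she was in Berlin at vt=16.
    assert_eq!(as_of(&history, 16, 25).unwrap().value, "Berlin");
}
```

AletheiaDB applies the same two-timeline filter over versioned graph data via `get_node_at_time` and `db.query().as_of(...)`.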
- Bi-Temporal Model: Track both valid time and transaction time for full temporal reasoning
- Hybrid Storage: Separate current state (fast path) from historical data (temporal path)
- Tiered Storage: Hot/warm/cold architecture for unlimited historical depth with disk-backed cold storage
- Anchor+Delta Compression: 5-6X storage reduction while maintaining query performance
- ACID Transactions: Full snapshot isolation with write conflict detection
- Write-Ahead Log (WAL): Striped lock-free ring buffer architecture, ~100K+ writes/sec (GroupCommit)
- Index Persistence: Fast cold starts (6-30x faster) with Zstd compression and memory-mapped loading
- Vector Search: HNSW indexing for k-NN semantic search with full temporal versioning
- Multi-Property Vector Indexes: Multiple independent vector properties per database
- Hybrid Query API: Combine graph traversal + vector similarity + bi-temporal queries
- Query Language: Cypher-like AQL with temporal and vector extensions
- MCP Server: Model Context Protocol server for LLM integration (Claude, etc.)
- Graph Sharding: Domain-based horizontal scaling with 2PC distributed transactions
- Semantic Drift Tracking: Detect how embeddings evolve over time for knowledge evolution analysis
- Production Observability: Distributed tracing, metrics, and profiling (optional)
- High Performance: Sub-microsecond traversals (~22ns node lookup, ~23ns edge traversal)
- LLM-Friendly API: Natural query patterns for reasoning about temporal knowledge
- Rust 1.92+ (edition 2024)
- just - Command runner (optional but recommended)
- cargo-llvm-cov - For coverage reports
- Tracy Profiler - For performance profiling (optional)
# Clone the repository
git clone https://github.com/madmax983/AletheiaDB
cd AletheiaDB
# Install development tools
cargo install just cargo-llvm-cov
# Build the project
cargo build
# Run tests
cargo test
# Or use just
just test

# Run tests
just test
# Check code coverage (must meet 85% threshold)
just coverage-check
# Generate coverage report (HTML)
just coverage
# Run linter
just lint
# Format code
just fmt
# Run all pre-commit checks
just pre-commit
# Full quality check (format, lint, test, coverage)
just check-all
# Run benchmarks
just bench
# Run benchmarks and generate HTML tables
just bench-tables

See justfile for all available commands.
AletheiaDB uses Cargo feature flags for optional functionality:
[dependencies]
aletheiadb = "0.1"  # Includes config-toml by default

| Feature | Description | Default |
|---|---|---|
| `config-toml` | TOML configuration file support | ✅ Yes |
[dependencies]
aletheiadb = { version = "0.1", features = ["observability"] }

| Feature | Description | Dependencies |
|---|---|---|
| `observability` | Core observability (tracing + metrics) | tracing, tracing-subscriber |
| `observability-tracy` | Tracy CPU profiling integration | tracing-tracy, tracy-client |
| `observability-honeycomb` | Honeycomb distributed tracing | tracing-honeycomb, libhoney-rust |
| `observability-prometheus` | Prometheus metrics HTTP server | metrics, metrics-exporter-prometheus |
[dependencies]
aletheiadb = { version = "0.1", features = ["embedding-openai"] }

| Feature | Description | Dependencies |
|---|---|---|
| `embeddings` | Core embedding types and service | tokio, async-trait, serde |
| `embedding-openai` | OpenAI embedding provider | embeddings, reqwest |
| `embedding-huggingface` | HuggingFace embedding provider | embeddings, reqwest |
| `embedding-ollama` | Ollama local embedding provider | embeddings, reqwest |
| `embedding-onnx` | ONNX local inference | embeddings, ort, tokenizers |
| `embedding-all` | Enable all embedding providers | All of the above |
Note: Embedding features are completely optional and add zero overhead when disabled. The database core has no embedding dependencies.
[dependencies]
aletheiadb = { version = "0.1", features = ["mcp-server"] }

| Feature | Description | Dependencies |
|---|---|---|
| `mcp-server` | Model Context Protocol server for LLM integration | rmcp, tokio, serde |
[dependencies]
aletheiadb = { version = "0.1", features = ["sharding-rpc"] }

| Feature | Description | Dependencies |
|---|---|---|
| `sharding-rpc` | RPC client for sharding coordination | reqwest, serde |
[dependencies]
aletheiadb = { version = "0.1", features = ["nova"] }

| Feature | Description |
|---|---|
| `nova` | Experimental features (Narrative Generator, Fishing, Semantic Pathfinding) |
Note: Tiered storage with Redb cold storage backend is included by default (no feature flag needed).
AletheiaDB is designed for high performance with minimal temporal overhead. View live benchmark results:
- 📊 Latest Benchmarks - Comprehensive tables with all metrics
- 📈 Historical Trends - Performance over time with regression tracking
| Operation | Target | Actual |
|---|---|---|
| Current-state node lookup | <1µs | ~22ns ✅ |
| Current-state edge traversal | <1µs | ~23ns ✅ |
| 3-hop traversal | <100µs | ~20ns per hop ✅ |
| k-NN search (k=10, 1M vectors) | <10ms | ~4-8ms ✅ |
| Graph+Vector hybrid query | <20ms | ~15ms ✅ |
| Time-travel reconstruction | <10ms | TBD |
Note: Time-travel query benchmarks are being improved to measure realistic historical reconstruction scenarios.
Benchmarks are automatically run on every push to trunk and published to GitHub Pages. See docs/BENCHMARKING.md for detailed benchmarking guide.
Current Phase: Vector Search Complete (Phases 1-4), Core Features Complete ✅
- Core ID types (NodeId, EdgeId, VersionId)
- Temporal primitives (BiTemporalInterval, TimeRange)
- Property system with Arc-based deduplication
- String interning for memory efficiency
- Error types and Result handling
- Test coverage infrastructure (85%+ threshold enforced)
- Current storage layer with CSR adjacency indexes
- Historical storage with anchor+delta compression
- ACID transactions with snapshot isolation
- Write conflict detection
- Write-Ahead Log (WAL) with striped lock-free ring buffers
- Index persistence with Zstd compression and memory-mapped loading
- Time-travel queries (as_of, get_node_at_time)
- Public API with read/write transactions
- Vector type with validation (VS-001 to VS-010)
- Similarity functions: cosine, Euclidean, dot product
- Vector normalization utilities
- Distance metric abstraction
- Property-attached vector embeddings
- Historical vector versioning (temporal vectors)
- HNSW indexing for k-NN search
- Auto-indexing on create/update with rollback
- Vector similarity search API
- Multi-property vector indexes (VS-072)
- Optional embedding providers (OpenAI, HuggingFace, Ollama, ONNX)
- Temporal vector indexes with snapshot/delta architecture
- Pre-anchor hooks for provenance tracking
- Post-commit observers for extensibility
- Semantic drift tracking (detect embedding evolution)
- Point-in-time and range vector queries
- Full/delta snapshot strategies with retention policies
- Query builder with type-safe state machine
- Graph + Vector hybrid queries (traverse then rank)
- Temporal + Vector queries (semantic time-travel)
- Full hybrid queries (graph + vector + temporal)
- Predicate filtering and property-specific operations
- Direct functions, builder API, and convenience methods
- Structured logging with `tracing`
- Tracy profiler integration for CPU profiling
- Honeycomb distributed tracing (via git dependency - see #271)
- Prometheus metrics HTTP server (stub - see #272)
- Critical error detection (lock poisons, timestamp violations, WAL checksum failures)
- Error categorization metrics
- Model Context Protocol server binary (`aletheia-mcp`)
- Node operations (get, create, update, delete, list, count)
- Edge operations (get, create, update, delete, list, count)
- Graph traversal (outgoing, incoming, multi-hop)
- Vector search (find similar, enable/list indexes)
- Temporal queries (get at time)
- Hybrid queries (graph + vector + temporal)
- Cypher-like parser (MATCH, WHERE, RETURN, ORDER BY, LIMIT)
- Vector search syntax (SIMILAR TO, RANK BY SIMILARITY)
- Bi-temporal syntax (AS OF, BETWEEN)
- AST-to-IR converter with planner integration
- Comprehensive query documentation
- Domain-based node partitioning by label
- Edge replication for cross-shard traversal
- Two-Phase Commit (2PC) distributed transactions
- Circuit breakers for fault tolerance
- Online migration with dual-write support
- Connection pooling and query executor
- Three-tier architecture (hot/warm/cold)
- File-based cold storage backend
- Redb cold storage backend (pure Rust, built-in)
- Configurable migration policies
- Latency metrics with percentiles
- LSN-based WAL truncation
- Vector Search Phase 5: Streaming and incremental updates
- Custom Honeycomb client wrapper (#271)
- Comprehensive Prometheus metrics suite (#272)
- GraphQL/REST API layer
- Distributed replication
Test Coverage: 671+ tests passing, 86%+ line coverage (enforced: 85% minimum)
AletheiaDB uses a hybrid storage architecture:
┌─────────────────────────────────────────────────────┐
│ Query Engine │
│ - Temporal Query Planner │
│ - Graph Traversal Engine │
│ - Hybrid Query Optimizer │
└─────────────────────────────────────────────────────┘
│
┌───────────────┴───────────────┐
│ │
┌───────▼─────────┐ ┌─────────▼─────────┐
│ Current Storage │ │ Historical Storage │
│ - Live Graph │ │ - Anchor+Delta │
│ - Hot Indexes │ │ - Compressed │
│ - Vector HNSW │ │ - Time Indexes │
│ - Fast Path │ │ - Vector Snapshots │
└─────────────────┘ └────────────────────┘
Key Design Decisions:
- Current state separated for zero-overhead queries
- Anchor+delta compression for 5-6X storage savings
- Copy-on-write properties with Arc for deduplication
- String interning for memory efficiency
- Lock-free concurrent access (DashMap)
- Hybrid pre-anchor hooks + post-commit observers for temporal vector integration
See docs/ARCHITECTURE.md for complete architecture documentation.
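The anchor+delta idea behind the historical store can be sketched in a few lines (a toy model, not AletheiaDB's actual on-disk format): store a full property map ("anchor") periodically and only the changed keys ("deltas") in between, then rebuild any historical version by replaying deltas on top of the nearest preceding anchor.

```rust
use std::collections::BTreeMap;

// Toy illustration of anchor+delta versioning (not AletheiaDB's real storage format).
type Props = BTreeMap<String, String>;

enum Version {
    Anchor(Props),                // complete snapshot of all properties
    Delta(Vec<(String, String)>), // only the properties that changed
}

/// Rebuild the property map as of version index `at` (inclusive).
fn reconstruct(chain: &[Version], at: usize) -> Props {
    // Walk back to the most recent anchor at or before `at`...
    let start = (0..=at)
        .rev()
        .find(|&i| matches!(chain[i], Version::Anchor(_)))
        .expect("chain must start with an anchor");
    let mut props = match &chain[start] {
        Version::Anchor(p) => p.clone(),
        Version::Delta(_) => unreachable!(),
    };
    // ...then replay every delta between that anchor and `at`.
    for version in &chain[start + 1..=at] {
        if let Version::Delta(changes) = version {
            for (key, value) in changes {
                props.insert(key.clone(), value.clone());
            }
        }
    }
    props
}

fn main() {
    let chain = vec![
        Version::Anchor(Props::from([
            ("name".to_string(), "Alice".to_string()),
            ("city".to_string(), "Paris".to_string()),
        ])),
        Version::Delta(vec![("city".to_string(), "Berlin".to_string())]),
        Version::Delta(vec![("age".to_string(), "30".to_string())]),
    ];
    // Version 2 = anchor (v0) with both deltas applied.
    let v2 = reconstruct(&chain, 2);
    assert_eq!(v2.get("city").map(String::as_str), Some("Berlin"));
    assert_eq!(v2.get("name").map(String::as_str), Some("Alice"));
    // Version 0 is just the anchor.
    assert_eq!(reconstruct(&chain, 0).get("city").map(String::as_str), Some("Paris"));
}
```

Because unchanged properties are never re-stored between anchors, long version chains cost far less than full copies, which is where the quoted 5-6X storage reduction comes from.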
use aletheiadb::{AletheiaDB, PropertyMap, PropertyMapBuilder, WriteOps};
// Create a new database
let db = AletheiaDB::new().unwrap();
// Create nodes using write transactions
let alice_id = db.write(|tx| {
tx.create_node("Person", PropertyMapBuilder::new()
.insert("name", "Alice")
.insert("age", 30)
.build()
)
})?;
let bob_id = db.write(|tx| {
tx.create_node("Person", PropertyMapBuilder::new()
.insert("name", "Bob")
.build()
)
})?;
// Create relationships
db.write(|tx| {
tx.create_edge(alice_id, bob_id, "KNOWS", PropertyMap::new())
})?;
// Read current state
let alice = db.get_node(alice_id)?;
println!("Created Alice: {:?}", alice);

use aletheiadb::core::temporal::time;
// Get current time
let now = time::now();
// Get node at a specific point in time
let historical_alice = db.get_node_at_time(
alice_id,
now, // valid time
now, // transaction time
)?;
// Track how properties changed
println!("Alice's age was: {:?}", historical_alice.properties.get("age"));

use aletheiadb::{AletheiaDB, PropertyMapBuilder};
use aletheiadb::index::vector::{HnswConfig, DistanceMetric};
use aletheiadb::index::vector::temporal::TemporalVectorConfig;
let db = AletheiaDB::new().unwrap();
// Enable vector indexing with temporal support
db.vector_index("embedding")
.hnsw(HnswConfig::new(384, DistanceMetric::Cosine))
.temporal(TemporalVectorConfig::default())
.enable()?;
let embedding = vec![0.1f32; 384];
// Store node with embedding - automatically indexed!
let doc_id = db.create_node("Document",
PropertyMapBuilder::new()
.insert("title", "Introduction to Rust")
.insert_vector("embedding", &embedding)
.build()
)?;
// Find similar nodes
let similar = db.find_similar(doc_id, 10)?;

use aletheiadb::query::ir::Predicate;
// Setup query parameters
let query_embedding = vec![0.1f32; 384];
let valid_time = aletheiadb::core::temporal::time::now();
let tx_time = aletheiadb::core::temporal::time::now();
// Simple: Graph + Vector hybrid
let results = db.traverse_and_rank(alice_id, "KNOWS", &query_embedding, 10)?;
// Complex: Full hybrid with builder
let results = db.query()
.as_of(valid_time, tx_time) // Temporal: point-in-time
.start(alice_id) // Graph: start node
.traverse("KNOWS") // Graph: traverse edges
.rank_by_similarity(&embedding, 10) // Vector: rank by similarity
.filter(Predicate::gt("score", 0.8)) // Filter: high similarity only
.with_provenance() // Include metadata
.execute(&db)?;
// Property-specific vector queries
let results = db.query()
.find_similar_builder(&embedding, 10)
.property("embedding") // Query specific property
.metric(DistanceMetric::Cosine)
.finish()
.execute(&db)?;

See docs/guides/hybrid-query-guide.md for the complete API reference.
use aletheiadb::index::vector::temporal::DriftMetric;
use aletheiadb::core::temporal::TimeRange;
// Define time range
let timestamp_2023 = aletheiadb::core::temporal::time::from_secs(1672531200);
let timestamp_2024 = aletheiadb::core::temporal::time::from_secs(1704067200);
// Find all nodes with significant semantic drift
let time_range = TimeRange::new(timestamp_2023, timestamp_2024)?;
let drifted_nodes = db.find_drift_in(
"embedding", // Property name
0.3, // Cosine distance threshold
time_range,
DriftMetric::Cosine,
)?;
for (node_id, drift_score) in drifted_nodes {
println!("Node {} drifted by {:.3}", node_id, drift_score);
}

Requires the `nova` feature. Run the demo:
cargo run --example story_demo --features nova
use aletheiadb::experimental::temporal_narrative::NarrativeGenerator;
// Generate natural language history of a node
let generator = NarrativeGenerator::new(&db);
let narrative = generator.generate_node_narrative(node_id)?;
for event in narrative {
println!("Version {}: {}", event.version_number, event.description);
// Output: "Version 1: Node created with label 'Person'."
for change in event.changes {
println!(" - {}", change);
// Output: " - Initial property 'name': 'Alice'"
}
}

use aletheiadb::{AletheiaDB, config::AletheiaDBConfig};
use aletheiadb::storage::index_persistence::PersistenceConfig;
// Enable index persistence for 6-30x faster startup
let config = AletheiaDBConfig::builder()
.persistence(PersistenceConfig {
enabled: true,
data_dir: "data/my-database".into(),
load_on_startup: true, // Load indexes on startup
use_mmap: true, // Memory-map large indexes
..Default::default()
})
.build();
let db = AletheiaDB::with_unified_config(config);
// Indexes automatically persist in background
// On restart: 2-5s cold start vs 30-60s WAL replay (1M nodes)

See docs/guides/index-persistence-guide.md for the complete guide.
use aletheiadb::{AletheiaDB, config::AletheiaDBConfig, WalConfigBuilder};
use aletheiadb::storage::wal::DurabilityMode;
// Load from TOML file
let config = AletheiaDBConfig::from_toml_file("config/production.toml")?;
let db = AletheiaDB::with_unified_config(config);
// Or programmatic configuration
let config = AletheiaDBConfig::builder()
.wal(WalConfigBuilder::new()
.num_stripes(64).unwrap() // High concurrency
.durability_mode(DurabilityMode::group_commit_default())
.build())
.build();

See docs/CONFIGURATION.md for all configuration options and presets.
Run the MCP server for LLM integration:
# Start the MCP server (communicates over stdio)
cargo run --bin aletheia-mcp --features mcp-server

Available MCP tools for LLMs:
- Node Operations: `get_node`, `create_node`, `update_node`, `delete_node`, `list_nodes`, `count_nodes`
- Edge Operations: `get_edge`, `create_edge`, `update_edge`, `delete_edge`, `get_outgoing_edges`, `get_incoming_edges`
- Traversal: `traverse` (multi-hop graph traversal)
- Vector Search: `find_similar`, `enable_vector_index`, `list_vector_indexes`
- Temporal: `get_node_at_time`, `get_edge_at_time`
- Hybrid: `hybrid_query` (combined graph + vector + temporal)
AletheiaDB supports a Cypher-like query language with temporal and vector extensions:
-- Basic graph query
MATCH (n:Person {name: "Alice"})-[:KNOWS]->(friend:Person)
RETURN friend
-- Vector similarity search
SIMILAR TO $embedding LIMIT 10
-- Hybrid graph + vector query
MATCH (a:Person {name: "Alice"})-[:KNOWS]->(friend)
RANK BY SIMILARITY TO $bob_embedding TOP 10
RETURN friend
-- Bi-temporal query (point-in-time)
AS OF '2024-01-15T10:00:00Z'
MATCH (n:Person {name: "Alice"})
RETURN n
-- Full hybrid: temporal + graph + vector
AS OF '2024-06-01T00:00:00Z'
MATCH (user:User {id: $user_id})-[:VIEWED]->(item:Product)
RANK BY SIMILARITY TO $recommendation_embedding TOP 20
WHERE item.price < 100
RETURN item
ORDER BY score DESC
LIMIT 10

See docs/query-language-design.md for the complete grammar and examples.
For horizontal scaling with datasets exceeding single-machine capacity:
use aletheiadb::storage::sharding::{
ShardConfig, ShardDefinition, ShardCoordinator,
};
// Define shard topology
let config = ShardConfig::new(vec![
ShardDefinition::new(0, "shard0:9000", vec!["Person", "User"]),
ShardDefinition::new(1, "shard1:9000", vec!["Place", "Location"]),
ShardDefinition::new(2, "shard2:9000", vec!["Event", "Activity"]),
]);
// Create coordinator
let coordinator = ShardCoordinator::new(config);
// Route queries to appropriate shards
let shard = coordinator.router().route_node("Person");

See docs/guides/sharding-guide.md for the complete guide.
For unlimited historical depth with disk-backed cold storage:
use aletheiadb::storage::{
HistoricalStorage, TieredStorage, TieredStorageConfig,
RedbColdStorage, RedbConfig,
};
use std::sync::Arc;
// Create cold storage backend
let cold = RedbColdStorage::new("data/cold.redb", RedbConfig::default())?;
// Create tiered storage
let tiered = TieredStorage::with_default_config(Arc::new(cold));
// Configure historical storage
let mut historical = HistoricalStorage::new();
historical.set_tiered_storage(Arc::new(tiered));

See docs/guides/tiered-storage-guide.md for the complete guide.
// Explicit read transaction
let result = db.read(|tx| {
let node = tx.get_node(alice_id)?;
Ok(node.label.clone())
})?;
// Explicit write transaction with multiple operations
db.write(|tx| {
let node1 = tx.create_node("Event", PropertyMap::new())?;
let node2 = tx.create_node("Event", PropertyMap::new())?;
tx.create_edge(node1, node2, "FOLLOWS", PropertyMap::new())?;
Ok(())
})?;

AletheiaDB includes an optional embedding generation system for semantic search:
use aletheiadb::{AletheiaDB, PropertyMapBuilder};
use aletheiadb::embeddings::{EmbeddingService, providers::openai::*};
use std::sync::Arc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Enable in Cargo.toml: features = ["embedding-openai"]
// 1. Create embedding service
let config = OpenAIConfig::from_env(OpenAIModel::TextEmbedding3Small)?;
let provider = Arc::new(OpenAIProvider::new(config)?);
let service = EmbeddingService::new(provider);
// 2. Generate embeddings
let documents = vec![
"AletheiaDB is a bi-temporal graph database",
"It tracks both valid time and transaction time",
];
let embeddings = service.embed_batch(&documents).await?;
// 3. Store with vectors
let db = AletheiaDB::new()?;
for (text, embedding) in documents.iter().zip(embeddings.iter()) {
db.create_node(
"Document",
PropertyMapBuilder::new()
.insert("content", *text)
.insert_vector("embedding", embedding)
.build(),
)?;
}
Ok(())
}

Available Providers:
- OpenAI: Best quality, API-based (~100-200ms)
- HuggingFace: Open-source models, free tier (~200-500ms)
- Ollama: Local inference, privacy-focused (~20-50ms)
- ONNX: Ultra-fast local, requires setup (~1-10ms)
See docs/EMBEDDINGS.md for complete documentation.
AletheiaDB includes comprehensive observability features for production deployments:
# Enable in Cargo.toml:
features = [
"observability", # Core: structured logging + metrics
"observability-tracy", # Tracy CPU profiling
"observability-honeycomb", # Honeycomb distributed tracing
"observability-prometheus", # Prometheus metrics HTTP server
]

Basic usage:
use aletheiadb::observability;
fn main() {
// Initialize observability (call once at startup)
let config = observability::Config::from_env();
observability::init(config);
let db = aletheiadb::AletheiaDB::new().unwrap();
// Metrics automatically collected
// Check for critical errors
let metrics = observability::metrics();
if metrics.has_critical_errors() {
panic!("Data corruption detected!");
}
}

Environment Variables:
- `RUST_LOG`: Control log level (e.g., `aletheiadb=debug`)
- `HONEYCOMB_API_KEY`: Enable Honeycomb tracing
- `HONEYCOMB_DATASET`: Dataset name (default: "aletheiadb")
- `PROMETHEUS_BIND_ADDR`: Prometheus HTTP endpoint (e.g., "127.0.0.1:9090")
Critical Metrics (should NEVER be >0):
- `lock_poison_count`: Thread panicked while holding a lock
- `timestamp_violations`: Transaction time not monotonic
- `wal_checksum_failures`: WAL corruption detected
Backends:
- Stdout: Structured JSON logging (always available)
- Tracy: CPU profiling with flamegraphs and zone tracking
- Honeycomb: Distributed tracing for span analysis (⚠️ uses git dependency, see #271)
- Prometheus: `/metrics` HTTP endpoint (⚠️ stub implementation, see #272)
Run the demo:
export HONEYCOMB_API_KEY="your-key"
export PROMETHEUS_BIND_ADDR="127.0.0.1:9090"
cargo run --example observability_demo --all-features

- CLAUDE.md - Quick reference for AI assistants and contributors
- docs/ARCHITECTURE.md - Architecture principles, design patterns, system design
- docs/CONFIGURATION.md - Configuration options, presets, tuning guide
- docs/DEVELOPMENT_WORKFLOW.md - Complete development workflow
- docs/CODING_STANDARDS.md - Rust coding standards and best practices
- TESTING.md - Testing, coverage, and profiling guide
- WORKTREE_WORKFLOW.md - Parallel development workflow with git worktrees
- docs/VECTOR_SEARCH_DESIGN.md - Vector search architecture (Phases 1-5)
- docs/EMBEDDINGS.md - Embedding generation guide (optional providers)
- docs/WAL.md - Write-Ahead Log format and architecture
- docs/query-language-design.md - Query language grammar and semantics
- docs/guides/vector-search-integration.md - Complete vector search API
- docs/guides/vector-search-performance.md - Performance tuning
- docs/guides/hybrid-query-guide.md - Hybrid query API reference
- docs/guides/index-persistence-guide.md - Index persistence details
- docs/guides/sharding-guide.md - Graph sharding and distributed deployment
- docs/guides/tiered-storage-guide.md - Tiered storage configuration
- docs/guides/query-pipeline-guide.md - Query execution pipeline
- docs/adr/0013-tiered-storage-architecture.md - Tiered storage architecture
- docs/adr/0014-graph-sharding-strategy.md - Graph sharding strategy
- docs/adr/0016-embedding-providers.md - Embedding provider architecture
- docs/adr/0018-temporal-vector-historical-integration.md - Temporal vector integration
- docs/adr/0019-hybrid-query-planner.md - Hybrid query architecture
- docs/adr/0020-concurrent-wal-architecture.md - Concurrent WAL design
- docs/adr/0022-multi-property-vector-index.md - Multi-property vector indexes
- docs/adr/0023-index-persistence-layer.md - Index persistence architecture
- docs/adr/0024-hybrid-logical-clock-timestamps.md - HLC timestamp design
See docs/adr/ for all architectural decisions.
Recovery Examples:
- `examples/recovery/basic_recovery.rs` - Automatic database recovery after crash
- `examples/recovery/manual_recovery.rs` - Manual recovery control with statistics
- `examples/recovery/progress_callback.rs` - Recovery with progress tracking
Other Examples:
- `examples/observability_demo.rs` - Production observability features
- `examples/doctor_who_demo.rs` - Temporal graph modeling example
- `examples/story_demo.rs` - Narrative generation example (run: `cargo run --example story_demo --features nova`)
Enable LLMs to:
- Query "What did we know about X at time T?"
- Track how relationships evolved over time
- Detect contradictions through provenance
- Reason about causality and change
- Track semantic drift in knowledge over time
- Combine graph structure, semantic similarity, and temporal queries
Track how your knowledge graph changes:
- Audit trails for compliance
- Historical analysis and trend detection
- Rollback capabilities
- Provenance tracking
- Semantic evolution analysis
Advanced RAG patterns:
- Multi-property semantic search (title, content, image embeddings)
- Hybrid graph+vector queries (traverse then rank by similarity)
- Temporal RAG (retrieve knowledge as it existed at specific times)
- Semantic drift detection (identify when knowledge changed)
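The temporal-RAG pattern above can be sketched with toy types (illustrative only — not the AletheiaDB API): restrict candidate facts to those valid at the query time, then rank the survivors by cosine similarity to the query embedding.

```rust
// Toy temporal-RAG retrieval (illustrative only, not the AletheiaDB API):
// filter facts to those valid at time `t`, then rank by cosine similarity.
struct Fact {
    text: &'static str,
    embedding: Vec<f32>,
    valid_from: u64,
    valid_to: u64, // exclusive; u64::MAX means "still valid"
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Return the `k` facts most similar to `query` among those valid at time `t`.
fn retrieve_as_of<'a>(facts: &'a [Fact], query: &[f32], t: u64, k: usize) -> Vec<&'a Fact> {
    let mut hits: Vec<&Fact> = facts
        .iter()
        .filter(|f| f.valid_from <= t && t < f.valid_to)
        .collect();
    // Sort descending by similarity to the query embedding.
    hits.sort_by(|a, b| {
        cosine(&b.embedding, query)
            .partial_cmp(&cosine(&a.embedding, query))
            .unwrap()
    });
    hits.truncate(k);
    hits
}

fn main() {
    let facts = vec![
        Fact { text: "old pricing", embedding: vec![1.0, 0.0], valid_from: 0,  valid_to: 10 },
        Fact { text: "new pricing", embedding: vec![1.0, 0.1], valid_from: 10, valid_to: u64::MAX },
        Fact { text: "unrelated",   embedding: vec![0.0, 1.0], valid_from: 0,  valid_to: u64::MAX },
    ];
    let query = vec![1.0, 0.0];
    // At t=5 only the old pricing fact was valid, so it wins.
    assert_eq!(retrieve_as_of(&facts, &query, 5, 1)[0].text, "old pricing");
    // At t=20 the old fact has been superseded; the new one ranks first.
    assert_eq!(retrieve_as_of(&facts, &query, 20, 1)[0].text, "new pricing");
}
```

In AletheiaDB the same shape is expressed with `db.query().as_of(...)` plus `rank_by_similarity(...)`, backed by the temporal HNSW indexes instead of a linear scan.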
- Fork the repository
- Create a feature branch (use worktrees: `just worktree-new feature/name`)
- Run tests: `just test`
- Check coverage: `just coverage-check`
- Run pre-commit checks: `just pre-commit`
- Submit a pull request
All contributions must:
- Pass all tests
- Maintain ≥85% code coverage (line, function, and region)
- Follow coding guidelines in docs/CODING_STANDARDS.md
- Include appropriate documentation
- Never commit directly to trunk (use worktrees and PRs)
See docs/DEVELOPMENT_WORKFLOW.md for complete workflow documentation.
# Run all tests
just test
# Generate coverage report
just coverage
# Profile with Tracy
just profile-tracy
# Run benchmarks
just bench

See TESTING.md for detailed testing guidelines.
Licensed under the MIT License. See LICENSE for details.