The ruLake accelerator plane is the runtime and crate layout that lets the same RuLake binary route the popcount-scan + L2² rerank inner loop to a CPU-naive baseline, an AVX-512 host kernel, or a portable GPU kernel via wgpu, without forcing any of those backends into the core dependency graph and without ever letting a non-deterministic kernel feed a witness-sealed answer. The contract is the `VectorKernel` trait at `crates/core/src/kernel.rs` (`id`, `caps`, `scan`); the dispatch policy lives in `RuLake::pick_kernel` and consults Consistency, batch size, and dimension; and conformance against `assert_kernel_conformant` is the only path past `experimental`. Two implementations ship today as standalone sibling crates: `crates/kernel-avx512/` (bit-equal, ~2.5% faster than CpuNaive on the headline grid, where sort dominates) and `crates/kernel-wgpu/` (auto-detects Vulkan/Metal/DX12/GL/WebGPU adapters; deterministic on the popcount path, coarse-deterministic on L2 because floating-point reduction order on the GPU is not fixed across dispatches).
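The contract described above can be sketched in a few lines. This is a hypothetical reading, not the crate's actual code: only the trait name and its three methods (`id`, `caps`, `scan`) come from the text; every signature, the `KernelCaps` struct, and the baseline implementation are illustrative assumptions.

```rust
// Hypothetical sketch of the VectorKernel contract (signatures assumed).

#[derive(Debug, Clone, Copy, PartialEq)]
pub struct KernelCaps {
    pub deterministic: bool, // bit-equal results across runs
    pub max_dim: usize,      // largest supported embedding dimension
}

pub trait VectorKernel {
    /// Stable identifier, e.g. "cpu-naive", "avx512", "wgpu".
    fn id(&self) -> &'static str;
    /// Capability report consulted by the dispatch policy.
    fn caps(&self) -> KernelCaps;
    /// Popcount scan over packed 1-bit codes; returns Hamming distances.
    fn scan(&self, query: &[u64], codes: &[u64], words_per_code: usize) -> Vec<u32>;
}

/// Baseline CPU kernel: portable, fully deterministic popcount.
pub struct CpuNaive;

impl VectorKernel for CpuNaive {
    fn id(&self) -> &'static str { "cpu-naive" }
    fn caps(&self) -> KernelCaps {
        KernelCaps { deterministic: true, max_dim: usize::MAX }
    }
    fn scan(&self, query: &[u64], codes: &[u64], words_per_code: usize) -> Vec<u32> {
        codes
            .chunks_exact(words_per_code)
            .map(|code| {
                code.iter()
                    .zip(query)
                    .map(|(c, q)| (c ^ q).count_ones())
                    .sum()
            })
            .collect()
    }
}

fn main() {
    let k = CpuNaive;
    let query = vec![0u64; 2];                  // all-zero 128-bit query
    let codes = vec![u64::MAX, u64::MAX, 0, 1]; // two packed 128-bit codes
    let d = k.scan(&query, &codes, 2);
    assert_eq!(d, vec![128, 1]);
    println!("{} -> {:?}", k.id(), d);
}
```

A dispatch policy like `pick_kernel` would then compare each candidate's `caps()` against the request before ever calling `scan`.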
```mql5
//+------------------------------------------------------------------+
//|                                              HammerStarSweep.mq5 |
//|                        Hammer / Shooting-Star Liquidity-Sweep EA |
//|                                                                  |
//|  Mirrors v3/docs/examples/mt5-hammer-star-sweep/bridge/strategy  |
//|  Two operating modes:                                            |
//|    1) Native: detects sweeps + candle pattern in MQL5            |
//|    2) Bridge: reads ruflo_signals.json written by ruflo bridge   |
//+------------------------------------------------------------------+
#property copyright "Ruflo - issue #1720"
```
150-char summary: First production Rust impl of FINGER (WWW 2023) — skips 80%+ of graph ANN distance computations using precomputed residual bases, reducing per-query cost in vector search pipelines.
Modern vector databases — Qdrant, Milvus, Weaviate, Pinecone — all rely on graph-based approximate nearest neighbor (ANN) algorithms like HNSW. The shared bottleneck: every graph edge traversal requires an O(D) exact distance computation against a D-dimensional embedding. At D=128 (SIFT, visual features) or D=1536 (OpenAI text embeddings), over 80% of these computations are wasted — the neighbor is too far to ever enter the top-k result set.
FINGER (Chen et al., WWW 2023, arXiv:2206.11408) solves this by precomputing a K-dimensional orthonormal basis from each graph node's edge-residual vectors at build time. During beam search, it projects the query residual onto this K-dimensional subspace to form a cheap low-rank distance estimate, paying the exact O(D) computation only for neighbors whose estimate survives the current top-k threshold.
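The skip can be sketched as follows. This is a minimal illustration of the idea, not the described implementation: the projection here is a plain D × K orthonormal matrix taken as input, the function names are invented, and a real FINGER build derives the basis from edge residuals. Because projection onto an orthonormal subspace can only shrink an L2 distance, the estimate is a lower bound, which makes the prune safe.

```rust
// Project a D-dim vector onto a K-dim orthonormal basis
// (basis is row-major, D rows x K cols).
fn project(v: &[f32], basis: &[f32], k: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; k];
    for (i, &x) in v.iter().enumerate() {
        for j in 0..k {
            out[j] += x * basis[i * k + j];
        }
    }
    out
}

fn sq_l2(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y) * (x - y)).sum()
}

/// Beam-search edge visit: cheap K-dim lower bound first; exact O(D)
/// distance only if the bound beats the current top-k threshold.
fn visit_edge(q: &[f32], n: &[f32], basis: &[f32], k: usize, threshold: f32) -> Option<f32> {
    let approx = sq_l2(&project(q, basis, k), &project(n, basis, k));
    if approx >= threshold {
        return None; // pruned: skip the exact O(D) computation
    }
    Some(sq_l2(q, n)) // survivor: pay the exact distance
}

fn main() {
    // D = 4, K = 2, basis = first two standard basis vectors (orthonormal).
    let basis = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0];
    let q = [1.0, 0.0, 0.0, 0.0];
    let far = [9.0, 9.0, 0.0, 0.0];
    let near = [1.0, 0.1, 0.0, 0.0];
    assert_eq!(visit_edge(&q, &far, &basis, 2, 5.0), None);
    assert!(visit_edge(&q, &near, &basis, 2, 5.0).is_some());
}
```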
| Field | Value |
|---|---|
| Date | 2026-04-26 |
| Domain | NV-diamond magnetometry × 60 GHz mmWave radar × WiFi CSI × multistatic fusion |
| Status | Research spec — speculative architecture, not a delivered system. Educational + safety-critical use cases only. |
| Refines | ADR-089 (nvsim simulator), ADR-029 (RuvSense multistatic), ADR-021 (vitals), ADR-022 (wifiscan) |
Keywords: filtered vector search · ACORN · HNSW · approximate nearest neighbor (ANN) · predicate filter · vector database · Rust · RAG · semantic search · low selectivity recall · embedding search · k-NN graph · ruvector
TL;DR.
`ruvector-acorn` is a pure-Rust implementation of ACORN (Patel et al., SIGMOD 2024, arXiv:2403.04871) that solves filtered HNSW's recall-collapse problem at low predicate selectivity. 96% recall@10 at 1% selectivity, 99 K QPS at 50% selectivity, 23 ms parallel index build for 5 K × 128. Drop-in `FilteredIndex` trait — works with any `Fn(u32) -> bool` predicate (equality, range, geo, ACL, composite). No C/C++ FFI, no unsafe, no BLAS dependency. Source: github.com/ruvnet/RuVector.
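A minimal sketch of what predicate-driven search looks like. The trait shape and brute-force backend below are assumptions for illustration, not `ruvector-acorn`'s actual API; a flat scan stands in for the ACORN-traversed HNSW graph, but the `Fn(u32) -> bool` contract is the same one the TL;DR names.

```rust
pub trait FilteredIndex {
    /// Return ids of the k nearest stored vectors that satisfy `filter`.
    fn search_filtered<F: Fn(u32) -> bool>(&self, query: &[f32], k: usize, filter: F) -> Vec<u32>;
}

/// Toy flat backend standing in for the ACORN-traversed HNSW graph.
pub struct FlatIndex {
    pub vectors: Vec<Vec<f32>>,
}

impl FilteredIndex for FlatIndex {
    fn search_filtered<F: Fn(u32) -> bool>(&self, query: &[f32], k: usize, filter: F) -> Vec<u32> {
        let mut scored: Vec<(f32, u32)> = self
            .vectors
            .iter()
            .enumerate()
            .filter(|(id, _)| filter(*id as u32)) // predicate applied during traversal
            .map(|(id, v)| {
                let d: f32 = v.iter().zip(query).map(|(a, b)| (a - b) * (a - b)).sum();
                (d, id as u32)
            })
            .collect();
        scored.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
        scored.into_iter().take(k).map(|(_, id)| id).collect()
    }
}

fn main() {
    let idx = FlatIndex {
        vectors: vec![vec![0.0, 0.0], vec![1.0, 0.0], vec![0.1, 0.0], vec![5.0, 5.0]],
    };
    // Composite predicate: only even ids (e.g. an ACL or tenant filter).
    let hits = idx.search_filtered(&[0.0, 0.0], 2, |id| id % 2 == 0);
    assert_eq!(hits, vec![0, 2]);
}
```

The point of the trait bound is that equality, range, geo, ACL, and composite filters all reduce to the same closure shape, so the index never needs to know the filter's semantics.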
RuView now has a cheap similarity sensor baked into the pipeline — a way to ask "have I seen something like this before?" for any embedding (poses, CSI features, room signatures) without paying the full floating-point cost. It uses a technique called RaBitQ-style binary sketching: each embedding gets compressed to one bit per dimension (32× smaller in memory) and compared with a single CPU instruction (POPCNT on Intel/AMD, NEON vcnt on ARM/Pi/Mac).
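The sketching step itself fits in a few lines. A minimal sketch assuming sign-bit quantization and u64 bit-packing; the function names are illustrative, not RuView's actual API.

```rust
/// Compress a float embedding to 1 bit per dimension: bit i = 1 iff v[i] >= 0.
/// An f32 dimension (32 bits) becomes 1 bit, hence the 32x memory reduction.
fn sketch(v: &[f32]) -> Vec<u64> {
    let mut words = vec![0u64; (v.len() + 63) / 64];
    for (i, &x) in v.iter().enumerate() {
        if x >= 0.0 {
            words[i / 64] |= 1u64 << (i % 64);
        }
    }
    words
}

/// Hamming distance between two sketches; count_ones() compiles down to
/// POPCNT on x86-64 and NEON vcnt on ARM.
fn hamming(a: &[u64], b: &[u64]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    let a = sketch(&[0.3, -1.2, 0.7, -0.4]);
    let b = sketch(&[0.1, -0.5, 0.9, -0.1]); // same sign pattern -> distance 0
    let c = sketch(&[-0.3, 1.2, -0.7, 0.4]); // flipped signs -> distance 4
    assert_eq!(hamming(&a, &b), 0);
    assert_eq!(hamming(&a, &c), 4);
}
```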
The architectural decision is ADR-084 (merged on main, status: Proposed). The first implementation pass — the foundation Sketch / SketchBank API — is on branch feat/adr-084-pass-1-sketch-module (commits 6fd5b7d, 1df9d5f7d).
A lot of what RuView does is the same shape of question over and over:
Give your AI agents fast, trustworthy memory — without standing up a vector database.
ruLake is the layer between your agents and the data they remember. Plug in the storage you already have (S3, BigQuery, Snowflake, Parquet, files), expose it through one MCP tool, and every agent on every host gets the same low-latency, content-addressed view of memory.
Created by rUv. Part of the RuVector ecosystem alongside `ruvector-rabitq` (1-bit compression kernel) and RVF (durable segment format). Repo: ruvnet/RuLake.
Two 1-bit ANN distance estimators (symmetric Charikar-LSH + asymmetric RaBitQ-2024), 32× per-vector code compression, 21× full-index memory compression, 3.13× end-to-end speedup at 100% recall@10 on n = 100,000 clustered D=128 vectors — pure Rust, no C/C++ deps, no SIMD intrinsics yet.
Part of the ruvector vector search library. Branch: `research/nightly/2026-04-23-rabitq` (head `6c6e04554`) · ADR-154 · PR #370
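The symmetric (Charikar-LSH) estimator mentioned above can be illustrated directly: for sign-bit codes produced by random hyperplanes, the expected fraction of differing bits between two codes is θ/π, so cos(π·h/D) recovers a cosine-similarity estimate from a Hamming distance h. The sketch below uses raw coordinate signs instead of random hyperplane projections, so it is a simplification of the actual estimator, exact only at the extremes.

```rust
fn sign_bits(v: &[f32]) -> Vec<u64> {
    let mut words = vec![0u64; (v.len() + 63) / 64];
    for (i, &x) in v.iter().enumerate() {
        if x >= 0.0 {
            words[i / 64] |= 1u64 << (i % 64);
        }
    }
    words
}

fn hamming(a: &[u64], b: &[u64]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

/// Symmetric 1-bit cosine estimate from two codes of dimension d:
/// estimated angle = pi * h / d, estimate = cos(angle).
fn est_cosine(a: &[u64], b: &[u64], d: usize) -> f32 {
    let h = hamming(a, b) as f32;
    (std::f32::consts::PI * h / d as f32).cos()
}

fn main() {
    let d = 4;
    let a = sign_bits(&[1.0, 1.0, 1.0, 1.0]);
    let b = sign_bits(&[1.0, 1.0, 1.0, 1.0]);     // identical  -> estimate  1.0
    let c = sign_bits(&[-1.0, -1.0, -1.0, -1.0]); // opposite   -> estimate -1.0
    assert!((est_cosine(&a, &b, d) - 1.0).abs() < 1e-6);
    assert!((est_cosine(&a, &c, d) + 1.0).abs() < 1e-6);
}
```

The asymmetric RaBitQ-2024 estimator improves on this by keeping the query uncompressed, which is why the two estimators are listed separately above.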
A debugging and control layer for embodied graph systems whose structure is knowable — the connectome — rather than learned.
"OS" in the Linux sense: infrastructure for introspection and intervention, not a mystical claim about emergent mind. Not "mind upload." Not "digital consciousness." Connectome OS is the graph-native runtime that mounts on top of a connectome + a spiking engine and lets you probe, perturb, and reason about the structure — cut the wiring, measure the fracture, ask what substructure carried the failure.
Built as a Rust example crate (examples/connectome-fly/) on the RuVector graph primitives stack. Tier-1 demonstrator is the fruit fly (10⁴–10⁵ neurons, FlyWire v783-compatible); Tier 2 is mouse cortical regions (~29 engineer-weeks of named follow-up work to scale the in-tree substrate); Tier 3 is explicitly not on the roadmap.