From rules to governed AI architectures
We’re extending discovered rules into symbolic priors for sparse neural networks: a fast, deterministic decision layer that sits between perception and action.
Perception layer
(vision models, LLMs, sensor fusion)
Interprets the world, outputs structured state and uncertainty estimates. Statistical and adaptive.
↓
Basiliac decision layer
(sparse network seeded with symbolic rules)
Deterministic, auditable, real-time action logic with explicit triggers. Every decision path is traceable.
↓
Governance layer
Monitors rule firing, detects drift, enforces safe abstention, escalates when needed. Fully observable.
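The three layers above can be sketched as minimal interfaces. Everything here is a hedged illustration: the names (`WorldState`, `decide`, `govern`), fields, and thresholds are assumptions for the sketch, not the Basiliac API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    """Structured output of the perception layer (illustrative fields)."""
    obstacle_distance_m: float
    closing_speed_mps: float
    uncertainty: float  # perception's own confidence estimate, 0..1

def decide(state: WorldState) -> str:
    """Decision layer: deterministic, same state in, same action out."""
    if state.uncertainty > 0.5:
        return "ABSTAIN"  # defer when perception itself is unsure
    if state.obstacle_distance_m < 5.0 and state.closing_speed_mps > 2.0:
        return "BRAKE"    # explicit, auditable trigger
    return "CONTINUE"

def govern(action: str) -> str:
    """Governance layer: escalate abstentions to a human operator."""
    return "ESCALATE_TO_HUMAN" if action == "ABSTAIN" else action

print(govern(decide(WorldState(3.0, 4.0, 0.1))))  # BRAKE
print(govern(decide(WorldState(3.0, 4.0, 0.9))))  # ESCALATE_TO_HUMAN
```

Note the division of labor: only the perception layer is statistical; everything downstream of `WorldState` is plain, inspectable control flow.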
This creates a hybrid system:
- statistical flexibility where you need it (perception)
- deterministic logic where you need it (action)
- human oversight where you need it (escalation)
Why rules must cooperate, not just fire
In real systems, decision logic is rarely a single trigger. Multiple valid rules can activate at once—some pushing toward action, others signaling that action is unnecessary or unsafe.
A governed decision layer must coordinate these signals: suppress actions when context makes them irrelevant, prioritize safety when constraints tighten, and abstain when the state is ambiguous.
For example, one rule might indicate “take protective action” based on proximity and speed, while another recognizes a contextual constraint that removes the risk pathway (e.g., the situation is resolving away from the hazard, or an alternative safe trajectory exists).
The goal isn’t “more rules.” The goal is structured interaction between rules so the system chooses the right outcome deterministically—and can explain that choice.
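The interaction described above, where a protective-action rule is vetoed by a contextual constraint, can be sketched as a small deterministic arbiter. The rule names, state fields, and the suppress/abstain scheme are illustrative assumptions, not the Basiliac implementation.

```python
RULES = [
    # (name, condition, verdict): verdicts are act, suppress, or abstain.
    ("protective_action",
     lambda s: s["distance_m"] < 5 and s["speed_mps"] > 2,
     ("act", "BRAKE")),
    ("hazard_resolving",  # contextual constraint removes the risk pathway
     lambda s: s["trajectory"] == "diverging",
     ("suppress", "BRAKE")),
    ("ambiguous_state",   # safe abstention when the state is unclear
     lambda s: s["confidence"] < 0.5,
     ("abstain", None)),
]

def arbitrate(state):
    """Deterministically resolve all fired rules into one action plus a trace."""
    fired = [(name, verdict) for name, cond, verdict in RULES if cond(state)]
    trace = [name for name, _ in fired]
    if any(v[0] == "abstain" for _, v in fired):
        return "ABSTAIN", trace  # ambiguity outranks everything else
    suppressed = {v[1] for _, v in fired if v[0] == "suppress"}
    actions = [v[1] for _, v in fired if v[0] == "act" and v[1] not in suppressed]
    return (actions[0] if actions else "NO_ACTION"), trace

# Both rules fire, but the contextual constraint suppresses braking:
state = {"distance_m": 3, "speed_mps": 4, "trajectory": "diverging", "confidence": 0.9}
print(arbitrate(state))  # ('NO_ACTION', ['protective_action', 'hazard_resolving'])
```

The returned trace is the point: the system doesn't just pick an outcome, it records exactly which rules fired and how their interaction was resolved.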
That’s why we’re exploring sparse networks seeded with symbolic rules: to create a decision layer where rule interactions form a controlled, traceable decision graph, not a black box.
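One way to read "seeded with symbolic rules" is fixed, hand-set sparse weights whose threshold units mirror a rule's literals. The toy sketch below (hand-picked weights and thresholds, not the actual seeding procedure) encodes "distance < 5 AND speed > 2" as a two-layer step network whose hidden activations expose which literals fired.

```python
import numpy as np

def step(z):
    """Hard threshold activation: deterministic, no gradients needed."""
    return (z > 0).astype(float)

# Inputs: x = [distance_m, speed_mps]
# Literal units: (distance < 5) and (speed > 2), one sparse row each.
W1 = np.array([[-1.0, 0.0],   # -distance + 5 > 0, i.e. distance < 5
               [ 0.0, 1.0]])  #  speed - 2 > 0,    i.e. speed > 2
b1 = np.array([5.0, -2.0])

# AND unit: fires only when both literal units fire (sum exceeds 1.5).
w2 = np.array([1.0, 1.0])
b2 = -1.5

def rule_fires(x):
    """Return (fired?, literal activations). The hidden vector is the trace."""
    h = step(W1 @ x + b1)
    return bool(step(np.array([w2 @ h + b2]))[0]), h

print(rule_fires(np.array([3.0, 4.0])))   # (True, array([1., 1.]))
print(rule_fires(np.array([10.0, 4.0])))  # (False, array([0., 1.]))
```

Because the weights are seeded rather than learned, the unit's behavior is fully determined by the rule, and inspecting `h` tells you exactly which literals were satisfied on any given input.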
Why determinism matters: When a system brakes, approves a medical procedure, or shuts down a reactor, regulators and operators need to know exactly what fired and why—not probabilistic explanations, but traceable, reproducible decision paths. The Basiliac layer provides that certainty.
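As a toy illustration of that reproducibility, a deterministic decision plus canonical serialization yields a byte-identical audit record on replay. The field names and the stand-in rule here are assumptions for the sketch.

```python
import hashlib
import json

def decide(state):
    """Stand-in deterministic rule: brake when close and fast."""
    return "BRAKE" if state["distance_m"] < 5 and state["speed_mps"] > 2 else "CONTINUE"

def audit_record(state):
    """Canonically serialize state + decision; identical inputs hash identically."""
    record = {"state": state, "action": decide(state)}
    blob = json.dumps(record, sort_keys=True)  # canonical key order
    return record["action"], hashlib.sha256(blob.encode()).hexdigest()

a1 = audit_record({"distance_m": 3.0, "speed_mps": 4.0})
a2 = audit_record({"distance_m": 3.0, "speed_mps": 4.0})
assert a1 == a2  # replaying the same state reproduces the same decision path
```

A regulator holding the hash can re-run the recorded state and verify, bit for bit, that the system would make the same call again, something no probabilistic explanation can offer.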
The result: AI systems that can perceive with neural networks but act with deterministic logic humans can inspect, audit, and trust.