Decisions that generalize. Logic that survives. Governance that scales.

Basiliac builds decision logic you can inspect, monitor, and trust—so when an AI system takes an action, you can answer three everyday questions: Why did it do that? Will it do it again under similar conditions? And does it know when to pause and ask for help? Our combinatorial reasoning engine discovers compact rule sets from massive search spaces, designed for stability under drift and safe abstention in high-stakes environments.

  • Makes decisions legible: explicit triggers you can audit (not “the model said so”)
  • Keeps behavior stable: logic designed to remain coherent as conditions change
  • Knows when to abstain: defers or escalates instead of guessing under uncertainty

We’re looking for mission-driven researchers, engineers, and domain experts focused on governed decision systems. Collaboration and employment conversations only — this site is not a product offer.

Basiliac does not provide financial or investment advice. References to financial deployments describe technology validation, not performance guarantees or investment recommendations.

AI can perceive. But can we trust it to act?

Modern AI excels at interpretation—vision systems detect pedestrians, language models extract intent, predictive models forecast risk. But decisions are where liability concentrates.

When a system decides to brake, approve a loan, or route a patient, three questions matter:

  • Why did it act? (traceability)
  • Will it act the same way tomorrow? (stability under drift)
  • Does it know when NOT to act? (safe abstention)

Most AI systems still struggle to answer these questions reliably. Basiliac is built to.

A reasoning engine that discovers governing logic from chaos

Our system ingests massive libraries of candidate conditions (typically 50,000+), navigates search spaces of 10⁴⁰+ possibilities (fundamentally impossible to brute-force), and extracts simple, human-readable logic.

It outputs compact, auditable decision rules: 5–9 simple conditions joined by AND logic.

Inputs

  • Structured state + uncertainty (from sensors, models, or enterprise data)
  • Candidate condition library (often 50,000+)

Outputs

  • Compact rule sets (5–9 conditions) + abstention triggers
  • Traceable decision paths with explicit firing conditions

Governance artifacts

  • Walk-forward and stress-slice evaluation protocol
  • Monitoring hooks: firing rates, drift signals, escalation thresholds
  • Versioned rule packages for audit and rollback

IF pedestrian_distance < 12m
AND relative_velocity > 15 km/h
AND brake_response_time > 0.4s
AND road_friction < 0.6
AND no_escape_lane_available
THEN emergency_brake

These aren’t hand-coded. They’re discovered from data—but unlike black-box ML, every trigger is explicit, monitorable, and explainable.
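To make the format concrete outside the engine, here is a minimal Python sketch of how such a rule could be represented and evaluated (illustrative field and function names only, not Basiliac's internal format): each trigger is an explicit, named predicate over the structured state, the rule fires only when every condition holds, and it abstains when a required input is missing.

from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Structured state, e.g. produced by the perception layer (booleans as 0.0/1.0).
State = Dict[str, float]

@dataclass(frozen=True)
class Condition:
    name: str                        # human-readable label, e.g. "road_friction < 0.6"
    inputs: tuple                    # state keys this condition reads
    predicate: Callable[[State], bool]

@dataclass(frozen=True)
class Rule:
    action: str                      # e.g. "emergency_brake"
    conditions: tuple                # 5-9 conditions joined by AND

    def evaluate(self, state: State) -> Optional[str]:
        """Return the action if all conditions fire, 'abstain' if a required
        input is missing, and None if the rule simply does not apply."""
        for cond in self.conditions:
            if any(key not in state for key in cond.inputs):
                return "abstain"         # safe abstention: incomplete state
            if not cond.predicate(state):
                return None              # condition not met: rule does not fire
        return self.action

# The pedestrian-braking rule above, written as explicit conditions.
emergency_brake = Rule(
    action="emergency_brake",
    conditions=(
        Condition("pedestrian_distance < 12m", ("pedestrian_distance",),
                  lambda s: s["pedestrian_distance"] < 12.0),
        Condition("relative_velocity > 15 km/h", ("relative_velocity",),
                  lambda s: s["relative_velocity"] > 15.0),
        Condition("brake_response_time > 0.4s", ("brake_response_time",),
                  lambda s: s["brake_response_time"] > 0.4),
        Condition("road_friction < 0.6", ("road_friction",),
                  lambda s: s["road_friction"] < 0.6),
        Condition("no_escape_lane_available", ("escape_lane_available",),
                  lambda s: s["escape_lane_available"] < 0.5),
    ),
)

if __name__ == "__main__":
    state = {"pedestrian_distance": 9.0, "relative_velocity": 22.0,
             "brake_response_time": 0.5, "road_friction": 0.45,
             "escape_lane_available": 0.0}
    print(emergency_brake.evaluate(state))   # -> emergency_brake

Because every trigger is a named predicate, firings can be logged and audited condition by condition rather than explained after the fact.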

Proven where the world changes fast

Before applying governed decision logic to safety-critical domains, we validated the underlying discovery approach in an environment that is non-stationary, adversarial, and punishes overfitting: live financial markets (since 2011), through partnerships with licensed and regulated entities.

What this does not claim: This is not a statement about investment performance, trading returns, or future outcomes.

What it does validate: That compact decision logic—5–9 explicit conditions plus safe abstention—can be discovered, monitored, and kept operationally coherent even as regimes shift and participants adapt.

We chose this proving ground because:

  • Markets are adversarial learning environments
  • Patterns degrade as participants adapt
  • Transaction costs and execution reality destroy theoretical edges
  • Regime changes invalidate assumptions overnight
  • Markets demand governance:
    • regulators require auditable decision logic
    • risk managers need explainable triggers
    • operators must know when systems should abstain

What we validated: Compact, relational logic (5–9 conditions) exhibits forward stability—the structural relationships captured during discovery remain coherent in unseen data, even as surface statistics shift.

This validation wasn’t about financial returns. It was about generalization—proving that discovered rules encode durable structure rather than transient noise.

Why this matters beyond finance: If decision logic can remain coherent in non-stationary, adversarial environments, it can remain coherent in autonomous systems navigating evolving road conditions, medical protocols adapting to new variants and treatment knowledge, and security systems facing novel attack vectors.

We’re not marketing a product. We’re demonstrating a capability—one we believe is foundational for governing AI in high-stakes domains.

Governing decisions in domains that matter

The underlying technology is domain-agnostic, but each deployment requires:

  • Feature engineering: defining candidate conditions relevant to the domain
  • Stability screening: walk-forward validation and stress testing (see the sketch after this list)
  • Governance integration: monitoring, drift detection, escalation protocols
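To make the stability-screening step concrete, here is a minimal walk-forward sketch in Python (hypothetical helper names; the discovery and scoring functions are passed in as stand-ins, and the real protocol is domain-specific): a candidate survives only if what is discovered in each training window stays coherent on the unseen window that follows it.

from typing import Callable, Sequence

Record = dict   # one observation of structured state plus its outcome

def walk_forward_splits(n_records: int, train_size: int, test_size: int):
    """Yield (train, test) index ranges where each test window comes
    strictly after its training window -- no lookahead."""
    start = 0
    while start + train_size + test_size <= n_records:
        yield (range(start, start + train_size),
               range(start + train_size, start + train_size + test_size))
        start += test_size

def passes_stability_screen(
    data: Sequence[Record],
    discover: Callable[[Sequence[Record]], object],      # returns a candidate rule
    score: Callable[[object, Sequence[Record]], float],  # rule quality on a slice
    train_size: int = 1000,
    test_size: int = 250,
    min_forward_score: float = 0.8,
) -> bool:
    """A candidate survives only if every rule discovered in a training
    window stays above the bar on the unseen window that follows it."""
    for train_idx, test_idx in walk_forward_splits(len(data), train_size, test_size):
        rule = discover([data[i] for i in train_idx])
        forward = score(rule, [data[i] for i in test_idx])
        if forward < min_forward_score:
            return False       # forward coherence degrades: reject the candidate
    return True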

Near-term focus areas

Autonomous systems (vehicles, drones, robotics)

When to brake, yield, or escalate to human oversight. Rules that remain valid as road conditions, sensor accuracy, and traffic patterns evolve.

  • emergency braking under degraded visibility
  • lane-change safety with uncertainty in sensor readings
  • escalation to human when conditions exceed training envelope

Medical triage and routing

Which patients need immediate specialist attention. Which cases can be deferred safely. Explicit logic auditable by clinicians and regulators.

  • ICU admission based on vital-sign trend dynamics
  • specialist escalation for ambiguous diagnostic patterns
  • safe discharge criteria with post-treatment monitoring

Cybersecurity response

When to block, quarantine, alert, or escalate. Rules that adapt to new attack vectors without catastrophic false positives.

  • automated quarantine for anomalous lateral movement
  • escalation thresholds for novel threat signatures
  • safe abstention when pattern confidence is low

Industrial safety and operations

Shutdown triggers for manufacturing, energy systems, and chemical processes. Logic that regulators can inspect and certify.

  • emergency shutdown for multi-parameter deviation
  • preventive intervention before threshold breach
  • maintenance escalation based on degradation trajectories

From rules to governed AI architectures

We’re extending discovered rules into symbolic priors for sparse neural networks—a fast, deterministic decision layer that sits between perception and action.

Perception layer

(vision models, LLMs, sensor fusion)

Interprets the world, outputs structured state and uncertainty estimates. Statistical and adaptive.

Basiliac decision layer

(sparse network seeded with symbolic rules)

Deterministic, auditable, real-time action logic with explicit triggers. Every decision path is traceable.

Governance layer

Monitors rule firing, detects drift, enforces safe abstention, escalates when needed. Fully observable.

This creates a hybrid system:

  • statistical flexibility where you need it (perception)
  • deterministic logic where you need it (action)
  • human oversight where you need it (escalation)
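A thin Python skeleton of that hybrid arrangement could look like the sketch below (all class and method names are illustrative, not an actual Basiliac API): perception emits structured state plus uncertainty, the decision layer applies explicit rules deterministically, and the governance layer records what fired and enforces abstention and escalation.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Perceived:
    state: Dict[str, float]          # structured state from sensors/models
    uncertainty: Dict[str, float]    # per-feature uncertainty estimates

class DecisionLayer:
    """Deterministic action logic: explicit rules, no sampling, no retraining
    at decision time. Returns an action name, 'abstain', or 'no_action'."""
    def __init__(self, rules):
        self.rules = rules           # e.g. Rule objects as sketched earlier

    def decide(self, perceived: Perceived) -> str:
        for rule in self.rules:
            outcome = rule.evaluate(perceived.state)
            if outcome is not None:
                return outcome       # first applicable rule wins (sketch only)
        return "no_action"

@dataclass
class GovernanceLayer:
    """Observes the decision layer: logs firings, checks uncertainty, escalates."""
    firing_log: List[str] = field(default_factory=list)
    max_uncertainty: float = 0.3

    def review(self, perceived: Perceived, action: str) -> str:
        self.firing_log.append(action)
        too_uncertain = any(u > self.max_uncertainty
                            for u in perceived.uncertainty.values())
        if action == "abstain" or too_uncertain:
            return "escalate_to_human"
        return action

def act(perception_output: Perceived, decision: DecisionLayer,
        governance: GovernanceLayer) -> str:
    """Perception -> deterministic decision -> governed final action."""
    action = decision.decide(perception_output)
    return governance.review(perception_output, action)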

Why rules must cooperate, not just fire

In real systems, decision logic is rarely a single trigger. Multiple valid rules can activate at once—some pushing toward action, others signaling that action is unnecessary or unsafe. A governed decision layer must coordinate these signals: suppress actions when context makes them irrelevant, prioritize safety when constraints tighten, and abstain when the state is ambiguous.

For example, one rule might indicate “take protective action” based on proximity and speed, while another recognizes a contextual constraint that removes the risk pathway (e.g., the situation is resolving away from the hazard, or an alternative safe trajectory exists). The goal isn’t “more rules.” The goal is structured interaction between rules so the system chooses the right outcome deterministically—and can explain that choice.
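One simple way to picture that structured interaction is a deterministic resolution step over whatever rules fired, with explicit suppression and priority relations. The sketch below uses made-up rule names and a deliberately flat priority list; the decision graphs we are exploring are richer than this.

from typing import Dict, List, Set, Tuple

def resolve(
    fired: List[str],
    suppresses: Dict[str, Set[str]],
    actionable: Set[str],
    priority: List[str],
) -> Tuple[str, List[str]]:
    """Deterministically combine co-firing rules.

    suppresses: rule -> rules it makes irrelevant in this context
    actionable: rules that correspond to an action (vs. contextual constraints)
    priority:   fixed ordering used to break remaining ties
    Returns the chosen outcome plus an audit trail explaining the choice.
    """
    active = set(fired)
    trail = [f"fired: {sorted(active)}"]
    for rule in fired:
        if rule in active:
            removed = suppresses.get(rule, set()) & active
            if removed:
                trail.append(f"{rule} suppresses {sorted(removed)}")
                active -= removed
    candidates = [r for r in priority if r in active and r in actionable]
    if candidates:
        trail.append(f"selected by priority: {candidates[0]}")
        return candidates[0], trail
    trail.append("no actionable rule remains -> no_action")
    return "no_action", trail

# The example above: a protective-action rule co-fires with a contextual
# rule that recognizes the hazard pathway is already resolving.
outcome, why = resolve(
    fired=["emergency_brake", "hazard_resolving"],
    suppresses={"hazard_resolving": {"emergency_brake"}},
    actionable={"emergency_brake"},
    priority=["emergency_brake"],
)
print(outcome)   # -> no_action
print(why)       # records exactly which rule suppressed which

In this toy run the contextual rule suppresses the protective action, the outcome is no_action, and the trail records exactly why that choice was made.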

That’s why we’re exploring sparse networks seeded with symbolic rules: to create a decision layer where rule interactions form a controlled, traceable decision graph, not a black box.

Why determinism matters: When a system brakes, approves a medical procedure, or shuts down a reactor, regulators and operators need to know exactly what fired and why—not probabilistic explanations, but traceable, reproducible decision paths. The Basiliac layer provides that certainty.

The result: AI systems that can perceive with neural networks but act with deterministic logic humans can inspect, audit, and trust.

From research to infrastructure

Basiliac was founded in 2025 to commercialize and scale a combinatorial reasoning technology that has been developed and validated over more than a decade.

The foundation: The core methodology emerged from sustained research and live deployment in financial markets, where it has operated continuously since 2011 through partnerships with licensed and regulated entities.

The journey: Our founder has been building and refining this technology for over 13 years, iterating through live deployments, regime changes, and the unforgiving feedback loops of adversarial environments. Basiliac represents the formalization of that research—properly structured to scale beyond finance.

Why now: As AI capabilities accelerate, the need for governed action layers has become urgent. Basiliac exists to adapt this battle-tested approach to domains where decisions carry legal, safety, or societal consequences—and where “the model said so” isn’t good enough.

What we bring:

  • a proven discovery process (validated in adversarial, non-stationary environments)
  • 13+ years of iteration on what makes rules survive versus fail
  • deep expertise in stability testing, drift detection, and safe abstention
  • a clear vision for how symbolic logic and neural networks can coexist

These principles guide our research and engineering

They are also the basis for how collaborators evaluate and operate the system.

Causality first, determinism always

Routines must be computable in real time using only information available at decision time. Every decision path is deterministic and reproducible—essential for audits, certification, and operational trust.

Parsimony is a feature

Compact logic reduces degrees of freedom, improves governance, and makes failure modes legible.

Stability over cleverness

We prefer routines that survive drift and stress slices; if a routine doesn’t generalize, it doesn’t ship.

Traceability as a requirement

If we can’t explain what fired and why, we can’t operate it safely.

Abstention is strength

Safe systems know when not to act—and how to defer, escalate, or gather more information.

Measured claims

We report evaluation protocols and failure modes. We avoid performance marketing and guarantees.

If this mission resonates, reach out

We’re building governed decision systems for high-stakes AI—systems that can explain their actions, remain stable under drift, and abstain when they don’t know.

If your instinct is to make powerful automation accountable, and you’re motivated by the idea that “trust” must be engineered—not asserted—email us.

When you write, include:

  • your background (2–3 lines)
  • the domain you care about
  • what you want to build, test, or publish
  • links to relevant work (papers, GitHub, portfolio)

Collaboration and employment conversations only — this site is not a product offer.

Email: info@basiliac.ai

FAQ

If this works so well in finance, why build Basiliac instead of just running a fund?

Because the real opportunity isn’t optimizing another portfolio—it’s building technology that touches lives at scale. Financial markets provided a proving ground: adversarial, unforgiving, and non-stationary. Surviving there validated that our approach to governed decision-making can remain coherent under drift. But that was never the end goal.

We’re building toward tangible impact: autonomous systems that brake reliably, medical triage logic that routes patients safely, and industrial safety systems that prevent catastrophic failures. Finance stress-tested the foundations. Now we’re building the infrastructure for AI systems society actually needs to trust.

Why rules instead of end-to-end deep learning?

End-to-end models are powerful for perception but opaque for action. When a decision has legal, safety, or operational consequences, “the model said so” isn’t good enough. Rules provide traceability, enable monitoring, and allow humans to intervene at the right layer.

Don’t rules overfit just like ML models?

They can—which is why we obsess over stability. Our search process aggressively prunes for forward survivability. Multi-year validation in live deployments suggests the approach can work, though each domain requires its own stability discipline.

How do you handle concept drift?

We build drift detection into the governance layer. When rule coherence degrades, the system can abstain, escalate, or trigger a refresh cycle. This is governance by design, not an afterthought.
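As a rough illustration of what one such hook can look like (hypothetical thresholds and names; the governance layer tracks more than firing rates): compare a rule’s recent firing rate against the baseline observed during validation and map the deviation to ok, escalate, or refresh.

from collections import deque

class FiringRateMonitor:
    """Governance hook: flags a rule whose live firing rate drifts away
    from the rate observed during validation."""

    def __init__(self, baseline_rate: float, window: int = 1000,
                 warn_ratio: float = 2.0, halt_ratio: float = 4.0):
        self.baseline = baseline_rate          # expected firings per decision
        self.recent = deque(maxlen=window)     # 1 if the rule fired, else 0
        self.warn_ratio = warn_ratio
        self.halt_ratio = halt_ratio

    def record(self, fired: bool) -> str:
        self.recent.append(1 if fired else 0)
        if len(self.recent) < self.recent.maxlen:
            return "ok"                        # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        drift = max(rate, 1e-9) / max(self.baseline, 1e-9)
        if drift > self.halt_ratio or drift < 1 / self.halt_ratio:
            return "refresh"                   # coherence degraded: re-discover or abstain
        if drift > self.warn_ratio or drift < 1 / self.warn_ratio:
            return "escalate"                  # notify an operator
        return "ok"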

Is Basiliac offering a commercial product today?

No. We’ve built and validated core technology in live environments. This site exists to describe our research direction and to invite mission-aligned collaboration on governed decision systems.

When was Basiliac founded?

Basiliac was incorporated in 2025. The underlying technology has been in development and live deployment since 2011, refined through multiple iterations in financial markets.

Does Basiliac provide trading signals or investment advice?

No. This site does not offer, and should not be construed as offering, any financial or investment advice.

Disclaimer

This site does not provide financial or investment advice. Past performance in any domain does not guarantee future results. References to financial market deployments describe technology validation conducted with licensed and regulated entities, not performance guarantees or investment recommendations.