In ODIN (Omni-Domain Intelligence Network), every user intent flows through the Router Kernel, gets classified with confidence scoring, routes to specialized hubs, and produces governed memory writes with a full audit trail. No black boxes; complete traceability.
Intent-driven orchestration with governance at every step
Voice, text, or work order
Every interaction starts with intent. Voice via Assistant Hub (local-first with Whisper.cpp), text via Command Center, or structured work orders. Intent can be a question, a task, or a decision that needs context.
Intent classification & confidence scoring
The Router Kernel classifies intent, scores confidence, and determines which hub(s) should handle it. Priority rules apply: Legal can veto Sales promises. Sentinel can gate risky dependencies. Compass can require justification for RED decisions.
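As a rough sketch of the classification step, the following keyword-based classifier is purely illustrative (the hub names come from the text above; the keyword lists and the confidence formula are assumptions, standing in for whatever model the real Router Kernel uses):

```typescript
type Classified = { hubs: string[]; confidence: number };

// Hypothetical keyword hints per hub; a real classifier would be model-based.
const KEYWORDS: Record<string, string[]> = {
  compass: ["decide", "decision"],
  legal: ["contract", "compliance"],
  coding: ["implement", "refactor"],
};

function classify(intent: string): Classified {
  const text = intent.toLowerCase();
  const hubs = Object.entries(KEYWORDS)
    .filter(([, words]) => words.some((w) => text.includes(w)))
    .map(([hub]) => hub);
  // Confidence here is just match coverage, a stand-in for a real score.
  const confidence = hubs.length ? Math.min(1, 0.5 + 0.25 * hubs.length) : 0.2;
  return { hubs, confidence };
}
```

A low-confidence result (no hub matched) is the case where a production router would ask for clarification rather than guess.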
Specialized engine execution
Six specialized hubs, each with a single responsibility. Compass scores decisions. Academy generates training. Assistant captures context. Sentinel gates dependencies. Legal handles governance. Coding executes work orders. Hubs don't leak responsibilities.
Audit events + memory writes
Every hub output includes mandatory audit events and governed memory writes. Each memory write requires rationale (why this exists), ownership (who can change it), and dependencies (what relies on it). No silent changes.
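A minimal sketch of what "no silent changes" means at the type level, assuming hypothetical field names (the three required fields — rationale, ownership, dependencies — come from the text; everything else is illustrative):

```typescript
interface MemoryWrite {
  key: string;
  value: unknown;
  rationale: string;      // why this exists
  owner: string;          // who can change it
  dependencies: string[]; // what relies on it
}

// Returns a list of governance violations; an empty list means the
// write may proceed. Missing rationale or ownership blocks the write.
function validateWrite(w: MemoryWrite): string[] {
  const errors: string[] = [];
  if (!w.rationale.trim()) errors.push("rationale is required");
  if (!w.owner.trim()) errors.push("owner is required");
  if (!Array.isArray(w.dependencies)) errors.push("dependencies must be listed");
  return errors;
}
```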
Memory as governance
Memory is not optional. Memory is governance. Seven namespaces from global identity to session memory. Every write is queryable forever. The complete decision log, assumptions record, and audit trail live here.
The technical architecture that makes governance-first operations possible
Your high-level goal becomes a structured work order with objectives, constraints, and success criteria. Describe it once, get it done.
Complex work orders decompose into smaller, focused sub-work orders that can execute independently. Each tracked with full context.
Each sub-work order breaks into atomic tasks, the smallest unit of work. Every task traceable back to its source.
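The decomposition described above — work order to sub-work orders to atomic tasks, each pointing back at its source — can be sketched as a small data structure (the field names here are assumptions, not ODIN's actual schema):

```typescript
interface Task { id: string; parent: string; description: string }
interface SubWorkOrder { id: string; parent: string; tasks: Task[] }
interface WorkOrder { id: string; objective: string; subOrders: SubWorkOrder[] }

// Walks the tree and confirms every task traces back to the root
// work order through its parent links.
function traceable(wo: WorkOrder): boolean {
  return wo.subOrders.every(
    (sub) => sub.parent === wo.id && sub.tasks.every((t) => t.parent === sub.id),
  );
}
```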
When multiple hubs respond, priority rules apply. Legal Hub can veto Sales promises. Sentinel Hub can gate risky dependencies. Compass Hub can require justification for RED decisions. Conflicts are resolved, not hidden.
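One way the priority resolution above could work, sketched under assumptions (the verdict vocabulary and the simple first-blocker-wins rule are illustrative; only the hub priority order comes from the text):

```typescript
type HubResponse = { hub: string; verdict: "allow" | "veto" | "gate"; confidence: number };

// Priority order from the text: Legal outranks Sentinel outranks Compass.
const PRIORITY = ["legal", "sentinel", "compass"];

function resolve(responses: HubResponse[]): HubResponse {
  // A blocking verdict from a higher-priority hub wins over any "allow".
  for (const hub of PRIORITY) {
    const blocking = responses.find((r) => r.hub === hub && r.verdict !== "allow");
    if (blocking) return blocking;
  }
  // Otherwise the highest-confidence "allow" carries the intent forward.
  return [...responses].sort((a, b) => b.confidence - a.confidence)[0];
}
```

The point of the sketch: the conflict is resolved by an explicit rule and the winning response is returned, never silently dropped.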
Watch how intent flows through Router, Hub, and BrainDB
Architecture Demo
Intent → Router → Hub → BrainDB
6 Hub Engines
11 Audit Events
31 Core Packages
Understand how intent flows through the system and how governance is enforced at every step.
See how ODIN transforms your requirements into production-ready code with full traceability
```json
{
  "id": "wo-auth-jwt",
  "objective": "Add JWT authentication to the Express API",
  "requirements": [
    "Implement JWT token generation on login",
    "Add token verification middleware",
    "Support refresh token rotation"
  ],
  "constraints": {
    "noBreakingChanges": true,
    "targetFiles": ["src/auth/**", "src/middleware/**"],
    "testCoverage": ">90%"
  },
  "successCriteria": [
    "All existing tests pass",
    "New auth tests achieve 90%+ coverage",
    "Security audit passes"
  ]
}
```
Understanding the ODIN architecture and governance model
A chatbot is a conversational interface. ODIN is infrastructure. It has six specialized hubs with strict contracts, governance-aware memory (BrainDB), and complete audit trails. Every output includes rationale, ownership, and dependencies. It's designed to be left behind: transferable, documentable, operable by others.
"If ODIN depends on the founder to function, ODIN isn't done yet." Everything must be transferable, documentable (not explainable), and operable by others without quality degradation. This is how we measure if ODIN is ready: not features, but independence.
Compass Hub scores every decision by whether it increases or decreases dependency on the founder. GREEN = reduces dependency (proceed). YELLOW = neutral (document and proceed). RED = increases dependency (requires explicit justification). This prevents bottleneck accumulation.
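The GREEN/YELLOW/RED rule above reduces to a small decision function. This is a minimal sketch, assuming dependency impact is expressed as a signed delta (that numeric encoding is an assumption; the three ratings and their consequences come from the text):

```typescript
type Rating = "GREEN" | "YELLOW" | "RED";

function compassRating(dependencyDelta: number): Rating {
  if (dependencyDelta < 0) return "GREEN";    // reduces dependency: proceed
  if (dependencyDelta === 0) return "YELLOW"; // neutral: document and proceed
  return "RED";                               // increases dependency
}

// RED decisions may only proceed with an explicit justification on record.
function mayProceed(rating: Rating, justification?: string): boolean {
  return rating !== "RED" || Boolean(justification?.trim());
}
```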
Local-first by default. Assistant Hub uses Whisper.cpp for speech-to-text and Ollama for LLM inference; both run locally. Cloud is optional, with explicit opt-in only. No background processing without consent. Every capture creates an audit event with full provenance.
router.route (intent classification), hub.process (hub execution), memory.write/read, approval.request/grant/deny, assistant.capture (voice/text), sentinel.scan/block/allow. Every event includes correlation IDs, session attribution, and risk flags. Queryable forever.
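The event envelope that the list above implies can be sketched as follows — the eleven event names come directly from the text, while the field names (correlation ID, session attribution, risk flags) are illustrative assumptions:

```typescript
type AuditEventName =
  | "router.route" | "hub.process"
  | "memory.write" | "memory.read"
  | "approval.request" | "approval.grant" | "approval.deny"
  | "assistant.capture"
  | "sentinel.scan" | "sentinel.block" | "sentinel.allow";

interface AuditEvent {
  name: AuditEventName;
  correlationId: string; // ties events together across Router, Hub, BrainDB
  sessionId: string;     // session attribution
  riskFlags: string[];
  timestamp: string;     // ISO 8601
}

function makeEvent(name: AuditEventName, correlationId: string, sessionId: string): AuditEvent {
  return { name, correlationId, sessionId, riskFlags: [], timestamp: new Date().toISOString() };
}
```

Sharing one correlation ID across router.route, hub.process, and memory.write is what lets a single intent be replayed end to end later.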