Most AI interfaces are glorified text boxes. You type a question, you get an answer, and the system immediately forgets the entire interaction. The next time you need help, you start from scratch.
LUNA — Local Understanding & Navigation Assistant — takes a fundamentally different approach. It is not a destination you visit. It is the connective tissue between your team and ODIN's entire hub ecosystem.
## What LUNA Actually Does
LUNA serves as the primary interface for interacting with ODIN. But "interface" undersells it. LUNA is the entry point where unstructured human intent gets transformed into structured action across specialized hubs.
When you speak to LUNA or type a message, three things happen:
- Capture: Your input is transcribed (if voice) and preserved with full provenance — who said it, when, in what context.
- Classification: ODIN's Router analyzes your intent and determines which hub or combination of hubs should handle the request.
- Routing: Your request is dispatched to the appropriate hub with the relevant context from BrainDB attached.
This means a single natural language request like "draft a contract for the Acme proposal we discussed yesterday" triggers a chain: LUNA captures the request, the Router identifies Legal Hub as the target, BrainDB provides the context from yesterday's discussion, and the Legal Hub generates the artifact.
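The capture-classify-route chain can be sketched in a few lines. Everything here is illustrative: the hub names, keyword rules, and field names are hypothetical stand-ins for ODIN's actual Router logic, shown only to make the three steps concrete.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CapturedRequest:
    """Step 1 (Capture): the input plus full provenance."""
    speaker: str
    text: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Toy keyword classifier standing in for ODIN's Router (hypothetical rules).
KEYWORD_ROUTES = {
    "contract": "legal-hub",
    "proposal": "sales-hub",
    "code": "engineering-hub",
}

def classify(request: CapturedRequest) -> str:
    """Step 2 (Classification): pick the hub for this request."""
    for keyword, hub in KEYWORD_ROUTES.items():
        if keyword in request.text.lower():
            return hub
    return "general-hub"

def route(request: CapturedRequest) -> dict:
    """Step 3 (Routing): dispatch with provenance attached."""
    return {
        "hub": classify(request),
        "speaker": request.speaker,
        "captured_at": request.timestamp,
        "text": request.text,
    }

dispatch = route(CapturedRequest("alice", "Draft a contract for the Acme proposal"))
print(dispatch["hub"])  # → legal-hub
```

In the real system the classifier is a language model rather than a keyword table, but the shape of the data flowing through the chain is the same: input plus provenance in, hub assignment plus context out.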
## The Local-First Stack
LUNA runs entirely on your infrastructure. This is not a philosophical preference; it is an architectural decision with concrete implications.
### Voice: Whisper (Local)
Speech-to-text uses OpenAI's Whisper model, running locally. The 150MB model provides production-quality transcription without sending audio to any external service. Your voice interactions — which often contain the most candid and context-rich information — never leave your servers.
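As a sketch of what fully local transcription looks like, here is a minimal wrapper around the open-source `openai-whisper` package. This assumes that package is installed and is not necessarily how LUNA invokes the model internally; the point is that the entire pipeline, model weights included, runs on your own machine.

```python
def transcribe_local(audio_path: str, model_size: str = "base") -> str:
    """Transcribe an audio file entirely on-device with open-source Whisper.

    Assumes the `openai-whisper` package is installed. The model weights
    are cached locally after the first download, and no audio ever leaves
    the machine.
    """
    import whisper  # imported lazily so the model only loads when needed

    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path)
    return result["text"].strip()
```

The returned text would then be wrapped with provenance (speaker, timestamp, context) before entering the routing chain.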
### Language: Ollama (Local)
LUNA's conversational intelligence runs on Ollama with Llama 3.2 (3B parameters). This local model handles intent classification, context extraction, and conversational flow with approximately 2GB of memory overhead. It is fast, private, and does not require internet connectivity.
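Intent classification against a local model might look like the following sketch, which uses the `ollama` Python client against a locally running Ollama server with the `llama3.2` model pulled. The prompt wording and category names are illustrative assumptions, not LUNA's actual classification scheme.

```python
def classify_intent(user_text: str) -> str:
    """Ask a local Llama 3.2 model (via Ollama) to classify a request.

    Assumes the `ollama` Python client is installed and an Ollama server
    is running locally with the `llama3.2` model available. No data leaves
    your infrastructure.
    """
    import ollama  # talks to the local Ollama server, not an external API

    response = ollama.chat(
        model="llama3.2",
        messages=[
            {
                "role": "system",
                "content": "Classify the request into exactly one of: "
                           "legal, sales, engineering, general. "
                           "Reply with that single word.",
            },
            {"role": "user", "content": user_text},
        ],
    )
    return response["message"]["content"].strip().lower()
```

A 3B-parameter model is small by frontier standards, but classification and context extraction are constrained tasks where a small local model is typically fast and accurate enough.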
### Embeddings: nomic-embed-text (Local)
Semantic understanding uses the nomic-embed-text model (274MB) for local embedding generation. When LUNA needs to find relevant context from BrainDB or match your request against organizational knowledge, it does so without external API calls.
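Matching a request against stored knowledge reduces to embedding both and comparing vectors. The sketch below assumes the `ollama` client with the `nomic-embed-text` model pulled locally; the cosine-similarity helper is standard and shown in full because it is the entire "matching" step.

```python
import math

def embed_local(text: str) -> list[float]:
    """Generate an embedding with nomic-embed-text via a local Ollama server.

    Assumes the `ollama` client is installed and the model has been pulled;
    no external API call is made.
    """
    import ollama

    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Rank candidate context by how close its embedding is to the query's."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```

To find relevant context, LUNA-style retrieval would embed the incoming request once, then score it against pre-computed embeddings of stored knowledge and take the top matches.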
### Fallback: Cloud (When Necessary)
For tasks that genuinely require frontier model capabilities — complex reasoning, nuanced document analysis, creative generation — LUNA can fall back to Claude or other cloud providers. But this fallback is explicit, audited, and opt-in. The default path is always local.
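"Explicit, audited, and opt-in" can be expressed as a small policy check. This is a hypothetical sketch (the field names, task labels, and function are illustrative): cloud use requires an explicit flag, is restricted to an allowlist of task types, and leaves an audit record every time it fires.

```python
from dataclasses import dataclass

@dataclass
class FallbackPolicy:
    """Hypothetical policy object: local-only unless explicitly opted in."""
    cloud_enabled: bool = False  # opt-in: defaults to local
    allowed_tasks: frozenset = frozenset(
        {"complex_reasoning", "document_analysis", "creative_generation"}
    )

def choose_backend(task: str, policy: FallbackPolicy, audit_log: list) -> str:
    """Return "local" or "cloud"; every cloud decision is recorded."""
    if policy.cloud_enabled and task in policy.allowed_tasks:
        audit_log.append({"event": "cloud_fallback", "task": task})
        return "cloud"
    return "local"  # the default path is always local
```

With the default policy, every task stays local; flipping `cloud_enabled` on routes only the allowlisted task types to the cloud, and each such routing is logged.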
## Surfaces
LUNA is available wherever your team works:
- Command Center: Integrated chat interface within ODIN's web application
- REST and WebSocket APIs: For custom integrations and automated workflows
- CLI: The `cliff` command-line tool for developers who live in the terminal
Each surface connects to the same underlying LUNA service, which means context is shared across all interaction modes. A conversation started in the Command Center can be continued from the CLI without losing context.
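Surface-independent context boils down to one design decision: conversation state lives in the shared service, keyed by conversation, not by surface. A minimal sketch (class and field names are hypothetical):

```python
from collections import defaultdict

class ConversationStore:
    """Hypothetical shared store: every surface reads and writes the
    same conversation record, so context survives surface switches."""

    def __init__(self):
        self._turns = defaultdict(list)  # conversation_id -> ordered turns

    def append(self, conversation_id: str, surface: str, text: str) -> None:
        self._turns[conversation_id].append({"surface": surface, "text": text})

    def history(self, conversation_id: str) -> list:
        """Full history, regardless of which surface each turn came from."""
        return list(self._turns[conversation_id])

store = ConversationStore()
store.append("conv-1", "command-center", "Summarize yesterday's Acme discussion")
store.append("conv-1", "cli", "Now draft the follow-up email")
print(len(store.history("conv-1")))  # → 2
```

Because both turns land in the same record, the CLI request inherits the Command Center context automatically; no surface owns the conversation.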
## Governance by Default
Every interaction with LUNA creates an audit event. This is non-negotiable and cannot be disabled. The rationale is simple: if your AI interface is capturing organizational context and routing decisions, there must be a record of what happened.
Specifically, LUNA logs:
- Every transcription with source attribution
- Every routing decision with the classification rationale
- Every hub dispatch with the context provided
- Every response returned to the user
This audit trail is not about surveillance. It is about organizational learning. When a routing decision was wrong, when a hub response was inadequate, when context was missing — the audit trail tells you exactly what happened so you can improve the system.
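The four event types above share one shape: what happened, who triggered it, supporting detail, and when. A hypothetical sketch of that record and an append-only trail (field names are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    """One immutable audit record; the detail dict carries the rationale."""
    event_type: str  # "transcription" | "routing" | "dispatch" | "response"
    actor: str       # who or what triggered the event
    detail: dict     # e.g. classification rationale, context provided
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    """Append-only: events can be recorded and read, never removed."""

    def __init__(self):
        self._events: list[AuditEvent] = []

    def record(self, event: AuditEvent) -> None:
        self._events.append(event)

    def events(self) -> tuple:
        return tuple(self._events)  # read-only view for review tooling
```

Making events immutable and the trail append-only is what turns the log into an organizational-learning tool: when a routing decision goes wrong, the rationale recorded at decision time is still there, unaltered.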
## What LUNA Is Not
LUNA is not an autonomous agent that takes actions without approval. When a request requires action — generating a contract, writing code, making a purchase decision — LUNA routes to the appropriate hub, which applies its own governance rules including approval workflows and risk flagging.
LUNA is not a replacement for thinking. It captures and routes, but the judgment calls remain with your team. The Compass Hub can flag that a decision increases founder dependency. The Legal Hub can veto a Sales promise. LUNA facilitates these governance checks; it does not override them.
LUNA is not a data silo. Everything it captures flows into BrainDB's governed namespace structure, where it becomes part of the organization's queryable memory with full rationale, ownership, and dependency tracking.
## The Interface Your Organization Deserves
The difference between LUNA and a conventional chatbot is the difference between a switchboard operator and a sticky note. One understands context, routes intelligently, and creates a record. The other gives you an answer and moves on.
Your organization's collective intelligence deserves an interface that treats every interaction as valuable context — not disposable chat.
Want to see LUNA in action? Request a demo and experience the difference.