Core-X Documentation
Core-X is a modular AI ecosystem designed for Apple Silicon Macs. It runs a constellation of MLX-accelerated services — LLM, RAG, vision, audio, embeddings, image generation, video generation, and speech-driven animation — all locally, with no data leaving the machine. A unified React + Three.js interface adapts to the active context (chat, canvas, voice avatar, image workflow, code, research, etc.), communicating with services through a central gateway and an SSE-based event bus.
The system is built around three pillars: independent houses (domain-specific workspaces that share infrastructure but retain creative autonomy), an anthology of documentation and research, and a model zoo that tracks every model artifact used across the ecosystem.
Spacebot is the orchestrator and MCP client — it provides the Studio and Stories UI surfaces and proxies tool calls to Narrative MCP and Stories MCP servers. A shared media runtime layer (TTS, frames, video, voice resolution) is reused by both MCPs.
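To make the proxying concrete, here is a hedged sketch of how an orchestrator like Spacebot might route a namespaced tool call to one of its MCP servers. The routing table, server ids, and tool names are illustrative assumptions, not taken from Core-X; only the JSON-RPC 2.0 envelope with the `tools/call` method follows the MCP specification.

```python
import json

# Hypothetical routing table: tool namespace -> MCP server id (assumed names).
MCP_ROUTES = {
    "narrative": "narrative-mcp",
    "stories": "stories-mcp",
}

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> tuple[str, str]:
    """Return (target server id, JSON-RPC payload) for a namespaced tool
    name like 'narrative/compose_scene'."""
    namespace, _, short_name = tool_name.partition("/")
    server = MCP_ROUTES[namespace]
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",          # standard MCP tool-invocation method
        "params": {"name": short_name, "arguments": arguments},
    })
    return server, payload

server, payload = build_tool_call("stories/render_frame", {"shot": 3})
print(server)                         # → stories-mcp
print(json.loads(payload)["method"])  # → tools/call
```

In a real client, the payload would be written to the chosen server's transport (stdio or HTTP) and the matching JSON-RPC response awaited by `id`.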
Design Principles
- Local-first, private by default — All inference on Apple Silicon via MLX. No cloud APIs, no telemetry, no data exfiltration.
- Configuration-driven — 70 JSON schemas govern every entity. Services, agents, flows, skills, and houses are all declared, never ad-hoc.
- Tier-graduated resource management — Start with 3 core services on 16 GB; scale to 10+ services on 64 GB+.
- OpenResponses as canonical protocol — Every LLM-capable service speaks `POST /v1/responses` with SSE streaming. Legacy `/v1/chat/completions` is removed.
- House modularity — 24 creative houses operate independently on shared infra. Each registers capabilities via the core registry system.
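A minimal sketch of consuming the canonical endpoint follows. The gateway address, request fields, and event shapes are assumptions for illustration; the document only guarantees `POST /v1/responses` with SSE streaming.

```python
import json

GATEWAY = "http://localhost:8080"  # hypothetical gateway address

def build_request(prompt: str, model: str) -> dict:
    # Assumed OpenResponses-style body; field names are illustrative.
    return {"model": model, "input": prompt, "stream": True}

def iter_sse_data(lines):
    """Yield the JSON payload of each `data:` line in an SSE stream,
    stopping at the conventional [DONE] sentinel."""
    for line in lines:
        if not line.startswith("data:"):
            continue
        chunk = line[len("data:"):].strip()
        if chunk == "[DONE]":
            return
        yield json.loads(chunk)

# With e.g. `requests`, you would POST build_request(...) to
# f"{GATEWAY}/v1/responses" and feed response.iter_lines() into iter_sse_data.
sample = [
    'data: {"delta": "Hel"}',
    'data: {"delta": "lo"}',
    "data: [DONE]",
]
text = "".join(event["delta"] for event in iter_sse_data(sample))
print(text)  # → Hello
```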
Quick Facts
- Core-X is a local-first, privacy-preserving AI platform running entirely on Apple Silicon via MLX
- All LLM inference, vision, audio, embeddings, and animation happen on-device — zero external API calls
- Services are organized into tiers (core / standard / full) that scale with your RAM budget
- The Unified UI (React + Three.js) adapts across 14 modes (chat, canvas, voice, image, video, etc.)
- The canonical LLM protocol is OpenResponses (`POST /v1/responses`) — not `/v1/chat/completions`
- Spacebot orchestrates narrative and story workflows as the MCP client for Narrative MCP and Stories MCP
- Media stack: mflux for stills/styleframes, LTX-2 MLX for motion, mlx-audio + Qwen cloning for voice
- Canonical local model path is `model-zoo/models/<category>/<model-name>/` (legacy `mlx-models/` removed)
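The canonical layout can be resolved with a one-line helper. This is a hedged sketch: the repository root, category, and model name below are illustrative, not actual Core-X artifacts.

```python
from pathlib import Path

def model_path(root: Path, category: str, name: str) -> Path:
    """Map (category, model name) to its canonical directory under
    model-zoo/models/<category>/<model-name>/."""
    return root / "model-zoo" / "models" / category / name

# Hypothetical root and model, for illustration only.
p = model_path(Path("/opt/core-x"), "llm", "example-7b-instruct")
print(p)  # → /opt/core-x/model-zoo/models/llm/example-7b-instruct
```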