Architecture

The Worldmodel platform follows a straightforward pipeline: data sources feed into ingestion pipelines, which update a temporal knowledge graph, which feeds a continuous reasoning loop powered by large language models, which triggers actions through a tool-use interface, which presents decisions and results to humans at the edge. Every step is logged, auditable, and reversible.

Ingestion

The platform connects to the systems the company already uses: Slack, Gmail, Google Drive, Notion, Linear, Jira, Salesforce, HubSpot, ERP systems, and others. Connectors are incremental, not batch. When a message is sent or a deal moves, the world model updates within seconds. We don't ask companies to migrate data or change workflows. The world model builds itself from the systems that already exist.
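
The incremental design can be sketched as an event handler that applies each change the moment it arrives, with no batch window. This is a minimal illustration; the event shape and `WorldModelStore` are assumptions for the sketch, not the platform's real API.

```python
# Sketch of an incremental connector: each change event is applied
# immediately rather than accumulated into a batch.
# ChangeEvent and WorldModelStore are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeEvent:
    source: str            # e.g. "slack", "salesforce"
    entity_id: str         # source-native identifier
    payload: dict          # the fields that changed
    observed_at: datetime

@dataclass
class WorldModelStore:
    entities: dict = field(default_factory=dict)

    def apply(self, event: ChangeEvent) -> None:
        # Merge the change as soon as it is observed.
        current = self.entities.setdefault(event.entity_id, {})
        current.update(event.payload)
        current["_last_seen"] = event.observed_at

store = WorldModelStore()
store.apply(ChangeEvent("salesforce", "opp-42",
                        {"stage": "negotiation"},
                        datetime.now(timezone.utc)))
```

A batch connector would instead collect events and flush on a schedule; applying each event as it arrives is what keeps the model seconds, not hours, behind reality.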

Each connector is scoped to the minimum permissions required. Ingestion pipelines handle deduplication, normalization, and entity resolution across sources. A mention of "the Renault project" in Slack, a record in Salesforce, and a thread in Gmail are resolved to the same entity in the graph.

Storage

The core data structure is a temporal knowledge graph: entities, relationships, and attributes, each with a full history of changes over time. The default storage layer is Postgres with pgvector for embedding-based retrieval. For customers with specific requirements, we support managed graph databases as an alternative. In all cases, data lives in the customer's own cloud, in the customer's own account.

Temporality is not optional. The ability to reason about how the organization has changed over time is what separates a world model from a snapshot. The graph stores not just the current state of every entity and relationship, but the full sequence of states, with timestamps, provenance, and causal links.
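
The append-only history described above can be sketched as a temporal attribute that never overwrites, which is what makes "as of" queries possible. The field names and provenance shape here are assumptions for illustration, not the platform's storage schema.

```python
# Sketch of a temporal attribute: every write appends a timestamped
# state, so the graph can answer what was true at any point in time.
from bisect import bisect_right
from datetime import datetime

class TemporalAttribute:
    def __init__(self):
        self._history = []  # (timestamp, value, provenance), time-ordered

    def set(self, value, at: datetime, provenance: str) -> None:
        self._history.append((at, value, provenance))
        self._history.sort(key=lambda rec: rec[0])

    def as_of(self, when: datetime):
        # Latest state whose timestamp is <= `when`.
        i = bisect_right([t for t, _, _ in self._history], when)
        return self._history[i - 1][1] if i else None

stage = TemporalAttribute()
stage.set("discovery",   datetime(2024, 1, 10), "salesforce:opp-42")
stage.set("negotiation", datetime(2024, 3, 2),  "salesforce:opp-42")

stage.as_of(datetime(2024, 2, 1))  # -> "discovery"
stage.as_of(datetime(2024, 4, 1))  # -> "negotiation"
```

A snapshot store could only answer the second query; the history is what lets the model reason about how the deal changed, and why.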

Reasoning

The reasoning loop runs continuously. It processes incoming signals against the full context of the temporal graph, identifies patterns, detects anomalies, surfaces insights, and generates action plans. The loop is powered by large language models, accessed via the customer's chosen provider: Anthropic on Vertex AI, Amazon Bedrock, or Azure OpenAI. All inference runs through private endpoints with zero data retention, contractually enforced.
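
One iteration of that loop can be sketched as: signal in, temporal context assembled from the graph, model call out. Everything below is a placeholder sketch; the graph query, signal shape, and the stub model are assumptions, and in production the call would go to the customer's private model endpoint, not a local function.

```python
# Skeleton of one reasoning-loop iteration: a signal is evaluated
# against graph context and produces an action plan.

def reasoning_step(signal: dict, graph_context: list[str],
                   call_model) -> dict:
    """Signal + temporal context in, action plan out."""
    prompt_context = "\n".join(graph_context)
    plan = call_model(signal=signal, context=prompt_context)
    return {"signal": signal, "plan": plan}

# Stub model so the sketch runs offline; in production this would be
# a private endpoint (Vertex AI, Bedrock, or Azure OpenAI) with zero
# data retention.
def stub_model(signal: dict, context: str) -> str:
    return f"review {signal['entity']} with its account owner"

result = reasoning_step(
    {"entity": "opp-42", "kind": "stage_change"},
    ["opp-42 moved to negotiation on 2024-03-02"],
    stub_model,
)
```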

The quality of the reasoning is not determined by the model's general capabilities. It is determined by the quality of the context. And the context comes from the temporal graph, not from a prompt. This is why a world model produces better reasoning than a general-purpose AI assistant: the context is structured, complete, and temporal, not ad hoc and ephemeral.

AI workers

We build custom autonomous AI workers for each customer. These workers live inside the messaging apps the company already uses: Slack, Teams, WhatsApp, or any other channel where work happens. They talk with humans at the edge, the people closest to the work, in natural language, in the tools those people already have open.

Each AI worker is connected to the world model. When a worker receives information from a human, it updates the world model. When the world model identifies something that requires human attention, it routes that signal through the right worker to the right person. The workers are the interface between the world model and the organization. They distribute tasks, surface context, collect updates, and keep the model in sync with reality.

Every action a worker takes is executed through an MCP-style tool interface: API calls, writes to downstream systems, message sending, document creation. Every tool invocation is logged with full context: what triggered it, what reasoning produced it, what data it accessed. A kill switch is available at all times. One button freezes all workers instantly, across the entire deployment. No action is taken without a complete audit trail, and every action can be rolled back.
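
The invariants in this paragraph, log before acting and honor the kill switch on every call, can be sketched as a wrapper around tool invocation. The record shape and the `KILLED` flag are illustrative assumptions, not the real interface.

```python
# Sketch of an audited tool invocation: every call is logged with its
# trigger and reasoning before the action runs, and a global kill
# switch freezes all workers.
from datetime import datetime, timezone

KILLED = False            # the one-button kill switch (assumption)
audit_log: list[dict] = []

def invoke_tool(name: str, args: dict, trigger: str, reasoning: str):
    if KILLED:
        raise RuntimeError("deployment frozen: kill switch engaged")
    record = {
        "tool": name, "args": args,
        "trigger": trigger, "reasoning": reasoning,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(record)   # log before acting, never after
    return {"status": "ok", "record": record}

invoke_tool("send_message",
            {"channel": "#sales-emea", "text": "Still on track?"},
            trigger="stage_change:opp-42",
            reasoning="deal moved to negotiation; owner should confirm")
```

Rollback follows from the same records: because each invocation captures what it did and why, a compensating action can be derived from the log entry.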

Observability

Every read, every reasoning step, every action is logged and exportable. The platform provides a full audit trail that can be piped to the customer's existing SIEM, exported for compliance review, or queried directly. There is no black box. The world model's reasoning is transparent by design.

Design principles

Every architectural decision follows three principles. First, the customer owns the data plane. We never hold customer data. Second, every action is auditable and reversible. There is no black box anywhere in the stack. Third, the system builds on what already exists. No migrations, no new tools to adopt, no workflow changes required. The world model assembles itself from the infrastructure the company already runs.

Read the security and deployment details →
How this differs from a knowledge graph →