AI Orchestrator
The conversational interface that translates natural language into verified synthetic data through a typed tool surface and multi-provider LLM routing. Engine selection and routing are fully automatic.
Architecture
The AI Orchestrator is a unified system with two surfaces: a single-turn front door for straightforward requests, and an autonomous multi-step executor for more complex work. Both share the same tool surface, LLM routing, policy layer, and audit infrastructure.
| Component | Role |
|---|---|
| Single-turn front door | Handles one user message, makes one tool call, returns an immediate response |
| Autonomous multi-step executor | Plans multi-step pipelines, executes with self-healing, delegates complex goals |
| Typed tool surface | Catalogue of typed, schema-validated operations with approval policy |
| Multi-provider LLM routing | Provider-neutral LLM access (OpenAI, Anthropic, Gemini, Azure, self-hosted) with automatic failover |
| Policy layer | Enforces plan limits, seal boundaries, and per-tenant budgets |
| PII redaction | Detects and redacts sensitive data before it leaves the session |
Automatic Intent Routing
The orchestrator analyses every user message to determine intent and automatically routes to the appropriate engine and tool set. There is no manual selection step.
| User Intent | Routing | Surface |
|---|---|---|
| Generate data from a description | Mock Engine | Single-turn |
| Train and synthesize from uploaded data | AI Synthesize | Single-turn or autonomous |
| Generate with hard domain constraints | Constrained Synthesis | Single-turn or autonomous |
| Simulate SCADA/OT plant telemetry | Virtual SCADA | Single-turn or autonomous |
| Create ICS attack datasets | ICS Security | Single-turn or autonomous |
| Complex multi-step pipeline | Auto (cross-engine) | Autonomous |
ℹ️ A single-turn request that turns out to need multi-step execution is delegated to the autonomous executor automatically.
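The routing table above can be sketched as a simple lookup. This is an illustrative sketch only: the real intent classification is LLM-driven, and the intent labels used as keys here are assumptions, not platform identifiers.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Route:
    engine: str
    surface: str  # "single-turn", "single-turn-or-autonomous", or "autonomous"


# Mirrors the routing table; intent labels are hypothetical.
ROUTES = {
    "generate_from_description": Route("Mock Engine", "single-turn"),
    "train_and_synthesize":      Route("AI Synthesize", "single-turn-or-autonomous"),
    "constrained_generation":    Route("Constrained Synthesis", "single-turn-or-autonomous"),
    "scada_simulation":          Route("Virtual SCADA", "single-turn-or-autonomous"),
    "ics_attack_dataset":        Route("ICS Security", "single-turn-or-autonomous"),
    "multi_step_pipeline":       Route("Auto (cross-engine)", "autonomous"),
}


def route(intent: str) -> Route:
    """Resolve an intent label to its target engine and execution surface."""
    try:
        return ROUTES[intent]
    except KeyError:
        raise ValueError(f"unknown intent: {intent!r}")
```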
Tool Categories
Tools are classified into four categories that determine their approval requirements:
| Category | Approval | Examples |
|---|---|---|
| Read-only | None | Status and listing operations over your own resources |
| Deterministic | When artefacts created | Seed-reproducible data operations that do not require training |
| Authoring | Required | Operations that draft or modify job specifications and scenarios |
| Compute-heavy | Always | Training, large-scale synthesis, and constraint-safe sampling |
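The approval rules in the table reduce to a small decision function. The sketch below is an assumption-laden illustration: the category strings and the `creates_artefacts` flag are invented for clarity and are not the platform's actual schema.

```python
def approval_required(category: str, creates_artefacts: bool = False) -> bool:
    """Return True when a tool call must go through the plan/approval flow."""
    if category == "read-only":
        return False  # status and listing operations never need approval
    if category == "deterministic":
        return creates_artefacts  # approval only when artefacts are created
    if category in ("authoring", "compute-heavy"):
        return True  # always gated behind a plan
    raise ValueError(f"unknown tool category: {category!r}")
```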
Plan and Approval Protocol
When a tool requires approval, the orchestrator returns a plan — a structured description of the operations to be executed, along with estimates and expected outputs. The user must approve or reject the plan before execution.
A plan includes the selected operations, estimated latency and cost class, the determinism seed, and the list of artefacts the job will produce. The approval flow is: the orchestrator stores the plan; the user approves or rejects it; on approval, the orchestrator executes the plan, persists audit logs, and returns job IDs and evidence references.
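A minimal sketch of the plan object and its approval transition, assuming hypothetical field names (the actual plan schema is not shown in this document):

```python
from dataclasses import dataclass, field


@dataclass
class Plan:
    operations: list          # selected operations, in execution order
    estimated_latency_s: float
    cost_class: str           # e.g. "low" / "medium" / "high" (illustrative)
    seed: int                 # determinism seed for reproducible output
    artefacts: list = field(default_factory=list)  # artefacts the job will produce
    status: str = "pending"   # pending -> approved | rejected


def approve(plan: Plan) -> Plan:
    """User approval transitions the plan so the orchestrator may execute it."""
    if plan.status != "pending":
        raise RuntimeError(f"plan already {plan.status}")
    plan.status = "approved"
    return plan
```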
Multi-Provider LLM Routing
The platform provides multi-provider LLM support with automatic failover:
- OpenAI, Anthropic, Gemini, Azure OpenAI, and OpenAI-compatible self-hosted models
- Per-tenant bring-your-own-key support (bypasses the AI request cap)
- Provider priority list with deterministic fallback ordering
- Centralised token accounting for budget enforcement
- Every call emits provider, model, token usage, latency, and request identifier
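The deterministic fallback ordering described above can be sketched as a loop over a fixed priority list. Provider client calls here are stand-in callables; the real adapters, token accounting, and telemetry emission are richer than this.

```python
# Hypothetical priority order; tenants configure their own providers.
PROVIDER_PRIORITY = ["openai", "anthropic", "gemini", "azure-openai", "self-hosted"]


def complete(prompt: str, clients: dict):
    """Try providers in fixed priority order; return (provider_name, response)."""
    errors = {}
    for name in PROVIDER_PRIORITY:
        client = clients.get(name)
        if client is None:
            continue  # provider not configured for this tenant
        try:
            return name, client(prompt)
        except Exception as exc:  # any provider failure triggers failover
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {errors}")
```

Because the priority list is fixed, two identical failure patterns always produce the same fallback path, which keeps routing behaviour reproducible in the audit trail.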
Session Endpoints
| Method | Path | Description |
|---|---|---|
| POST | /v1/client/chat/sessions | Create a new chat session |
| POST | /v1/client/chat/sessions/{id}/messages | Send a message, receive a response and optional recipe |
| POST | /v1/client/chat/sessions/{id}/turns/{tid}/approve | Approve the recipe and execute |
| POST | /v1/client/chat/sessions/{id}/turns/{tid}/reject | Reject the recipe |
| GET | /v1/client/chat/sessions/{id}/messages | Paged message history |
| GET | /v1/client/chat/sessions/{id}/state | Session state and resource summary |
| GET | /v1/client/chat/sessions/{id}/stream | SSE live stream (events: connected, tool_call, message, done, approval_required) |
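The session endpoints can be exercised with small client-side helpers. This sketch only builds the requests (method, path, JSON body) rather than sending them; it assumes the per-session routes share the `/v1/client/chat` prefix of the session-creation route, and the payload field names are hypothetical.

```python
import json

BASE = "/v1/client/chat/sessions"


def create_session():
    """POST to create a new chat session."""
    return ("POST", BASE, json.dumps({}))


def send_message(session_id: str, text: str):
    """POST a user message; the response may carry a recipe needing approval."""
    return ("POST", f"{BASE}/{session_id}/messages", json.dumps({"content": text}))


def approve_turn(session_id: str, turn_id: str):
    """POST approval for a turn's recipe so the orchestrator executes it."""
    return ("POST", f"{BASE}/{session_id}/turns/{turn_id}/approve", json.dumps({}))
```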
Audit Trail
Every orchestrator session produces a complete audit trail stored across six tables: chat turns (recipe + approval state), tool call logs (name, category, args, timing), tool result logs, assistant trace artefacts (prompt hash, output hash), resource references, and LLM usage logs (token counts for billing).
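The six audit tables can be summarised as the following shape. Table and column names here are illustrative assumptions drawn from the description above, not the actual database schema.

```python
# Hypothetical column layout for the six audit tables.
AUDIT_TABLES = {
    "chat_turns":       ["turn_id", "recipe", "approval_state"],
    "tool_call_logs":   ["tool_name", "category", "args", "started_at", "duration_ms"],
    "tool_result_logs": ["tool_call_id", "result"],
    "assistant_traces": ["prompt_hash", "output_hash"],
    "resource_refs":    ["session_id", "resource_id", "resource_type"],
    "llm_usage_logs":   ["provider", "model", "prompt_tokens", "completion_tokens"],
}
```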