AI Orchestrator

The conversational interface that translates natural language into verified synthetic data through a typed tool surface and multi-provider LLM routing. Engine selection and routing are fully automatic.

Architecture

The AI Orchestrator is a unified system with two surfaces: a single-turn front door for straightforward requests, and an autonomous multi-step executor for more complex work. Both share the same tool surface, LLM routing, policy layer, and audit infrastructure.

| Component | Role |
| --- | --- |
| Single-turn front door | Handles one user message, makes one tool call, returns an immediate response |
| Autonomous multi-step executor | Plans multi-step pipelines, executes with self-healing, delegates complex goals |
| Typed tool surface | Catalogue of typed, schema-validated operations with approval policy |
| Multi-provider LLM routing | Provider-neutral LLM access (OpenAI, Anthropic, Gemini, Azure, self-hosted) with automatic failover |
| Policy layer | Enforces plan limits, seal boundaries, and per-tenant budgets |
| PII redaction | Detects and redacts sensitive data before it leaves the session |

Automatic Intent Routing

The orchestrator analyses every user message to determine intent and automatically routes to the appropriate engine and tool set. There is no manual selection step.

| User Intent | Routing | Surface |
| --- | --- | --- |
| Generate data from a description | Mock Engine | Single-turn |
| Train and synthesize from uploaded data | AI Synthesize | Single-turn or autonomous |
| Generate with hard domain constraints | Constrained Synthesis | Single-turn or autonomous |
| Simulate SCADA/OT plant telemetry | Virtual SCADA | Single-turn or autonomous |
| Create ICS attack datasets | ICS Security | Single-turn or autonomous |
| Complex multi-step pipeline | Auto (cross-engine) | Autonomous |

Single-turn delegates to autonomous automatically

When the single-turn surface detects that a user goal requires multiple steps (train, then synthesize, then validate), it automatically delegates to the autonomous executor. The user sees a seamless experience.
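The routing and delegation behaviour above can be sketched as follows. This is a simplified illustration: the keyword matching, the `ROUTES` table, and the multi-step heuristic are all hypothetical stand-ins for the real LLM-based intent classifier.

```python
from dataclasses import dataclass

# Hypothetical keyword routing table mirroring the intent matrix above;
# the actual orchestrator classifies intent with an LLM, not keywords.
ROUTES = {
    "mock": ("Mock Engine", "single-turn"),
    "synthesize": ("AI Synthesize", "single-turn"),
    "constraint": ("Constrained Synthesis", "single-turn"),
    "scada": ("Virtual SCADA", "single-turn"),
    "ics": ("ICS Security", "single-turn"),
}


@dataclass
class Route:
    engine: str
    surface: str


def route(message: str) -> Route:
    """Pick an engine from the message, then escalate to the
    autonomous executor when the goal implies several steps."""
    text = message.lower()
    engine, surface = next(
        (v for k, v in ROUTES.items() if k in text),
        ("Auto (cross-engine)", "autonomous"),  # cross-engine fallback
    )
    # Single-turn delegates automatically when multiple steps are implied
    # (e.g. "train, then synthesize, then validate").
    if sum(word in text for word in ("train", "then", "validate")) >= 2:
        surface = "autonomous"
    return Route(engine, surface)
```

The key property is that escalation is invisible to the caller: the same `route` entry point serves both surfaces, matching the "seamless experience" described above.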

Tool Categories

Tools are classified into four categories that determine approval requirements:

| Category | Approval | Examples |
| --- | --- | --- |
| Read-only | None | Status and listing operations over your own resources |
| Deterministic | When artefacts created | Seed-reproducible data operations that do not require training |
| Authoring | Required | Operations that draft or modify job specifications and scenarios |
| Compute-heavy | Always | Training, large-scale synthesis, and constraint-safe sampling |
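The approval rules in the table reduce to a small decision function. A minimal sketch (the enum values and function name are illustrative, not the platform's actual identifiers):

```python
from enum import Enum


class Category(Enum):
    READ_ONLY = "read-only"
    DETERMINISTIC = "deterministic"
    AUTHORING = "authoring"
    COMPUTE_HEAVY = "compute-heavy"


def needs_approval(category: Category, creates_artefacts: bool = False) -> bool:
    """Apply the approval rules from the category table."""
    if category is Category.READ_ONLY:
        return False          # never requires approval
    if category is Category.DETERMINISTIC:
        return creates_artefacts  # approval only when artefacts are created
    return True               # authoring and compute-heavy always require approval
```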

Plan and Approval Protocol

When a tool requires approval, the orchestrator returns a plan — a structured description of the operations to be executed, along with estimates and expected outputs. The user must approve or reject the plan before execution.

A plan includes the selected operations, estimated latency and cost class, the determinism seed, and the list of artefacts the job will produce. The approval flow proceeds in order: the orchestrator stores the plan; the user approves or rejects it; on approval the orchestrator executes, persists audit logs, and returns job IDs and evidence references.
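The plan structure and its approve/reject lifecycle can be modelled as a small state machine. The field names, `job-N` identifiers, and method names below are hypothetical; they simply mirror the contents and flow described above.

```python
from dataclasses import dataclass, field


@dataclass
class Plan:
    """Sketch of a stored plan awaiting user approval."""
    operations: list[str]          # selected operations, in execution order
    latency_estimate_s: float      # estimated latency
    cost_class: str                # e.g. "light" or "heavy" (illustrative values)
    seed: int                      # determinism seed
    artefacts: list[str]           # artefacts the job will produce
    status: str = "pending"        # pending -> approved | rejected

    def approve(self) -> list[str]:
        """On approval, execution runs and job IDs are returned."""
        if self.status != "pending":
            raise ValueError(f"plan already {self.status}")
        self.status = "approved"
        return [f"job-{i + 1}" for i in range(len(self.operations))]

    def reject(self) -> None:
        if self.status != "pending":
            raise ValueError(f"plan already {self.status}")
        self.status = "rejected"
```

Making `pending` the only state from which a transition is legal guarantees a plan cannot be executed twice or executed after rejection.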

Multi-provider LLM routing

The platform provides multi-provider LLM support with automatic failover:

  • OpenAI, Anthropic, Gemini, Azure OpenAI, and OpenAI-compatible self-hosted models
  • Per-tenant bring-your-own-key support (bypasses the AI request cap)
  • Provider priority list with deterministic fallback ordering
  • Centralised token accounting for budget enforcement
  • Every call emits provider, model, token usage, latency, and request identifier
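Deterministic fallback ordering means the provider priority list is walked in a fixed order and the first success wins. A minimal sketch, with providers modelled as plain callables (the function name and signature are illustrative):

```python
from typing import Callable, Mapping, Sequence


def call_with_failover(
    prompt: str,
    providers: Mapping[str, Callable[[str], str]],
    priority: Sequence[str],
) -> tuple[str, str]:
    """Try providers in the configured priority order; failover is
    deterministic because the order never depends on runtime state."""
    errors: dict[str, Exception] = {}
    for name in priority:
        try:
            return name, providers[name](prompt)
        except Exception as exc:  # any provider failure triggers failover
            errors[name] = exc
    raise RuntimeError(f"all providers failed: {list(errors)}")
```

In the real system each call would also emit the provider, model, token usage, latency, and request identifier noted above; that instrumentation is omitted here for brevity.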

Session Endpoints

| Method | Path | Description |
| --- | --- | --- |
| POST | /v1/client/chat/sessions | Create a new chat session |
| POST | /sessions/{id}/messages | Send message, receive response and optional recipe |
| POST | /sessions/{id}/turns/{tid}/approve | Approve recipe and execute |
| POST | /sessions/{id}/turns/{tid}/reject | Reject recipe |
| GET | /sessions/{id}/messages | Paged message history |
| GET | /sessions/{id}/state | Session state and resource summary |
| GET | /sessions/{id}/stream | SSE live stream (events: connected, tool_call, message, done, approval_required) |
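A thin client over these endpoints might look like the sketch below. The transport is injected as a callable so the wrapper stays testable without a live server; the `fake_transport` payload shapes are assumptions, not the API's documented response bodies.

```python
from typing import Callable

Transport = Callable[[str, str, dict], dict]  # (method, path, body) -> JSON dict


class ChatClient:
    """Minimal wrapper over the session endpoints listed above."""

    def __init__(self, transport: Transport):
        self._send = transport

    def create_session(self) -> str:
        return self._send("POST", "/v1/client/chat/sessions", {})["id"]

    def send_message(self, sid: str, text: str) -> dict:
        return self._send("POST", f"/sessions/{sid}/messages", {"text": text})

    def approve(self, sid: str, tid: str) -> dict:
        return self._send("POST", f"/sessions/{sid}/turns/{tid}/approve", {})

    def reject(self, sid: str, tid: str) -> dict:
        return self._send("POST", f"/sessions/{sid}/turns/{tid}/reject", {})


def fake_transport(method: str, path: str, body: dict) -> dict:
    """Stand-in transport; real response shapes may differ."""
    if path.endswith("/chat/sessions"):
        return {"id": "sess-1"}
    return {"method": method, "path": path, "echo": body}
```

Swapping `fake_transport` for an HTTP-backed callable (and an SSE reader for the `/stream` endpoint) turns the same class into a working client.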

Audit Trail

Every orchestrator session produces a complete audit trail stored across six tables: chat turns (recipe + approval state), tool call logs (name, category, args, timing), tool result logs, assistant trace artefacts (prompt hash, output hash), resource references, and LLM usage logs (token counts for billing).
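The trace-artifact table stores hashes rather than raw prompts and outputs, which keeps the trail tamper-evident without retaining sensitive text. A sketch of that one record type (field and function names are illustrative; the hash algorithm is an assumption):

```python
import hashlib
from dataclasses import dataclass


def digest(text: str) -> str:
    """Content hash for audit records (SHA-256 assumed here)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


@dataclass
class TraceArtifact:
    """Mirrors the assistant trace table: prompt and output hashes
    tie a turn to its exact inputs and outputs without storing them."""
    turn_id: str
    prompt_hash: str
    output_hash: str


def record_trace(turn_id: str, prompt: str, output: str) -> TraceArtifact:
    return TraceArtifact(turn_id, digest(prompt), digest(output))
```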