Synthetic data with cryptographic evidence.
From a natural-language prompt to audit-ready datasets — with privacy guarantees, statistical fidelity, and industrial-control simulation built in. Pick your interface, mint a key, and ship in minutes.
High-fidelity
The flagship engine plus parametric and diffusion engines, trained on your data and benchmarked against utility scores you can reproduce.
Auditable
Every run ships a 9-artifact evidence bundle: distribution reports, correlation matrices, privacy metrics, Merkle-rooted BLAKE3 proofs.
Private by default
Tenant isolation, least-privilege IAM, SSO/SCIM, bring-your-own CA, offline signing, and an on-prem/air-gapped deployment path.
Pick your path
Every interface reaches the same backend and produces the same evidence bundle. Start with whichever matches your workflow — you can switch later.
GUI
Click through the whole platform in a browser.
open https://app.radmah.ai
Best for: Domain experts who want the visual dashboard, approval gates, and live evidence previews.
REST API
Every endpoint, schema, and error code.
curl https://api.radmah.ai/v1/jobs \
  -H "Authorization: Bearer $RADMAH_API_KEY" \
  -d '{ "engine": "mock",
        "rows": 10000,
        "prompt": "customer support tickets" }'
Best for: Anything that speaks HTTP, including Go, Rust, Ruby, .NET, curl, Postman, and custom Lambdas.
Python SDK
Typed, retry-aware, async-first client.
from radmah_sdk import RadMah

client = RadMah(api_key="sl_live_…")
job = client.jobs.submit(
    engine="mock",
    rows=10_000,
    prompt="customer support tickets",
)
bundle = client.jobs.wait(job.id).download_evidence()
Best for: Data scientists, ML engineers, and data pipelines that already live in Python.
CLI (rady)
Terminal-first: everything the SDK does, from your shell.
rady auth login
rady mock run --rows 10000 \
  --prompt "customer support tickets"
rady evidence verify ./bundle-*.tar.zst
Best for: Shell-native workflows, CI jobs, one-off investigations, and ops playbooks.
Popular guides
Generate mock data
Describe a dataset in plain English and receive structured CSV. No upload, no training.
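The request shape is the same one the REST quickstart above shows: one POST to /v1/jobs with an engine, a row count, and a prompt. A minimal Python sketch of that request, built with the standard library only (the endpoint and field names are taken from the curl example on this page; check the API reference before relying on them):

```python
import json
import os
import urllib.request

# Endpoint and fields mirror the curl quickstart on this page.
API_URL = "https://api.radmah.ai/v1/jobs"

payload = {
    "engine": "mock",                      # no upload, no training
    "rows": 10_000,
    "prompt": "customer support tickets",  # plain-English description
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('RADMAH_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Submitting is one call away (needs a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     job = json.load(resp)
```

The commented-out `urlopen` call is left inert here so the snippet runs without credentials.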
Synthesize from your data
Upload a CSV, train the flagship engine, and produce high-fidelity synthetic data that preserves statistical properties.
Verify an evidence bundle
Offline verifier, Merkle root reconstruction, BLAKE3 proofs, Ed25519 signatures when configured.
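At its core, the offline check recomputes a Merkle root over the artifact hashes and compares it to the root recorded in manifest.json. A minimal sketch of that fold, with `hashlib.blake2b` standing in for BLAKE3 (which is not in the Python standard library); the real verifier defines the actual leaf ordering, domain separation, and manifest layout:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash: real bundles use BLAKE3, not blake2b.
    return hashlib.blake2b(data, digest_size=32).digest()

def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    """Fold a level of hashes pairwise until one root remains."""
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:  # odd level: duplicate the tail
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hash each artifact's bytes, rebuild the root, then compare it to the
# root stored in manifest.json (illustrative placeholder bytes below).
artifacts = [b"contract.json bytes", b"sample.parquet bytes", b"privacy.json bytes"]
root = merkle_root([h(a) for a in artifacts])
```

Because the root depends on every artifact byte and on leaf order, any tampered or reordered artifact changes the recomputed root and the comparison fails.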
Build a cyber-range dataset
Blend benign Virtual SCADA traffic with labelled attacks for IDS/SOC training.
Delegate to the agent
Hand a multi-step analysis to the Agentic Data Scientist — review the plan before execution.
Deploy on-premises
SSO/SCIM, bring-your-own CA, air-gapped install, tenant isolation, compliance-ready infrastructure.
Every run ships an evidence bundle
Nine artifacts, Merkle-rooted, verifiable offline. This is what auditors, compliance reviewers, and ML validation teams actually need.
bundle-job_01h…abc.tar.zst
├── manifest.json # Merkle root, artifact hashes, run metadata
├── contract.json # frozen generation contract (ContractK)
├── sample.parquet # generated rows — schema-stable columnar
├── distributions.html # marginals vs. source (KS / chi-square)
├── correlations.html # pairwise correlation heatmap + delta
├── privacy.json # k-anonymity, l-diversity, membership-inference
├── fidelity.json # MostlyAI-QA score, flagship-engine reconstruction loss
├── lineage.json # every engine step, parameters, durations
└── signature.sig # Ed25519 signature when a signing key is configured
Platform status
Live dashboard for API availability, worker health, and scheduled maintenance.
status.radmah.ai →