# Quickstart
Install the SDK, generate your first synthetic dataset, and download an evidence bundle in under 5 minutes.
## Prerequisites
- A RadMah AI account (free tier works for this guide)
- An API key from Settings → API Keys in the dashboard
> ℹ **API key prefix:** Live keys use the `sl_live_` prefix; test keys use `sl_test_`. Both work for this guide.
## Install the SDK
```shell
pip install radmah-sdk
```

## Authenticate
Create a client instance with your API key. The SDK sends it via the `X-API-Key` header automatically.

```python
from radmah_sdk import RadMahClient

client = RadMahClient(api_key="sl_live_your_key_here")
```

> ⚠ **Keep your key secret:** Never commit API keys to source control. Use environment variables or a secrets manager in production.
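Following that advice, here is one way to load the key from an environment variable with a quick prefix sanity check. This is a sketch, not part of the SDK: `RADMAH_API_KEY` and `load_api_key` are illustrative names of our own choosing.

```python
import os

def load_api_key(env_var: str = "RADMAH_API_KEY") -> str:
    """Read the RadMah API key from the environment instead of hard-coding it."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before running this script")
    # Live keys start with sl_live_, test keys with sl_test_
    if not key.startswith(("sl_live_", "sl_test_")):
        raise RuntimeError("Unexpected key format: expected an sl_live_ or sl_test_ prefix")
    return key
```

You can then construct the client with `RadMahClient(api_key=load_api_key())`.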
## Generate synthetic data
Describe the data you need in plain English. The AI Orchestrator translates your description into a sealed job specification (“sealed contract”), selects the best generation engine, and builds a plan automatically.
```python
job = client.jobs.create(
    description="customer orders for an e-commerce platform",
    rows=100_000,
)
print(f"Job {job.id} created — status: {job.status}")
```

> ✦ **No engine selection needed:** You never need to pick a generation engine. The platform analyses your description and automatically selects the optimal engine, parameters, and statistical profile.
## Approve the plan
Before generation begins, the platform produces a plan showing the schema, row count, and engine selection. Review and approve it programmatically or let it auto-approve.
```python
# Retrieve the plan
plan = client.jobs.get_plan(job_id=job.id)
print(f"Engine: {plan.engine}")
print(f"Columns: {[c.name for c in plan.columns]}")
print(f"Rows: {plan.rows}")

# Approve to start generation
client.jobs.approve(job_id=job.id)
```

> ℹ **Auto-approval:** Pass `auto_approve=True` in the Python SDK `create` call, or include `"auto_approve": true` in the REST JSON body, to skip the approval step.
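For REST callers, the request body might look like the following. The `description` and `rows` fields mirror the SDK example above; consult the REST API Reference for the authoritative endpoint and schema.

```json
{
  "description": "customer orders for an e-commerce platform",
  "rows": 100000,
  "auto_approve": true
}
```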
## Wait for completion
Poll until the job finishes. The SDK handles polling automatically.
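If you want explicit control over the poll interval or an overall timeout, a generic helper is easy to sketch. This is not part of the SDK: `fetch_status` is a hypothetical callable you supply (for example, one that re-fetches the job and returns its status string), and the terminal state names are taken from this guide.

```python
import time

TERMINAL = {"succeeded", "failed"}  # terminal job states used in this guide

def wait_for(fetch_status, interval: float = 2.0, timeout: float = 300.0) -> str:
    """Poll fetch_status() until it returns a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in TERMINAL:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job still '{status}' after {timeout}s")
        time.sleep(interval)
```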
```python
# Blocks until succeeded or failed (polls every 2s)
job.wait()
print(f"Final status: {job.status}")  # "succeeded"
```

## Download results and evidence
Every job produces a CSV dataset and a cryptographically chained, signed evidence bundle. Download both.
```python
# Download CSV
job.download("output.csv")

# Or load directly into pandas
df = job.to_dataframe()
print(df.head())

# Download the full signed evidence bundle
client.artifacts.download_bundle(job_id=job.id, path="evidence/")
```

### The signed evidence bundle
| # | Artifact | Purpose |
|---|---|---|
| 1 | Sealed Job Specification | Machine-readable record of the approved job |
| 2 | Schema Manifest | Column definitions and constraints |
| 3 | Quality Report | Statistical fidelity metrics |
| 4 | Determinism Proof | Cryptographic hash proving reproducibility |
| 5 | Privacy Analysis | Differential privacy audit |
| 6 | Bias Report | Demographic parity metrics |
| 7 | Lineage Graph | Upstream provenance chain |
| 8 | Relation-Closure Certificate | Referential integrity proof |
| 9 | Cryptographic Seal | Binding hash over every prior artifact in the bundle |
> ✦ **Evidence is always produced:** Every job produces the full signed evidence bundle unconditionally, regardless of plan tier or dataset size. This cannot be disabled.
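To make the "binding hash over every prior artifact" idea concrete, here is a minimal, hypothetical sketch of how a seal over an ordered list of artifacts could be recomputed and checked with SHA-256. The real bundle format and seal algorithm are defined by the platform, not by this snippet.

```python
import hashlib

def compute_seal(artifact_bytes: list) -> str:
    """Hash each artifact, then hash the concatenated digests in order."""
    digests = [hashlib.sha256(a).hexdigest() for a in artifact_bytes]
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def verify_seal(artifact_bytes: list, expected_seal: str) -> bool:
    """Any tampered, missing, or reordered artifact changes the recomputed seal."""
    return compute_seal(artifact_bytes) == expected_seal
```

Because the seal covers the digests in order, reordering artifacts fails verification just as surely as modifying one.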
## Next steps
- Read Core Concepts to understand sealed job specifications, cryptographic seals, and signed evidence bundles
- Set up Virtual SCADA for industrial simulation
- Review the REST API Reference for direct HTTP integration