RadMah AI Docs

Quickstart

Install the SDK, generate your first synthetic dataset, and download an evidence bundle in under 5 minutes.

Prerequisites

  • A RadMah AI account (free tier works for this guide)
  • An API key from Settings → API Keys in the dashboard

API key prefix

Live keys use the sl_live_ prefix. Test keys use sl_test_. Both work for this guide.

1. Install the SDK

Install
pip install radmah-sdk
2. Authenticate

Create a client instance with your API key. The SDK sends it via the X-API-Key header automatically.

Create client
from radmah_sdk import RadMahClient

client = RadMahClient(api_key="sl_live_your_key_here")

Keep your key secret

Never commit API keys to source control. Use environment variables or a secrets manager in production.
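One common pattern is to read the key from an environment variable and sanity-check its prefix before constructing the client. This is an illustrative sketch: the variable name RADMAH_API_KEY and the load_api_key helper are assumptions, not part of the SDK; only the sl_live_/sl_test_ prefixes come from this guide.

```python
import os

def load_api_key(env_var: str = "RADMAH_API_KEY") -> str:
    """Read an API key from the environment and check its prefix.

    The env var name is a convention chosen here, not mandated by
    the platform. Fails fast rather than sending a bad key.
    """
    key = os.environ.get(env_var, "")
    if not key.startswith(("sl_live_", "sl_test_")):
        raise RuntimeError(f"{env_var} is unset or does not look like a RadMah key")
    return key

# client = RadMahClient(api_key=load_api_key())
```

With this in place, rotating a key means updating the environment, not editing code.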

3. Generate synthetic data

Describe the data you need in plain English. The AI Orchestrator translates your description into a sealed job specification (“sealed contract”), selects the best generation engine, and builds a plan automatically.

Create a job
job = client.jobs.create(
    description="customer orders for an e-commerce platform",
    rows=100_000,
)

print(f"Job {job.id} created — status: {job.status}")

No engine selection needed

You never need to pick a generation engine. The platform analyses your description and automatically selects the optimal engine, parameters, and statistical profile.

4. Approve the plan

Before generation begins, the platform produces a plan showing the schema, row count, and engine selection. Review and approve it programmatically or let it auto-approve.

Approve
# Retrieve the plan
plan = client.jobs.get_plan(job_id=job.id)

print(f"Engine:  {plan.engine}")
print(f"Columns: {[c.name for c in plan.columns]}")
print(f"Rows:    {plan.rows}")

# Approve to start generation
client.jobs.approve(job_id=job.id)

Auto-approval

Pass auto_approve=True in the Python SDK create call, or include "auto_approve": true in the REST JSON body, to skip the approval step.
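For the REST route, the request body is the same set of fields shown in the SDK call plus the auto_approve flag from the note above. A sketch of the payload (the exact endpoint path is not shown in this guide and is your responsibility to confirm):

```python
import json

# JSON body for the job-creation REST call.
# "auto_approve": true skips the manual approval step.
payload = {
    "description": "customer orders for an e-commerce platform",
    "rows": 100_000,
    "auto_approve": True,
}
print(json.dumps(payload, indent=2))
```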

5. Wait for completion

Poll until the job finishes. The SDK handles polling automatically.

Wait
# Blocks until succeeded or failed (polls every 2s)
job.wait()

print(f"Final status: {job.status}")  # "succeeded"
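If you need a custom timeout or logging, the polling that job.wait() performs can be reproduced with a small generic helper. This is a sketch under assumptions: the terminal statuses "succeeded" and "failed" come from this guide, but the fetch callable you plug in (for example, one that re-fetches the job and reads its status) depends on accessors this guide does not document.

```python
import time

def wait_for(fetch_status, interval: float = 2.0, timeout: float = 600.0) -> str:
    """Poll fetch_status() until it returns a terminal status.

    fetch_status is any zero-argument callable returning the job's
    current status string. Raises TimeoutError if neither terminal
    status appears within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")
```

Passing a callable rather than the client keeps the helper testable and independent of any one SDK version.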

6. Download results and evidence

Every job produces a CSV dataset and a cryptographically chained, signed evidence bundle. Download both.

Download
# Download CSV
job.download("output.csv")

# Or load directly into pandas
df = job.to_dataframe()
print(df.head())

# Download the full signed evidence bundle
client.artifacts.download_bundle(job_id=job.id, path="evidence/")
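Once output.csv is on disk, a quick standard-library check (no pandas required) confirms the row count matches the plan. The count_rows helper is illustrative, not part of the SDK; the column layout is whatever the approved plan produced.

```python
import csv

def count_rows(path: str) -> int:
    """Count data rows in a CSV file, excluding the header row."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip header
        return sum(1 for _ in reader)
```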

The signed evidence bundle

The bundle contains nine artifacts:

  1. Sealed Job Specification: machine-readable record of the approved job
  2. Schema Manifest: column definitions and constraints
  3. Quality Report: statistical fidelity metrics
  4. Determinism Proof: cryptographic hash proving reproducibility
  5. Privacy Analysis: differential privacy audit
  6. Bias Report: demographic parity metrics
  7. Lineage Graph: upstream provenance chain
  8. Relation-Closure Certificate: referential integrity proof
  9. Cryptographic Seal: binding hash over every prior artifact in the bundle
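To illustrate the idea behind the Cryptographic Seal, a conceptual sketch of a binding hash over a sequence of artifacts follows. This is not the bundle's actual format or verification procedure, only a demonstration of why reordering or altering any prior artifact changes the final seal.

```python
import hashlib

def chain_hash(artifacts: list[bytes]) -> str:
    """Fold each artifact into a running SHA-256 digest.

    Each artifact is hashed individually, then folded into a single
    running digest, so the result depends on both the content and
    the order of every artifact.
    """
    digest = hashlib.sha256()
    for blob in artifacts:
        digest.update(hashlib.sha256(blob).digest())
    return digest.hexdigest()
```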

Evidence is always produced

Every job produces the full signed evidence bundle unconditionally, regardless of plan tier or dataset size. This cannot be disabled.

Next steps