# LocalRadMah
Run supported engines fully offline without a network round-trip, useful for air-gapped proofs-of-value, CI sanity checks that must not spend real credits, and compliance environments where data cannot leave the customer's network.
1.2.0 replaces the previous broken local path (which silently raised AttributeError on every engine) with an honest dispatcher. Engines that genuinely need the SaaS infrastructure raise LocalExecutionNotSupportedError with a clear next step.

# Engine support matrix
| Engine | Mode | Local support | Detail |
|---|---|---|---|
| mock | mock | ✅ fully local | Runs radmah_core.engines.mock.mock.mock inside the SDK process. No network call. Ideal for smoke tests, CI sanity checks, and air-gapped PoVs. |
| mock | simulate / synthesize | ❌ SaaS only | Requires the worker pool and GPU training infrastructure. Submit via RadMah(api_key=...). |
| virtual_scada | * | ❌ SaaS only | Requires the OpenPLC simulator fleet, VPLC orchestrator, and protocol fuzzers that live in the RadMah AI worker cluster. |
| ics_security | * | ❌ SaaS only | Requires the baseline+attack scenario pipeline with MITRE coverage registry resolution. |
| dvfm | * | ❌ SaaS only | Requires a GPU-backed fitted checkpoint. Training cannot happen inside a single-process SDK caller. |
| cctp | * | ❌ SaaS only | Requires live connector adapters to customer systems. The SDK has no mechanism to make outbound connections on the customer's behalf. |
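The dispatch decision in the matrix above amounts to a lookup of the requested (engine, mode) pair. The sketch below is an illustrative reimplementation of that pattern, not the SDK's actual code; the `LOCAL_SUPPORT` table and the `dispatch` helper are our own names.

```python
# Illustrative dispatcher mirroring the support matrix above.
# Not the SDK's real implementation; helper names are assumptions.

LOCAL_SUPPORT = {
    # (engine, mode) pairs the local runtime can execute in-process.
    ("mock", "mock"),
}


class LocalExecutionNotSupportedError(Exception):
    """Raised when an engine/mode pair needs the SaaS infrastructure."""

    def __init__(self, engine: str, mode: str, saas_available: bool):
        self.detail = {
            "engine": engine,
            "reason": f"{engine!r} in mode {mode!r} requires SaaS infrastructure",
            "saas_available": saas_available,
        }
        super().__init__(self.detail["reason"])


def dispatch(engine: str, mode: str, saas_available: bool = False) -> str:
    if (engine, mode) in LOCAL_SUPPORT:
        return "local"  # run in-process, no network call
    raise LocalExecutionNotSupportedError(engine, mode, saas_available)
```

A set of tuples keeps the check O(1) and makes the supported surface auditable at a glance, which matches the "honest dispatcher" framing: anything not in the table fails loudly with structured detail instead of an AttributeError.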
# Installation
The local engine uses compiled native extensions shipped with radmah-core, which is an optional dependency:
```shell
pip install "radmah-sdk[local]"
```

# Fully-local mock run
```python
from radmah_sdk import LocalRadMah

with LocalRadMah(data_dir="./.radmah") as local:
    seal = local.create_seal(
        {
            "entity_types": [
                {"name": "transactions", "columns": [
                    {"name": "amount", "dtype": "float"},
                    {"name": "category", "dtype": "category"},
                ]}
            ]
        },
        engine="mock",
        mode="mock",
    )
    job = local.run_job(seal.id, seed=42, records=1_000)
    print(job.status)  # "succeeded"
    for artifact in local.list_artifacts(job.id):
        data = local.download_artifact(job.id, artifact.id)
        print(artifact.name, len(data))
```
# Hybrid: local scaffolding + SaaS execution
Pass api_key= and base_url= so LocalRadMah can delegate SaaS-only engines. When an unsupported engine is requested, the error message points directly at the SaaS fallback:
```python
from radmah_sdk import LocalRadMah, LocalExecutionNotSupportedError

with LocalRadMah(
    api_key="sl_live_…",
    base_url="https://api.radmah.ai",
    data_dir="./.radmah",
) as local:
    seal = local.create_seal(contract, engine="virtual_scada", mode="stream")
    try:
        job = local.run_job(seal.id)
    except LocalExecutionNotSupportedError as exc:
        # exc.detail = { engine, reason, saas_available: True }
        # Fall back to the SaaS API automatically.
        remote_seal = local.remote.create_seal(contract, engine="virtual_scada")
        job = local.remote.jobs.create(seal_id=remote_seal.id, engine="virtual_scada")
```
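The try/except fallback above generalizes into a small reusable wrapper. The sketch below uses stub functions in place of the real SDK objects; only the control flow is the point, and all names here are ours.

```python
# Sketch of a reusable "local first, SaaS on failure" wrapper around the
# pattern above. Stubs stand in for the real SDK calls.

class LocalExecutionNotSupportedError(Exception):
    def __init__(self, saas_available: bool):
        super().__init__("engine requires SaaS infrastructure")
        self.detail = {"saas_available": saas_available}


def run_with_fallback(run_local, run_remote):
    """Try the local runtime first; delegate to SaaS when it refuses."""
    try:
        return run_local()
    except LocalExecutionNotSupportedError as exc:
        if not exc.detail["saas_available"]:
            raise  # no api_key/base_url configured: nothing to fall back to
        return run_remote()


def local_run():  # stand-in for local.run_job(seal.id)
    raise LocalExecutionNotSupportedError(saas_available=True)


def remote_run():  # stand-in for local.remote.jobs.create(...)
    return "remote-job"


print(run_with_fallback(local_run, remote_run))  # remote-job
```

Re-raising when `saas_available` is false keeps failures loud for offline-only configurations instead of silently swallowing the error.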
# Error shape
```python
# LocalExecutionNotSupportedError inherits from RadMahError, so
# existing catch blocks still work. It carries structured detail:
{
    "engine": "dvfm",
    "reason": "the flagship engine requires a GPU-backed fitted checkpoint that "
              "cannot be trained inside a single-process SDK caller",
    "saas_available": True,  # whether local.remote is configured
}
```
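Because the new error subclasses the SDK's base exception, pre-1.2.0 except blocks keep catching it unchanged. A toy illustration of that hierarchy (the exact spelling of the base class is an assumption on our part):

```python
# Toy hierarchy: an existing `except RadMahError` block still catches the
# newer, more specific error. Base-class spelling is an assumption.

class RadMahError(Exception):
    pass


class LocalExecutionNotSupportedError(RadMahError):
    def __init__(self, detail: dict):
        super().__init__(detail["reason"])
        self.detail = detail


try:
    raise LocalExecutionNotSupportedError({
        "engine": "dvfm",
        "reason": "requires a GPU-backed fitted checkpoint",
        "saas_available": False,
    })
except RadMahError as exc:  # pre-existing catch block, unmodified
    print(exc.detail["engine"])  # dvfm
```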
# Evidence compatibility
Local runs emit the same evidence bundle shape as SaaS runs (seal hash, contract hash, data hash, determinism proof, per-artifact SHA-256). Re-running the same seal with the same seed inside LocalRadMah produces byte-identical artifacts, verifiable via radmah_sdk.verify.verify_bundle().
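Given those per-artifact SHA-256 digests, the byte-identical claim can also be spot-checked without the SDK at all, using only the standard library. A minimal sketch (the helper names are ours, not radmah_sdk.verify's API):

```python
# Spot-check determinism with plain SHA-256 over artifact bytes.
# Helper names are illustrative, not the SDK's verify_bundle() API.
import hashlib


def sha256_hex(data: bytes) -> str:
    """Per-artifact digest, as listed in the evidence bundle."""
    return hashlib.sha256(data).hexdigest()


def artifacts_identical(run_a: dict, run_b: dict) -> bool:
    """True when two runs produced byte-identical artifacts under the same names.

    Each run is a mapping of artifact name -> artifact bytes.
    """
    return (
        run_a.keys() == run_b.keys()
        and all(sha256_hex(run_a[name]) == sha256_hex(run_b[name]) for name in run_a)
    )
```

Comparing digests rather than raw bytes matches what the evidence bundle records, so the same check works when only the bundle (not the artifacts themselves) is retained.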