Pagination & Auto-Iteration

The SDK auto-paginates every list endpoint so you never hand-roll an offset= loop. Three pagination bugs routinely show up in customer code — the iter_* helpers make all three unreachable.

Available from SDK 1.2.0. The original list_jobs(offset=..., limit=...) style still works for small queries but returns at most 200 rows; use iter_jobs() whenever the result set could grow beyond that.

# The bugs we prevent

  • Off-by-one extra request. A hand-rolled loop that stops only when items == [] issues one extra HTTP call after the server has already returned every row.
  • Infinite loop on partial pages. Filters can make the server return fewer than limit rows even when more rows exist behind later offsets. Callers who check len(page) == 0 loop forever.
  • Silent filter drop. Hand-written second-page calls often forget to re-include the status= / kind= query params, so page 2 returns unfiltered rows. The customer's UI then shows seemingly contradictory results.
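
For comparison, a hand-rolled loop that dodges all three bugs has to re-send the filters on every call and stop on the server-reported total rather than on an empty page. A minimal sketch, assuming the { items, total, offset, limit } envelope described under Termination guarantees below (fetch_page is a stand-in for the raw HTTP call, not a real SDK function):

```python
def fetch_all_jobs(fetch_page, **filters):
    """Correct hand-rolled pagination over an {items, total, offset, limit} envelope.

    Re-sending **filters on every call avoids the silent filter drop;
    stopping on the reported total avoids both the extra trailing request
    and the partial-page infinite loop.
    """
    rows, offset = [], 0
    while True:
        page = fetch_page(offset=offset, **filters)
        rows.extend(page["items"])
        # Terminate on the envelope's total, not on an empty page.
        if offset + len(page["items"]) >= page["total"] or not page["items"]:
            return rows
        offset += len(page["items"])
```

This is roughly the bookkeeping the iter_* helpers take off your hands, which is why they are the recommended path.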

# Streaming iteration

from radmah_sdk import RadMahClient

client = RadMahClient(api_key="sl_live_…")

# Yields every matching row without ever holding more than one page in RAM.
for job in client.iter_jobs(status="succeeded", kind="synthesize"):
    process_job(job)

# Cap the iteration at 500 rows — short-circuits before the next HTTP call
for job in client.iter_jobs(status="running", max_items=500):
    reassign(job)

# Eager collection

When you genuinely need every row materialised (e.g. building a cross-job comparison table), list_all_* drains the iterator and returns a list:

all_datasets = client.list_all_datasets()
all_seals = client.list_all_seals(page_size=200)
all_connectors = client.list_all_connectors()
all_synthetic = client.list_all_synthetic_datasets()

page_size is clamped to the server cap of 200. Pass max_items= if you have an upper bound; the iterator stops issuing HTTP calls as soon as the cap is reached.

# Async iteration

from radmah_sdk import AsyncRadMah

async with AsyncRadMah(api_key="sl_live_…") as client:
    async for job in client.iter_jobs(status="succeeded"):
        await process_job(job)

    # Or eager:
    jobs = await client.list_all_jobs(kind="virtual_scada")

# Termination guarantees

The iterator stops on any of three conditions, in order:

  1. Server-reported total reached — the envelope carries { items, total, offset, limit } so the SDK knows when it has seen every row.
  2. Short trailing page — the server returned fewer rows than the requested limit, so no further page can exist.
  3. Empty page — defensive: if total was missing and a later page comes back empty, stop.
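
The three conditions above can be sketched as a generator. This is an illustrative model, not the SDK's real internals; fetch_page is a hypothetical stand-in that returns an (items, total) pair, with total possibly None:

```python
def iterate(fetch_page, limit=200, max_items=None):
    """Yield rows page by page, applying the three stop conditions in order."""
    offset = 0
    yielded = 0
    while True:
        items, total = fetch_page(offset=offset, limit=limit)
        for item in items:
            yield item
            yielded += 1
            if max_items is not None and yielded >= max_items:
                return  # caller-imposed cap reached: no further HTTP calls
        if total is not None and offset + len(items) >= total:
            return  # 1. server-reported total reached
        if items and len(items) < limit:
            return  # 2. short trailing page: no further page can exist
        if not items:
            return  # 3. defensive: total was missing and the page came back empty
        offset += len(items)
```

Note that condition 1 is what makes the off-by-one extra request impossible: the iterator never asks for a page past the reported total.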