API reference¶
Auto-generated from the chap_client docstrings.
Client¶
ChapClient¶
Bases: SystemEndpoints, DatasetsEndpoints, ModelsEndpoints, ConfiguredModelsWithDataSourceEndpoints, EvaluationsEndpoints, PredictionsEndpoints
Calls chap REST endpoints.
Composed from the mixins under chap_client.endpoints so each
resource cluster lives in its own file. The class itself is
intentionally empty: the methods come from the mixins and the HTTP
plumbing comes from ChapClientBase. Use as a context manager
so the underlying connection pool is closed.
```python
with ChapClient(base_url, auth=(user, pw)) as client:
    client.system_info()
    client.list_evaluations()
```
Schemas¶
Errors¶
ChapHttpError¶
Bases: Exception
Raised when a chap call returns a non-2xx response.
Carries the chap response body (parsed JSON when possible, otherwise the raw text) so the caller can surface it in logs / the run report.
__init__(method, path, status, detail)¶
Capture the request shape and the chap response body.
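A minimal sketch of the behavior described above. `ChapHttpError` and `body_to_detail` here are stand-in reimplementations for illustration, not the library code; the JSON-when-possible / raw-text-otherwise handling mirrors the docstring.

```python
import json

class ChapHttpError(Exception):
    """Stand-in sketch: carries the request shape and the chap response body."""

    def __init__(self, method, path, status, detail):
        super().__init__(f"{method} {path} -> HTTP {status}: {detail!r}")
        self.method = method
        self.path = path
        self.status = status
        self.detail = detail  # parsed JSON when possible, otherwise the raw text

def body_to_detail(raw: str):
    """Parse the response body as JSON, falling back to the raw text."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return raw

err = ChapHttpError("POST", "/v1/crud/backtests", 422,
                    body_to_detail('{"detail": "nPeriods must be > 0"}'))
```

Keeping `detail` structured means a caller can log the parsed body directly in a run report instead of re-parsing the exception message.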
System¶
ChapSystemInfo¶
Bases: BaseModel
Payload returned by GET <chap>/system/info.
The revision field that chap occasionally returns is intentionally
omitted — it is almost never set in practice.
Datasets¶
ChapDataset¶
Bases: BaseModel
A dataset chap stores: org units + period range + data sources.
Datasets are tagged with a type (evaluation for
evaluations / backtests, prediction for predictions) so
callers picking a dataset to evaluate against know which ones are
eligible.
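For example, a caller choosing a dataset to evaluate against might filter on that tag. The rows below are illustrative stand-ins for a dataset listing, trimmed to the fields the filter needs:

```python
# Hypothetical dataset rows; only the type tag matters for eligibility.
datasets = [
    {"id": 1, "name": "malaria-monthly", "type": "evaluation"},
    {"id": 2, "name": "dengue-forecast-input", "type": "prediction"},
]

# Only "evaluation" datasets are eligible targets for an evaluation/backtest.
eligible = [d for d in datasets if d["type"] == "evaluation"]
```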
ChapDataSource¶
Bases: BaseModel
A covariate ↔ DHIS2 data-element mapping inside a configured model.
Models / configured models¶
ChapFeature¶
Bases: BaseModel
A named feature reference -- a model's target or one of its covariates.
chap's ModelSpecRead carries each feature as a small object
({displayName, description, name}) rather than a bare string.
The OpenAPI schema currently types these as string -- the wire
truth is the object form, so trust the wire.
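A sketch of parsing the object form off the wire, using a plain dataclass as a stand-in for the pydantic model. The camelCase keys follow the payload shape quoted above:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """Stand-in for ChapFeature: one named feature reference."""
    name: str
    display_name: str
    description: str

def parse_feature(raw: dict) -> Feature:
    # The wire truth is the object form ({displayName, description, name}),
    # not the bare string the OpenAPI schema claims.
    return Feature(
        name=raw["name"],
        display_name=raw["displayName"],
        description=raw["description"],
    )

feat = parse_feature(
    {"name": "rainfall", "displayName": "Rainfall", "description": "Monthly rainfall"}
)
```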
ChapModelSpec¶
Bases: BaseModel
Read shape returned by /v1/crud/models and /v1/crud/configured-models.
chap's own naming is somewhat overloaded (the same ModelSpecRead
schema describes both lists). We model the fields callers reach for
today; chap may add or change others over time -- extra="ignore"
keeps unknown fields from breaking parsing.
ChapConfiguredModel¶
Bases: BaseModel
The configuredModel block embedded in each configured-model row.
ChapConfiguredModelCreate¶
Bases: BaseModel
Request body for POST /v1/crud/configured-models.
chap_client.endpoints.models.ModelsEndpoints.create_configured_model
preflights model_template_id against list_model_templates
by default (validate=True) so a wrong-id-space mistake
surfaces synchronously instead of as chap's leaky 500 with an
AssertionError in the body. See CHAP_SPEC_DRIFT.md finding #3.
Extra fields are forbidden; user_option_values defaults to
{} so chap doesn't crash with the "None is not of type
'object'" error documented as CHAP_SPEC_DRIFT.md finding #10.
ChapConfiguredModelDB¶
Bases: BaseModel
Response shape from POST /v1/crud/configured-models.
Smaller than ChapModelSpec -- this is the row chap stored,
not the merged read view. modelTemplateId is exposed here.
ChapConfiguredModelWithDataSource¶
Bases: BaseModel
One row from GET .../v1/crud/configured-models-with-data-source.
Carries everything we need to construct a DHIS2 analytics query: the
DHIS2 data elements (per covariate), the org units, and the period
range (start_period → present, in period_type granularity).
ChapModelTemplate¶
Bases: BaseModel
The chap model template a configured model is based on.
Only the fields we actively use or surface in logs are modelled — the
rest (URLs, archived flags, hpoSearchSpace, etc.) are left to extra.
id is optional because the embedded form (under
ChapConfiguredModel.modelTemplate) sometimes omits it; the
standalone form returned by GET /v1/crud/model-templates always
has it. Callers that need a guaranteed id should look it up via
list_model_templates().
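A caller needing a guaranteed id might resolve it by name against the standalone listing. This is an illustrative sketch; `templates` stands in for the rows returned by list_model_templates(), where id is always present:

```python
def resolve_template_id(name: str, templates: list[dict]) -> int:
    """Return the id of the named template from the standalone listing."""
    for t in templates:
        if t["name"] == name:
            return t["id"]
    raise KeyError(f"no model template named {name!r}")

templates = [{"id": 4, "name": "ewars"}, {"id": 11, "name": "chap_ewars_monthly"}]
tid = resolve_template_id("chap_ewars_monthly", templates)
```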
Evaluations¶
ChapMakeEvaluationRequest¶
Bases: BaseModel
Body for POST /v1/analytics/create-backtest (UI: "Create Evaluation").
Note: model_id is the configured-model name (a string),
not the integer id from /v1/crud/configured-models. chap's
OpenAPI types it as string -- confusing but consistent with
what the API actually accepts. chap_client validates this string
against the live configured-model list in
EvaluationsEndpoints.create_evaluation (preflight, can be
disabled with validate=False); see CHAP_SPEC_DRIFT.md
finding #5.
Extra fields are forbidden so a mistyped key (nPriods)
errors at validation rather than silently falling through to
chap's default; numeric fields are bounded to > 0. See
CHAP_SPEC_DRIFT.md findings #14-#16.
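The two guards can be sketched without pydantic. The field names below are illustrative stand-ins (the real model also carries the dataset and split settings); the point is the shape of the checks:

```python
ALLOWED = {"modelId", "datasetId", "nPeriods", "nSplits", "name"}
POSITIVE = {"nPeriods", "nSplits"}

def validate_body(body: dict) -> None:
    """Reject unknown keys (catching typos like nPriods) and non-positive counts."""
    unknown = set(body) - ALLOWED
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    for key in POSITIVE & set(body):
        if body[key] <= 0:
            raise ValueError(f"{key} must be > 0, got {body[key]}")

validate_body({"modelId": "ewars", "nPeriods": 3})  # passes silently
```

Forbidding extras is what turns a silent fall-through to chap's default into a synchronous validation error.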
ChapEvaluationRead¶
Bases: BaseModel
Read shape for GET /v1/crud/backtests and its /{id}/info variant (UI: "Evaluation").
Once an evaluation finishes, aggregate_metrics carries the
summary metrics chap computed (CRPS, MAE, RMSE, coverage, etc.)
-- this is the "evaluation result" most callers want.
ChapEvaluationEntry¶
Bases: BaseModel
One predicted value from GET /v1/analytics/evaluation-entry.
Looks like ChapPredictionEntry plus a split_period --
evaluations run multiple splits per dataset, and each entry knows
which split produced it.
Predictions¶
ChapMakePredictionRequest¶
Bases: BaseModel
Body for POST /v1/analytics/make-prediction-with-data-source.
We use the -with-data-source variant (vs the stable make-prediction)
because it carries configuredModelWithDataSourceId, which chap stores
on the resulting prediction so the UI can link it back to the configured
model that produced it.
Extra fields are forbidden so a mistyped key (nPriods) errors at
validation rather than silently falling through to chap's default. See
CHAP_SPEC_DRIFT.md finding #16.
ChapObservation¶
Bases: BaseModel
One value-per-(period, org_unit, covariate) cell, sent to chap as input.
ChapFetchRequest¶
Bases: BaseModel
Tells chap to fetch a covariate from an external data source itself.
ChapJobResponse¶
Bases: BaseModel
Sync response from POST /v1/analytics/make-prediction -- just the id.
ChapJobDescription¶
Bases: BaseModel
Row from GET /v1/jobs -- the shape we'd build on when polling job status.
ChapPredictionEntry¶
Bases: BaseModel
One predicted value from GET /v1/analytics/prediction-entry/{id}?quantiles=....
Structured errors¶
ChapRejection¶
Bases: BaseModel
One rejected (org_unit, feature_name) cell in a chap 400 response.
ChapMissingValuesDetail¶
Bases: BaseModel
Structured detail body chap returns when input validation fails.
Today chap returns this shape inside an HTTP 400 body (FastAPI's
{"detail": {...}} envelope, which from_error_body() peels off).
Upstream chap has a pending PR to switch this to a 200 response
with a similar shape; once that lands we'll add a parallel parser
for the success body and treat partially-rejected predictions as
status="succeeded" with a rejection_detail set.
Example payload (the inner detail dict, after the FastAPI envelope
is peeled off):
```json
{
  "message": "All regions rejected due to missing values",
  "imported_count": 0,
  "rejected": [
    {
      "reason": "...",
      "orgUnit": "...",
      "featureName": "rainfall",
      "timePeriods": ["202510", "202511"]
    }
  ]
}
```
from_error_body(body) classmethod¶
Try to parse a chap error body. Returns None if shape doesn't match.
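A simplified stand-in for that parse, using the example payload above. The real classmethod builds a ChapMissingValuesDetail instance; here a plain dict is returned so the envelope-peeling and shape check stand alone:

```python
def from_error_body(body):
    """Peel FastAPI's {"detail": {...}} envelope; return None if the shape doesn't match."""
    if not isinstance(body, dict):
        return None
    detail = body.get("detail")
    # Require the structured missing-values shape, not an arbitrary detail string.
    if not isinstance(detail, dict) or "rejected" not in detail:
        return None
    return detail

parsed = from_error_body({
    "detail": {
        "message": "All regions rejected due to missing values",
        "imported_count": 0,
        "rejected": [{"reason": "...", "orgUnit": "...",
                      "featureName": "rainfall", "timePeriods": ["202510", "202511"]}],
    }
})
```

Returning None on a shape mismatch lets the caller fall back to treating the body as an ordinary error payload.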