chap_core.hpo package¶
Submodules¶
chap_core.hpo.base module¶
- class chap_core.hpo.base.Float(low: float, high: float, step: float | None = None, log: bool = False)[source]¶
Bases:
object
- high: float¶
- log: bool = False¶
- low: float¶
- step: float | None = None¶
- class chap_core.hpo.base.Int(low: int, high: int, step: int = 1, log: bool = False)[source]¶
Bases:
object
- high: int¶
- log: bool = False¶
- low: int¶
- step: int = 1¶
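Taken together, Float and Int describe continuous and integer search dimensions, while a plain Python list denotes a categorical choice (see TPESearcher below). A minimal sketch of a search space; the parameter names are illustrative, not part of chap_core:

    from chap_core.hpo.base import Float, Int

    # Hypothetical search space: the keys are illustrative parameter names.
    search_space = {
        "learning_rate": Float(low=1e-4, high=1e-1, log=True),  # log-uniform float
        "n_lags": Int(low=1, high=12),                           # integer, step defaults to 1
        "regressor": ["linear", "tree"],                         # categorical choice
    }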
chap_core.hpo.cli module¶
chap_core.hpo.hpoModel module¶
- class chap_core.hpo.hpoModel.HpoModel(searcher: Searcher, objective: Objective, direction: Literal['maximize', 'minimize'] = 'minimize', model_configuration: dict[str, list] | None = None)[source]¶
Bases:
HpoModelInterface
- property get_best_config¶
- get_leaderboard(dataset: Literal['hydro_met_subset', 'hydromet_clean', 'hydromet_10', 'hydromet_5_filtered', 'laos_full_data', 'uganda_data', 'ISIMIP_dengue_harmonized'] | None) list[dict[str, Any]][source]¶
Runs hyperparameter optimization over the search space. Returns a sorted list of configurations together with their scores.
- train(dataset: Literal['hydro_met_subset', 'hydromet_clean', 'hydromet_10', 'hydromet_5_filtered', 'laos_full_data', 'uganda_data', 'ISIMIP_dengue_harmonized'] | None) Tuple[str, dict[str, Any]][source]¶
Calls get_leaderboard to find the optimal configuration, then trains the tuned model on the full input dataset (train + validation).
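A hedged end-to-end sketch of how HpoModel, Objective, and a searcher could fit together; the ModelTemplate instance, the dataset choice, and the model_configuration contents are assumptions, not prescribed by this module:

    from chap_core.hpo.hpoModel import HpoModel
    from chap_core.hpo.objective import Objective
    from chap_core.hpo.searcher import RandomSearcher

    # model_template is assumed to be an existing ModelTemplate instance;
    # constructing one is outside the scope of this package.
    objective = Objective(model_template=model_template, metric="MSE",
                          prediction_length=3, n_splits=4)

    hpo_model = HpoModel(
        searcher=RandomSearcher(max_trials=20),
        objective=objective,
        direction="minimize",
        model_configuration={"regressor": ["linear", "tree"]},  # assumed categorical space
    )

    # train() finds the best configuration via get_leaderboard(), then fits
    # the tuned model on the full dataset (train + validation).
    best_name, best_params = hpo_model.train("laos_full_data")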
chap_core.hpo.hpoModelInterface module¶
- class chap_core.hpo.hpoModelInterface.HpoModelInterface[source]¶
Bases:
ConfiguredModel
chap_core.hpo.objective module¶
- class chap_core.hpo.objective.Objective(model_template: ModelTemplate, metric: str = 'MSE', prediction_length: int = 3, n_splits: int = 4, ignore_environment: bool = False, debug: bool = False, log_file: str | None = None, run_directory_type: Literal['latest', 'timestamp', 'use_existing'] | None = 'timestamp')[source]¶
Bases:
object
chap_core.hpo.searcher module¶
- class chap_core.hpo.searcher.RandomSearcher(max_trials: int)[source]¶
Bases:
Searcher
Samples max_trials configurations with replacement.
- class chap_core.hpo.searcher.Searcher[source]¶
Bases:
object
Abstract optimizer interface.
Implementations should:
- call reset(space) before use
- repeatedly return configurations via ask() until it returns None (no more work)
- receive feedback via tell(params, result)
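A minimal sketch of a custom searcher following this reset/ask/tell contract; the exhaustive grid strategy and the assumption that the space maps names to lists of candidate values are illustrative, not part of chap_core:

    import itertools

    from chap_core.hpo.searcher import Searcher

    class GridSearcher(Searcher):
        # Illustrative searcher that enumerates every combination exactly once.

        def reset(self, space):
            # Assumes space maps parameter names to lists of candidate values.
            names = list(space)
            self._configs = (
                dict(zip(names, values))
                for values in itertools.product(*(space[name] for name in names))
            )

        def ask(self):
            # Return the next configuration, or None when the grid is exhausted.
            return next(self._configs, None)

        def tell(self, params, result):
            # Grid search ignores feedback; adaptive searchers would use it.
            pass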
- class chap_core.hpo.searcher.TPESearcher(direction: str = 'minimize', max_trials: int | None = None)[source]¶
Bases:
Searcher
Tree-structured Parzen Estimator: a parallel-safe TPE searcher using Optuna's ask/tell interface with native distributions.
- ask() returns a params dict that includes a reserved '_trial_id' key.
- tell() extracts '_trial_id' from params to update the correct trial.
Supported search-space descriptors:
- list[…] -> CategoricalDistribution
- Float(low, high, step=None|>0, log=bool) -> FloatDistribution
- Int(low, high, step>=1, log=bool) -> IntDistribution
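A sketch of the generic ask/tell driver loop these searchers support; the space reuses the descriptor types from chap_core.hpo.base, and evaluate() is a stand-in for a real objective:

    from chap_core.hpo.base import Float, Int
    from chap_core.hpo.searcher import TPESearcher

    space = {
        "learning_rate": Float(low=1e-4, high=1e-1, log=True),  # -> FloatDistribution
        "n_estimators": Int(low=50, high=500, step=50),         # -> IntDistribution
        "booster": ["gbtree", "dart"],                          # -> CategoricalDistribution
    }

    def evaluate(params):
        # Placeholder objective; a real one would train and score a model.
        return (params["learning_rate"] - 0.01) ** 2

    searcher = TPESearcher(direction="minimize", max_trials=25)
    searcher.reset(space)

    while (params := searcher.ask()) is not None:
        # params carries the reserved '_trial_id' key; pass it back unchanged
        # so tell() can attribute the score to the correct Optuna trial.
        searcher.tell(params, evaluate(params))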