chap_core.assessment.metrics package¶
Submodules¶
chap_core.assessment.metrics.above_truth module¶
- class chap_core.assessment.metrics.above_truth.RatioOfSamplesAboveTruth[source]¶
Bases: MetricBase
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Ratio of Samples Above Truth (per time & horizon)', metric_id='ratio_samples_above_truth_time_hz', description='Ratio (0.0-1.0) of forecast samples with mean value > truth for each (location, time_period, horizon_distance).')¶
- class chap_core.assessment.metrics.above_truth.SamplesAboveTruth[source]¶
Bases: MetricBase
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Samples Above Truth (per time & horizon)', metric_id='samples_above_truth_count_time_hz', description='Count of forecast samples with mean value > truth for each (location, time_period, horizon_distance).')¶
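Illustration (not part of the generated API above): both metrics reduce, within each (location, time_period, horizon_distance) group, to a count and a ratio of forecast samples that exceed the observed value. A minimal pandas sketch, assuming hypothetical flat DataFrames with 'location', 'time_period', 'horizon_distance', 'sample' and 'value' columns; the real FlatForecasts/FlatObserved schema may differ:

```python
import pandas as pd


def samples_above_truth(forecasts: pd.DataFrame, observations: pd.DataFrame) -> pd.DataFrame:
    """Count and ratio of forecast samples above the observed value per group.

    Column names are assumptions for illustration, not the package's schema.
    """
    keys = ["location", "time_period", "horizon_distance"]
    merged = forecasts.merge(
        observations[["location", "time_period", "value"]],
        on=["location", "time_period"],
        suffixes=("_pred", "_true"),
    )
    merged["above"] = merged["value_pred"] > merged["value_true"]
    grouped = merged.groupby(keys)["above"]
    return pd.DataFrame(
        {
            "samples_above_truth": grouped.sum(),        # SamplesAboveTruth-style count
            "ratio_samples_above_truth": grouped.mean(),  # RatioOfSamplesAboveTruth-style ratio
        }
    ).reset_index()
```

Note that the real metrics return the two quantities separately, one numeric column per metric.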
chap_core.assessment.metrics.base module¶
Base classes for all metrics.
- class chap_core.assessment.metrics.base.MetricBase[source]¶
Bases: object
Base class for metrics. Subclass this and implement the compute method to create a new metric. Define the spec attribute to specify what the metric outputs.
- get_metric(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- gives_highest_resolution() bool[source]¶
Returns True if the metric gives one number per location/time_period/horizon_distance combination.
- is_full_aggregate() bool[source]¶
Returns True if the metric gives only one number for the whole dataset.
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='metric', metric_id='metric', description='No description provided')¶
- class chap_core.assessment.metrics.base.MetricSpec(output_dimensions: tuple[chap_core.assessment.flat_representations.DataDimension, ...] = (), metric_name: str = 'metric', metric_id: str = 'metric', description: str = 'No description provided')[source]¶
Bases: object
- description: str = 'No description provided'¶
- metric_id: str = 'metric'¶
- metric_name: str = 'metric'¶
- output_dimensions: tuple[DataDimension, ...] = ()¶
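Example (illustrative only): per the MetricBase docstring, a new metric subclasses MetricBase, defines spec, and implements compute. The sketch below follows the MetricSpec signature documented above; the .df accessor and column names are hypothetical placeholders for however FlatObserved/FlatForecasts expose their data, and the parameter type hints are omitted because their import location is not shown on this page.

```python
import pandas as pd

from chap_core.assessment.flat_representations import DataDimension  # per the MetricSpec signature above
from chap_core.assessment.metrics.base import MetricBase, MetricSpec


class MeanErrorPerLocation(MetricBase):
    """Hypothetical metric: mean signed error (sample mean minus truth) per location."""

    spec: MetricSpec = MetricSpec(
        output_dimensions=(DataDimension.location,),
        metric_name="Mean Error",
        metric_id="mean_error_per_location",
        description="Mean signed error per location",
    )

    def compute(self, observations, forecasts) -> pd.DataFrame:
        # Assumption: the flat representations can be viewed as DataFrames with
        # 'location', 'time_period', 'sample' and 'value' columns.
        obs_df = observations.df  # hypothetical accessor
        fc_df = forecasts.df      # hypothetical accessor
        # Point forecast: sample mean per location and time period.
        point = fc_df.groupby(["location", "time_period"], as_index=False)["value"].mean()
        merged = point.merge(obs_df, on=["location", "time_period"], suffixes=("_pred", "_true"))
        merged["error"] = merged["value_pred"] - merged["value_true"]
        return merged.groupby("location", as_index=False)["error"].mean()
```

The get_metric helper documented above presumably calls compute and checks the returned columns against spec (compare the note in the peak_diff module below).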
chap_core.assessment.metrics.crps module¶
Continuous Ranked Probability Score (CRPS) metrics.
- class chap_core.assessment.metrics.crps.CRPS[source]¶
Bases: MetricBase
Continuous Ranked Probability Score (CRPS) metric for the entire dataset. Gives one CRPS value across all locations, time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='CRPS', metric_id='crps', description='Overall CRPS across entire dataset')¶
- class chap_core.assessment.metrics.crps.CRPSPerLocation[source]¶
Bases: MetricBase
Continuous Ranked Probability Score (CRPS) metric aggregated by location. Groups by location to give average CRPS per location across all time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='CRPS', metric_id='crps_per_location', description='Average CRPS per location')¶
- class chap_core.assessment.metrics.crps.DetailedCRPS[source]¶
Bases: MetricBase
Detailed Continuous Ranked Probability Score (CRPS) metric. Does not group - gives one CRPS value per location/time_period/horizon_distance combination. CRPS measures both calibration and sharpness of probabilistic forecasts.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='CRPS', metric_id='detailed_crps', description='CRPS per location, time period and horizon')¶
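For reference, the standard ensemble (sample-based) CRPS estimator is sketched below; whether chap_core uses exactly this estimator, rather than e.g. a fair/adjusted variant, is not stated on this page.

```python
import numpy as np


def crps_ensemble(samples: np.ndarray, observation: float) -> float:
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| over the forecast ensemble."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - observation))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2
```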
chap_core.assessment.metrics.crps_norm module¶
Normalized Continuous Ranked Probability Score (CRPS) metrics.
- class chap_core.assessment.metrics.crps_norm.CRPSNorm[source]¶
Bases: MetricBase
Normalized Continuous Ranked Probability Score (CRPS) metric aggregated by location. Groups by location to give average normalized CRPS per location across all time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='CRPS Normalized', metric_id='crps_norm', description='Average normalized CRPS per location')¶
- class chap_core.assessment.metrics.crps_norm.DetailedCRPSNorm[source]¶
Bases: MetricBase
Detailed Normalized Continuous Ranked Probability Score (CRPS) metric. Does not group - gives one normalized CRPS value per location/time_period/horizon_distance combination. CRPS is normalized by the range of observed values to make it comparable across different scales.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='CRPS Normalized', metric_id='detailed_crps_norm', description='Normalized CRPS per location, time period and horizon')¶
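Based on the description above ("normalized by the range of observed values"), the normalization presumably has the form below; whether the range is taken per location or over the whole dataset is not specified here.

\[
\mathrm{CRPS}_{\text{norm}}(F, y) = \frac{\mathrm{CRPS}(F, y)}{\max\left(y^{\text{obs}}\right) - \min\left(y^{\text{obs}}\right)}
\]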
chap_core.assessment.metrics.example_metric module¶
Example metric for demonstration purposes.
- class chap_core.assessment.metrics.example_metric.ExampleMetric[source]¶
Bases: MetricBase
Example metric that computes absolute error per location and time_period. This is a demonstration metric showing how to create custom metrics.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>), metric_name='Example Absolute Error', metric_id='example_metric', description='Sum of absolute error per location and time_period')¶
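Usage sketch, using only attributes and helpers documented on this page (and assuming metrics take no constructor arguments, since none are shown):

```python
from chap_core.assessment.metrics import ExampleMetric

metric = ExampleMetric()
print(metric.spec.metric_id)              # 'example_metric'
print(metric.spec.output_dimensions)      # (location, time_period)
print(metric.gives_highest_resolution())  # expected False: horizon_distance is not an output dimension
print(metric.is_full_aggregate())         # expected False: output_dimensions is non-empty
```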
chap_core.assessment.metrics.mae module¶
Mean Absolute Error (MAE) metric.
- class chap_core.assessment.metrics.mae.MAE[source]¶
Bases: MetricBase
Mean Absolute Error metric. Groups by location and horizon_distance to show error patterns across forecast horizons.
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='MAE', metric_id='metric', description='No description provided')¶
chap_core.assessment.metrics.peak_diff module¶
Peak difference metrics. Each metric may return only one numeric metric column; get_metric validates this against your MetricSpec. If you try to return extra numeric columns (e.g., both value_diff and week_lag), CHAP will raise a "produced wrong columns" error, which is why peak value difference and peak week lag are separate metrics.
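A sketch of what that constraint means in practice; the DataFrame and column names below are illustrative, not the package's actual schema:

```python
import pandas as pd

keys = ["location", "time_period", "horizon_distance"]

# Hypothetical combined result with two candidate metric columns.
results = pd.DataFrame(
    {
        "location": ["A", "A"],
        "time_period": ["2024W01", "2024W02"],
        "horizon_distance": [1, 1],
        "value_diff": [3.0, -1.5],
        "week_lag": [0, 2],
    }
)

# Each metric must return its output dimensions plus exactly ONE numeric column,
# so the two quantities are exposed as two separate metrics:
peak_value_diff = results[keys + ["value_diff"]]  # PeakValueDiffMetric-style output
peak_week_lag = results[keys + ["week_lag"]]      # PeakWeekLagMetric-style output

# Returning both numeric columns from one compute() would trigger the
# "produced wrong columns" error described above:
# bad = results[keys + ["value_diff", "week_lag"]]
```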
- class chap_core.assessment.metrics.peak_diff.PeakValueDiffMetric[source]¶
Bases: MetricBase
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Peak Value Difference', metric_id='peak_value_diff', description='Truth peak value minus predicted peak value, per horizon.')¶
- class chap_core.assessment.metrics.peak_diff.PeakWeekLagMetric[source]¶
Bases: MetricBase
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Peak Week Lag', metric_id='peak_week_lag', description='Lag in weeks between true and predicted peak (pred - truth), per horizon.')¶
chap_core.assessment.metrics.percentile_coverage module¶
Percentile coverage metrics for evaluating forecast calibration.
- class chap_core.assessment.metrics.percentile_coverage.IsWithin10th90thDetailed[source]¶
Bases: MetricBase
Detailed metric checking if observation falls within 10th-90th percentile of forecast samples. Does not group - gives one binary value (0 or 1) per location/time_period/horizon_distance combination. Returns 1 if observation is within the 10th-90th percentile range, 0 otherwise.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Within 10-90 Percentile', metric_id='is_within_10th_90th_detailed', description='Binary indicator if observation is within 10th-90th percentile per location, time period and horizon')¶
- class chap_core.assessment.metrics.percentile_coverage.IsWithin25th75thDetailed[source]¶
Bases: MetricBase
Detailed metric checking if observation falls within 25th-75th percentile of forecast samples. Does not group - gives one binary value (0 or 1) per location/time_period/horizon_distance combination. Returns 1 if observation is within the 25th-75th percentile range, 0 otherwise.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Within 25-75 Percentile', metric_id='is_within_25th_75th_detailed', description='Binary indicator if observation is within 25th-75th percentile')¶
- class chap_core.assessment.metrics.percentile_coverage.RatioWithin10th90th[source]¶
Bases: MetricBase
Overall ratio of observations within 10th-90th percentile for entire dataset. Gives one ratio value across all locations, time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='Ratio Within 10-90 Percentile', metric_id='ratio_within_10th_90th', description='Overall ratio of observations within 10th-90th percentile')¶
- class chap_core.assessment.metrics.percentile_coverage.RatioWithin10th90thPerLocation[source]¶
Bases: MetricBase
Ratio of observations within 10th-90th percentile, aggregated by location. Groups by location to give the proportion of forecasts where observation fell within range.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='Ratio Within 10-90 Percentile', metric_id='ratio_within_10th_90th_per_location', description='Ratio of observations within 10th-90th percentile per location')¶
- class chap_core.assessment.metrics.percentile_coverage.RatioWithin25th75th[source]¶
Bases: MetricBase
Overall ratio of observations within 25th-75th percentile for entire dataset. Gives one ratio value across all locations, time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='Ratio Within 25-75 Percentile', metric_id='ratio_within_25th_75th', description='Overall ratio of observations within 25th-75th percentile')¶
- class chap_core.assessment.metrics.percentile_coverage.RatioWithin25th75thPerLocation[source]¶
Bases: MetricBase
Ratio of observations within 25th-75th percentile, aggregated by location. Groups by location to give the proportion of forecasts where observation fell within range.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='Ratio Within 25-75 Percentile', metric_id='ratio_within_25th_75th_per_location', description='Ratio of observations within 25th-75th percentile per location')¶
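A minimal sketch of the coverage logic described in this module, assuming hypothetical pandas DataFrames with 'location', 'time_period', 'horizon_distance', 'sample' and 'value' columns (not the package's actual schema):

```python
import pandas as pd


def within_percentile_band(
    forecasts: pd.DataFrame,
    observations: pd.DataFrame,
    lower: float = 0.10,
    upper: float = 0.90,
) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Binary within-band indicator per group, plus a per-location coverage ratio."""
    keys = ["location", "time_period", "horizon_distance"]
    # Percentile band of the forecast samples within each group.
    bands = (
        forecasts.groupby(keys)["value"]
        .quantile([lower, upper])
        .unstack()
        .rename(columns={lower: "lo", upper: "hi"})
        .reset_index()
    )
    merged = bands.merge(observations, on=["location", "time_period"])
    merged["within"] = ((merged["value"] >= merged["lo"]) & (merged["value"] <= merged["hi"])).astype(int)

    detailed = merged[keys + ["within"]]                                         # IsWithin10th90thDetailed-style output
    per_location = merged.groupby("location", as_index=False)["within"].mean()   # RatioWithin10th90thPerLocation-style output
    return detailed, per_location
```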
chap_core.assessment.metrics.rmse module¶
Root Mean Squared Error (RMSE) metrics.
- class chap_core.assessment.metrics.rmse.DetailedRMSE[source]¶
Bases: MetricBase
Detailed Root Mean Squared Error metric. Does not group - gives one RMSE value per location/time_period/horizon_distance combination. This provides the highest resolution view of model performance.
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='RMSE', metric_id='metric', description='Detailed RMSE')¶
- class chap_core.assessment.metrics.rmse.RMSE[source]¶
Bases: MetricBase
Root Mean Squared Error metric. Groups by location to give RMSE per location across all time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='RMSE', metric_id='metric', description='No description provided')¶
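For reference, a sample-based per-location RMSE might look like the sketch below; whether the package uses the sample mean (rather than, say, the median) as its point forecast is an assumption, as are the column names.

```python
import numpy as np
import pandas as pd


def rmse_per_location(forecasts: pd.DataFrame, observations: pd.DataFrame) -> pd.DataFrame:
    """RMSE of the per-period sample mean against the observed value, grouped by location."""
    point = forecasts.groupby(["location", "time_period"], as_index=False)["value"].mean()
    merged = point.merge(observations, on=["location", "time_period"], suffixes=("_pred", "_true"))
    merged["sq_err"] = (merged["value_pred"] - merged["value_true"]) ** 2
    out = merged.groupby("location", as_index=False)["sq_err"].mean()
    out["rmse"] = np.sqrt(out.pop("sq_err"))
    return out
```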
chap_core.assessment.metrics.test_metrics module¶
Test metrics for debugging and verification.
- class chap_core.assessment.metrics.test_metrics.TestMetric[source]¶
Bases: MetricBase
Test metric that counts the total number of forecast samples in the entire dataset. Returns a single number representing the total sample count.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='Sample Count', metric_id='test_sample_count', description='Total number of forecast samples in dataset')¶
- class chap_core.assessment.metrics.test_metrics.TestMetricDetailed[source]¶
Bases: MetricBase
Test metric that counts the number of forecast samples per location/time_period/horizon_distance. Useful for debugging and verifying data structure correctness. Returns the count of samples for each combination.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Sample Count', metric_id='test_sample_count_detailed', description='Number of forecast samples per location, time period and horizon')¶
Module contents¶
Metrics submodule for assessment. All metrics are imported here for backwards compatibility.
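Because everything is re-exported here, either import path shown on this page works:

```python
# Equivalent imports: from the package (backwards compatible) or from the submodule.
from chap_core.assessment.metrics import CRPS, DetailedRMSE
from chap_core.assessment.metrics.crps import CRPS as CRPSFromSubmodule
from chap_core.assessment.metrics.rmse import DetailedRMSE as DetailedRMSEFromSubmodule

assert CRPS is CRPSFromSubmodule
assert DetailedRMSE is DetailedRMSEFromSubmodule
```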
- class chap_core.assessment.metrics.CRPS[source]¶
Bases: MetricBase
Continuous Ranked Probability Score (CRPS) metric for the entire dataset. Gives one CRPS value across all locations, time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='CRPS', metric_id='crps', description='Overall CRPS across entire dataset')¶
- class chap_core.assessment.metrics.CRPSNorm[source]¶
Bases: MetricBase
Normalized Continuous Ranked Probability Score (CRPS) metric aggregated by location. Groups by location to give average normalized CRPS per location across all time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='CRPS Normalized', metric_id='crps_norm', description='Average normalized CRPS per location')¶
- class chap_core.assessment.metrics.CRPSPerLocation[source]¶
Bases: MetricBase
Continuous Ranked Probability Score (CRPS) metric aggregated by location. Groups by location to give average CRPS per location across all time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='CRPS', metric_id='crps_per_location', description='Average CRPS per location')¶
- class chap_core.assessment.metrics.DetailedCRPS[source]¶
Bases: MetricBase
Detailed Continuous Ranked Probability Score (CRPS) metric. Does not group - gives one CRPS value per location/time_period/horizon_distance combination. CRPS measures both calibration and sharpness of probabilistic forecasts.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='CRPS', metric_id='detailed_crps', description='CRPS per location, time period and horizon')¶
- class chap_core.assessment.metrics.DetailedCRPSNorm[source]¶
Bases: MetricBase
Detailed Normalized Continuous Ranked Probability Score (CRPS) metric. Does not group - gives one normalized CRPS value per location/time_period/horizon_distance combination. CRPS is normalized by the range of observed values to make it comparable across different scales.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='CRPS Normalized', metric_id='detailed_crps_norm', description='Normalized CRPS per location, time period and horizon')¶
- class chap_core.assessment.metrics.DetailedRMSE[source]¶
Bases: MetricBase
Detailed Root Mean Squared Error metric. Does not group - gives one RMSE value per location/time_period/horizon_distance combination. This provides the highest resolution view of model performance.
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='RMSE', metric_id='metric', description='Detailed RMSE')¶
- class chap_core.assessment.metrics.ExampleMetric[source]¶
Bases: MetricBase
Example metric that computes absolute error per location and time_period. This is a demonstration metric showing how to create custom metrics.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>), metric_name='Example Absolute Error', metric_id='example_metric', description='Sum of absolute error per location and time_period')¶
- class chap_core.assessment.metrics.IsWithin10th90thDetailed[source]¶
Bases: MetricBase
Detailed metric checking if observation falls within 10th-90th percentile of forecast samples. Does not group - gives one binary value (0 or 1) per location/time_period/horizon_distance combination. Returns 1 if observation is within the 10th-90th percentile range, 0 otherwise.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Within 10-90 Percentile', metric_id='is_within_10th_90th_detailed', description='Binary indicator if observation is within 10th-90th percentile per location, time period and horizon')¶
- class chap_core.assessment.metrics.IsWithin25th75thDetailed[source]¶
Bases: MetricBase
Detailed metric checking if observation falls within 25th-75th percentile of forecast samples. Does not group - gives one binary value (0 or 1) per location/time_period/horizon_distance combination. Returns 1 if observation is within the 25th-75th percentile range, 0 otherwise.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Within 25-75 Percentile', metric_id='is_within_25th_75th_detailed', description='Binary indicator if observation is within 25th-75th percentile')¶
- class chap_core.assessment.metrics.MAE[source]¶
Bases: MetricBase
Mean Absolute Error metric. Groups by location and horizon_distance to show error patterns across forecast horizons.
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='MAE', metric_id='metric', description='No description provided')¶
- class chap_core.assessment.metrics.MetricBase[source]¶
Bases: object
Base class for metrics. Subclass this and implement the compute method to create a new metric. Define the spec attribute to specify what the metric outputs.
- get_metric(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- gives_highest_resolution() bool[source]¶
Returns True if the metric gives one number per location/time_period/horizon_distance combination.
- is_full_aggregate() bool[source]¶
Returns True if the metric gives only one number for the whole dataset.
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='metric', metric_id='metric', description='No description provided')¶
- class chap_core.assessment.metrics.MetricSpec(output_dimensions: tuple[chap_core.assessment.flat_representations.DataDimension, ...] = (), metric_name: str = 'metric', metric_id: str = 'metric', description: str = 'No description provided')[source]¶
Bases: object
- description: str = 'No description provided'¶
- metric_id: str = 'metric'¶
- metric_name: str = 'metric'¶
- output_dimensions: tuple[DataDimension, ...] = ()¶
- class chap_core.assessment.metrics.PeakValueDiffMetric[source]¶
Bases: MetricBase
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Peak Value Difference', metric_id='peak_value_diff', description='Truth peak value minus predicted peak value, per horizon.')¶
- class chap_core.assessment.metrics.PeakWeekLagMetric[source]¶
Bases: MetricBase
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Peak Week Lag', metric_id='peak_week_lag', description='Lag in weeks between true and predicted peak (pred - truth), per horizon.')¶
- class chap_core.assessment.metrics.RMSE[source]¶
Bases: MetricBase
Root Mean Squared Error metric. Groups by location to give RMSE per location across all time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='RMSE', metric_id='metric', description='No description provided')¶
- class chap_core.assessment.metrics.RatioWithin10th90th[source]¶
Bases: MetricBase
Overall ratio of observations within 10th-90th percentile for entire dataset. Gives one ratio value across all locations, time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='Ratio Within 10-90 Percentile', metric_id='ratio_within_10th_90th', description='Overall ratio of observations within 10th-90th percentile')¶
- class chap_core.assessment.metrics.RatioWithin10th90thPerLocation[source]¶
Bases: MetricBase
Ratio of observations within 10th-90th percentile, aggregated by location. Groups by location to give the proportion of forecasts where observation fell within range.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='Ratio Within 10-90 Percentile', metric_id='ratio_within_10th_90th_per_location', description='Ratio of observations within 10th-90th percentile per location')¶
- class chap_core.assessment.metrics.RatioWithin25th75th[source]¶
Bases: MetricBase
Overall ratio of observations within 25th-75th percentile for entire dataset. Gives one ratio value across all locations, time periods and horizons.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='Ratio Within 25-75 Percentile', metric_id='ratio_within_25th_75th', description='Overall ratio of observations within 25th-75th percentile')¶
- class chap_core.assessment.metrics.RatioWithin25th75thPerLocation[source]¶
Bases: MetricBase
Ratio of observations within 25th-75th percentile, aggregated by location. Groups by location to give the proportion of forecasts where observation fell within range.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>,), metric_name='Ratio Within 25-75 Percentile', metric_id='ratio_within_25th_75th_per_location', description='Ratio of observations within 25th-75th percentile per location')¶
- class chap_core.assessment.metrics.SamplesAboveTruth[source]¶
Bases: MetricBase
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Samples Above Truth (per time & horizon)', metric_id='samples_above_truth_count_time_hz', description='Count of forecast samples with mean value > truth for each (location, time_period, horizon_distance).')¶
- class chap_core.assessment.metrics.TestMetric[source]¶
Bases: MetricBase
Test metric that counts the total number of forecast samples in the entire dataset. Returns a single number representing the total sample count.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(), metric_name='Sample Count', metric_id='test_sample_count', description='Total number of forecast samples in dataset')¶
- class chap_core.assessment.metrics.TestMetricDetailed[source]¶
Bases: MetricBase
Test metric that counts the number of forecast samples per location/time_period/horizon_distance. Useful for debugging and verifying data structure correctness. Returns the count of samples for each combination.
- compute(observations: FlatObserved, forecasts: FlatForecasts) DataFrame[source]¶
- spec: MetricSpec = MetricSpec(output_dimensions=(<DataDimension.location: 'location'>, <DataDimension.time_period: 'time_period'>, <DataDimension.horizon_distance: 'horizon_distance'>), metric_name='Sample Count', metric_id='test_sample_count_detailed', description='Number of forecast samples per location, time period and horizon')¶