Evaluation

Evaluation(observed, modeled[, join_on, ...])

A dataset consisting of a modeled and observed dataframe.
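A minimal sketch of the pairing an Evaluation performs, assuming an inner join on shared key columns (the `join_eval` helper and the `gage`/`time` column names are illustrative, not part of the actual API):

```python
import pandas as pd

# Hypothetical sketch: pair each modeled value with its observation
# by joining the two dataframes on shared key columns.
observed = pd.DataFrame({
    "gage": ["A", "A", "B"],
    "time": [1, 2, 1],
    "obs": [10.0, 12.0, 7.0],
})
modeled = pd.DataFrame({
    "gage": ["A", "A", "B"],
    "time": [1, 2, 1],
    "mod": [11.0, 11.5, 6.0],
})

def join_eval(observed, modeled, join_on=("gage", "time")):
    """Inner-join so every row holds one (obs, mod) pair."""
    return observed.merge(modeled, on=list(join_on), how="inner")

paired = join_eval(observed, modeled)
```

An inner join keeps only rows where both an observation and a modeled value exist, which matches the "one observation per modeled data point" expectation noted below.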

Evaluation.brier(threshold[, mod_col, ...])

Calculate the Brier score using the properscoring package. See :py:func:`threshold_brier_score() <threshold_brier_score>` in properscoring. Grouping is not necessary because the Brier score returns one value per forecast; grouping would only come into play when aggregating Brier scores across forecasts. The Evaluation object generally expects one observation per modeled data point, which is more than this function requires, but it is handled consistently with the rest of Evaluation.

:Parameters:
* mod_col -- Column name of modeled data
* obs_col -- Column name of observed data.
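The per-forecast scoring described above can be sketched directly from the Brier score's definition (the `brier_per_forecast` name is illustrative, not the Evaluation API): for each forecast, square the gap between the ensemble's threshold-exceedance probability and the 0/1 observed exceedance.

```python
import numpy as np

def brier_per_forecast(obs, ens, threshold):
    """One Brier score per forecast (hence no grouping needed): the squared
    gap between ensemble exceedance probability and observed exceedance."""
    obs = np.asarray(obs, dtype=float)      # shape (n_forecasts,)
    ens = np.asarray(ens, dtype=float)      # shape (n_forecasts, n_members)
    p = (ens > threshold).mean(axis=1)      # forecast P(value > threshold)
    o = (obs > threshold).astype(float)     # observed exceedance as 0/1
    return (p - o) ** 2                     # one score per forecast

obs = [5.0, 12.0]
ens = [[4.0, 6.0, 3.0, 5.5],
       [11.0, 13.0, 12.5, 9.0]]
scores = brier_per_forecast(obs, ens, threshold=10.0)
```

Because one score comes out per row, averaging (the grouping step mentioned above) is a separate, later operation.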

Evaluation.contingency(threshold[, ...])

Calculate contingency statistics.
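As a hedged sketch of what contingency statistics involve (the helper names here are illustrative, not the Evaluation implementation): classify each (obs, mod) pair against a threshold into hits, misses, false alarms, and correct negatives, from which scores such as probability of detection follow.

```python
import numpy as np

def contingency_counts(obs, mod, threshold):
    """2x2 contingency counts for threshold-exceedance events."""
    obs_e = np.asarray(obs) > threshold   # observed event occurred?
    mod_e = np.asarray(mod) > threshold   # modeled event forecast?
    hits = int(np.sum(mod_e & obs_e))                 # forecast, occurred
    misses = int(np.sum(~mod_e & obs_e))              # not forecast, occurred
    false_alarms = int(np.sum(mod_e & ~obs_e))        # forecast, did not occur
    correct_negatives = int(np.sum(~mod_e & ~obs_e))  # neither
    return hits, misses, false_alarms, correct_negatives

def pod(hits, misses):
    """Probability of detection = hits / (hits + misses)."""
    return hits / (hits + misses)

obs = [1.0, 12.0, 15.0, 2.0]
mod = [11.0, 13.0, 3.0, 1.0]
hits, misses, false_alarms, correct_negatives = contingency_counts(
    obs, mod, threshold=10.0)
```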

Evaluation.crps([mod_col, obs_col, ...])

Calculate CRPS (continuous ranked probability score) using the properscoring package. See :py:func:`crps_ensemble() <crps_ensemble>` in properscoring.
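The quantity properscoring estimates can be sketched from the textbook empirical CRPS for an observation y and ensemble X (the `crps_empirical` name is illustrative): CRPS = E|X - y| - 0.5 E|X - X'|.

```python
import numpy as np

def crps_empirical(y, ens):
    """Empirical CRPS for one observation y and ensemble members ens:
    CRPS = E|X - y| - 0.5 * E|X - X'|."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - y))                            # accuracy
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))  # spread
    return term1 - term2
```

For a single-member (deterministic) forecast the spread term vanishes, so CRPS reduces to the absolute error.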

Evaluation.event(threshold[, mod_col, ...])

Calculate high flow event statistics for values exceeding a threshold; returns a contingency table.

:Parameters:
* mod_col -- Column name of modeled data
* obs_col -- Column name of observed data
* group_by -- Column names to group by prior to calculating statistics
* decimals -- Round stats to the specified number of decimal places
* threshold -- Threshold value for a high flow event, or the name of a column containing threshold values.

Evaluation.gof([mod_col, obs_col, group_by, ...])

Calculate goodness of fit statistics using the spotpy package. See :py:func:`calculate_all_functions() <calculate_all_functions>` in spotpy.

:Parameters:
* mod_col -- Column name of modeled data
* obs_col -- Column name of observed data
* group_by -- Column names to group by prior to calculating statistics
* inf_as_na -- Convert inf values to NaN?
* decimals -- Round stats to the specified number of decimal places.
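spotpy's calculate_all_functions() returns many statistics; as a minimal illustrative sketch (not the Evaluation implementation; the `gof_stats` helper and column names are hypothetical), here are two of them, Nash-Sutcliffe efficiency and mean bias, computed per group and rounded as the `group_by` and `decimals` parameters describe:

```python
import numpy as np
import pandas as pd

def gof_stats(df, mod_col="mod", obs_col="obs", group_by=None, decimals=2):
    """Sketch of grouped goodness-of-fit: Nash-Sutcliffe efficiency (NSE)
    and mean bias per group, rounded to `decimals` places."""
    def _one(g):
        obs = g[obs_col].to_numpy()
        mod = g[mod_col].to_numpy()
        nse = 1 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)
        bias = np.mean(mod - obs)
        return pd.Series({"nse": nse, "bias": bias})
    if group_by:
        out = df.groupby(group_by).apply(_one)   # one stats row per group
    else:
        out = _one(df).to_frame().T              # single row, no grouping
    return out.round(decimals)

df = pd.DataFrame({
    "gage": ["A", "A", "B", "B"],
    "obs": [1.0, 2.0, 3.0, 4.0],
    "mod": [1.0, 2.0, 3.0, 5.0],
})
stats = gof_stats(df, group_by="gage")
```

NSE equals 1 for a perfect match and drops below 0 when the model is worse than predicting the observed mean, which is why rounding and inf-to-NaN handling matter for degenerate groups.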