http://scikit-learn.org/stable/modules/model_evaluation.html
- Scoring parameter: Model-evaluation tools using cross-validation (such as model_selection.cross_val_score and model_selection.GridSearchCV) rely on an internal scoring strategy. This is discussed in the section The scoring parameter: defining model evaluation rules.
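To make this concrete, here is a minimal sketch of both tools taking a scoring argument; the iris data, LogisticRegression, and the parameter grid are illustrative choices, not taken from the text above:

    # Hedged sketch: iris and LogisticRegression are stand-in choices.
    from sklearn import datasets
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, cross_val_score

    X, y = datasets.load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000)

    # cross_val_score applies the named scorer on each fold.
    scores = cross_val_score(clf, X, y, cv=5, scoring='f1_macro')
    print(scores)  # one macro-averaged F1 score per fold

    # GridSearchCV uses the same scoring strategy to rank parameter candidates.
    grid = GridSearchCV(clf, param_grid={'C': [0.1, 1, 10]},
                        scoring='f1_macro', cv=5)
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)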
For the most common use cases, you can designate a scorer object with the scoring parameter; the table below shows all possible values. All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as neg_mean_squared_error, which returns the negated value of the metric.

Scoring                        Function                         Comment

Classification
  ‘accuracy’                   metrics.accuracy_score
  ‘average_precision’          metrics.average_precision_score
  ‘f1’                         metrics.f1_score                 for binary targets
  ‘f1_micro’                   metrics.f1_score                 micro-averaged
  ‘f1_macro’                   metrics.f1_score                 macro-averaged
  ‘f1_weighted’                metrics.f1_score                 weighted average
  ‘f1_samples’                 metrics.f1_score                 by multilabel sample
  ‘neg_log_loss’               metrics.log_loss                 requires predict_proba support
  ‘precision’ etc.             metrics.precision_score          suffixes apply as with ‘f1’
  ‘recall’ etc.                metrics.recall_score             suffixes apply as with ‘f1’
  ‘roc_auc’                    metrics.roc_auc_score

Clustering
  ‘adjusted_rand_score’        metrics.adjusted_rand_score

Regression
  ‘neg_mean_absolute_error’    metrics.mean_absolute_error
  ‘neg_mean_squared_error’     metrics.mean_squared_error
  ‘neg_median_absolute_error’  metrics.median_absolute_error
  ‘r2’                         metrics.r2_score
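As the convention above implies, a neg_ scorer returns negated (non-positive) values so that "greater is better" holds everywhere. A short sketch, using the diabetes dataset and a Ridge regressor as illustrative stand-ins:

    # Hedged sketch: diabetes data and Ridge are stand-in choices.
    from sklearn import datasets
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    X, y = datasets.load_diabetes(return_X_y=True)
    reg = Ridge()

    # 'neg_mean_squared_error' yields negated scores, so cross-validation
    # and grid search can always maximize.
    neg_mse = cross_val_score(reg, X, y, cv=5, scoring='neg_mean_squared_error')
    print(neg_mse)           # all values <= 0
    print(-neg_mse.mean())   # negate to recover the usual positive MSE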