  • How to compute f1 score for each epoch in Keras

    https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2

    https://datascience.stackexchange.com/questions/13746/how-to-define-a-custom-performance-metric-in-keras/20192

    When training a neural network, the f1 score is an important metric for evaluating the performance of a classification model, especially on unbalanced classes, where plain binary accuracy tells you very little (see the Accuracy Paradox).
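
    As a quick illustration of that paradox (the numbers below are invented for illustration and do not come from the article), a classifier that finds only one of five positives in a 95%-negative set still reaches 96% accuracy, while its f1 score exposes the poor recall:

    import numpy as np
    from sklearn.metrics import accuracy_score, f1_score

    y_true = np.array([0] * 95 + [1] * 5)        # 95 negatives, 5 positives
    y_pred = np.array([0] * 95 + [1] + [0] * 4)  # catches only 1 of the 5 positives

    print(accuracy_score(y_true, y_pred))  # 0.96 -- looks good
    print(f1_score(y_true, y_pred))        # ~0.33 -- reveals the weak recall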

    Keras used to implement the f1 score in its metrics; however, the developers removed it in Keras 2.0 because the quantity was evaluated per batch, which is more misleading than helpful. Fortunately, Keras lets us access the validation data during training via a Callback, which we can extend to compute the desired quantities.
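
    To see why per-batch evaluation is misleading, here is a small sketch (the arrays are invented for illustration): the mean of the f1 scores computed batch by batch is generally not the same quantity as the f1 score computed over the full validation set:

    import numpy as np
    from sklearn.metrics import f1_score

    y_true = np.array([0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1])
    y_pred = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1])

    # f1 over the whole set
    full_f1 = f1_score(y_true, y_pred)

    # mean of f1 scores computed over batches of 4 samples
    batch_f1s = [f1_score(y_true[i:i + 4], y_pred[i:i + 4]) for i in range(0, len(y_true), 4)]

    print(full_f1, np.mean(batch_f1s))  # ~0.77 vs ~0.73 -- not the same number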

    Here is sample code that computes and prints the f1 score, recall, and precision at the end of each epoch, using the whole validation set:

    import numpy as np
    from keras.callbacks import Callback
    from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score
    class Metrics(Callback):
        def on_train_begin(self, logs={}):
            self.val_f1s = []
            self.val_recalls = []
            self.val_precisions = []

        def on_epoch_end(self, epoch, logs={}):
            # the Callback's validation_data attribute is filled in by fit()
            # when validation_data is passed to it
            val_predict = (np.asarray(self.model.predict(self.validation_data[0]))).round()
            val_targ = self.validation_data[1]
            _val_f1 = f1_score(val_targ, val_predict)
            _val_recall = recall_score(val_targ, val_predict)
            _val_precision = precision_score(val_targ, val_predict)
            self.val_f1s.append(_val_f1)
            self.val_recalls.append(_val_recall)
            self.val_precisions.append(_val_precision)
            print(" - val_f1: %f - val_precision: %f - val_recall %f" % (_val_f1, _val_precision, _val_recall))
            return

    metrics = Metrics()

    on_train_begin is called at the beginning of training. Here we initialize three lists to hold the values of the quantities we are interested in, which are computed in on_epoch_end. Later on, we can access these lists as ordinary instance attributes, for example:

    print(metrics.val_f1s)
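
    Since the lists survive training, they can be used afterwards, for example to plot the per-epoch validation f1 (the matplotlib snippet below is an illustrative addition, not part of the original post):

    import matplotlib.pyplot as plt

    # one point per epoch, taken from the callback's history
    plt.plot(range(1, len(metrics.val_f1s) + 1), metrics.val_f1s, marker='o')
    plt.xlabel('epoch')
    plt.ylabel('val_f1')
    plt.show()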

    Define the model, and pass the callback to the fit function via the callbacks parameter (a minimal illustrative model definition is sketched after the fit call):

    model.fit(training_data, training_target,
              validation_data=(validation_data, validation_target),
              epochs=10,
              batch_size=64,
              callbacks=[metrics])
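
    For completeness, a minimal model that the fit call above could be applied to might look like the sketch below; the architecture and input_dim=20 are illustrative assumptions rather than part of the original post, and any Keras binary classifier with a sigmoid output works:

    from keras.models import Sequential
    from keras.layers import Dense

    # illustrative binary classifier; replace with your own architecture
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=20))  # input_dim=20 is an assumption
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])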

    The printout during training would look like this:

    Epoch 1/10
    32320/32374 [============================>.] - ETA: 0s - loss: 0.0414 - val_f1: 0.375000 - val_precision: 0.782609 - val_recall 0.246575
    32374/32374 [==============================] - 23s - loss: 0.0414 - val_loss: 0.0430

    That’s it. Have fun training!

  • Original post: https://www.cnblogs.com/bnuvincent/p/7484342.html