  • How to compute f1 score for each epoch in Keras

    https://medium.com/@thongonary/how-to-compute-f1-score-for-each-epoch-in-keras-a1acd17715a2

    https://datascience.stackexchange.com/questions/13746/how-to-define-a-custom-performance-metric-in-keras/20192

    When training a neural network, the F1 score is an important metric for evaluating classification models, especially on imbalanced classes, where plain binary accuracy is largely uninformative (see the accuracy paradox).

    Keras used to include an F1 metric; however, it was removed in Keras 2.0 because the quantity was computed per batch, which is more misleading than helpful. Fortunately, Keras lets us access the validation data during training via a Callback, which we can extend to compute the desired quantities.
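
    To see why accuracy alone can mislead on imbalanced data, here is a small illustrative sketch; the 95/5 class split and the all-negative "classifier" are made up for the example, not taken from the original post:

    import numpy as np
    from sklearn.metrics import accuracy_score, f1_score

    # Hypothetical imbalanced labels: 95% negative, 5% positive.
    y_true = np.array([0] * 95 + [1] * 5)
    # A degenerate "classifier" that always predicts the majority class.
    y_pred = np.zeros_like(y_true)

    print(accuracy_score(y_true, y_pred))  # 0.95 -- looks great
    print(f1_score(y_true, y_pred))        # 0.0  -- reveals the model never finds a positive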

    Here is sample code that computes and prints the F1 score, recall, and precision at the end of each epoch, using the whole validation set:

    import numpy as np
    from keras.callbacks import Callback
    from sklearn.metrics import f1_score, precision_score, recall_score

    class Metrics(Callback):
        def on_train_begin(self, logs={}):
            # Lists to collect per-epoch validation metrics
            self.val_f1s = []
            self.val_recalls = []
            self.val_precisions = []

        def on_epoch_end(self, epoch, logs={}):
            # self.validation_data holds the (inputs, targets) passed to fit()
            val_predict = (np.asarray(self.model.predict(self.validation_data[0]))).round()
            val_targ = self.validation_data[1]
            _val_f1 = f1_score(val_targ, val_predict)
            _val_recall = recall_score(val_targ, val_predict)
            _val_precision = precision_score(val_targ, val_predict)
            self.val_f1s.append(_val_f1)
            self.val_recalls.append(_val_recall)
            self.val_precisions.append(_val_precision)
            print(" - val_f1: %f - val_precision: %f - val_recall %f" % (_val_f1, _val_precision, _val_recall))
            return

    metrics = Metrics()

    on_train_begin is called at the beginning of training. Here we initialize three lists to hold the quantities of interest, which are then computed in on_epoch_end. Later on, we can access these lists as regular instance variables, for example:

    print(metrics.val_f1s)
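
    Note that in newer releases of Keras/tf.keras the callback may no longer be given self.validation_data automatically; if you hit that, one workaround is to hand the validation set to the callback yourself. A minimal sketch, with constructor arguments of my own naming rather than a Keras API:

    import numpy as np
    from keras.callbacks import Callback
    from sklearn.metrics import f1_score

    class ValF1(Callback):
        def __init__(self, val_data, val_targ):
            super().__init__()
            self.val_data = val_data    # validation inputs, stored explicitly
            self.val_targ = val_targ    # validation targets
            self.val_f1s = []

        def on_epoch_end(self, epoch, logs={}):
            val_predict = np.round(self.model.predict(self.val_data))
            _val_f1 = f1_score(self.val_targ, val_predict)
            self.val_f1s.append(_val_f1)
            print(" - val_f1: %f" % _val_f1)

    # metrics = ValF1(validation_data, validation_target)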

    Define the model, and pass the callback via the callbacks argument of the fit function (a minimal model sketch follows the fit call below):

    model.fit(training_data, training_target,
              validation_data=(validation_data, validation_target),
              epochs=10,
              batch_size=64,
              callbacks=[metrics])
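
    For completeness, a minimal binary classifier that this fit call could be used with might look like the following; the input dimension and layer sizes are placeholders, not taken from the original post:

    from keras.models import Sequential
    from keras.layers import Dense

    # Hypothetical architecture; input_dim and layer widths are assumptions.
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=20))
    model.add(Dense(1, activation='sigmoid'))   # single sigmoid output for binary labels
    model.compile(optimizer='adam', loss='binary_crossentropy')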

    The printout during training would look like this:

    Epoch 1/10
    32320/32374 [============================>.] - ETA: 0s - loss: 0.0414 - val_f1: 0.375000 - val_precision: 0.782609 - val_recall 0.246575
    32374/32374 [==============================] - 23s - loss: 0.0414 - val_loss: 0.0430

    That’s it. Have fun training!

  • Original post: https://www.cnblogs.com/bnuvincent/p/7484342.html