  • Keras - Overfitting and Underfitting

    1, Overview

    We saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing.

    In other words, our model would overfit to the training data. Learning how to deal with overfitting is important. Although it is often possible to achieve high accuracy on the training set, what we really want is to develop models that generalize well to test data (data they haven't seen before).

    The opposite of overfitting is underfitting. Underfitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: the model is not powerful enough, is over-regularized, or has simply not been trained long enough. In that case the network has not learned the relevant patterns in the training data.

    We need to strike a balance between overfitting and underfitting.

    2, Reducing overfitting

    More training data, a simpler model, regularization, and dropout.

    3, Encoding the sentences as multi-hot vectors

    import numpy as np
    import tensorflow as tf
    from tensorflow import keras

    NUM_WORDS = 10000

    (train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)

    def multi_hot_sequences(sequences, dimension):
        # Create an all-zero matrix of shape (len(sequences), dimension)
        results = np.zeros((len(sequences), dimension))
        for i, word_indices in enumerate(sequences):
            results[i, word_indices] = 1.0  # set specific indices of results[i] to 1s
            if i == 100:
                print(results)
        return results

    train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)
    test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
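
    A quick sanity check (a minimal sketch, not part of the original post, assuming matplotlib is installed): because the word indices are sorted by frequency, most of the 1-values in an encoded review should cluster near index 0.

    import matplotlib.pyplot as plt

    # Every non-zero position marks a word index that occurs in the first review
    plt.plot(train_data[0])
    plt.xlabel("word index")
    plt.ylabel("present (0/1)")
    plt.show()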

    4, Checking how many words a review contains

    np.sum(test_data[1000]) counts how many distinct words appear in a review, and np.nonzero(test_data[0]) lists the indices of those words.
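
    For example (a small illustrative sketch; the review indices are arbitrary):

    # Number of distinct words in review 1000 (the multi-hot row sums to that count)
    print(np.sum(test_data[1000]))

    # Indices of the words that appear in review 0
    print(np.nonzero(test_data[0]))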

    5, Three models

    A baseline model that is about right, one that is too complex (bigger), and one that is too simple (smaller); a sketch of such definitions follows.
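
    The post does not show the three model definitions. The sketch below uses the layer widths from the TensorFlow overfit-and-underfit tutorial this post follows (16 units for the baseline, 4 for the smaller model, 512 for the bigger one); those widths are assumptions, not the author's original code.

    # Assumed widths: baseline (16), smaller (4), bigger (512)
    baseline_model = keras.models.Sequential([
        keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
        keras.layers.Dense(16, activation=tf.nn.relu),
        keras.layers.Dense(1, activation=tf.nn.sigmoid)
    ])

    smaller_model = keras.models.Sequential([
        keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
        keras.layers.Dense(4, activation=tf.nn.relu),
        keras.layers.Dense(1, activation=tf.nn.sigmoid)
    ])

    bigger_model = keras.models.Sequential([
        keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
        keras.layers.Dense(512, activation=tf.nn.relu),
        keras.layers.Dense(1, activation=tf.nn.sigmoid)
    ])

    # Compile all three identically so their training histories are comparable
    for m in (baseline_model, smaller_model, bigger_model):
        m.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', 'binary_crossentropy'])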

    6, Using the History object returned by fit

    bigger_history = bigger_model.fit(train_data, train_labels,
                                      epochs=20,
                                      batch_size=512,
                                      validation_data=(test_data, test_labels),
                                      verbose=2)
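
    The returned History object records the per-epoch metrics, so training and validation curves can be compared. The helper below is a hypothetical sketch (plot_history is not from the original post) and assumes matplotlib is available:

    import matplotlib.pyplot as plt

    def plot_history(histories, key='binary_crossentropy'):
        # Dashed lines: validation metric; solid lines: training metric
        plt.figure(figsize=(16, 10))
        for name, history in histories:
            val = plt.plot(history.epoch, history.history['val_' + key],
                           '--', label=name + ' val')
            plt.plot(history.epoch, history.history[key],
                     color=val[0].get_color(), label=name + ' train')
        plt.xlabel('Epochs')
        plt.ylabel(key.replace('_', ' ').title())
        plt.legend()
        plt.show()

    # Histories from any models trained the same way can be added to the list
    plot_history([('bigger', bigger_history)])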

    7, Countering overfitting with L2 regularization

    l2_model = keras.models.Sequential([
        keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                           activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
        keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                           activation=tf.nn.relu),
        keras.layers.Dense(1, activation=tf.nn.sigmoid)
    ])

    l2_model.compile(optimizer='adam',
                     loss='binary_crossentropy',
                     metrics=['accuracy', 'binary_crossentropy'])

    l2_model_history = l2_model.fit(train_data, train_labels,
                                    epochs=20,
                                    batch_size=512,
                                    validation_data=(test_data, test_labels),
                                    verbose=2)

    8, Countering overfitting with dropout

    dpt_model = keras.models.Sequential([
        keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(16, activation=tf.nn.relu),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(1, activation=tf.nn.sigmoid)
    ])

    dpt_model.compile(optimizer='adam',
                      loss='binary_crossentropy',
                      metrics=['accuracy', 'binary_crossentropy'])

    dpt_model_history = dpt_model.fit(train_data, train_labels,
                                      epochs=20,
                                      batch_size=512,
                                      validation_data=(test_data, test_labels),
                                      verbose=2)
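
    With the same hypothetical plot_history helper sketched in section 6, the effect of the two techniques can be compared (a baseline history trained the same way could be added to the list):

    plot_history([('l2', l2_model_history),
                  ('dropout', dpt_model_history)])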

  • Original post: https://www.cnblogs.com/augustone/p/10507881.html