  • (4) Using Dropout and Regularization in Keras

    Video tutorial source

    https://www.bilibili.com/video/av40787141?from=search&seid=17003307842787199553

    Notes


    Dropout is used to mitigate overfitting: it narrows the gap between training and test accuracy.

    When the training-set and test-set results differ by a large margin, the model is in an overfitting state.


    With dropout enabled, the per-epoch training accuracy may look unimpressive and then jump at the final evaluation. This is because only part of the neurons are active during training, while all neurons are active during the final evaluation.
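
    A minimal NumPy sketch of that train/test asymmetry (inverted dropout, which is what Keras's Dropout layer implements; the function name and values here are illustrative, not Keras API):

    import numpy as np

    def dropout_forward(x, rate=0.4, training=True):
        if not training:
            return x  # evaluation: every neuron works, no rescaling needed
        mask = np.random.rand(*x.shape) >= rate   # keep ~60% of the units
        return x * mask / (1.0 - rate)            # rescale so E[output] stays the same

    x = np.ones(10)
    print(dropout_forward(x, training=True))   # some zeros, survivors scaled to ~1.67
    print(dropout_forward(x, training=False))  # all ones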


    Regularization likewise helps mitigate overfitting.


    Softmax is generally used in the last layer of a neural network.
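
    For reference, softmax turns the last layer's raw scores into a probability distribution over the classes. A minimal NumPy sketch (illustrative, not the Keras implementation):

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))  # subtract the max for numerical stability
        return e / e.sum()

    print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.659 0.242 0.099], sums to 1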


    import numpy as np
    from keras.datasets import mnist  #downloads the MNIST dataset if not cached
    from keras.utils import np_utils
    from keras.models import Sequential  #sequential model
    from keras.layers import Dense,Dropout  #import Dropout here
    from keras.optimizers import SGD


    C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters
    Using TensorFlow backend.


    #load the data
    (x_train,y_train),(x_test,y_test)=mnist.load_data()
    #check the shapes
    #(60000,28,28)
    print('x_shape:',x_train.shape)
    #(60000,)
    print('y_shape:',y_train.shape)
    #(60000,28,28)->(60000,784)
    #60000 rows; -1 lets the number of columns be inferred automatically
    #dividing by 255 normalizes the data to [0,1]
    x_train=x_train.reshape(x_train.shape[0],-1)/255.0 #convert the data format
    x_test=x_test.reshape(x_test.shape[0],-1)/255.0 #convert the data format
    #convert the labels to one-hot form
    y_train=np_utils.to_categorical(y_train,num_classes=10) #10 classes
    y_test=np_utils.to_categorical(y_test,num_classes=10) #10 classes
    
    
    #build the model: 784 input neurons, 10 output neurons
    #bias initialized to zeros (the default)
    model=Sequential([
        Dense(units=200,input_dim=784,bias_initializer='zeros',activation='tanh'), #tanh activation
        #Dropout(0.4),  #40% of the neurons are dropped
        Dense(units=100,bias_initializer='zeros',activation='tanh'), #tanh activation
        #Dropout(0.4),  #40% of the neurons are dropped
        Dense(units=10,bias_initializer='zeros',activation='softmax')
    ])
    
    #layers can also be added one at a time:
    ###
    #model.add(Dense(...))
    #model.add(Dense(...))
    ###
    
    
    #define the optimizer
    #learning rate 0.2
    sgd=SGD(lr=0.2)
    
    #set the optimizer and loss function, and track accuracy during training
    model.compile(
        optimizer=sgd, #SGD optimizer
        loss='categorical_crossentropy',  #cross-entropy loss converges faster here
        metrics=['accuracy'],  #report accuracy
    )
    
    #train (unlike before, this is the new training call)
    #60000 images, 32 per batch, 10 epochs (one full pass over the 60000 images is one epoch)
    model.fit(x_train,y_train,batch_size=32,epochs=10)
    
    #evaluate the model
    loss,accuracy=model.evaluate(x_test,y_test)

    print('\ntest loss',loss)
    print('\ntest accuracy',accuracy)

    loss,accuracy=model.evaluate(x_train,y_train)

    print('\ntrain loss',loss)
    print('\ntrain accuracy',accuracy)


    x_shape: (60000, 28, 28)
    y_shape: (60000,)
    Epoch 1/10
    60000/60000 [==============================] - 6s 100us/step - loss: 0.2539 - acc: 0.9235
    Epoch 2/10
    60000/60000 [==============================] - 6s 95us/step - loss: 0.1175 - acc: 0.9639
    Epoch 3/10
    60000/60000 [==============================] - 5s 90us/step - loss: 0.0815 - acc: 0.9745
    Epoch 4/10
    60000/60000 [==============================] - 5s 90us/step - loss: 0.0601 - acc: 0.9809
    Epoch 5/10
    60000/60000 [==============================] - 6s 92us/step - loss: 0.0451 - acc: 0.9860
    Epoch 6/10
    60000/60000 [==============================] - 5s 91us/step - loss: 0.0336 - acc: 0.9899
    Epoch 7/10
    60000/60000 [==============================] - 5s 92us/step - loss: 0.0248 - acc: 0.9926
    Epoch 8/10
    60000/60000 [==============================] - 6s 93us/step - loss: 0.0185 - acc: 0.9948
    Epoch 9/10
    60000/60000 [==============================] - 6s 93us/step - loss: 0.0128 - acc: 0.9970
    Epoch 10/10
    60000/60000 [==============================] - 6s 93us/step - loss: 0.0082 - acc: 0.9988
    10000/10000 [==============================] - 0s 39us/step
     
    test loss 0.07058678171953651
     
    test accuracy 0.9786
    60000/60000 [==============================] - 2s 34us/step
     
    train loss 0.0052643890143993
     
    train accuracy 0.9995


    After enabling dropout
    (uncomment the #Dropout(0.4), lines)

    model=Sequential([
        Dense(units=200,input_dim=784,bias_initializer='zeros',activation='tanh'), #tanh activation
        Dropout(0.4),  #40% of the neurons are dropped
        Dense(units=100,bias_initializer='zeros',activation='tanh'), #tanh activation
        Dropout(0.4),  #40% of the neurons are dropped
        Dense(units=10,bias_initializer='zeros',activation='softmax') #softmax output
    ])

    x_shape: (60000, 28, 28)
    y_shape: (60000,)
    Epoch 1/10
    60000/60000 [==============================] - 11s 184us/step - loss: 0.4158 - acc: 0.8753
    Epoch 2/10
    60000/60000 [==============================] - 10s 166us/step - loss: 0.2799 - acc: 0.9177
    Epoch 3/10
    60000/60000 [==============================] - 11s 177us/step - loss: 0.2377 - acc: 0.9302
    Epoch 4/10
    60000/60000 [==============================] - 10s 164us/step - loss: 0.2169 - acc: 0.9356
    Epoch 5/10
    60000/60000 [==============================] - 10s 170us/step - loss: 0.1979 - acc: 0.9413
    Epoch 6/10
    60000/60000 [==============================] - 11s 183us/step - loss: 0.1873 - acc: 0.9439
    Epoch 7/10
    60000/60000 [==============================] - 11s 180us/step - loss: 0.1771 - acc: 0.9472
    Epoch 8/10
    60000/60000 [==============================] - 12s 204us/step - loss: 0.1676 - acc: 0.9501
    Epoch 9/10
    60000/60000 [==============================] - 11s 187us/step - loss: 0.1608 - acc: 0.9527
    Epoch 10/10
    60000/60000 [==============================] - 10s 170us/step - loss: 0.1534 - acc: 0.9542
    10000/10000 [==============================] - 1s 68us/step
     
    test loss 0.09667835112037138
     
    test accuracy 0.9692
    60000/60000 [==============================] - 4s 70us/step
     
    train loss 0.07203661710163578
     
    train accuracy 0.9774666666666667


    PS: This example does not showcase dropout's benefit particularly well, but it serves as a working example of how to use it.



    Regularization


    kernel_regularizer: weight regularization

    bias_regularizer: bias regularization

    activity_regularizer: activity regularization

    The activity is the layer's activation, i.e. the output obtained from the input signal multiplied by the weights plus the bias.

    In practice, weight regularization is the most commonly used of the three.
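
    A sketch of where each of the three hooks attaches on a Dense layer (the l2 factors of 0.001 are arbitrary illustrative values, not a recommendation):

    from keras.layers import Dense
    from keras.regularizers import l2

    layer = Dense(
        units=100,
        activation='tanh',
        kernel_regularizer=l2(0.001),    # penalty on the weight matrix W
        bias_regularizer=l2(0.001),      # penalty on the bias vector b
        activity_regularizer=l2(0.001),  # penalty on the activation f(Wx + b)
    )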

    If the model is complex relative to the data, dropout and regularization can counteract some of the overfitting.

    If the model is simple relative to the data, dropout and regularization may instead hurt training performance.


    import numpy as np
    from keras.datasets import mnist  #downloads the MNIST dataset if not cached
    from keras.utils import np_utils
    from keras.models import Sequential  #sequential model
    from keras.layers import Dense
    from keras.optimizers import SGD
    from keras.regularizers import l2  #import the l2 regularizer (lowercase L)


    #load the data
    (x_train,y_train),(x_test,y_test)=mnist.load_data()
    #check the shapes
    #(60000,28,28)
    print('x_shape:',x_train.shape)
    #(60000,)
    print('y_shape:',y_train.shape)
    #(60000,28,28)->(60000,784)
    #60000 rows; -1 lets the number of columns be inferred automatically
    #dividing by 255 normalizes the data to [0,1]
    x_train=x_train.reshape(x_train.shape[0],-1)/255.0 #convert the data format
    x_test=x_test.reshape(x_test.shape[0],-1)/255.0 #convert the data format
    #convert the labels to one-hot form
    y_train=np_utils.to_categorical(y_train,num_classes=10) #10 classes
    y_test=np_utils.to_categorical(y_test,num_classes=10) #10 classes
    
    
    #build the model: 784 input neurons, 10 output neurons
    #bias initialized to zeros (the default)
    model=Sequential([
        #add weight regularization
        Dense(units=200,input_dim=784,bias_initializer='zeros',activation='tanh',kernel_regularizer=l2(0.0003)), #tanh activation
        Dense(units=100,bias_initializer='zeros',activation='tanh',kernel_regularizer=l2(0.0003)), #tanh activation
        Dense(units=10,bias_initializer='zeros',activation='softmax',kernel_regularizer=l2(0.0003))
    ])
    
    #layers can also be added one at a time:
    ###
    #model.add(Dense(...))
    #model.add(Dense(...))
    ###
    
    
    #define the optimizer
    #learning rate 0.2
    sgd=SGD(lr=0.2)
    
    #set the optimizer and loss function, and track accuracy during training
    model.compile(
        optimizer=sgd, #SGD optimizer
        loss='categorical_crossentropy',  #cross-entropy loss converges faster here
        metrics=['accuracy'],  #report accuracy
    )
    
    #train (unlike before, this is the new training call)
    #60000 images, 32 per batch, 10 epochs (one full pass over the 60000 images is one epoch)
    model.fit(x_train,y_train,batch_size=32,epochs=10)
    
    #evaluate the model
    loss,accuracy=model.evaluate(x_test,y_test)

    print('\ntest loss',loss)
    print('\ntest accuracy',accuracy)

    loss,accuracy=model.evaluate(x_train,y_train)

    print('\ntrain loss',loss)
    print('\ntrain accuracy',accuracy)


    x_shape: (60000, 28, 28)
    y_shape: (60000,)
    Epoch 1/10
    60000/60000 [==============================] - 8s 127us/step - loss: 0.4064 - acc: 0.9202
    Epoch 2/10
    60000/60000 [==============================] - 7s 121us/step - loss: 0.2616 - acc: 0.9603
    Epoch 3/10
    60000/60000 [==============================] - 8s 135us/step - loss: 0.2185 - acc: 0.9683
    Epoch 4/10
    60000/60000 [==============================] - 8s 132us/step - loss: 0.1950 - acc: 0.9723
    Epoch 5/10
    60000/60000 [==============================] - 8s 130us/step - loss: 0.1793 - acc: 0.9754
    Epoch 6/10
    60000/60000 [==============================] - 8s 125us/step - loss: 0.1681 - acc: 0.9775
    Epoch 7/10
    60000/60000 [==============================] - 8s 130us/step - loss: 0.1625 - acc: 0.9783
    Epoch 8/10
    60000/60000 [==============================] - 7s 125us/step - loss: 0.1566 - acc: 0.9797
    Epoch 9/10
    60000/60000 [==============================] - 8s 136us/step - loss: 0.1515 - acc: 0.9811
    Epoch 10/10
    60000/60000 [==============================] - 8s 140us/step - loss: 0.1515 - acc: 0.9808
    10000/10000 [==============================] - 1s 57us/step
    
    test loss 0.17750378291606903
    
    test accuracy 0.9721
    60000/60000 [==============================] - 3s 52us/step
    
    train loss 0.1493431808312734
    
    train accuracy 0.9822666666666666
    
    
    
  • Original article: https://www.cnblogs.com/XUEYEYU/p/keras-learning-4.html