  • 【593】ResNet Residual Networks

    Reference: Keras implementation of residual connections for Inception modules and convolutional layers

    Reference: Keras Implementation of ResNet-50 (Residual Networks) Architecture from Scratch

    Reference: Understanding the residual network ResNet in one article

    Reference: ResNet implementation in Keras


      In a residual block, $F(x)$ denotes the output of one or more convolutional layers, which is added back to the input: $F(x) + x$. The addition is element-wise, not a concatenation.

      

    from keras.layers import Conv2D, Input, Add, Activation
    
    # input tensor for a 3-channel 256x256 image
    x = Input(shape=(256, 256, 3))
    
    # 3x3 conv with 3 output channels (same as the input) and 'same' padding,
    # so the output shape matches the input shape exactly
    y = Conv2D(3, (3, 3), padding='same')(x)
    
    # element-wise sum x + y (the skip connection); both tensors must have the same shape
    z = Add()([x, y])
    
    z = Activation('relu')(z)
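
      To verify that the residual sum preserves the input shape, you can wrap the graph in a Model and print its summary (a minimal sketch, assuming x and z from the snippet above are in scope):

    from keras.models import Model
    
    model = Model(inputs=x, outputs=z)
    model.summary()  # the Add layer's output shape is (None, 256, 256, 3), same as the input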

      


    1. Identity Block

      The identity block is the standard block used in ResNets and corresponds to the case where the input activation has the same dimension as the output activation.

      

    from keras.layers import Conv2D, BatchNormalization, Activation, Add
    from keras.initializers import glorot_uniform
    
    def identity_block(X, f, filters, stage, block):
        # layer names follow the convention res{stage}{block}_branch{path}
        conv_name_base = 'res' + str(stage) + block + '_branch'
        bn_name_base = 'bn' + str(stage) + block + '_branch'
        F1, F2, F3 = filters
    
        # save the input for the skip connection
        X_shortcut = X
    
        # main path: 1x1 -> fxf -> 1x1 bottleneck, each conv followed by batch norm
        X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
        X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
        X = Activation('relu')(X)
    
        X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
        X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
        X = Activation('relu')(X)
    
        X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
        X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
    
        # skip connection: add the unchanged input, then apply the final ReLU
        X = Add()([X, X_shortcut])
        X = Activation('relu')(X)
    
        return X
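
      A short usage sketch (the input shape and stage/block labels are illustrative; note that F3 must equal the number of input channels, here 256, or the element-wise Add will fail):

    from keras.layers import Input
    
    X_in = Input(shape=(56, 56, 256))
    X_out = identity_block(X_in, f=3, filters=[64, 64, 256], stage=2, block='b')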
    

    2. Convolutional Block

      We can use this type of block when the input and output dimensions don't match up. The difference from the identity block is that there is a Conv2D layer (plus batch normalization) in the shortcut path.

      

    def convolutional_block(X, f, filters, stage, block, s=2):
        # same naming convention and imports as identity_block above
        conv_name_base = 'res' + str(stage) + block + '_branch'
        bn_name_base = 'bn' + str(stage) + block + '_branch'
        F1, F2, F3 = filters
    
        # save the input for the shortcut path
        X_shortcut = X
    
        # main path: the first 1x1 conv uses stride s, so this block can downsample
        X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
        X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
        X = Activation('relu')(X)
    
        X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
        X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
        X = Activation('relu')(X)
    
        X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
        X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
    
        # shortcut path: a strided 1x1 conv + batch norm projects the input to
        # F3 channels and the downsampled spatial size, so the shapes match for Add
        X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
        X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)
    
        # skip connection and final ReLU
        X = Add()([X, X_shortcut])
        X = Activation('relu')(X)
    
        return X
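
      These two blocks are the building units of a full ResNet: each stage starts with a convolutional block (which changes the dimensions), followed by identity blocks. A minimal sketch of one stage in the usual ResNet-50 layout (the input shape and labels are illustrative, not the full network):

    from keras.layers import Input
    
    X = Input(shape=(56, 56, 64))  # e.g. after the initial 7x7 conv + max pooling
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')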
    
  • Original post: https://www.cnblogs.com/alex-bn-lee/p/14972481.html