  • [TensorFlow in Action] Implementing the Classic Convolutional Neural Network ResNet with TensorFlow

    ResNet

      ResNet (Residual Neural Network) uses Residual Units to successfully train a 152-layer deep neural network. It won the ILSVRC 2015 competition with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet, a remarkable result. The ResNet structure can dramatically speed up the training of extremely deep networks and also improves model accuracy substantially. ResNet generalizes very well as a building block and can be applied directly to Inception Net.

      Before ResNet, Professor Schmidhuber's group proposed the Highway Network, whose principle is very similar to ResNet's. Network depth is generally considered crucial to performance, but the deeper the network, the harder it is to train; the goal of the Highway Network is to solve the problem that extremely deep networks are difficult to train. The Highway Network effectively modifies each layer's activation: whereas an ordinary layer applies only a nonlinear transform to its input, a Highway layer also lets a certain fraction of the original input x pass through, i.e. y = H(x, W_H) · T(x, W_T) + x · C(x, W_C), where T is the transform gate and C is the carry gate; the paper sets C = 1 - T. A portion of the previous layer's information can therefore skip the matrix multiplication and nonlinear transform and be passed directly to the next layer, like an information highway, hence the name Highway Network. Gating units learn how to control the information flow through the network, i.e. how much of the original input should be carried through. In principle this design allows networks of arbitrary depth to be trained, and its optimization is largely independent of depth, whereas traditional architectures are very sensitive to depth, with training difficulty growing sharply as depth increases.
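      As a minimal sketch (my own illustration with TF-Slim, not code from the original post; the function and scope names are made up), a single fully connected highway layer with C = 1 - T could look like this:

    import tensorflow as tf
    slim = tf.contrib.slim

    def highway_layer(x, num_units, scope='highway'):
      # y = H(x) * T(x) + x * (1 - T(x)), i.e. the carry gate is C = 1 - T as in the paper.
      # Assumes x already has num_units features so the carry term x * (1 - T) is well shaped.
      with tf.variable_scope(scope):
        H = slim.fully_connected(x, num_units, activation_fn=tf.nn.relu, scope='transform')
        # A negative bias makes the gate initially favor carrying the input straight through.
        T = slim.fully_connected(x, num_units, activation_fn=tf.nn.sigmoid,
                                 biases_initializer=tf.constant_initializer(-1.0),
                                 scope='gate')
        return H * T + x * (1.0 - T)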

      ResNet likewise allows the input to be passed directly to later layers. Its original inspiration came from the following problem: as a network's depth keeps increasing, a degradation problem appears, where accuracy first rises, then saturates, and then drops if depth is increased further. Suppose a relatively shallow network has already reached its saturated accuracy; appending a few identity-mapping layers (y = x) should at the very least not increase the error, i.e. a deeper network should not produce a higher training error. This idea of passing an earlier layer's output directly to a later layer through an identity mapping is the source of ResNet's inspiration. Suppose a segment of the network has input x and desired output H(x). If we pass the input x straight to the output as the initial result, then what we actually need to learn is F(x) = H(x) - x. ResNet thus changes the learning target: instead of learning the complete output H(x), it learns only the difference between output and input, H(x) - x, i.e. the residual. The figure below shows a ResNet residual learning unit (Residual Unit):

      [Figure: ResNet residual learning unit (Residual Unit)]
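      As a rough illustration of the idea (my own simplified sketch with TF-Slim, omitting the batch normalization of the real unit), the convolutional branch learns the residual F(x) and the shortcut adds the input back:

    import tensorflow as tf
    slim = tf.contrib.slim

    def residual_unit_sketch(x, scope='residual_unit'):
      # The branch learns F(x) = H(x) - x; adding the identity shortcut yields H(x) = F(x) + x.
      # Assumes the number of channels stays the same across the unit.
      depth = x.get_shape()[-1].value
      with tf.variable_scope(scope):
        residual = slim.conv2d(x, depth, [3, 3], scope='conv1')
        residual = slim.conv2d(residual, depth, [3, 3], activation_fn=None, scope='conv2')
        return tf.nn.relu(x + residual)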

      The biggest difference between a plain, directly connected convolutional network and ResNet is that ResNet has many bypass branches that feed the input directly into later layers, so that those layers can learn the residual directly; this structure is called a shortcut or skip connection. The figure below compares VGGNet-19, a plain 34-layer convolutional network, and a 34-layer ResNet:

      When information passes through a traditional convolutional or fully connected layer, some of it is inevitably lost or degraded. ResNet alleviates this problem to some extent: by routing the input directly to the output, it protects the integrity of the information, and the network only has to learn the part that differs between input and output, which simplifies the learning target and reduces the difficulty.

      The ResNet paper proposes both a two-layer and a three-layer residual learning unit. The two-layer unit consists of two 3*3 convolutions with the same number of output channels (since the residual equals the target output minus the input, H(x) - x, the input and output dimensions must match). The three-layer unit borrows the 1*1 convolution from Network In Network and Inception Net, placing a 1*1 convolution both before and after the middle 3*3 convolution, i.e. first reducing and then restoring the dimensionality. In addition, when the input and output dimensions differ, we can apply a linear projection to x to change its dimension before connecting it to the later layers, as shown in the figure below and sketched in the example that follows:
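      A rough sketch of the three-layer bottleneck variant with a projection shortcut (again my own simplified illustration, without batch normalization; the pre-activation version actually used in this post is the bottleneck() function in the code further below):

    import tensorflow as tf
    slim = tf.contrib.slim

    def bottleneck_sketch(x, depth, depth_bottleneck, scope='bottleneck_sketch'):
      with tf.variable_scope(scope):
        depth_in = x.get_shape()[-1].value
        if depth == depth_in:
          shortcut = x  # identity shortcut when dimensions already match
        else:
          # Linear projection to change the dimension of x when input and output differ.
          shortcut = slim.conv2d(x, depth, [1, 1], activation_fn=None, scope='shortcut')
        residual = slim.conv2d(x, depth_bottleneck, [1, 1], scope='conv1')          # reduce
        residual = slim.conv2d(residual, depth_bottleneck, [3, 3], scope='conv2')
        residual = slim.conv2d(residual, depth, [1, 1],
                               activation_fn=None, scope='conv3')                   # restore
        return tf.nn.relu(shortcut + residual)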


      The figure below lists the ResNet configurations at different depths; their basic structure is very similar, namely stacks of the two-layer and three-layer residual units described above. Counting bottleneck units per block, ResNet-50 uses (3, 4, 6, 3), ResNet-101 uses (3, 4, 23, 3), ResNet-152 uses (3, 8, 36, 3) and ResNet-200 uses (3, 24, 36, 3), which is exactly what the resnet_v2_50/101/152/200 functions in the code below construct.

      With the ResNet structure in place, the phenomenon of training error growing as layers are added disappears: a ResNet's training error keeps decreasing as the depth increases, and its performance on the test set improves as well.

      The difference between ResNet V2 and ResNet V1 is that, by studying the propagation formulas of the residual unit, the authors found that the forward and backward signals can propagate directly when the nonlinear activation on the skip connection (such as ReLU) is replaced by an identity mapping (y = x). In addition, ResNet V2 applies Batch Normalization in every layer. With these changes, the new residual unit is easier to train and generalizes better than before.

    Below we implement ResNet V2 with TensorFlow:

    """
    
    Typical use:
    
       from tensorflow.contrib.slim.nets import resnet_v2
    
    ResNet-101 for image classification into 1000 classes:
    
       # inputs has shape [batch, 224, 224, 3]
       with slim.arg_scope(resnet_v2.resnet_arg_scope(is_training)):
          net, end_points = resnet_v2.resnet_v2_101(inputs, 1000)
    
    ResNet-101 for semantic segmentation into 21 classes:
    
       # inputs has shape [batch, 513, 513, 3]
       with slim.arg_scope(resnet_v2.resnet_arg_scope(is_training)):
          net, end_points = resnet_v2.resnet_v2_101(inputs,
                                                    21,
                                                    global_pool=False,
                                                    output_stride=16)
    """
    import collections
    import tensorflow as tf
    slim = tf.contrib.slim
    
    
    
    class Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):
      """A named tuple describing a ResNet block.
    
      Its parts are:
        scope: The scope of the `Block`.
        unit_fn: The ResNet unit function which takes as input a `Tensor` and
          returns another `Tensor` with the output of the ResNet unit.
        args: A list of length equal to the number of units in the `Block`. The list
          contains one (depth, depth_bottleneck, stride) tuple for each unit in the
          block to serve as argument to unit_fn.
      """
    
    
    def subsample(inputs, factor, scope=None):
      # Downsamples the input along the spatial dimensions.
      # inputs: the input tensor; factor: the subsampling factor.
      """Subsamples the input along the spatial dimensions.
    
      Args:
        inputs: A `Tensor` of size [batch, height_in, width_in, channels].
        factor: The subsampling factor.
        scope: Optional variable_scope.
    
      Returns:
        output: A `Tensor` of size [batch, height_out, width_out, channels] with the
          input, either intact (if factor == 1) or subsampled (if factor > 1).
      """
      if factor == 1:
        return inputs
      else:
        return slim.max_pool2d(inputs, [1, 1], stride=factor, scope=scope)
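      # Example (illustrative shapes, not from the original post): with factor=2, a
      # [8, 56, 56, 256] feature map becomes [8, 28, 28, 256]; the [1, 1] max pool with
      # stride 2 simply keeps every second row and column, without mixing any values.
      #
      #   x = tf.zeros([8, 56, 56, 256])
      #   y = subsample(x, factor=2)   # shape [8, 28, 28, 256]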
    
    
    def conv2d_same(inputs, num_outputs, kernel_size, stride, scope=None):
      # Convolution that reproduces 'SAME' padding behaviour even when stride > 1.
      """Strided 2-D convolution with 'SAME' padding.
    
      When stride > 1, then we do explicit zero-padding, followed by conv2d with
      'VALID' padding.
    
      Note that
    
         net = conv2d_same(inputs, num_outputs, 3, stride=stride)
    
      is equivalent to
    
         net = slim.conv2d(inputs, num_outputs, 3, stride=1, padding='SAME')
         net = subsample(net, factor=stride)
    
      whereas
    
         net = slim.conv2d(inputs, num_outputs, 3, stride=stride, padding='SAME')
    
      is different when the input's height or width is even, which is why we add the
      current function. For more details, see ResnetUtilsTest.testConv2DSameEven().
    
      Args:
        inputs: A 4-D tensor of size [batch, height_in, width_in, channels].
        num_outputs: An integer, the number of output filters.
        kernel_size: An int with the kernel_size of the filters.
        stride: An integer, the output stride.
        scope: Scope.
    
      Returns:
        output: A 4-D tensor of size [batch, height_out, width_out, channels] with
          the convolution output.
      """
      if stride == 1:
        return slim.conv2d(inputs, num_outputs, kernel_size, stride=1,
                           padding='SAME', scope=scope)
      else:
        # Explicit zero-padding so the result matches 'SAME' padding at the given stride.
        pad_total = kernel_size - 1
        pad_beg = pad_total // 2
        pad_end = pad_total - pad_beg
        inputs = tf.pad(inputs,
                        [[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
        return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride,
                           padding='VALID', scope=scope)
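      # Worked example (illustrative, not from the original post): conv2d_same(x, 64, 7, stride=2)
      # on a 224x224 input pads pad_total = 6 (pad_beg = 3, pad_end = 3) to 230x230 and then
      # applies a 7x7 'VALID' convolution with stride 2, giving (230 - 7) // 2 + 1 = 112,
      # i.e. a 112x112 output that matches 'SAME' behaviour for even and odd input sizes alike.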
    
    
    @slim.add_arg_scope
    def stack_blocks_dense(net, blocks,
                           outputs_collections=None):
      # Stacks the ResNet Blocks: the outer loop iterates over the Blocks, and the inner loop
      # builds each Residual Unit in turn and chains them together.
      """Stacks ResNet `Blocks` and controls output feature density.
    
      First, this function creates scopes for the ResNet in the form of
      'block_name/unit_1', 'block_name/unit_2', etc.
    
    
      Args:
        net: A `Tensor` of size [batch, height, width, channels].
        blocks: A list of length equal to the number of ResNet `Blocks`. Each
          element is a ResNet `Block` object describing the units in the `Block`.
        outputs_collections: Collection to add the ResNet block outputs.
    
      Returns:
        net: Output tensor 
    
      """
      for block in blocks:
        with tf.variable_scope(block.scope, 'block', [net]) as sc:
          for i, unit in enumerate(block.args):
    
            with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
              unit_depth, unit_depth_bottleneck, unit_stride = unit
              net = block.unit_fn(net,
                                  depth=unit_depth,
                                  depth_bottleneck=unit_depth_bottleneck,
                                  stride=unit_stride)
          net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)
          
      return net
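      # For example (illustrative), Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)])
      # as used in resnet_v2_50 below creates the scopes block1/unit_1, block1/unit_2 and
      # block1/unit_3, where only the last unit has stride 2 and halves the spatial resolution.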
    
    
    def resnet_arg_scope(is_training=True,
                         weight_decay=0.0001,
                         batch_norm_decay=0.997,
                         batch_norm_epsilon=1e-5,
                         batch_norm_scale=True):
      # Defines default parameter values for the layer functions used throughout the network.
      """Defines the default ResNet arg scope.
    
      TODO(gpapan): The batch-normalization related default values above are
        appropriate for use in conjunction with the reference ResNet models
        released at https://github.com/KaimingHe/deep-residual-networks. When
        training ResNets from scratch, they might need to be tuned.
    
      Args:
        is_training: Whether or not we are training the parameters in the batch
          normalization layers of the model.
        weight_decay: The weight decay to use for regularizing the model.
        batch_norm_decay: The moving average decay when estimating layer activation
          statistics in batch normalization.
        batch_norm_epsilon: Small constant to prevent division by zero when
          normalizing activations by their variance in batch normalization.
        batch_norm_scale: If True, uses an explicit `gamma` multiplier to scale the
          activations in the batch normalization layer.
    
      Returns:
        An `arg_scope` to use for the resnet models.
      """
      batch_norm_params = {
          'is_training': is_training,
          'decay': batch_norm_decay,
          'epsilon': batch_norm_epsilon,
          'scale': batch_norm_scale,
          'updates_collections': tf.GraphKeys.UPDATE_OPS,
      }
    
      with slim.arg_scope(
          [slim.conv2d],
          weights_regularizer=slim.l2_regularizer(weight_decay),
          weights_initializer=slim.variance_scaling_initializer(),
          activation_fn=tf.nn.relu,
          normalizer_fn=slim.batch_norm,
          normalizer_params=batch_norm_params):
        with slim.arg_scope([slim.batch_norm], **batch_norm_params):
          # The following implies padding='SAME' for pool1, which makes feature
          # alignment easier for dense prediction tasks. This is also used in
          # https://github.com/facebook/fb.resnet.torch. However the accompanying
          # code of 'Deep Residual Learning for Image Recognition' uses
          # padding='VALID' for pool1. You can switch to that choice by setting
          # slim.arg_scope([slim.max_pool2d], padding='VALID').
          with slim.arg_scope([slim.max_pool2d], padding='SAME') as arg_sc:
            return arg_sc
    
    
    
    
    @slim.add_arg_scope
    def bottleneck(inputs, depth, depth_bottleneck, stride,
                   outputs_collections=None, scope=None):
      # The core bottleneck residual learning unit (pre-activation variant).
      """Bottleneck residual unit variant with BN before convolutions.
    
      This is the full preactivation residual unit variant proposed in [2]. See
      Fig. 1(b) of [2] for its definition. Note that we use here the bottleneck
      variant which has an extra bottleneck layer.
    
      When putting together two consecutive ResNet blocks that use this unit, one
      should use stride = 2 in the last unit of the first block.
    
      Args:
        inputs: A tensor of size [batch, height, width, channels].
        depth: The depth of the ResNet unit output.
        depth_bottleneck: The depth of the bottleneck layers.
        stride: The ResNet unit's stride. Determines the amount of downsampling of
          the units output compared to its input.
        outputs_collections: Collection to add the ResNet unit output.
        scope: Optional variable_scope.
    
      Returns:
        The ResNet unit's output.
      """
      with tf.variable_scope(scope, 'bottleneck_v2', [inputs]) as sc:
        depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
        preact = slim.batch_norm(inputs, activation_fn=tf.nn.relu, scope='preact')
        if depth == depth_in:
          shortcut = subsample(inputs, stride, 'shortcut')
        else:
          shortcut = slim.conv2d(preact, depth, [1, 1], stride=stride,
                                 normalizer_fn=None, activation_fn=None,
                                 scope='shortcut')
    
        residual = slim.conv2d(preact, depth_bottleneck, [1, 1], stride=1,
                               scope='conv1')
        residual = conv2d_same(residual, depth_bottleneck, 3, stride,
                                            scope='conv2')
        residual = slim.conv2d(residual, depth, [1, 1], stride=1,
                               normalizer_fn=None, activation_fn=None,
                               scope='conv3')
    
        output = shortcut + residual
    
        return slim.utils.collect_named_outputs(outputs_collections,
                                                sc.name,
                                                output)
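      # Shape example (illustrative): for inputs of size [batch, 56, 56, 64] and a unit with
      # depth=256, depth_bottleneck=64, stride=1, depth differs from depth_in, so the shortcut
      # is a 1x1 projection to 256 channels; the residual branch is 1x1/64 -> 3x3/64 -> 1x1/256,
      # and the unit output shortcut + residual has shape [batch, 56, 56, 256].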
    
    
    def resnet_v2(inputs,
                  blocks,
                  num_classes=None,
                  global_pool=True,
                  include_root_block=True,
                  reuse=None,
                  scope=None):
      # Main generator function for the ResNet V2 model family.
      """Generator for v2 (preactivation) ResNet models.
    
      This function generates a family of ResNet v2 models. See the resnet_v2_*()
      methods for specific model instantiations, obtained by selecting different
      block instantiations that produce ResNets of various depths.
    
    
      Args:
        inputs: A tensor of size [batch, height_in, width_in, channels].
        blocks: A list of length equal to the number of ResNet blocks. Each element
          is a resnet_utils.Block object describing the units in the block.
        num_classes: Number of predicted classes for classification tasks. If None
          we return the features before the logit layer.
        global_pool: If True, performs global average pooling before computing the logits.
        include_root_block: If True, include the initial convolution followed by
          max-pooling, if False excludes it. If excluded, `inputs` should be the
          results of an activation-less convolution.
        reuse: whether or not the network and its variables should be reused. To be
          able to reuse 'scope' must be given.
        scope: Optional variable_scope.
    
    
      Returns:
        net: A rank-4 tensor of size [batch, height_out, width_out, channels_out].
          If global_pool is False, then height_out and width_out are reduced by the
          network's overall output stride (32 for the resnet_v2_* models defined below)
          compared to the respective height_in and width_in,
          else both height_out and width_out equal one. If num_classes is None, then
          net is the output of the last ResNet block, potentially after global
          average pooling. If num_classes is not None, net contains the pre-softmax
          activations.
        end_points: A dictionary from components of the network to the corresponding
          activation.
    
      """
      with tf.variable_scope(scope, 'resnet_v2', [inputs], reuse=reuse) as sc:
        end_points_collection = sc.original_name_scope + '_end_points'
        with slim.arg_scope([slim.conv2d, bottleneck,
                             stack_blocks_dense],
                            outputs_collections=end_points_collection):
          net = inputs
          if include_root_block:
            # We do not include batch normalization or activation functions in conv1
            # because the first ResNet unit will perform these. Cf. Appendix of [2].
            with slim.arg_scope([slim.conv2d],
                                activation_fn=None, normalizer_fn=None):
              net = conv2d_same(net, 64, 7, stride=2, scope='conv1')
            net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')
          net = stack_blocks_dense(net, blocks)
          # This is needed because the pre-activation variant does not have batch
          # normalization or activation functions in the residual unit output. See
          # Appendix of [2].
          net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='postnorm')
          if global_pool:
            # Global average pooling.
            net = tf.reduce_mean(net, [1, 2], name='pool5', keep_dims=True)
          if num_classes is not None:
            net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None,
                              normalizer_fn=None, scope='logits')
          # Convert end_points_collection into a dictionary of end_points.
          end_points = slim.utils.convert_collection_to_dict(end_points_collection)
          if num_classes is not None:
            end_points['predictions'] = slim.softmax(net, scope='predictions')
          return net, end_points
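    # Spatial sizes for a 224x224 input (illustrative walk-through): conv1 (7x7, stride 2) -> 112,
    # pool1 (3x3, stride 2) -> 56; with the block definitions below, block1, block2 and block3 each
    # end in a stride-2 unit -> 28, 14, 7, while block4 keeps stride 1 -> 7x7; global average
    # pooling then reduces this to 1x1 before the 1x1 'logits' convolution.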
    
    
    
    def resnet_v2_50(inputs,
                     num_classes=None,
                     global_pool=True,
                     reuse=None,
                     scope='resnet_v2_50'):
      """ResNet-50 model of [1]. See resnet_v2() for arg and return description."""
      blocks = [
          Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
          Block(
              'block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),
          Block(
              'block3', bottleneck, [(1024, 256, 1)] * 5 + [(1024, 256, 2)]),
          Block(
              'block4', bottleneck, [(2048, 512, 1)] * 3)]
      return resnet_v2(inputs, blocks, num_classes, global_pool,
                       include_root_block=True, reuse=reuse, scope=scope)
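    # Layer-count check (illustrative): 3 + 4 + 6 + 3 = 16 bottleneck units, each contributing
    # 3 convolutions (shortcut projections are not counted), gives 48 layers; together with the
    # initial conv1 and the final logits layer this yields the 50 layers behind the name
    # ResNet-50. The ResNet-101/152/200 definitions below follow the same pattern.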
    
    
    def resnet_v2_101(inputs,
                      num_classes=None,
                      global_pool=True,
                      reuse=None,
                      scope='resnet_v2_101'):
      """ResNet-101 model of [1]. See resnet_v2() for arg and return description."""
      blocks = [
          Block(
              'block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
          Block(
              'block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),
          Block(
              'block3', bottleneck, [(1024, 256, 1)] * 22 + [(1024, 256, 2)]),
          Block(
              'block4', bottleneck, [(2048, 512, 1)] * 3)]
      return resnet_v2(inputs, blocks, num_classes, global_pool,
                       include_root_block=True, reuse=reuse, scope=scope)
    
    
    def resnet_v2_152(inputs,
                      num_classes=None,
                      global_pool=True,
                      reuse=None,
                      scope='resnet_v2_152'):
      """ResNet-152 model of [1]. See resnet_v2() for arg and return description."""
      blocks = [
          Block(
              'block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
          Block(
              'block2', bottleneck, [(512, 128, 1)] * 7 + [(512, 128, 2)]),
          Block(
              'block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),
          Block(
              'block4', bottleneck, [(2048, 512, 1)] * 3)]
      return resnet_v2(inputs, blocks, num_classes, global_pool,
                       include_root_block=True, reuse=reuse, scope=scope)
    
    
    def resnet_v2_200(inputs,
                      num_classes=None,
                      global_pool=True,
                      reuse=None,
                      scope='resnet_v2_200'):
      """ResNet-200 model of [2]. See resnet_v2() for arg and return description."""
      blocks = [
          Block(
              'block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
          Block(
              'block2', bottleneck, [(512, 128, 1)] * 23 + [(512, 128, 2)]),
          Block(
              'block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),
          Block(
              'block4', bottleneck, [(2048, 512, 1)] * 3)]
      return resnet_v2(inputs, blocks, num_classes, global_pool,
                       include_root_block=True, reuse=reuse, scope=scope)
    
      
    from datetime import datetime
    import math
    import time
    def time_tensorflow_run(session, target, info_string):
        # Benchmark helper: times num_batches forward passes of `target` after a warm-up
        # period and prints the mean and standard deviation of the per-batch duration.
        num_steps_burn_in = 10
        total_duration = 0.0
        total_duration_squared = 0.0
        for i in range(num_batches + num_steps_burn_in):
            start_time = time.time()
            _ = session.run(target)
            duration = time.time() - start_time
            if i >= num_steps_burn_in:
                if not i % 10:
                    print ('%s: step %d, duration = %.3f' %
                           (datetime.now(), i - num_steps_burn_in, duration))
                total_duration += duration
                total_duration_squared += duration * duration
        mn = total_duration / num_batches
        vr = total_duration_squared / num_batches - mn * mn
        sd = math.sqrt(vr)
        print ('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
               (datetime.now(), info_string, num_batches, mn, sd))
        
    # Benchmark the forward pass of ResNet-152 V2 on random 224x224 inputs.
    batch_size = 32
    num_batches = 100
    height, width = 224, 224
    inputs = tf.random_uniform((batch_size, height, width, 3))
    with slim.arg_scope(resnet_arg_scope(is_training=False)):
      net, end_points = resnet_v2_152(inputs, 1000)

    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    time_tensorflow_run(sess, net, "Forward")
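    # The script prints timing lines of the form below (values depend entirely on the machine
    # and are shown only as a hypothetical format example, not as measured results):
    #   <timestamp>: step 0, duration = <seconds>
    #   ...
    #   <timestamp>: Forward across 100 steps, <mean> +/- <std> sec / batch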

      ResNet can be regarded as a milestone breakthrough in deep learning: it made the training of extremely deep neural networks genuinely practical.
