  • Source-code walkthrough of the ResNet backbone (with atrous convolutions) in DeepLab v3+, and how a plain ResNet is assembled from stacked modules

    This post walks through how the ResNet variant with atrous (dilated) convolutions is constructed, and how a plain ResNet stacks repeated modules into a deep convolutional network.

    The first code block is the construction of the modified ResNet backbone used in the PyTorch implementation of DeepLab v3+;

    the second block is the full model structure, shown as printed text (the forward pass itself does not appear in this printout).

import math
import torch
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
from modeling.sync_batchnorm.batchnorm import SynchronizedBatchNorm2d

class Bottleneck(nn.Module):
    # Basic building block of ResNet. The different ResNet stages are built by
    # varying the number of these blocks and the parameters passed to them.
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None, BatchNorm=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = BatchNorm(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               dilation=dilation, padding=dilation, bias=False)
        self.bn2 = BatchNorm(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = BatchNorm(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride
        self.dilation = dilation

    def forward(self, x):
        # Forward pass; the residual (skip) connection is applied here.
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out

class ResNet(nn.Module):
    # ResNet with atrous convolutions, which provides different receptive fields
    # and richer context information.
    def __init__(self, block, layers, output_stride, BatchNorm, pretrained=True):
        # Defines the overall ResNet: a hand-designed stem followed by stacks of
        # Bottleneck blocks built with different parameters.
        # layers: a list passed in when the ResNet object is created; it gives
        #         the number of blocks per stage.
        # block:  the block class to stack, here Bottleneck.
        self.inplanes = 64
        super(ResNet, self).__init__()
        blocks = [1, 2, 4]
        if output_stride == 16:
            strides = [1, 2, 2, 1]
            dilations = [1, 1, 1, 2]
        elif output_stride == 8:
            strides = [1, 2, 1, 1]
            dilations = [1, 1, 2, 4]
        else:
            raise NotImplementedError

        # Modules
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = BatchNorm(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # _make_layer builds each stage as a stack of Bottleneck blocks with the
        # appropriate stride and dilation.
        self.layer1 = self._make_layer(block, 64, layers[0], stride=strides[0], dilation=dilations[0], BatchNorm=BatchNorm)
        self.layer2 = self._make_layer(block, 128, layers[1], stride=strides[1], dilation=dilations[1], BatchNorm=BatchNorm)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=strides[2], dilation=dilations[2], BatchNorm=BatchNorm)
        self.layer4 = self._make_MG_unit(block, 512, blocks=blocks, stride=strides[3], dilation=dilations[3], BatchNorm=BatchNorm)
        # self.layer4 = self._make_layer(block, 512, layers[3], stride=strides[3], dilation=dilations[3], BatchNorm=BatchNorm)
        self._init_weight()

        if pretrained:
            self._load_pretrained_model()

    def _make_layer(self, block, planes, blocks, stride=1, dilation=1, BatchNorm=None):
        # block:  the Bottleneck class.
        # planes: the base channel width of the stage (64, 128, 256, 512); it sets the
        #         in/out channels passed to Conv2d(in_channels, out_channels, kernel_size, stride, ...).
        # blocks: how many identically parameterized Bottleneck blocks to stack, i.e. the
        #         entries of the layers list [3, 4, 23, 3]; this is easy to see in the
        #         printed model structure below.
        # dilation: sets up the atrous convolution; the default of 1 is an ordinary convolution.
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                BatchNorm(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, dilation, downsample, BatchNorm))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes, dilation=dilation, BatchNorm=BatchNorm))

        return nn.Sequential(*layers)

    def _make_MG_unit(self, block, planes, blocks, stride=1, dilation=1, BatchNorm=None):
        # Multi-grid unit used for layer4: block i gets dilation blocks[i] * dilation,
        # e.g. [1, 2, 4] * 4 = [4, 8, 16] when output_stride is 8.
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                BatchNorm(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, dilation=blocks[0]*dilation,
                            downsample=downsample, BatchNorm=BatchNorm))
        self.inplanes = planes * block.expansion
        for i in range(1, len(blocks)):
            layers.append(block(self.inplanes, planes, stride=1,
                                dilation=blocks[i]*dilation, BatchNorm=BatchNorm))

        return nn.Sequential(*layers)

    def forward(self, input):
        x = self.conv1(input)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        low_level_feat = x
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        return x, low_level_feat

    def _init_weight(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, SynchronizedBatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _load_pretrained_model(self):
        # pretrain_dict = model_zoo.load_url('https://download.pytorch.org/models/resnet101-5d3b4d8f.pth')
        pretrain_dict = torch.load('/home/huihua/NewDisk/resnet50-19c8e357.pth')
        # Load pretrained weights directly from a local file instead of downloading them again.
        model_dict = {}
        state_dict = self.state_dict()
        for k, v in pretrain_dict.items():
            if k in state_dict:
                model_dict[k] = v
        state_dict.update(model_dict)
        self.load_state_dict(state_dict)

def ResNet101(output_stride, BatchNorm, pretrained=True):
    """Constructs a ResNet-101 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 23, 3], output_stride, BatchNorm, pretrained=pretrained)
    # [3, 4, 23, 3] is the number of Bottleneck blocks per stage built by _make_layer;
    # the original ResNet code uses this list to distinguish ResNet-50, ResNet-101 and deeper variants.
    return model

if __name__ == "__main__":
    model = ResNet101(BatchNorm=nn.BatchNorm2d, pretrained=True, output_stride=8)
    print(model)  # Print the model structure to see how it is assembled and what the parameters mean.
    input = torch.rand(1, 3, 512, 512)
    output, low_level_feat = model(input)
    print(output.size())
    print(low_level_feat.size())

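    To make the output_stride bookkeeping above easier to follow, here is a small stand-alone sketch (the function backbone_config is illustrative only, not part of the repository): it reproduces the stride/dilation tables chosen in ResNet.__init__ and the multi-grid dilations blocks[i] * dilation computed in _make_MG_unit.

# Minimal sketch (illustrative only): reproduce the stride/dilation bookkeeping
# done in ResNet.__init__ and _make_MG_unit.
def backbone_config(output_stride, multi_grid=(1, 2, 4)):
    if output_stride == 16:
        strides, dilations = [1, 2, 2, 1], [1, 1, 1, 2]
    elif output_stride == 8:
        strides, dilations = [1, 2, 1, 1], [1, 1, 2, 4]
    else:
        raise NotImplementedError
    total_stride = 4                  # conv1 (stride 2) + maxpool (stride 2)
    for s in strides:                 # strides of layer1..layer4
        total_stride *= s
    mg_dilations = [g * dilations[3] for g in multi_grid]  # layer4 multi-grid dilations
    return total_stride, mg_dilations

print(backbone_config(8))    # (8, [4, 8, 16])
print(backbone_config(16))   # (16, [2, 4, 8])

    For output_stride=8 the three layer4 blocks therefore end up with dilations 4, 8 and 16, which is exactly what appears in the printed structure below.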
    The printed model structure is as follows:

      1 /home/huihua/anaconda3/bin/python /home/huihua/PycharmProjects/untitled/pytorch-deeplab-xception-master/modeling/backbone/resnet.py
      2 ResNet(
      3   (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
      4   (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      5   (relu): ReLU(inplace)
      6   (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
      7   (layer1): Sequential(
      8     (0): Bottleneck(
      9       (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
     10       (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     11       (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
     12       (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     13       (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
     14       (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     15       (relu): ReLU(inplace)
     16       (downsample): Sequential(
     17         (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
     18         (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     19       )
     20     )
     21     (1): Bottleneck(
     22       (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
     23       (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     24       (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
     25       (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     26       (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
     27       (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     28       (relu): ReLU(inplace)
     29     )
     30     (2): Bottleneck(
     31       (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
     32       (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     33       (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
     34       (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     35       (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
     36       (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     37       (relu): ReLU(inplace)
     38     )
     39   )
     40   (layer2): Sequential(
     41     (0): Bottleneck(
     42       (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
     43       (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     44       (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
     45       (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     46       (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
     47       (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     48       (relu): ReLU(inplace)
     49       (downsample): Sequential(
     50         (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
     51         (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     52       )
     53     )
     54     (1): Bottleneck(
     55       (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
     56       (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     57       (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
     58       (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     59       (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
     60       (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     61       (relu): ReLU(inplace)
     62     )
     63     (2): Bottleneck(
     64       (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
     65       (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     66       (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
     67       (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     68       (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
     69       (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     70       (relu): ReLU(inplace)
     71     )
     72     (3): Bottleneck(
     73       (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
     74       (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     75       (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
     76       (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     77       (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
     78       (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     79       (relu): ReLU(inplace)
     80     )
     81   )
     82   (layer3): Sequential(
     83     (0): Bottleneck(
     84       (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
     85       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     86       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
     87       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     88       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
     89       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     90       (relu): ReLU(inplace)
     91       (downsample): Sequential(
     92         (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
     93         (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     94       )
     95     )
     96     (1): Bottleneck(
     97       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
     98       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
     99       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    100       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    101       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    102       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    103       (relu): ReLU(inplace)
    104     )
    105     (2): Bottleneck(
    106       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    107       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    108       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    109       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    110       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    111       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    112       (relu): ReLU(inplace)
    113     )
    114     (3): Bottleneck(
    115       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    116       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    117       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    118       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    119       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    120       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    121       (relu): ReLU(inplace)
    122     )
    123     (4): Bottleneck(
    124       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    125       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    126       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    127       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    128       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    129       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    130       (relu): ReLU(inplace)
    131     )
    132     (5): Bottleneck(
    133       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    134       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    135       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    136       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    137       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    138       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    139       (relu): ReLU(inplace)
    140     )
    141     (6): Bottleneck(
    142       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    143       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    144       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    145       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    146       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    147       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    148       (relu): ReLU(inplace)
    149     )
    150     (7): Bottleneck(
    151       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    152       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    153       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    154       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    155       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    156       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    157       (relu): ReLU(inplace)
    158     )
    159     (8): Bottleneck(
    160       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    161       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    162       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    163       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    164       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    165       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    166       (relu): ReLU(inplace)
    167     )
    168     (9): Bottleneck(
    169       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    170       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    171       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    172       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    173       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    174       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    175       (relu): ReLU(inplace)
    176     )
    177     (10): Bottleneck(
    178       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    179       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    180       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    181       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    182       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    183       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    184       (relu): ReLU(inplace)
    185     )
    186     (11): Bottleneck(
    187       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    188       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    189       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    190       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    191       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    192       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    193       (relu): ReLU(inplace)
    194     )
    195     (12): Bottleneck(
    196       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    197       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    198       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    199       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    200       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    201       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    202       (relu): ReLU(inplace)
    203     )
    204     (13): Bottleneck(
    205       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    206       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    207       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    208       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    209       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    210       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    211       (relu): ReLU(inplace)
    212     )
    213     (14): Bottleneck(
    214       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    215       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    216       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    217       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    218       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    219       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    220       (relu): ReLU(inplace)
    221     )
    222     (15): Bottleneck(
    223       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    224       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    225       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    226       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    227       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    228       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    229       (relu): ReLU(inplace)
    230     )
    231     (16): Bottleneck(
    232       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    233       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    234       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    235       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    236       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    237       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    238       (relu): ReLU(inplace)
    239     )
    240     (17): Bottleneck(
    241       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    242       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    243       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    244       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    245       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    246       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    247       (relu): ReLU(inplace)
    248     )
    249     (18): Bottleneck(
    250       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    251       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    252       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    253       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    254       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    255       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    256       (relu): ReLU(inplace)
    257     )
    258     (19): Bottleneck(
    259       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    260       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    261       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    262       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    263       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    264       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    265       (relu): ReLU(inplace)
    266     )
    267     (20): Bottleneck(
    268       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    269       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    270       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    271       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    272       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    273       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    274       (relu): ReLU(inplace)
    275     )
    276     (21): Bottleneck(
    277       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    278       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    279       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    280       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    281       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    282       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    283       (relu): ReLU(inplace)
    284     )
    285     (22): Bottleneck(
    286       (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    287       (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    288       (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False)
    289       (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    290       (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
    291       (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    292       (relu): ReLU(inplace)
    293     )
    294   )
    295   (layer4): Sequential(
    296     (0): Bottleneck(
    297       (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
    298       (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    299       (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False)
    300       (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    301       (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
    302       (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    303       (relu): ReLU(inplace)
    304       (downsample): Sequential(
    305         (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
    306         (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    307       )
    308     )
    309     (1): Bottleneck(
    310       (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
    311       (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    312       (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(8, 8), dilation=(8, 8), bias=False)
    313       (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    314       (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
    315       (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    316       (relu): ReLU(inplace)
    317     )
    318     (2): Bottleneck(
    319       (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
    320       (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    321       (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(16, 16), dilation=(16, 16), bias=False)
    322       (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    323       (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
    324       (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    325       (relu): ReLU(inplace)
    326     )
    327   )
    328 )
    329 torch.Size([1, 2048, 64, 64])
    330 torch.Size([1, 256, 128, 128])
    331 
    332 Process finished with exit code 0
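    The two sizes printed at the end are consistent with the settings in the __main__ block: the backbone output is downsampled by output_stride (here 8), while low_level_feat is taken after layer1, i.e. at stride 4. A minimal check (sketch only), assuming that 512x512 input:

# Sketch only: check the two printed sizes against the __main__ settings
# (512x512 input, output_stride=8).
input_size, output_stride = 512, 8
expansion = 4                                          # Bottleneck.expansion
print(input_size // output_stride, 512 * expansion)    # 64, 2048 -> [1, 2048, 64, 64]
print(input_size // 4, 64 * expansion)                 # 128, 256 -> [1, 256, 128, 128] (after layer1, stride 4)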
  • Original article: https://www.cnblogs.com/ywheunji/p/10608791.html