  • torch_07: Case Studies of Classic Convolutional Neural Networks

    1. LeNet (1998)

     1 """
     2 note:
     3 LeNet:
     4 输入体:32*32*1
     5 卷积核:5*5
     6 步长:1
     7 填充:无
     8 池化:2*2
     9 代码旁边的注释:卷积或者池化后的数据的尺寸
    10 """
    11 import torch
    12 import torch.nn as nn
    13 
    14 
    15 class LeNet(nn.Module):
    16     def __init__(self):
    17         super(LeNet,self).__init__()
    18         layer1 = nn.Sequential()
    19         layer1.add_module('conv1',nn.Conv2d(1,6,5,1,padding=0))# 没有填充 ,b,6,28*28
    20         layer1.add_module('pool1',nn.MaxPool2d(2,2))   # 6,14*14 (28-2)/2+1 = 14
    21         self.layer1= layer1
    22 
    23         layer2 = nn.Sequential()
    24         layer2.add_module('conv2', nn.Conv2d(6, 16, 5, 1, padding=0))  # 没有填充 b,16,10*10
    25         layer2.add_module('pool2', nn.MaxPool2d(2, 2))  # 16,5*5
    26         self.layer2 = layer2
    27 
    28         layer3 = nn.Sequential()
    29         layer3.add_module('fc1',nn.Linear(400,120))
    30         layer3.add_module('fc2',nn.Linear(120,84))
    31         layer3.add_module('fc3',nn.Linear(84,10))
    32         self.layer3 = layer3
    33 
    34         def forward(self,x):
    35             x = self.layer1(x)
    36             x = self.layer2(x)
    37             x = x.view(x.size(0),-1)  # 将多维数据排列成一行:1*400(16*5*5)
    38             x = self.layer3(x)
    39             return x
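
As a quick sanity check (the random input below is just an illustrative assumption, not part of the original post), feeding a 32*32 single-channel image through the network should produce a 10-dimensional output per sample:

model = LeNet()
x = torch.randn(1, 1, 32, 32)   # batch of 1, one channel, 32*32 image
out = model(x)
print(out.shape)                # torch.Size([1, 10])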

    2. AlexNet (2012): a deeper network that was also the first to introduce the ReLU activation, and it adds Dropout layers in the fully connected part to prevent overfitting.
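
As a hedged sketch (the channel widths and the 32*32 input assumption below are simplifications, not the real AlexNet configuration), the characteristic pattern of Conv + ReLU feature layers plus Dropout in the fully connected classifier looks like this:

import torch
import torch.nn as nn


class TinyAlexNetStyle(nn.Module):
    """Simplified AlexNet-style network: Conv + ReLU feature extractor,
    Dropout in the fully connected classifier. Layer sizes are illustrative."""
    def __init__(self, num_classes=10):
        super(TinyAlexNetStyle, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),          # ReLU after every convolution
            nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),                # Dropout against overfitting
            nn.Linear(128 * 8 * 8, 256),    # assumes 32*32 input images
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        return self.classifier(x)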

    3. VGGNet (2014): networks with 16 to 19 layers built from 3*3 convolution filters and 2*2 pooling layers. It simply keeps stacking layers without much architectural innovation, but the added depth does improve model performance to some extent.
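
As a rough sketch (the repetition counts and channel widths below are illustrative assumptions, not the official VGG16 configuration), the VGG pattern of repeating 3*3 convolutions followed by a 2*2 pooling layer can be written like this:

import torch.nn as nn


def vgg_block(in_channels, out_channels, num_convs):
    """One VGG-style stage: several 3*3 convolutions (stride 1, padding 1)
    followed by a 2*2 max pooling that halves the spatial size."""
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        layers.append(nn.ReLU(inplace=True))
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)


# Stacking a few such blocks already gives a VGG-like feature extractor.
features = nn.Sequential(
    vgg_block(3, 64, 2),
    vgg_block(64, 128, 2),
    vgg_block(128, 256, 3),
)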

    4. GoogLeNet (InceptionNet) (2014): a network even deeper than VGGNet, 22 layers in total, yet with about 12 times fewer parameters than AlexNet and very high computational efficiency, because it uses an effective Inception module and has no fully connected layers. The Inception module defines a local network topology, and these modules are stacked to form the overall network. Concretely, several parallel filters with different receptive fields run convolution and pooling on the same input, and their outputs are concatenated along the depth dimension to form the output layer. The drawback of the naive version is that it has too many parameters and is computationally expensive, so the module adds some 1*1 convolution layers to reduce the depth of its inputs, cutting the number of parameters and the complexity of the network.

     1 """
     2 GooglNet的Inceoption模块,整个GoogleNet都是由这些Inception模块组成的
     3 nn.BatchNorm1d:在每个小批量数据中,计算输入各个维度的均值和标注差。
     4 num_features:期望输入大小:batch_size * num_features
     5 torch.cat:将不同尺度的卷积深度相加,只是深度不同,数据体大小是一样的
     6 (0)表示增加行,(1)表示增加列
     7 """
     8 
     9 import torch.nn as nn
    10 import torch
    11 import torch.nn.functional as F
    12 
    13 
    14 class BasicConv2d(nn.Module):
    15     def __init__(self,in_channels,out_channles,**kwargs):
    16         super(BasicConv2d,self).__init__()
    17         self.conv = nn.Conv2d(in_channels,out_channles,bias=False,**kwargs)
    18         self.bn = nn.BatchNorm1d(out_channles,eps=0.001)
    19 
    20     def forward(self,x):
    21         x = self.conv(x)
    22         x = self.bn(x)
    23         return F.relu(x,inplace = True)
    24 
    25 
    26 class Inception(nn.Module):
    27     def __init__(self,in_channles,pool_features):
    28         super(Inception,self).__init__()
    29         self.branch1x1 = BasicConv2d(in_channles,64,kernel_size = 1)
    30 
    31         self.branch5x5_1 = BasicConv2d(in_channles,48,kernel_size = 1)
    32         self.branch5x5_2 = BasicConv2d(48,64,kernel_size = 5,padding = 2)
    33 
    34         self.branch3x3dbl_1 = BasicConv2d(in_channles,64,kernel_size = 1)
    35         self.branch3x3dbl_2 = BasicConv2d(64,96,kernel_size = 3,padding = 1)
    36         #self.branch3x3dbl_3 = BasicConv2d(96,96,kernel_size = 3,padding = 1)
    37 
    38         self.branch_pool = BasicConv2d(in_channles,pool_features,kenel_size = 1)
    39 
    40         def forward(self, x):
    41             branch1x1 = self.branch1x1(x)
    42 
    43             branch5x5 = self.branch5x5_1(x) # 核是1
    44             branch5x5 = self.branch5x5_2(branch5x5)  #核是5
    45 
    46             branch3x3 = self.branch3x3dbl_1(x) # 核是1
    47             branch3x3 = self.branch3x3dbl_2(branch3x3)
    48 
    49             branch_pool = F.avg_pool2d(x,kernel_size = 3,stride = 1,padding = 1)
    50             branch_pool = self.branch_pool(branch_pool)
    51 
    52             outputs = [branch1x1,branch5x5,branch3x3,branch_pool]
    53             return torch.cat(outputs,1)
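
As a quick sanity check (the 192 input channels, 32*32 feature map, and pool_features=32 below are arbitrary assumptions for illustration), one Inception module keeps the spatial size and concatenates the branch depths 64 + 64 + 96 + pool_features:

x = torch.randn(1, 192, 32, 32)            # batch of 1, 192 input channels
module = Inception(192, pool_features=32)
out = module(x)
print(out.shape)                           # torch.Size([1, 256, 32, 32]): 64 + 64 + 96 + 32 channels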

    5. ResNet (2015)

      As a neural network is made deeper, accuracy first rises and then saturates; increasing the depth further actually makes accuracy drop. This is not an overfitting problem, because the error grows not only on the validation set but also on the training set itself. Suppose a relatively shallow network has reached its saturated accuracy; adding a few identity mapping layers on top of it should not increase the error, which means a deeper model should at least not perform worse than the shallow one. Now suppose the input to some sub-network is x and the desired output is H(x). If the input x is passed directly to the output as the initial result, then what actually needs to be learned is F(x) = H(x) - x, i.e. the residual. ResNet therefore changes the learning target: instead of learning the complete output H(x), it learns the difference between output and input, H(x) - x.

import torch
import torch.nn as nn


def conv3x3(in_planes, out_planes, stride=1):
    # 3x3 convolution with padding 1 and no bias (the bias is handled by BatchNorm)
    return nn.Conv2d(
        in_planes,
        out_planes,
        kernel_size=3,
        stride=stride,
        padding=1,
        bias=False
    )


class BasicBlock(nn.Module):
    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x
        out = self.conv1(x)

        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        # if the input and output shapes differ, project the input first
        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual  # shortcut connection: add the input back onto the learned residual
        out = self.relu(out)
        return out
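
As a hedged usage sketch (the channel sizes and the 1*1 projection below are assumptions for illustration, not part of the original post), a block that changes the number of channels or the spatial size needs a downsample branch so the shapes match before the addition:

# identity shortcut: input and output shapes already match
block1 = BasicBlock(64, 64)

# projection shortcut: stride 2 halves the spatial size and the channel count changes,
# so a 1x1 convolution brings the input to the same shape as the block's output
downsample = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=1, stride=2, bias=False),
    nn.BatchNorm2d(128),
)
block2 = BasicBlock(64, 128, stride=2, downsample=downsample)

x = torch.randn(1, 64, 56, 56)
print(block1(x).shape)  # torch.Size([1, 64, 56, 56])
print(block2(x).shape)  # torch.Size([1, 128, 28, 28])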