  • nn.Linear nn.Conv2d nn.BatchNorm2d

    conv,BN,Linear
    conv: https://blog.csdn.net/Strive_For_Future/article/details/83240232
    1) conv2d.weight    shape = [out_channels, in_channels, kernel_size, kernel_size]

    2) conv2d.bias      shape = [out_channels]
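    A minimal sketch to verify these shapes (the channel counts and kernel size below are just illustrative):

    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5)
    print(conv.weight.shape)  # torch.Size([16, 3, 5, 5]) -> [out_channels, in_channels, kernel_size, kernel_size]
    print(conv.bias.shape)    # torch.Size([16])          -> [out_channels]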
    BN: https://www.cnblogs.com/tingtin/p/12523701.html
    Shape: the output has the same size as the input.
    m = nn.BatchNorm2d(2, affine=True)  # 2 is the number of channels; affine=True makes the weight w and bias b learnable
    m.weight: tensor([1., 1.], requires_grad=True)
    m.bias:   tensor([0., 0.], requires_grad=True)  # both w and b are vectors whose length equals the number of channels
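    A small runnable check of the above (batch size, height, and width below are arbitrary):

    import torch
    import torch.nn as nn

    m = nn.BatchNorm2d(2, affine=True)   # 2 channels; affine=True -> learnable weight and bias
    x = torch.randn(4, 2, 8, 8)          # input of shape [batch, channels, H, W]
    y = m(x)
    print(y.shape)    # torch.Size([4, 2, 8, 8]) -- same shape as the input
    print(m.weight)   # Parameter containing: tensor([1., 1.], requires_grad=True)
    print(m.bias)     # Parameter containing: tensor([0., 0.], requires_grad=True)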
    Linear: https://www.cnblogs.com/tingtin/p/12425849.html
    nn.Linear() creates a fully connected layer; its input and output are typically 2-D tensors of shape [batch_size, size].
    def __init__(self, in_features, out_features, bias=True):
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # the weight is stored as [out_features, in_features]
        self.weight = Parameter(torch.Tensor(out_features, in_features))
        if bias:
            # the bias is a vector of length out_features
            self.bias = Parameter(torch.Tensor(out_features))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()
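    A short sketch showing the resulting parameter shapes (the feature sizes below are arbitrary):

    import torch
    import torch.nn as nn

    fc = nn.Linear(in_features=20, out_features=10)
    print(fc.weight.shape)   # torch.Size([10, 20]) -> [out_features, in_features]
    print(fc.bias.shape)     # torch.Size([10])     -> [out_features]
    x = torch.randn(32, 20)  # [batch_size, in_features]
    print(fc(x).shape)       # torch.Size([32, 10]) -> [batch_size, out_features]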
  • Original article: https://www.cnblogs.com/tingtin/p/13582979.html