  • PyTorch torch.optim: customized optimizer usage

    1. A simplified feed-forward network: LeNet

    import torch as t
     
     
    class LeNet(t.nn.Module):
        def __init__(self):
            super(LeNet, self).__init__()
            self.features = t.nn.Sequential(
                t.nn.Conv2d(3, 6, 5),
                t.nn.ReLU(),
                t.nn.MaxPool2d(2, 2),
                t.nn.Conv2d(6, 16, 5),
                t.nn.ReLU(),
                t.nn.MaxPool2d(2, 2)
            )
            # Reshaping (view) is not an nn.Module layer, so whenever such a
            # non-nn.Module operation is involved, the model has to be split
            # into several Sequential parts
            self.classifiter = t.nn.Sequential(
                t.nn.Linear(16*5*5, 120),
                t.nn.ReLU(),
                t.nn.Linear(120, 84),
                t.nn.ReLU(),
                t.nn.Linear(84, 10)
            )
     
        def forward(self, x):
            x = self.features(x)
            x = x.view(-1, 16*5*5)
            x = self.classifiter(x)
            return x
     
    net = LeNet()

    2. Basic optimizer usage

    1. Create an optimizer instance
    2. Loop:
      1. Zero the gradients
      2. Forward pass
      3. Compute the loss
      4. Backward pass
      5. Update the parameters
    The snippet below performs a single such step; a full loop is sketched after it.
    from torch import optim
     
    # The usual single optimization step
    optimizer = optim.SGD(params=net.parameters(), lr=1)
    optimizer.zero_grad()  # equivalent to net.zero_grad()
     
    # Variable is a legacy wrapper; recent PyTorch accepts plain tensors here
    input_ = t.autograd.Variable(t.randn(1, 3, 32, 32))
    output = net(input_)
    # output is non-scalar, so backward() needs a gradient argument;
    # here output itself is passed as a stand-in gradient
    output.backward(output)
     
    optimizer.step()
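
    Putting the five steps together, a minimal complete training loop might look like the following sketch; the loss criterion and the dummy data/labels are illustrative assumptions, not part of the original post.

    # A minimal sketch of a complete loop; the loss function and the dummy
    # data/labels below are illustrative assumptions
    criterion = t.nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.01)
     
    for epoch in range(2):
        inputs = t.randn(4, 3, 32, 32)     # dummy batch of images
        labels = t.randint(0, 10, (4,))    # dummy class labels
     
        optimizer.zero_grad()              # 1. zero the gradients
        outputs = net(inputs)              # 2. forward pass
        loss = criterion(outputs, labels)  # 3. compute the loss
        loss.backward()                    # 4. backward pass
        optimizer.step()                   # 5. update the parameters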

    3. Per-module parameter customization

    Set different learning rates for the parameters of different sub-networks. This is common in finetuning: give the classifier a higher learning rate so that it (in theory) learns faster.

    1. Set learning rates via the modules that were defined when the network was built:

    # Assign different learning rates to different network modules directly
    optimizer = optim.SGD([{'params': net.features.parameters()},  # uses the default lr=1e-5
                           {'params': net.classifiter.parameters(), 'lr': 1e-2}], lr=1e-5)

    2. Group parameters by layer objects and assign a learning rate per group:

    # Assign different learning rates layer by layer
    # ## Collect the target layer objects
    special_layers = t.nn.ModuleList([net.classifiter[0], net.classifiter[3]])
    # ## Collect the ids of the target layers' parameters
    special_layers_params = list(map(id, special_layers.parameters()))
    print(special_layers_params)
    # ## Collect the parameters that are not in the target layers
    base_params = filter(lambda p: id(p) not in special_layers_params, net.parameters())
    optimizer = t.optim.SGD([{'params': base_params},
                             {'params': special_layers.parameters(), 'lr': 0.01}], lr=0.001)
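
    As a quick sanity check (not in the original post), you can list each parameter group's learning rate to confirm the grouping:

    # Sanity check: print each group's learning rate and parameter count
    for i, group in enumerate(optimizer.param_groups):
        print(i, group['lr'], len(group['params']))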

    4. Adjusting the learning rate dynamically during training

    '''Adjusting the learning rate'''
    # Either build a new optimizer or modify the lr entries in optimizer.param_groups
    # # Building a new optimizer is simpler and recommended: optimizers are very
    # # lightweight, so the overhead is tiny
    # # However, a new optimizer re-initializes state such as momentum buffers, which
    # # can cause oscillation during convergence for momentum-based optimizers
    # # (e.g. SGD with a momentum parameter)
    # ## optimizer.param_groups: a list of length 2; optimizer.param_groups[0]: a dict with 6 keys
    print(optimizer.param_groups[0]['lr'])
    old_lr = 0.1
    optimizer = optim.SGD([{'params': net.features.parameters()},
                           {'params': net.classifiter.parameters(), 'lr': old_lr*0.1}], lr=1e-5)
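
    The in-place alternative mentioned in the comments above keeps the momentum buffers intact; a minimal sketch:

    # Decay every group's lr in place, without rebuilding the optimizer,
    # so momentum and other state are preserved
    for param_group in optimizer.param_groups:
        param_group['lr'] *= 0.1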

     You can see the structure of optimizer.param_groups, [{'params', 'lr', 'momentum', 'dampening', 'weight_decay', 'nesterov'}, {...}], which gathers all of the optimizer's hyperparameters. Below is a customized SGD variant (modeled on torch.optim.SGD) that splits weight decay into an L1 term (weight_decay1) and an L2 term (weight_decay2):

    import torch
    from torch.optim.optimizer import Optimizer, required
    
    class SGD(Optimizer):
        def __init__(self, params, lr=required, momentum=0, dampening=0, weight_decay1=0, weight_decay2=0, nesterov=False):
            # weight_decay1: L1 penalty coefficient; weight_decay2: L2 penalty coefficient
            defaults = dict(lr=lr, momentum=momentum, dampening=dampening,
                            weight_decay1=weight_decay1, weight_decay2=weight_decay2, nesterov=nesterov)
            if nesterov and (momentum <= 0 or dampening != 0):
                raise ValueError("Nesterov momentum requires a momentum and zero dampening")
            super(SGD, self).__init__(params, defaults)
    
        def __setstate__(self, state):
            super(SGD, self).__setstate__(state)
            for group in self.param_groups:
                group.setdefault('nesterov', False)
    
        def step(self, closure=None):
            """Performs a single optimization step.

            Arguments:
                closure (callable, optional): A closure that reevaluates
                    the model and returns the loss.
            """
            loss = None
            if closure is not None:
                loss = closure()
    
            for group in self.param_groups:
                weight_decay1 = group['weight_decay1']
                weight_decay2 = group['weight_decay2']
                momentum = group['momentum']
                dampening = group['dampening']
                nesterov = group['nesterov']
    
                for p in group['params']:
                    if p.grad is None:
                        continue
                    d_p = p.grad.data
                    # L1 regularization: add the sign of the weights to the gradient
                    if weight_decay1 != 0:
                        d_p.add_(torch.sign(p.data), alpha=weight_decay1)
                    # L2 regularization: add the weights themselves to the gradient
                    if weight_decay2 != 0:
                        d_p.add_(p.data, alpha=weight_decay2)
                    if momentum != 0:
                        param_state = self.state[p]
                        if 'momentum_buffer' not in param_state:
                            # First step: initialize the momentum buffer with the raw gradient
                            buf = param_state['momentum_buffer'] = torch.zeros_like(p.data)
                            buf.mul_(momentum).add_(d_p)
                        else:
                            buf = param_state['momentum_buffer']
                            buf.mul_(momentum).add_(d_p, alpha=1 - dampening)
                        if nesterov:
                            d_p = d_p.add(buf, alpha=momentum)
                        else:
                            d_p = buf
    
                    p.data.add_(d_p, alpha=-group['lr'])
    
            return loss
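
    A hypothetical way to exercise this custom SGD, with both decay terms enabled (the coefficient values are arbitrary examples):

    # Illustrative use of the custom SGD above; the coefficients are made up
    optimizer = SGD(net.parameters(), lr=0.01, momentum=0.9,
                    weight_decay1=1e-5,  # L1 term: pushes weights toward sparsity
                    weight_decay2=1e-4)  # L2 term: classic weight decay
    optimizer.zero_grad()
    out = net(torch.randn(1, 3, 32, 32))
    out.backward(out)  # dummy gradient, as in the earlier snippet
    optimizer.step()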
  • Original post: https://www.cnblogs.com/ranjiewen/p/9240512.html