  • Freezing Parameters in PyTorch

    When finetuning, the parameters of the backbone network usually need to be frozen. This takes two steps.

    First, locate the relevant layers and set their parameters' requires_grad attribute to False.

    # Freeze a whole submodule, e.g. the backbone:
    for param in net.backbone.parameters():
        param.requires_grad = False

    # Or freeze by name, where key_word is a string such as "backbone":
    for pname, param in net.named_parameters():
        if key_word in pname:
            param.requires_grad = False

    Here we use the parameters() or named_parameters() method; both yield every parameter tensor, weights and biases alike.
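    As a quick illustration (using a toy model defined here only for demonstration; the attribute names backbone and head are assumptions), named_parameters() yields dotted names such as backbone.weight and backbone.bias, which is what the keyword match above relies on:

    import torch.nn as nn

    # Toy model purely for illustration.
    class ToyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Linear(8, 4)   # stands in for a pretrained backbone
            self.head = nn.Linear(4, 2)       # task-specific layer

    net = ToyNet()
    for pname, param in net.named_parameters():
        print(pname, tuple(param.shape))
    # backbone.weight, backbone.bias, head.weight, head.bias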

    Second, keep only the parameters that still require gradients and pass them to the optimizer.

    optimizer = torch.optim.SGD(
        filter(lambda p: p.requires_grad, net.parameters()),
        lr=learning_rate, momentum=mom,
    )
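    Putting both steps together, a minimal sketch (reusing the hypothetical ToyNet above; the lr and momentum values are arbitrary placeholders):

    import torch

    net = ToyNet()

    # Step 1: freeze the backbone.
    for param in net.backbone.parameters():
        param.requires_grad = False

    # Step 2: hand only the still-trainable parameters to the optimizer.
    trainable = [p for p in net.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)

    print(sum(p.numel() for p in trainable))  # counts only head.weight and head.bias

    A list comprehension is equivalent to filter() here; either way, the frozen backbone parameters never reach the optimizer, so they receive no updates and no momentum buffers.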
  • Original article: https://www.cnblogs.com/hizhaolei/p/12535294.html