  • Neural network learning -- PyTorch study 03: building models

    torch.nn

    (1) A sequential container for building network structures: torch.nn.Sequential

    models = torch.nn.Sequential(
        torch.nn.Linear(input_data, hidden_layer),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden_layer, output_data)
    )
    from collections import OrderedDict  # an ordered dict lets each module carry a custom name
    models2 = torch.nn.Sequential(OrderedDict([
        ("Line1", torch.nn.Linear(input_data, hidden_layer)),
        ("ReLu1", torch.nn.ReLU()),
        ("Line2", torch.nn.Linear(hidden_layer, output_data))])
    )

    (2) Linear layer: torch.nn.Linear

    (3) Activation function: torch.nn.ReLU
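A quick sketch of what these two modules do on small tensors: torch.nn.Linear applies an affine map (its weight is stored as out_features x in_features), and torch.nn.ReLU clamps negative values to zero.

```python
import torch

layer = torch.nn.Linear(4, 2)   # computes y = x @ W.T + b
x = torch.randn(3, 4)           # a batch of 3 samples with 4 features each
print(layer(x).shape)           # torch.Size([3, 2])
print(layer.weight.shape)       # torch.Size([2, 4]) -- (out_features, in_features)

relu = torch.nn.ReLU()
print(relu(torch.tensor([-1.0, 0.0, 2.5])))  # negatives are clamped to zero
```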

    (4) Loss functions: torch.nn.MSELoss (mean squared error), torch.nn.L1Loss (mean absolute error), torch.nn.CrossEntropyLoss (cross entropy)
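The three losses can be tried on small hand-picked tensors to see what they measure. In this sketch the MSE is (1 + 0.25 + 0 + 4) / 4 = 1.3125 and the L1 loss is (1 + 0.5 + 0 + 2) / 4 = 0.875; note that CrossEntropyLoss takes raw scores (logits) plus integer class labels, not a one-hot target.

```python
import torch

pred = torch.tensor([[2.0, 0.5], [1.0, 3.0]])
target = torch.tensor([[1.0, 0.0], [1.0, 1.0]])

mse = torch.nn.MSELoss()  # mean of squared differences
l1 = torch.nn.L1Loss()    # mean of absolute differences
print(mse(pred, target))  # tensor(1.3125)
print(l1(pred, target))   # tensor(0.8750)

# CrossEntropyLoss expects logits of shape (N, C) and labels of shape (N,)
ce = torch.nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.1, 0.3]])
label = torch.tensor([0])
print(ce(logits, label))
```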

    import torch
    from torch.autograd import Variable
    batch_n = 100
    hidden_layer = 100
    input_data = 1000
    output_data = 10
    
    x = Variable(torch.randn(batch_n, input_data), requires_grad=False)  # wrap x as a graph node; no gradients needed for the data
    y = Variable(torch.randn(batch_n, output_data), requires_grad=False)
    models = torch.nn.Sequential(
        torch.nn.Linear(input_data, hidden_layer),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden_layer, output_data)
    )
    # from collections import OrderedDict  # an ordered dict lets each module carry a custom name
    # models2 = torch.nn.Sequential(OrderedDict([
    #     ("Line1",torch.nn.Linear(input_data, hidden_layer)),
    #     ("ReLu1",torch.nn.ReLU()),
    #     ("Line2",torch.nn.Linear(hidden_layer, output_data))])
    # )
    epoch_n = 10000
    learning_rate = 0.0001
    loss_fn = torch.nn.MSELoss()
    
    for epoch in range(epoch_n):
        y_pred = models(x)
        loss = loss_fn(y_pred, y)
        if epoch % 1000 == 0:
            print("Epoch:{}, Loss:{:.4f}".format(epoch, loss.item()))  # loss.item() replaces the deprecated loss.data[0]
        models.zero_grad()  # zero the gradients

        loss.backward()

        for param in models.parameters():  # walk the parameters and update each one by hand
            param.data -= param.grad.data * learning_rate
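The loop above dates from the Variable era: since the Tensor/Variable merge in PyTorch 0.4, Variable is unnecessary, the scalar loss is read with loss.item(), and the manual update is done under torch.no_grad() instead of poking at .data. A minimal modern sketch of the same training loop, with the epoch count shortened for illustration:

```python
import torch

torch.manual_seed(0)
x = torch.randn(100, 1000)   # requires_grad defaults to False
y = torch.randn(100, 10)

models = torch.nn.Sequential(
    torch.nn.Linear(1000, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 10),
)
loss_fn = torch.nn.MSELoss()
learning_rate = 1e-4

losses = []
for epoch in range(50):
    loss = loss_fn(models(x), y)
    losses.append(loss.item())
    models.zero_grad()
    loss.backward()
    with torch.no_grad():  # manual SGD step without tracking gradients
        for param in models.parameters():
            param -= learning_rate * param.grad

print(losses[0], losses[-1])  # the loss should shrink over the epochs
```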

    The torch.optim package

    Classes that optimize parameters automatically: SGD, AdaGrad, RMSProp, Adam
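All four live under torch.optim and share the same interface: each wraps model.parameters() and differs only in the update rule. Note the actual class names are Adagrad and RMSprop (capitalization differs from the prose above). A quick sketch constructing each against a toy model:

```python
import torch

model = torch.nn.Linear(10, 2)

# each optimizer wraps the same parameters; only the update rule differs
sgd = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
adagrad = torch.optim.Adagrad(model.parameters(), lr=0.01)
rmsprop = torch.optim.RMSprop(model.parameters(), lr=0.001)
adam = torch.optim.Adam(model.parameters(), lr=0.001)

for opt in (sgd, adagrad, rmsprop, adam):
    print(type(opt).__name__)
```

In practice only one optimizer is constructed per model; the training loop then calls its zero_grad() and step() methods, as the example below does with Adam.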

    import torch
    from torch.autograd import Variable
    batch_n = 100
    hidden_layer = 100
    input_data = 1000
    output_data = 10
    
    x = Variable(torch.randn(batch_n, input_data), requires_grad=False)
    y = Variable(torch.randn(batch_n, output_data), requires_grad=False)
    
    models = torch.nn.Sequential(
        torch.nn.Linear(input_data,hidden_layer),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden_layer,output_data)
    )
    
    epoch_n = 20
    learning_rate = 0.0001
    loss_fn = torch.nn.MSELoss()
    
    optimizer = torch.optim.Adam(models.parameters(), lr=learning_rate)  # Adam adapts the effective learning rate used for each gradient update

    for epoch in range(epoch_n):
        y_pred = models(x)
        loss = loss_fn(y_pred, y)
        print("Epoch:{}, Loss:{:.4f}".format(epoch, loss.item()))  # loss.item() replaces the deprecated loss.data[0]
        optimizer.zero_grad()  # zero the parameter gradients

        loss.backward()
        optimizer.step()  # update the parameters
  • Original post: https://www.cnblogs.com/zuhaoran/p/11458440.html