  • PyTorch (Part 1): Implementing a fully connected neural network with one hidden layer

    torch.nn provides the building blocks for defining the model, its network layers, and the loss function.

    import torch
    
    # N is batch size; D_in is input dimension;
    # H is hidden dimension; D_out is output dimension.
    N, D_in, H, D_out = 64, 1000, 100, 10
    
    # Create random Tensors to hold inputs and outputs
    x = torch.randn(N, D_in)
    y = torch.randn(N, D_out)
    
    # Use the nn package to define our model as a sequence of layers. nn.Sequential
    # is a Module which contains other Modules, and applies them in sequence to
    # produce its output. Each Linear Module computes output from input using a
    # linear function, and holds internal Tensors for its weight and bias.
    model = torch.nn.Sequential(
        torch.nn.Linear(D_in, H),
        torch.nn.ReLU(),
        torch.nn.Linear(H, D_out),
    )
    
    # The nn package also contains definitions of popular loss functions; in this
    # case we will use Mean Squared Error (MSE) as our loss function.
    loss_fn = torch.nn.MSELoss(reduction='sum')
    
    learning_rate = 1e-4
    for t in range(500):
        # Forward pass: compute predicted y by passing x to the model. Module objects
        # override the __call__ operator so you can call them like functions. When
        # doing so you pass a Tensor of input data to the Module and it produces
        # a Tensor of output data.
        y_pred = model(x)
    
        # Compute and print loss. We pass Tensors containing the predicted and true
        # values of y, and the loss function returns a Tensor containing the
        # loss.
        loss = loss_fn(y_pred, y)
        print(t, loss.item())
    
        # Zero the gradients before running the backward pass.
        model.zero_grad()
    
        # Backward pass: compute gradient of the loss with respect to all the learnable
        # parameters of the model. Internally, the parameters of each Module are stored
        # in Tensors with requires_grad=True, so this call will compute gradients for
        # all learnable parameters in the model.
        loss.backward()
    
        # Update the weights using gradient descent. Each parameter is a Tensor, so
        # we can access its gradients like we did before.
        with torch.no_grad():
            for param in model.parameters():
                param -= learning_rate * param.grad
    

    Above, we updated the parameters by hand with param -= learning_rate * param.grad.
    With torch.optim, the parameter updates are handled for us. The optim package implements a variety of optimization algorithms, including SGD with momentum, RMSProp, Adam, and more; a sketch of swapping in one of the alternatives follows the loop below.

    # Use the Adam optimizer from torch.optim to manage the parameter updates.
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    for t in range(500):
        # Forward pass and loss, as before.
        y_pred = model(x)
        loss = loss_fn(y_pred, y)
        # Zero the gradients of all parameters the optimizer manages.
        optimizer.zero_grad()
        # Backward pass: compute gradients of the loss w.r.t. the parameters.
        loss.backward()
        # Update the parameters according to the Adam update rule.
        optimizer.step()
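
    Any of the other optimizers mentioned above can be dropped in by changing only the constructor call; the training loop itself stays the same. A minimal sketch, reusing the model and learning_rate defined earlier (the momentum value 0.9 is an illustrative choice, not from the original post):

    # SGD with momentum instead of Adam:
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)
    # Or RMSProp:
    optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)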
    