  • Dive into Deep Learning 11 - Concise PyTorch implementation of the multilayer perceptron

    Concise implementation of the multilayer perceptron

    import torch
    from torch import nn
    from torch.nn import init
    import sys
    import numpy as np
    sys.path.append('..')
    import d2lzh_pytorch as d2l  # helper utilities accompanying the book
    
    Define the model
    num_inputs, num_outputs, num_hidden = 784, 10, 256
    net = nn.Sequential(d2l.FlattenLayer(),
                        nn.Linear(num_inputs, num_hidden),
                        nn.ReLU(),
                        nn.Linear(num_hidden, num_outputs),
                        )
    # Initialize every weight and bias from a normal distribution N(0, 0.01^2)
    for params in net.parameters():
        init.normal_(params, mean=0, std=0.01)
        
    
    print(net.parameters)  # prints the bound method itself, which displays the network architecture
    
    <bound method Module.parameters of Sequential(
      (0): FlattenLayer()
      (1): Linear(in_features=784, out_features=256, bias=True)
      (2): ReLU()
      (3): Linear(in_features=256, out_features=10, bias=True)
    )>
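    
    d2l.FlattenLayer reshapes each (1, 28, 28) image into a flat 784-dimensional vector before the first linear layer. A minimal sketch of such a layer, assuming it follows the book's convention (the authoritative code lives in d2lzh_pytorch):
    
    class FlattenLayer(nn.Module):
        def __init__(self):
            super(FlattenLayer, self).__init__()
        def forward(self, x):
            # (batch_size, 1, 28, 28) -> (batch_size, 784)
            return x.view(x.shape[0], -1)
    
    Recent PyTorch releases also provide nn.Flatten(), which can be dropped into the Sequential in place of the custom layer.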
    
    Read the data
    batch_size = 256
    train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
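    
    The data helper wraps torchvision's Fashion-MNIST dataset in two DataLoaders. A rough sketch of what it is assumed to do (the actual implementation is in d2lzh_pytorch; the root path here is illustrative):
    
    import torchvision
    import torchvision.transforms as transforms
    from torch.utils.data import DataLoader
    
    def load_data_fashion_mnist(batch_size, root='~/Datasets/FashionMNIST'):
        transform = transforms.ToTensor()  # PIL image -> float tensor scaled to [0, 1]
        mnist_train = torchvision.datasets.FashionMNIST(
            root=root, train=True, download=True, transform=transform)
        mnist_test = torchvision.datasets.FashionMNIST(
            root=root, train=False, download=True, transform=transform)
        train_iter = DataLoader(mnist_train, batch_size=batch_size, shuffle=True)
        test_iter = DataLoader(mnist_test, batch_size=batch_size, shuffle=False)
        return train_iter, test_iter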
    
    Loss function
    loss = nn.CrossEntropyLoss()  # combines log-softmax and negative log-likelihood in one step
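    
    Because nn.CrossEntropyLoss applies log-softmax and negative log-likelihood in a single numerically stable operation, the network outputs raw logits rather than probabilities. A quick equivalence check on dummy data (for illustration only):
    
    import torch
    import torch.nn.functional as F
    from torch import nn
    
    logits = torch.randn(4, 10)           # batch of 4 samples, 10 classes
    labels = torch.randint(0, 10, (4,))   # ground-truth class indices
    l1 = nn.CrossEntropyLoss()(logits, labels)
    l2 = F.nll_loss(F.log_softmax(logits, dim=1), labels)
    print(torch.allclose(l1, l2))  # True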
    
    Define the optimization algorithm
    optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
    num_epochs = 5
    
    Train the model and evaluate on the test set
    d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)
    
    epoch 1, loss 0.0031, train acc 0.707, test acc 0.810
    epoch 2, loss 0.0019, train acc 0.821, test acc 0.810
    epoch 3, loss 0.0017, train acc 0.843, test acc 0.835
    epoch 4, loss 0.0015, train acc 0.858, test acc 0.840
    epoch 5, loss 0.0014, train acc 0.865, test acc 0.853
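    
    d2l.train_ch3 is the generic training loop introduced in the softmax-regression chapter; since an optimizer object is passed in, the params and lr arguments are left as None. A hedged sketch of its core logic, which may differ in detail from the actual d2lzh_pytorch implementation:
    
    def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
                  params=None, lr=None, optimizer=None):
        for epoch in range(num_epochs):
            train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
            for X, y in train_iter:
                y_hat = net(X)
                l = loss(y_hat, y).sum()
                optimizer.zero_grad()   # clear accumulated gradients
                l.backward()            # backpropagate
                optimizer.step()        # update parameters
                train_l_sum += l.item()
                train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
                n += y.shape[0]
            # evaluate_accuracy is the book's helper for accuracy over a data iterator
            test_acc = d2l.evaluate_accuracy(test_iter, net)
            print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f'
                  % (epoch + 1, train_l_sum / n, train_acc_sum / n, test_acc))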
    

    Summary

    • With PyTorch, writing the network as nn.Sequential makes the multilayer perceptron implementation concise.
  • Original article: https://www.cnblogs.com/onemorepoint/p/11811635.html