  Pytorch: Defining an MLP, GPU Acceleration, and Testing

    PyTorch offers several ways to define a network for handwritten-digit recognition: you can define the parameters w and b by hand (see the previous section), you can define layers directly with nn.Linear, or, most conveniently, you can subclass nn.Module to define your own network structure.
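    For reference, a minimal sketch of the manual approach from the previous section (the names w1 and b1 are illustrative; the shape follows nn.Linear's [out_features, in_features] convention):

    import torch
    import torch.nn.functional as F

    # Manually created and managed parameters for a single 784 -> 200 layer
    w1 = torch.randn(200, 784, requires_grad=True)
    b1 = torch.zeros(200, requires_grad=True)

    x = torch.randn(1, 784)
    x = F.relu(x @ w1.t() + b1)        # shape=[1, 200]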

    1. The nn.Linear approach

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Simulate one 28*28 image, flattened into a vector
    x = torch.randn(1, 784)            # shape=[1, 784]

    # Define three fully connected layers
    layer1 = nn.Linear(784, 200)       # (in_features, out_features)
    layer2 = nn.Linear(200, 200)
    layer3 = nn.Linear(200, 10)

    x = layer1(x)                      # shape=[1, 200]
    x = F.relu(x, inplace=True)        # inplace=True modifies the tensor in place, saving memory

    x = layer2(x)                      # shape=[1, 200]
    x = F.relu(x, inplace=True)

    x = layer3(x)                      # shape=[1, 10]
    x = F.relu(x, inplace=True)        # note: a ReLU after the output layer zeroes negative logits and is usually omitted before CrossEntropyLoss
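    Unlike the manual approach, nn.Linear stores its w and b internally; a quick check of what layer1 holds:

    print(layer1.weight.shape)               # torch.Size([200, 784])
    print(layer1.bias.shape)                 # torch.Size([200])
    print(len(list(layer1.parameters())))    # 2: the weight and the bias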

    2. Subclassing nn.Module

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torchvision import datasets, transforms

    # Hyperparameters
    batch_size = 200
    learning_rate = 0.01
    epochs = 10

    # Load the training data
    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=True, download=True,          # train=True loads the training set
                       transform=transforms.Compose([                 # transform applies preprocessing
                           transforms.ToTensor(),                     # convert to a Tensor
                           transforms.Normalize((0.1307,), (0.3081,)) # standardize: subtract the mean, divide by the std
                       ])),
        batch_size=batch_size, shuffle=True)                          # batch the samples with a leading batch dimension; shuffle=True randomizes the order

    # Load the test data
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False, transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])),
        batch_size=batch_size, shuffle=True)


    class MLP(nn.Module):

        def __init__(self):
            super(MLP, self).__init__()

            self.model = nn.Sequential(         # define the layers; nn.ReLU can be swapped for another activation, e.g. nn.LeakyReLU()
                nn.Linear(784, 200),
                nn.ReLU(inplace=True),
                nn.Linear(200, 200),
                nn.ReLU(inplace=True),
                nn.Linear(200, 10),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            x = self.model(x)
            return x


    net = MLP()
    # Define the SGD optimizer with the parameters to optimize and the learning rate;
    # net.parameters() yields this network's parameters [w1, b1, w2, b2, ...]
    optimizer = optim.SGD(net.parameters(), lr=learning_rate)
    criteon = nn.CrossEntropyLoss()

    for epoch in range(epochs):

        for batch_idx, (data, target) in enumerate(train_loader):
            data = data.view(-1, 28*28)          # flatten the 2D images to [num_samples, 784]

            logits = net(data)                   # forward pass
            loss = criteon(logits, target)       # nn.CrossEntropyLoss() applies Softmax internally

            optimizer.zero_grad()                # clear the accumulated gradients
            loss.backward()                      # backward pass to compute gradients
            optimizer.step()                     # optimizer update step

            if batch_idx % 100 == 0:             # print progress every 100 batches
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                           100. * batch_idx / len(train_loader), loss.item()))


        test_loss = 0
        correct = 0                                         # correct counts the correctly classified samples
        for data, target in test_loader:
            data = data.view(-1, 28 * 28)
            logits = net(data)
            test_loss += criteon(logits, target).item()     # .item() extracts the batch loss as a Python scalar

            pred = logits.data.max(dim=1)[1]                # equivalently: pred = logits.argmax(dim=1)
            correct += pred.eq(target.data).sum().item()    # .item() converts the 0-dim count tensor to an int

        test_loss /= len(test_loader.dataset)
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

    Train Epoch: 0 [0/60000 (0%)] Loss: 2.307840
    Train Epoch: 0 [20000/60000 (33%)] Loss: 2.022810
    Train Epoch: 0 [40000/60000 (67%)] Loss: 1.342542

    Test set: Average loss: 0.0038, Accuracy: 8374/10000 (84%)

    Train Epoch: 1 [0/60000 (0%)] Loss: 0.802759
    Train Epoch: 1 [20000/60000 (33%)] Loss: 0.627895
    Train Epoch: 1 [40000/60000 (67%)] Loss: 0.482087

    Test set: Average loss: 0.0020, Accuracy: 8926/10000 (89%)

    Train Epoch: 2 [0/60000 (0%)] Loss: 0.496279
    Train Epoch: 2 [20000/60000 (33%)] Loss: 0.420009
    Train Epoch: 2 [40000/60000 (67%)] Loss: 0.429296

    Test set: Average loss: 0.0017, Accuracy: 9069/10000 (91%)

    Train Epoch: 3 [0/60000 (0%)] Loss: 0.304612
    Train Epoch: 3 [20000/60000 (33%)] Loss: 0.356296
    Train Epoch: 3 [40000/60000 (67%)] Loss: 0.405541

    Test set: Average loss: 0.0015, Accuracy: 9149/10000 (91%)

    Train Epoch: 4 [0/60000 (0%)] Loss: 0.304062
    Train Epoch: 4 [20000/60000 (33%)] Loss: 0.406027
    Train Epoch: 4 [40000/60000 (67%)] Loss: 0.385962

    Test set: Average loss: 0.0014, Accuracy: 9201/10000 (92%)

    Train Epoch: 5 [0/60000 (0%)] Loss: 0.186269
    Train Epoch: 5 [20000/60000 (33%)] Loss: 0.196249
    Train Epoch: 5 [40000/60000 (67%)] Loss: 0.228671

    Test set: Average loss: 0.0013, Accuracy: 9248/10000 (92%)

    Train Epoch: 6 [0/60000 (0%)] Loss: 0.364886
    Train Epoch: 6 [20000/60000 (33%)] Loss: 0.295816
    Train Epoch: 6 [40000/60000 (67%)] Loss: 0.244240

    Test set: Average loss: 0.0012, Accuracy: 9290/10000 (93%)

    Train Epoch: 7 [0/60000 (0%)] Loss: 0.228807
    Train Epoch: 7 [20000/60000 (33%)] Loss: 0.192547
    Train Epoch: 7 [40000/60000 (67%)] Loss: 0.223399

    Test set: Average loss: 0.0012, Accuracy: 9329/10000 (93%)

    Train Epoch: 8 [0/60000 (0%)] Loss: 0.176273
    Train Epoch: 8 [20000/60000 (33%)] Loss: 0.346954
    Train Epoch: 8 [40000/60000 (67%)] Loss: 0.253838

    Test set: Average loss: 0.0011, Accuracy: 9359/10000 (94%)

    Train Epoch: 9 [0/60000 (0%)] Loss: 0.246411
    Train Epoch: 9 [20000/60000 (33%)] Loss: 0.201452
    Train Epoch: 9 [40000/60000 (67%)] Loss: 0.162228

    Test set: Average loss: 0.0011, Accuracy: 9377/10000 (94%)
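    Two notes on the evaluation loop above. First, the reported test losses look small (e.g. 0.0011) because criteon returns the mean loss over a batch, and test_loss is divided by the number of samples (10,000) rather than the number of batches (50); multiplying by batch_size (0.0011 * 200 ≈ 0.22) gives a per-sample loss comparable to the training losses. Second, the loop as written still tracks gradients; a common refinement is to evaluate under torch.no_grad() and in eval mode (a sketch, not part of the original code):

    net.eval()                           # put layers such as Dropout/BatchNorm into eval mode (a no-op for this MLP)
    with torch.no_grad():                # disable gradient tracking to save memory during evaluation
        for data, target in test_loader:
            data = data.view(-1, 28 * 28)
            logits = net(data)
            test_loss += criteon(logits, target).item()
    net.train()                          # switch back to training mode before the next epoch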

    The difference between nn.ReLU() and F.relu()

    nn.ReLU() is the class-style API: capitalized, it must be instantiated before it can be called, and for layers that carry parameters such as w and b, those are internal members accessed via .parameters().

    F.relu() is the function-style API: any parameters are managed by you.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    x = torch.randn(1, 10)
    print(F.relu(x, inplace=True))   # tensor([[0.2846, 0.6158, 0.0000, 0.0000, 0.0000, 1.7980, 0.6466, 0.4263, 0.0000, 0.0000]])

    layer = nn.ReLU()
    print(layer(x))                  # tensor([[0.2846, 0.6158, 0.0000, 0.0000, 0.0000, 1.7980, 0.6466, 0.4263, 0.0000, 0.0000]])
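    Since ReLU itself has no parameters, the practical difference shows up more clearly with a parameterized layer; a sketch contrasting nn.Linear (class style) with F.linear (function style):

    layer = nn.Linear(10, 5)                       # class style: weight and bias live inside the module
    out1 = layer(x)
    print([p.shape for p in layer.parameters()])   # [torch.Size([5, 10]), torch.Size([5])]

    w = torch.randn(5, 10, requires_grad=True)     # function style: you create and track the parameters yourself
    b = torch.zeros(5, requires_grad=True)
    out2 = F.linear(x, w, b)                       # computes x @ w.t() + b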

    3. GPU acceleration

    In PyTorch, torch.device() selects and returns an abstract device; appending .to(device) to a network module or a Tensor then moves it onto that device.

    device = torch.device('cuda:0')                    # use the first GPU
    net = MLP().to(device)                             # the network
    criteon = nn.CrossEntropyLoss().to(device)         # the loss function
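    Hard-coding 'cuda:0' raises an error on machines without a GPU; a common device-agnostic pattern is:

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')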

    Each batch of training and test data is moved to the GPU as it is drawn:

    data, target = data.to(device), target.cuda()     # two equivalent ways when device is a CUDA device

    Note that Tensor.to(device) returns a new tensor, so the result must be reassigned, whereas Module.to(device) moves the module's parameters in place.

    Applying GPU acceleration to the example above, the complete code is as follows:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    from torchvision import datasets, transforms

    # Hyperparameters
    batch_size = 200
    learning_rate = 0.01
    epochs = 10

    # Load the training data
    train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=True, download=True,          # train=True loads the training set
                       transform=transforms.Compose([                 # transform applies preprocessing
                           transforms.ToTensor(),                     # convert to a Tensor
                           transforms.Normalize((0.1307,), (0.3081,)) # standardize: subtract the mean, divide by the std
                       ])),
        batch_size=batch_size, shuffle=True)                          # batch the samples with a leading batch dimension; shuffle=True randomizes the order

    # Load the test data
    test_loader = torch.utils.data.DataLoader(
        datasets.MNIST('../data', train=False, transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])),
        batch_size=batch_size, shuffle=True)


    class MLP(nn.Module):

        def __init__(self):
            super(MLP, self).__init__()

            self.model = nn.Sequential(         # define the layers
                nn.Linear(784, 200),
                nn.ReLU(inplace=True),
                nn.Linear(200, 200),
                nn.ReLU(inplace=True),
                nn.Linear(200, 10),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            x = self.model(x)
            return x

    device = torch.device('cuda:0')
    net = MLP().to(device)
    # Define the SGD optimizer with the parameters to optimize and the learning rate;
    # net.parameters() yields this network's parameters [w1, b1, w2, b2, ...]
    optimizer = optim.SGD(net.parameters(), lr=learning_rate)
    criteon = nn.CrossEntropyLoss().to(device)


    for epoch in range(epochs):

        for batch_idx, (data, target) in enumerate(train_loader):
            data = data.view(-1, 28*28)          # flatten the 2D images to [num_samples, 784]
            data, target = data.to(device), target.cuda()

            logits = net(data)                   # forward pass
            loss = criteon(logits, target)       # nn.CrossEntropyLoss() applies Softmax internally

            optimizer.zero_grad()                # clear the accumulated gradients
            loss.backward()                      # backward pass to compute gradients
            optimizer.step()                     # optimizer update step

            if batch_idx % 100 == 0:             # print progress every 100 batches
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                           100. * batch_idx / len(train_loader), loss.item()))


        test_loss = 0
        correct = 0                                         # correct counts the correctly classified samples
        for data, target in test_loader:
            data = data.view(-1, 28 * 28)
            data, target = data.to(device), target.cuda()

            logits = net(data)
            test_loss += criteon(logits, target).item()     # .item() extracts the batch loss as a Python scalar

            pred = logits.data.max(dim=1)[1]                # equivalently: pred = logits.argmax(dim=1)
            correct += pred.eq(target.data).sum().item()    # .item() converts the 0-dim count tensor to an int

        test_loss /= len(test_loader.dataset)
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

    4. Multi-class evaluation

    In the example below, logits is a 4×10 Tensor, which can be read as predictions for 4 images, each with a 10-dimensional output vector. Applying softmax followed by argmax(dim=1) to each image's output yields the predicted labels, which are then compared with the true labels to compute the accuracy.

    import torch
    import torch.nn.functional as F

    logits = torch.rand(4, 10)
    pred = F.softmax(logits, dim=1)

    pred_label = pred.argmax(dim=1)                     # tensor([5, 8, 4, 7]); same result as logits.argmax(dim=1)

    label = torch.tensor([5, 3, 2, 7])
    correct = torch.eq(pred_label, label)               # tensor([ True, False, False,  True]); same as pred_label.eq(label)

    print(correct.sum().float().item() / len(logits))   # 0.5
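    Because softmax is monotonic, the argmax can be taken directly on the logits. The same computation can be packaged as a reusable helper for a DataLoader (a sketch; evaluate_accuracy is an illustrative name, not part of the original code):

    def evaluate_accuracy(net, loader, device=torch.device('cpu')):
        # Returns the fraction of correctly classified samples in loader
        correct, total = 0, 0
        with torch.no_grad():
            for data, target in loader:
                data = data.view(-1, 28 * 28).to(device)
                target = target.to(device)
                pred = net(data).argmax(dim=1)
                correct += pred.eq(target).sum().item()
                total += target.size(0)
        return correct / total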
    Original article: https://www.cnblogs.com/cxq1126/p/13283675.html