1. Perceptron
Single-layer perceptron:
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 10)
w = torch.randn(1, 10, requires_grad=True)

o = torch.sigmoid(x @ w.t())             # o=tensor([[0.9997]], grad_fn=<SigmoidBackward>)

loss = F.mse_loss(torch.ones(1, 1), o)   # loss=tensor(7.9922e-08, grad_fn=<MeanBackward0>)

loss.backward()
print(w.grad)
# tensor([[-2.1985e-07, -8.2795e-08, -2.4651e-07, -6.1683e-08,  3.0177e-07,
#           8.9780e-09,  4.0506e-08,  3.0785e-07,  7.7334e-08, -2.1847e-07]])
```
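Since the loss here is $(1-o)^2$ with $o=\sigma(x w^{T})$, the gradient can also be written out by hand as $2(o-1)\,o(1-o)\,x$. The following is a minimal sketch (the seed and variable names are illustrative) that checks this analytic gradient against autograd:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 10)
w = torch.randn(1, 10, requires_grad=True)

o = torch.sigmoid(x @ w.t())
loss = F.mse_loss(torch.ones(1, 1), o)
loss.backward()

# manual chain rule: dloss/dw_j = 2*(o-1) * o*(1-o) * x_j
manual_grad = 2 * (o - 1) * o * (1 - o) * x
print(torch.allclose(w.grad, manual_grad.detach()))  # should print True
```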
Multi-output perceptron (a single layer with two output nodes):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 10)
w = torch.randn(2, 10, requires_grad=True)

o = torch.sigmoid(x @ w.t())             # tensor([[0.8033, 0.1742]], grad_fn=<SigmoidBackward>)

loss = F.mse_loss(torch.ones(1, 2), o)   # tensor(0.3603, grad_fn=<MeanBackward0>)

loss.backward()
print(w.grad)
# tensor([[-0.0490, -0.0091,  0.0229, -0.0259, -0.0046, -0.0206,  0.0257,  0.0724, -0.0078,  0.0127],
#         [-0.1873, -0.0348,  0.0875, -0.0989, -0.0176, -0.0786,  0.0981,  0.2766, -0.0297,  0.0484]])
```
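The same forward pass can also be written with the higher-level `nn.Linear` module; this is only a sketch for comparison (the `layer` name is illustrative), where the module's weight plays the role of `w` above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 10)
layer = nn.Linear(10, 2, bias=False)      # weight shape (2, 10), same as w above

o = torch.sigmoid(layer(x))
loss = F.mse_loss(torch.ones(1, 2), o)
loss.backward()
print(layer.weight.grad.shape)            # torch.Size([2, 10])
```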
2. Computing gradients with the chain rule
$y_1 = w_1 \cdot x + b_1$

$y_2 = w_2 \cdot y_1 + b_2$

$\frac{dy_2}{dw_1} = \frac{dy_2}{dy_1} \cdot \frac{dy_1}{dw_1} = w_2 \cdot x$
```python
import torch

x = torch.tensor(1.)
w1 = torch.tensor(2., requires_grad=True)
b1 = torch.tensor(1.)

w2 = torch.tensor(2., requires_grad=True)
b2 = torch.tensor(1.)

y1 = x * w1 + b1
y2 = y1 * w2 + b2

dy2_dy1 = torch.autograd.grad(y2, [y1], retain_graph=True)[0]
dy1_dw1 = torch.autograd.grad(y1, [w1], retain_graph=True)[0]
dy2_dw1 = torch.autograd.grad(y2, [w1])[0]

print(dy2_dy1)   # tensor(2.)
print(dy1_dw1)   # tensor(1.)
print(dy2_dw1)   # tensor(2.)  dy2_dw1 == dy2_dy1 * dy1_dw1
```
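The same gradients can be obtained with `backward()` instead of `torch.autograd.grad`; a minimal sketch:

```python
import torch

x = torch.tensor(1.)
w1 = torch.tensor(2., requires_grad=True)
b1 = torch.tensor(1.)
w2 = torch.tensor(2., requires_grad=True)
b2 = torch.tensor(1.)

y1 = x * w1 + b1
y2 = y1 * w2 + b2
y2.backward()          # fills in .grad for every leaf tensor with requires_grad=True

print(w1.grad)         # tensor(2.) == w2 * x
print(w2.grad)         # tensor(3.) == y1
```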
3. An optimization example with the Himmelblau function
Himmelblau function: $f(x,y)=(x^{2}+y-11)^{2}+(x+y^{2}-7)^{2}$
It has four global minima, each with value 0, and is often used to test the performance of optimization algorithms.
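As a quick sanity check, the sketch below evaluates the function at the four approximate minima commonly cited in the literature (the coordinates are reference values, not computed in this post); each should print a value close to 0:

```python
def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

minima = [(3.0, 2.0), (-2.805118, 3.131312),
          (-3.779310, -3.283186), (3.584428, -1.848126)]
for p in minima:
    print(p, himmelblau(p))   # each value is approximately 0
```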
The np.meshgrid(X, Y) function
generates grid coordinate matrices. For example, in a 2D coordinate system, if the x-axis can take the three values 1, 2, 3 and the y-axis the two values 7, 8, the coordinates of 6 grid points are obtained (a small sketch follows the point list below):
(1,7)(2,7)(3,7)
(1,8)(2,8)(3,8)
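A minimal sketch of this 3×2 example:

```python
import numpy as np

X, Y = np.meshgrid([1, 2, 3], [7, 8])
print(X)   # [[1 2 3]
           #  [1 2 3]]
print(Y)   # [[7 7 7]
           #  [8 8 8]]
```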
First, visualize the function surface:
```python
import numpy as np
import torch
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D


def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

x = np.arange(-6, 6, 0.1)     # x.shape=(120,)
y = np.arange(-6, 6, 0.1)     # y.shape=(120,)
X, Y = np.meshgrid(x, y)      # X.shape=(120, 120), Y.shape=(120, 120)

Z = himmelblau([X, Y])
fig = plt.figure("himmelblau")
ax = fig.add_subplot(projection='3d')   # fig.gca(projection='3d') was removed in newer Matplotlib
ax.plot_surface(X, Y, Z)
ax.view_init(60, -30)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
```
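As an optional alternative view, a 2D contour plot also makes the four basins easy to see; this is only a sketch (the log scaling is an illustrative choice), reusing the same grid:

```python
import numpy as np
from matplotlib import pyplot as plt

def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

x = np.arange(-6, 6, 0.1)
y = np.arange(-6, 6, 0.1)
X, Y = np.meshgrid(x, y)
Z = himmelblau([X, Y])

plt.figure("himmelblau contour")
plt.contour(X, Y, np.log(Z + 1), levels=30)   # log scale flattens the tall ridges
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```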
Use gradient descent (here, the Adam optimizer) to find the coordinates x[0], x[1] that minimize the himmelblau function:
```python
# initialize x[0], x[1]; different initial values may lead to different results
x = torch.tensor([0., 0.], requires_grad=True)
# Adam optimizer: the optimization target is x, the learning rate is 1e-3
optimizer = torch.optim.Adam([x], lr=1e-3)

for step in range(20000):

    pred = himmelblau(x)
    optimizer.zero_grad()   # clear the gradients left over from the previous step
    pred.backward()         # compute the gradient of the current function value w.r.t. x
    optimizer.step()        # use the gradient to update x[0] and x[1]

    if step % 2000 == 0:
        print("step={},x={},f(x)={}".format(step, x.tolist(), pred.item()))
```
```
step=0,x=[0.0009999999310821295, 0.0009999999310821295],f(x)=170.0
step=2000,x=[2.3331806659698486, 1.9540694952011108],f(x)=13.730916023254395
step=4000,x=[2.9820079803466797, 2.0270984172821045],f(x)=0.014858869835734367
step=6000,x=[2.999983549118042, 2.0000221729278564],f(x)=1.1074007488787174e-08
step=8000,x=[2.9999938011169434, 2.0000083446502686],f(x)=1.5572823031106964e-09
step=10000,x=[2.999997854232788, 2.000002861022949],f(x)=1.8189894035458565e-10
step=12000,x=[2.9999992847442627, 2.0000009536743164],f(x)=1.6370904631912708e-11
step=14000,x=[2.999999761581421, 2.000000238418579],f(x)=1.8189894035458565e-12
step=16000,x=[3.0, 2.0],f(x)=0.0
step=18000,x=[3.0, 2.0],f(x)=0.0
```
If the initialization in the second line is changed as follows, a different optimum is reached:
```python
x = torch.tensor([4., 0.], requires_grad=True)
```
```
step=0,x=[3.999000072479248, -0.0009999999310821295],f(x)=34.0
step=2000,x=[3.5741987228393555, -1.764183521270752],f(x)=0.09904692322015762
step=4000,x=[3.5844225883483887, -1.8481197357177734],f(x)=2.1100277081131935e-09
step=6000,x=[3.5844264030456543, -1.8481241464614868],f(x)=2.41016095969826e-10
step=8000,x=[3.58442759513855, -1.848125696182251],f(x)=2.9103830456733704e-11
step=10000,x=[3.584428310394287, -1.8481262922286987],f(x)=9.094947017729282e-13
step=12000,x=[3.584428310394287, -1.8481265306472778],f(x)=0.0
step=14000,x=[3.584428310394287, -1.8481265306472778],f(x)=0.0
step=16000,x=[3.584428310394287, -1.8481265306472778],f(x)=0.0
step=18000,x=[3.584428310394287, -1.8481265306472778],f(x)=0.0
```
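To explore which minimum each starting point converges to, one can simply loop over several initializations; the starting points below are illustrative choices, not values from the output above:

```python
import torch

def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

for init in ([0., 0.], [4., 0.], [-4., 0.], [0., 4.], [-4., -4.]):
    x = torch.tensor(init, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=1e-3)
    for step in range(20000):
        pred = himmelblau(x)
        optimizer.zero_grad()
        pred.backward()
        optimizer.step()
    print("init={} -> x={}, f(x)={}".format(init, x.tolist(), pred.item()))
```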
Tip:
The workflow for using an optimizer is just three lines of code:
```python
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
First, every tensor being optimized accumulates its gradient across iterations, so optimizer.zero_grad() must be called in each pass of the loop to clear the leftover values from the previous step.
Then, when loss.backward() is called, the gradient of every tensor with requires_grad=True is computed and stored in its .grad attribute.
Finally, optimizer.step() applies the gradients, writing the updated values back into the parameters:
$x[0] \leftarrow x[0] - lr \cdot \frac{\partial f}{\partial x[0]}$

$x[1] \leftarrow x[1] - lr \cdot \frac{\partial f}{\partial x[1]}$
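The formulas above are the plain gradient-descent update. The sketch below shows how the three optimizer calls could be written out by hand for that plain update (Adam additionally keeps running moment estimates, so this is not what optimizer.step() literally does for Adam):

```python
import torch

def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

x = torch.tensor([0., 0.], requires_grad=True)
lr = 1e-3

for step in range(20000):
    pred = himmelblau(x)
    if x.grad is not None:
        x.grad.zero_()           # optimizer.zero_grad()
    pred.backward()              # loss.backward()
    with torch.no_grad():        # optimizer.step() for plain gradient descent:
        x -= lr * x.grad         # x[i] = x[i] - lr * df/dx[i]

print(x.tolist())
```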