1. Mini-batch size
Instead of training on the whole dataset at once, each step trains on a subset of the samples; one pass over the data (an epoch) takes (number of samples / mini-batch size) iterations.
The mini-batch size is typically chosen between 1 and m (the number of training samples).
When it equals 1, the method is called stochastic gradient descent.
In practice we usually pick sizes such as 64, 128, or 256.
import numpy as np
import math

def random_mini_batch(X, Y, mini_batch=64, seed=0):
    np.random.seed(seed)
    m = X.shape[1]  # number of training examples (X is features x examples)
    mini_batches = []
    # step 1: shuffle (X, Y) with a random permutation of the column indices
    permutation = list(np.random.permutation(m))
    X_shuffle = X[:, permutation]
    Y_shuffle = Y[:, permutation]
    # step 2: partition the shuffled data into full mini-batches
    num_complete_minibatch = math.floor(m / mini_batch)
    for k in range(0, num_complete_minibatch):
        mini_batch_x = X_shuffle[:, k * mini_batch:(k + 1) * mini_batch]
        mini_batch_y = Y_shuffle[:, k * mini_batch:(k + 1) * mini_batch]
        mini_batches.append((mini_batch_x, mini_batch_y))
    # handle the last, smaller batch if m is not a multiple of mini_batch
    if m % mini_batch != 0:
        mini_batch_x = X_shuffle[:, num_complete_minibatch * mini_batch:m]
        mini_batch_y = Y_shuffle[:, num_complete_minibatch * mini_batch:m]
        mini_batches.append((mini_batch_x, mini_batch_y))
    return mini_batches
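As a quick sanity check of the function above (the shapes below are made up for illustration), 148 examples with a mini-batch size of 64 should give two full batches plus one smaller batch of 20:

X = np.random.randn(12288, 148)   # 148 examples, 12288 features each (made-up shapes)
Y = np.random.randn(1, 148)
batches = random_mini_batch(X, Y, mini_batch=64)
print(len(batches))               # 3: two full batches of 64 plus one batch of 20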
2. Exponentially weighted averages
v0 = 0
v1 = 0.9 * v0 + 0.1 * θ1    (v0 is the previous value, θ1 is the current observation)
v2 = 0.9 * v1 + 0.1 * θ2
v3 = 0.9 * v2 + 0.1 * θ3
v4 = 0.9 * v3 + 0.1 * θ4
vt = β * vt-1 + (1-β) * θt
For example:
v100 = 0.1*θ100 + 0.1*0.9*θ99 + 0.1*0.9^2*θ98 + ...
Bias correction for the exponentially weighted average
vt / (1 - β^t), where β is typically 0.9 and t is the time step; the correction mainly matters for small t, when vt is biased toward zero.
Since (1 - ε)^(1/ε) ≈ 1/e, vt averages roughly over the last 1/(1-β) observations.
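A minimal sketch of the running average with bias correction (the function name and the plain-list input are assumptions for illustration, not part of the notes above):

def exp_weighted_average(thetas, beta=0.9, bias_correction=True):
    # thetas: a plain Python list (or 1-D array) of observations θ1, θ2, ...
    v = 0.0
    averages = []
    for t, theta in enumerate(thetas, start=1):
        v = beta * v + (1 - beta) * theta                     # vt = β*v(t-1) + (1-β)*θt
        vt = v / (1 - beta ** t) if bias_correction else v    # bias correction: vt / (1-β^t)
        averages.append(vt)
    return averages

Without the correction, the first few outputs are pulled toward zero because v starts at 0.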
3. Momentum gradient descent speeds up gradient descent: the update components pointing toward the minimum (the "horizontal" axis) keep the same sign and reinforce each other, while the oscillating components (the "vertical" axis) alternate in sign and largely cancel out, so progress toward the minimum is faster.
Momentum exponentially weights the previous update direction with the current gradient to obtain the current update direction.
vdw = β * vdw(prev) + (1-β) * dw
vdb = β * vdb(prev) + (1-β) * db
w := w - α * vdw
b := b - α * vdb
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    L = len(parameters) // 2  # number of layers
    for i in range(L):
        # exponentially weighted average of the gradients (the "velocity")
        v['dW' + str(i+1)] = beta * v['dW' + str(i+1)] + (1 - beta) * grads['dW' + str(i+1)]
        v['db' + str(i+1)] = beta * v['db' + str(i+1)] + (1 - beta) * grads['db' + str(i+1)]
        # move the parameters along the velocity, not the raw gradient
        parameters['W' + str(i+1)] = parameters['W' + str(i+1)] - learning_rate * v['dW' + str(i+1)]
        parameters['b' + str(i+1)] = parameters['b' + str(i+1)] - learning_rate * v['db' + str(i+1)]
    return parameters, v
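The velocity dictionary v used above has to start as zero arrays with the same shapes as the gradients; a minimal sketch (the helper name initialize_velocity is my own, not from these notes):

import numpy as np

def initialize_velocity(parameters):
    L = len(parameters) // 2  # number of layers
    v = {}
    for i in range(L):
        # one zero array per W/b, matching the parameter's shape
        v['dW' + str(i+1)] = np.zeros_like(parameters['W' + str(i+1)])
        v['db' + str(i+1)] = np.zeros_like(parameters['b' + str(i+1)])
    return v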
4. RMSprop
sdw = β * sdw(prev) + (1 - β) * dw^2   (element-wise square)
sdb = β * sdb(prev) + (1 - β) * db^2
w := w - α * dw / (sqrt(sdw) + ε)
b := b - α * db / (sqrt(sdb) + ε)
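The notes give no code for RMSprop; below is a minimal per-layer update sketch that follows the same dictionary convention as the Momentum code (the function name and default values are assumptions):

import numpy as np

def update_parameters_with_rmsprop(parameters, grads, s, beta=0.999, learning_rate=0.01, epsilon=1e-8):
    L = len(parameters) // 2  # number of layers
    for i in range(L):
        # moving average of the element-wise squared gradients
        s['dW' + str(i+1)] = beta * s['dW' + str(i+1)] + (1 - beta) * grads['dW' + str(i+1)] ** 2
        s['db' + str(i+1)] = beta * s['db' + str(i+1)] + (1 - beta) * grads['db' + str(i+1)] ** 2
        # divide the gradient by the root of the moving average (plus ε for numerical stability)
        parameters['W' + str(i+1)] -= learning_rate * grads['dW' + str(i+1)] / (np.sqrt(s['dW' + str(i+1)]) + epsilon)
        parameters['b' + str(i+1)] -= learning_rate * grads['db' + str(i+1)] / (np.sqrt(s['db' + str(i+1)]) + epsilon)
    return parameters, s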
5. The Adam optimizer combines Momentum gradient descent with RMSprop.
vdw = 0
sdw = 0
vdb = 0
sdb = 0
vdw = β1 * vdw(prev) + (1-β1) * dw
vdb = β1 * vdb(prev) + (1-β1) * db
sdw = β2 * sdw(prev) + (1 - β2) * dw^2
sdb = β2 * sdb(prev) + (1 - β2) * db^2
vdw(correct) = vdw / (1-β1^t)
vdb(correct) = vdb / (1-β1^t)
sdw(correct) = sdw / (1-β2^t)
sdb(correct) = sdb / (1-β2^t)
w := w - α * vdw(correct) / (sqrt(sdw(correct)) + ε)
b := b - α * vdb(correct) / (sqrt(sdb(correct)) + ε)
β1 = 0.9
β2 = 0.999
ε = 10^-8
import numpy as np

def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate,
                                beta1=0.9, beta2=0.999, epsilon=1e-8):
    L = len(parameters) // 2  # number of layers
    for i in range(L):
        # Momentum-style moving average of the gradients
        v['dW' + str(i+1)] = beta1 * v['dW' + str(i+1)] + (1 - beta1) * grads['dW' + str(i+1)]
        v['db' + str(i+1)] = beta1 * v['db' + str(i+1)] + (1 - beta1) * grads['db' + str(i+1)]
        # RMSprop-style moving average of the squared gradients
        s['dW' + str(i+1)] = beta2 * s['dW' + str(i+1)] + (1 - beta2) * grads['dW' + str(i+1)] ** 2
        s['db' + str(i+1)] = beta2 * s['db' + str(i+1)] + (1 - beta2) * grads['db' + str(i+1)] ** 2
        # bias correction (kept separate so the stored averages are not corrupted)
        v_corrected_dW = v['dW' + str(i+1)] / (1 - beta1 ** t)
        v_corrected_db = v['db' + str(i+1)] / (1 - beta1 ** t)
        s_corrected_dW = s['dW' + str(i+1)] / (1 - beta2 ** t)
        s_corrected_db = s['db' + str(i+1)] / (1 - beta2 ** t)
        # parameter update: corrected momentum scaled by the corrected RMS term
        parameters['W' + str(i+1)] -= learning_rate * v_corrected_dW / (np.sqrt(s_corrected_dW) + epsilon)
        parameters['b' + str(i+1)] -= learning_rate * v_corrected_db / (np.sqrt(s_corrected_db) + epsilon)
    return parameters, v, s
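Adam also needs v and s initialized to zero before the first update; a minimal sketch mirroring the Momentum initializer (the helper name initialize_adam is an assumption):

import numpy as np

def initialize_adam(parameters):
    L = len(parameters) // 2  # number of layers
    v, s = {}, {}
    for i in range(L):
        v['dW' + str(i+1)] = np.zeros_like(parameters['W' + str(i+1)])
        v['db' + str(i+1)] = np.zeros_like(parameters['b' + str(i+1)])
        s['dW' + str(i+1)] = np.zeros_like(parameters['W' + str(i+1)])
        s['db' + str(i+1)] = np.zeros_like(parameters['b' + str(i+1)])
    return v, s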
6. Learning rate decay
As the number of epochs grows, gradually reduce the learning rate so the parameters converge more easily; in general, though, it is not the first technique used.
Three formulas for updating α:
α = α0 / (1 + decay_rate * epoch_num)   (α0 is the initial learning rate, decay_rate the decay rate, epoch_num the epoch number)
α = 0.95^epoch_num * α0
α = k / sqrt(epoch_num) * α0
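A small sketch implementing the three schedules above (the function name, the method keywords, and the default values are assumptions for illustration):

import math

def decayed_learning_rate(alpha0, epoch_num, method='inverse', decay_rate=1.0, k=1.0):
    # epoch_num should start at 1 for the 'sqrt' schedule to avoid division by zero
    if method == 'inverse':
        return alpha0 / (1 + decay_rate * epoch_num)
    elif method == 'exponential':
        return 0.95 ** epoch_num * alpha0
    elif method == 'sqrt':
        return k / math.sqrt(epoch_num) * alpha0
    raise ValueError('unknown decay method: ' + method)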