Stochastic Gradient Descent

Gradient Descent

Gradient

  • Derivative: the rate of change of a one-dimensional function.
  • Partial derivative: the derivative of a multivariate function along one particular coordinate axis.
  • Gradient: the vector formed by stacking the partial derivatives along all axes.

The gradient points in the direction in which the function value increases fastest, so to find the minimum of a loss we update the parameters in the direction opposite to the gradient.
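As a minimal sketch of one such update step (the learning rate lr and the toy loss below are made up for illustration): compute the gradient of the loss with respect to the parameter, then move the parameter a small step against the gradient.

import tensorflow as tf

w = tf.Variable(3.0)                      # parameter to optimize
lr = 0.1                                  # learning rate (hypothetical value)

for step in range(5):
    with tf.GradientTape() as tape:
        loss = (w - 1.0) ** 2             # toy loss, minimized at w = 1
    grad = tape.gradient(loss, w)         # dloss/dw = 2*(w - 1)
    w.assign_sub(lr * grad)               # w <- w - lr * grad, i.e. step against the gradient
    print(step, float(w), float(loss))

Each iteration moves w closer to 1, where the toy loss is smallest.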

AutoGrad

Automatic gradient computation.

  • with tf.GradientTape() as tape: wraps the computation inside the tape's recording context.
  • [w_grad] = tape.gradient(loss,[w]) takes the loss and the parameters to differentiate, and returns the corresponding gradients.
In [2]: w = tf.constant(1.)
In [3]: x = tf.constant(2.)
In [4]: y = x*w

In [7]: with tf.GradientTape() as tape:
   ...:     tape.watch([w])
   ...:     y2 = x*w
   ...: grad1 = tape.gradient(y,[w])   # y was computed before the tape started recording

In [8]: grad1
Out[8]: [None]

In [9]: with tf.GradientTape() as tape:
   ...:     tape.watch([w])
   ...:     y2 = x*w
   ...: grad2 = tape.gradient(y2,[w])  # y2 was recorded by the tape, so dy2/dw = x = 2.0

In [10]: grad2
Out[10]: [<tf.Tensor: shape=(), dtype=float32, numpy=2.0>]

Persistent GradientTape

  • tape.gradient() can normally only be called once; after the call the tape automatically releases most of its resources.

  • If you need to compute gradients more than once from the same tape, use a persistent GradientTape:

with tf.GradientTape(persistent=True) as tape:

  • If you don't want to call watch, declare the parameters as tf.Variable instead; variables are tracked automatically (see the sketch below).
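A minimal sketch of both points, assuming a toy expression y = x*w + b with w and b declared as tf.Variable (so no watch call is needed) and one persistent tape reused for two gradient calls:

import tensorflow as tf

x = tf.constant(2.)
w = tf.Variable(1.)            # tf.Variable: tracked automatically, no tape.watch needed
b = tf.Variable(0.5)

with tf.GradientTape(persistent=True) as tape:
    y = x * w + b

dy_dw = tape.gradient(y, w)    # first call: 2.0
dy_db = tape.gradient(y, b)    # second call only works because persistent=True: 1.0
print(float(dy_dw), float(dy_db))

del tape                       # drop the tape explicitly once you are done with it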

2nd-order Gradients

That is, computing second derivatives.

In [47]: with tf.GradientTape() as t1:
    ...:     t1.watch([w,b])
    ...:     with tf.GradientTape() as t2:
    ...:         t2.watch([w,b])
    ...:         y = x*w + b
    ...:     dy_dw, dy_db = t2.gradient(y, [w,b])
    ...:     print(dy_dw, dy_db)
    ...: d2y_dw2 = t1.gradient(dy_dw, w)
    ...: print(d2y_dw2)
tf.Tensor(2.0, shape=(), dtype=float32) tf.Tensor(1.0, shape=(), dtype=float32)
None  # this is d(x)/dw: dy_dw = x does not depend on w, so the second derivative is None

Activation Functions

  • A biological neuron does not simply output a weighted sum of its inputs; it has a threshold response mechanism: it only fires once the weighted input exceeds a certain threshold, and the output it then produces is fixed. This threshold behaviour is where activation functions come from, and such a step-like threshold activation is not differentiable.

Sigmoid / Logistic

Solves the non-differentiability of the step activation.

The function is very smooth and acts as a squashing function: it compresses values from (-∞, +∞) into the bounded range (0, 1).

When x tends to infinity the derivative of the sigmoid tends to 0, so the corresponding weights receive essentially no updates for a long time; this causes vanishing gradients.
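For reference, the sigmoid and its derivative (standard formulas, written out here as a sketch) make this explicit: the derivative peaks at 0.25 at x = 0 and decays towards 0 as |x| grows.

\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr) \to 0 \quad \text{as } |x| \to \infty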

In [4]: a = tf.linspace(-10.,10.,10)
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([-10.       ,  -7.7777777,  -5.5555553,  -3.333333 ,  -1.1111107,
         1.1111116,   3.333334 ,   5.5555563,   7.7777786,  10.       ],
      dtype=float32)>

In [7]: with tf.GradientTape() as tape:
   ...:     tape.watch(a)
   ...:     y = tf.sigmoid(a)
   ...: grads = tape.gradient(y,[a])
# y
tf.Tensor(
[4.5397872e-05 4.1876672e-04 3.8510333e-03 3.4445208e-02 2.4766390e-01
 7.5233626e-01 9.6555483e-01 9.9614894e-01 9.9958128e-01 9.9995458e-01], shape=(10,), dtype=float32)
# grads
[<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([4.5395809e-05, 4.1859134e-04, 3.8362027e-03, 3.3258736e-02,
       1.8632649e-01, 1.8632641e-01, 3.3258699e-02, 3.8362255e-03,
       4.1854731e-04, 4.5416677e-05], dtype=float32)>]

You can see that for |x| > 3 the gradient is already close to zero, which is where vanishing gradients set in.

Tanh

Often used in recurrent neural networks. It can be expressed in terms of the sigmoid: tanh(x) = 2·sigmoid(2x) - 1.

In [10]: a = tf.linspace(-5.,5.,10)

In [11]: tf.tanh(a)
Out[11]:
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([-0.99990916, -0.9991625 , -0.99229795, -0.9311096 , -0.5046722 ,
0.5046726 , 0.93110967, 0.99229795, 0.9991625 , 0.99990916],
dtype=float32)>
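A quick numerical check of the identity above (a sketch; the remaining difference is just float32 rounding):

import tensorflow as tf

a = tf.linspace(-5., 5., 10)
diff = tf.abs(tf.tanh(a) - (2. * tf.sigmoid(2. * a) - 1.))
print(float(tf.reduce_max(diff)))   # ~1e-7, confirming tanh(x) = 2*sigmoid(2x) - 1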

ReLU (Rectified Linear Unit)

The derivative of ReLU is extremely simple: 0 for negative inputs and 1 for positive inputs, so it passes the gradient through unchanged, which greatly reduces both vanishing and exploding gradients. (tf.nn.leaky_relu, shown below as well, keeps a small non-zero slope for negative inputs.)

In [12]: a = tf.linspace(-1.,1.,10)
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([-1. , -0.7777778 , -0.5555556 , -0.3333333 , -0.1111111 ,
0.11111116, 0.33333337, 0.5555556 , 0.7777778 , 1. ],
dtype=float32)>

In [13]: tf.nn.relu(a)
Out[13]:
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([0. , 0. , 0. , 0. , 0. ,
0.11111116, 0.33333337, 0.5555556 , 0.7777778 , 1. ],
dtype=float32)>

In [15]: tf.nn.leaky_relu(a)
Out[15]:
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([-0.2 , -0.15555556, -0.11111112, -0.06666666, -0.02222222,
0.11111116, 0.33333337, 0.5555556 , 0.7777778 , 1. ],
dtype=float32)>
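To back up the claim about ReLU's derivative, a minimal sketch that differentiates tf.nn.relu through a GradientTape; the gradient comes out as 0 for negative inputs and 1 for positive ones:

import tensorflow as tf

a = tf.linspace(-1., 1., 10)
with tf.GradientTape() as tape:
    tape.watch(a)
    y = tf.nn.relu(a)
grads = tape.gradient(y, a)
print(grads.numpy())   # [0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]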

Typical Loss

Mean Squared Error

MSE

Derivative
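These two items refer to the loss definition and its derivative. Written out as a sketch for a model output \hat{y}_i with parameter \theta (the formulas themselves are standard):

\text{MSE} = \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - \hat{y}_i\bigr)^2, \qquad
\frac{\partial\,\text{MSE}}{\partial \theta}
  = -\frac{2}{N}\sum_{i=1}^{N}\bigl(y_i - \hat{y}_i\bigr)\,\frac{\partial \hat{y}_i}{\partial \theta}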

In [3]: x=tf.random.normal([2,4])
In [4]: w=tf.random.normal([4,3])
In [6]: b=tf.zeros([3])
In [7]: y = tf.constant([2,0])

In [10]: with tf.GradientTape() as tape:
    ...:     tape.watch([w,b])
    ...:     prob = tf.nn.softmax(x@w+b, axis=1)
    ...:     loss = tf.reduce_mean(tf.losses.MSE(tf.one_hot(y,depth=3), prob))
    ...: grads = tape.gradient(loss, [w,b])

In [11]: grads[0]
Out[11]:
<tf.Tensor: shape=(4, 3), dtype=float32, numpy=
array([[-0.11654878,  0.0167967 ,  0.09975209],
       [-0.03289018, -0.0634746 ,  0.09636479],
       [ 0.02359655, -0.01446564, -0.00913091],
       [ 0.01675941, -0.05286441,  0.03610501]], dtype=float32)>
In [12]: grads[1]
Out[12]: <tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.00793203,  0.06198916, -0.05405714], dtype=float32)>

axis is the dimension along which the softmax is taken. For a [b, 3] output, computing the softmax over axis 0 (the batch dimension) would be meaningless; tf.nn.softmax defaults to axis=-1, i.e. the last dimension.

softmax

After the n outputs of an n-way classifier are passed through softmax, they satisfy the requirements of a probability distribution: every element lies in the range (0, 1) and all elements sum to 1.
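A minimal check of both properties (a sketch with made-up logits):

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 0.5, 3.0]])        # [b, 3] with b = 2, arbitrary values
prob = tf.nn.softmax(logits, axis=-1)          # softmax over the class dimension
print(prob.numpy())                            # every entry lies in (0, 1)
print(tf.reduce_sum(prob, axis=-1).numpy())    # each row sums to 1 (up to float rounding)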

CrossEntropy CE

Used together with softmax. In the example below the raw logits are passed with from_logits=True, letting the loss apply the softmax internally, which is the numerically stable way to combine the two.
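For reference, the cross-entropy between the one-hot target distribution p and the predicted distribution q (standard definition, as a sketch):

H(p, q) = -\sum_{i} p_i \log q_i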

In [22]: x=tf.random.normal([2,4])
    ...: w=tf.random.normal([4,3])
    ...: b=tf.zeros([3])
    ...: y = tf.constant([2,0])

In [23]: with tf.GradientTape() as tape:
    ...:     tape.watch([w,b])
    ...:     logits = x@w+b
    ...:     loss = tf.reduce_mean(tf.losses.categorical_crossentropy(tf.one_hot(y,depth=3), logits, from_logits=True))
    ...:
    ...: grads = tape.gradient(loss, [w,b])
    ...: print(grads)
[<tf.Tensor: shape=(4, 3), dtype=float32, numpy=
array([[-0.66708755,  1.213526  , -0.5464386 ],
       [ 0.09902969,  0.15032136, -0.24935105],
       [-0.49425828,  1.421092  , -0.92683387],
       [-0.87035865,  1.703031  , -0.83267236]], dtype=float32)>,
 <tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.49924132,  0.9968054 , -0.49756414], dtype=float32)>]
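A useful fact behind these numbers (a standard derivation, sketched here): when softmax and cross-entropy are combined, the gradient of the loss with respect to the logits collapses to the prediction error, which is also why the bias gradient above sums to roughly zero across the three classes.

\frac{\partial H}{\partial z_j} = q_j - p_j, \qquad q = \text{softmax}(z),\ p = \text{one-hot target}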

Chain Rule

With the chain rule, the error at the output layer can be propagated backwards, layer by layer, into the weights of the intermediate layers; this yields the gradient information for those layers so their weights can be optimized as well.
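For the two-layer example in the listing below, the chain rule reads (a sketch with the variables used there):

\frac{\partial y_2}{\partial w_1}
  = \frac{\partial y_2}{\partial y_1}\cdot\frac{\partial y_1}{\partial w_1}
  = w_2 \cdot x = 2 \cdot 1 = 2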

In [3]: x = tf.constant(1.)
   ...: w1 = tf.constant(2.)
   ...: b1 = tf.constant(1.)
   ...: w2 = tf.constant(2.)
   ...: b2 = tf.constant(1.)
   ...:
   ...: with tf.GradientTape(persistent=True) as tape:
   ...:     tape.watch([w1,b1,w2,b2])
   ...:     y1 = x*w1 + b1
   ...:     y2 = y1*w2 + b2
   ...: dy2_dy1 = tape.gradient(y2,[y1])[0]
   ...: dy1_dw1 = tape.gradient(y1,[w1])[0]
   ...: dy2_dw1 = tape.gradient(y2,[w1])[0]  # the tape composes this chain rule automatically
   ...: print(dy2_dy1*dy1_dw1, dy2_dw1)      # the chain-rule product equals the direct gradient
tf.Tensor(2.0, shape=(), dtype=float32) tf.Tensor(2.0, shape=(), dtype=float32)

FASHION_MNIST Dataset in Practice

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics

assert tf.__version__.startswith('2.')

def preprocess(x, y):

    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)
    return x, y


(x, y), (x_test, y_test) = datasets.fashion_mnist.load_data()
print(x.shape, y.shape)

batchsz = 128

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(10000).batch(batchsz)

db_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
db_test = db_test.map(preprocess).batch(batchsz)

db_iter = iter(db)
sample = next(db_iter)
print('batch:', sample[0].shape, sample[1].shape)

model = Sequential([
    layers.Dense(256, activation=tf.nn.relu),  # [b, 784] => [b, 256]
    layers.Dense(128, activation=tf.nn.relu),  # [b, 256] => [b, 128]
    layers.Dense(64, activation=tf.nn.relu),   # [b, 128] => [b, 64]
    layers.Dense(32, activation=tf.nn.relu),   # [b, 64] => [b, 32]
    layers.Dense(10)                           # [b, 32] => [b, 10], 330 = 32*10 + 10
])
model.build(input_shape=[None, 28*28])
model.summary()
# w = w - lr*grad
optimizer = optimizers.Adam(lr=1e-3)

def main():

    for epoch in range(30):

        for step, (x, y) in enumerate(db):

            # x: [b, 28, 28] => [b, 784]
            # y: [b]
            x = tf.reshape(x, [-1, 28*28])

            with tf.GradientTape() as tape:
                # [b, 784] => [b, 10]
                logits = model(x)  # the entire forward pass happens in this single call
                y_onehot = tf.one_hot(y, depth=10)
                # [b]
                loss_mse = tf.reduce_mean(tf.losses.MSE(y_onehot, logits))
                loss_ce = tf.losses.categorical_crossentropy(y_onehot, logits, from_logits=True)
                loss_ce = tf.reduce_mean(loss_ce)

            grads = tape.gradient(loss_ce, model.trainable_variables)

            # update the weights in place according to w = w - lr*grad
            optimizer.apply_gradients(zip(grads, model.trainable_variables))

            if step % 100 == 0:
                print(epoch, step, 'loss:', float(loss_ce), float(loss_mse))


        # test
        total_correct = 0
        total_num = 0
        for x, y in db_test:

            # x: [b, 28, 28] => [b, 784]
            # y: [b]
            x = tf.reshape(x, [-1, 28*28])
            # [b, 10]
            logits = model(x)
            # logits => prob, [b, 10]
            prob = tf.nn.softmax(logits, axis=1)
            # [b, 10] => [b], int64
            pred = tf.argmax(prob, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)
            # pred: [b]
            # y: [b]
            # correct: [b], True: equal, False: not equal
            correct = tf.equal(pred, y)
            correct = tf.reduce_sum(tf.cast(correct, dtype=tf.int32))

            total_correct += int(correct)
            total_num += x.shape[0]

        acc = total_correct / total_num
        print(epoch, 'test acc:', acc)

if __name__ == '__main__':
    main()
(60000, 28, 28) (60000,)
batch: (128, 28, 28) (128,)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                multiple                  200960
_________________________________________________________________
dense_1 (Dense)              multiple                  32896
_________________________________________________________________
dense_2 (Dense)              multiple                  8256
_________________________________________________________________
dense_3 (Dense)              multiple                  2080
_________________________________________________________________
dense_4 (Dense)              multiple                  330
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
0 0 loss: 2.3202524185180664 0.14212539792060852
0 100 loss: 0.7960939407348633 13.527359008789062
0 200 loss: 0.49242451786994934 15.604676246643066
0 300 loss: 0.3477419316768646 15.686136245727539
0 400 loss: 0.2666846513748169 15.745983123779297
0 test acc: 0.8465
1 0 loss: 0.45068711042404175 17.90788459777832
1 100 loss: 0.3727279603481293 15.884428977966309
1 200 loss: 0.4774554371833801 20.82514190673828
1 300 loss: 0.33690470457077026 19.180904388427734
1 400 loss: 0.39581966400146484 20.918468475341797
1 test acc: 0.8679
...