TensorFlow 2 (Prerequisite Course) --- 1.4.1 Automatic Gradient Computation
I. Summary
One-sentence summary:
If you declare the tensors as tf.Variable, the tape.watch([a, b, c]) step is no longer needed; TensorFlow watches them for you automatically.
import tensorflow as tf
x = tf.constant(1.)
a = tf.constant(2.)
b = tf.constant(3.)
c = tf.constant(4.)
with tf.GradientTape() as tape:
    tape.watch([a, b, c])
    y = a**2 * x + b * x + c
[dy_da, dy_db, dy_dc] = tape.gradient(y, [a, b, c])
print(dy_da)
print(dy_db)
print(dy_dc)

tf.Tensor(4.0, shape=(), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(1.0, shape=(), dtype=float32)
1. The data used for gradient computation must be floating point. How do you specify that?
x = tf.Variable(1.)
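A minimal sketch of this point: writing the initial value with a trailing dot (1. instead of 1) gives the variable a float32 dtype, which is what gradient tracking requires. The variable name x here mirrors the snippet above; the squared function is just an illustrative choice.

```python
import tensorflow as tf

# Declaring with a trailing dot (1.) gives a float32 dtype;
# gradients are only defined for floating-point tensors.
x = tf.Variable(1.)

with tf.GradientTape() as tape:
    y = x ** 2          # simple illustrative function
dy_dx = tape.gradient(y, x)
print(x.dtype)          # float32
print(dy_dx)            # dy/dx = 2*x = 2.0
```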
II. 1.4.1 Automatic Gradient Computation
Video location in the course corresponding to this blog:
import tensorflow as tf
x = tf.constant(1.)
a = tf.constant(2.)
b = tf.constant(3.)
c = tf.constant(4.)
with tf.GradientTape() as tape:
    tape.watch([a, b, c])
    y = a**2 * x + b * x + c
[dy_da, dy_db, dy_dc] = tape.gradient(y, [a, b, c])
print(dy_da)
print(dy_db)
print(dy_dc)
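For comparison, here is a minimal sketch of what happens when the constants are not watched: the tape records nothing for them, so tape.gradient returns None. This is exactly why the tape.watch call above is needed when working with tf.constant.

```python
import tensorflow as tf

# Without tape.watch, a tf.constant is not recorded on the tape,
# so its gradient comes back as None.
a = tf.constant(2.)
x = tf.constant(1.)

with tf.GradientTape() as tape:
    y = a**2 * x   # a and x are constants and were never watched

dy_da = tape.gradient(y, a)
print(dy_da)  # None
```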
Try auto gradient yourself: Variable
In [1]:
import tensorflow as tf
# With tf.Variable, the tape.watch([a, b, c]) step is unnecessary;
# TensorFlow watches the variables automatically
x = tf.Variable(1.)
a = tf.Variable(2.)
b = tf.Variable(3.)
c = tf.Variable(4.)
print(x)
In [2]:
with tf.GradientTape() as tape:
    y = a**2 * x + b * x + c
[dy_da, dy_db, dy_dc] = tape.gradient(y, [a, b, c])
print(dy_da)
print(dy_db)
print(dy_dc)
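One related detail worth knowing (a sketch using the standard tf.GradientTape API): by default a tape releases its resources after a single gradient() call. Passing persistent=True lets you query several gradients from the same tape; the variables a and b below are illustrative.

```python
import tensorflow as tf

# persistent=True keeps the recorded operations alive so gradient()
# can be called more than once on the same tape.
a = tf.Variable(2.)
b = tf.Variable(3.)

with tf.GradientTape(persistent=True) as tape:
    y = a**2 + b

dy_da = tape.gradient(y, a)  # 2*a = 4.0
dy_db = tape.gradient(y, b)  # 1.0
del tape  # release the resources held by a persistent tape
```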