```python
import tensorflow as tf
import tensorflow.contrib.opt as tf_opt  # TF1.x; contrib was removed in TF2

X = tf.Variable([1.0, 2.0])
# One-hot-style vector, same shape as X, holding only the entry to train (index 0).
part_X = tf.scatter_nd([[0]], [X[0]], [2])
# Forward value equals X; gradients only flow through part_X.
X_2 = part_X + tf.stop_gradient(-part_X + X)
Y = tf.constant([2.0, -3.0])
loss = tf.reduce_sum(tf.squared_difference(X_2, Y))
opt = tf_opt.ScipyOptimizerInterface(loss, [X])
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    opt.minimize(sess)
    print("X: {}".format(X.eval()))
```
Here `part_X` is a one-hot-style vector with the same shape as `X` that holds only the value we want to change. The expression `part_X + tf.stop_gradient(-part_X + X)` equals `X` in the forward pass, because `part_X - part_X` is 0. In the backward pass, however, `tf.stop_gradient` blocks the gradient from flowing through the second term, so only the entry selected by `part_X` receives a gradient and the rest of `X` stays fixed.
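The same trick carries over to TF2 eager mode. Below is a minimal sketch (not from the original answer; variable names are illustrative) that uses `tf.GradientTape` to confirm the gradient is zero on the frozen entry:

```python
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
y = tf.constant([2.0, -3.0])

with tf.GradientTape() as tape:
    # One-hot-style vector holding only the trainable entry (index 0).
    part_x = tf.scatter_nd([[0]], [x[0]], [2])
    # Forward value equals x; the backward pass only sees part_x.
    x_2 = part_x + tf.stop_gradient(-part_x + x)
    loss = tf.reduce_sum(tf.square(x_2 - y))

grad = tape.gradient(loss, x)
print(grad.numpy())  # gradient at index 1 is exactly 0
```

Since the loss gradient with respect to `x_2` is `2 * (x_2 - y) = [-2, 10]`, only the `-2` at index 0 reaches `x`; the `10` at index 1 is cut off by `tf.stop_gradient`, giving `[-2, 0]`.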