Is it possible to restore a variable only if it exists in the checkpoint? What is the most common way to do this?
For example, consider the following minimal example:
import tensorflow as tf
import glob
import sys
import os

with tf.variable_scope('volatile'):
    x = tf.get_variable('x', initializer=0)
with tf.variable_scope('persistent'):
    y = tf.get_variable('y', initializer=0)
    add1 = tf.assign_add(y, 1)

saver = tf.train.Saver(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, 'persistent'))
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
tf.get_default_graph().finalize()

print('save file', sys.argv[1])
if glob.glob(sys.argv[1] + '*'):
    saver.restore(sess, sys.argv[1])
print(sess.run(y))
sess.run(add1)
print(sess.run(y))
saver.save(sess, sys.argv[1])
When run twice with the same argument, the program prints 0\n1 on the first run and 1\n2 on the second, as expected. Now suppose you update the code with a new feature by adding z = tf.get_variable('z', initializer=0) after add1 inside the persistent scope. If an old save file exists, running the program again breaks with the following error:
NotFoundError (see above for traceback): Key persistent/z not found in checkpoint
[[Node: save/RestoreV2_1 = RestoreV2[dtypes=[DT_INT32],
_device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0,
save/RestoreV2_1/tensor_names,
save/RestoreV2_1/shape_and_slices)]]
[[Node: save/Assign_1/_18 = _Recv[client_terminated=false,
recv_device="/job:localhost/replica:0/task:0/device:GPU:0",
send_device="/job:localhost/replica:0/task:0/device:CPU:0",
send_device_incarnation=1,
tensor_name="edge_12_save/Assign_1",
tensor_type=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
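One common way to avoid this error is to inspect the checkpoint before building the Saver, and pass the Saver only the variables that actually exist in the saved file. The name-matching step is the core of it: graph variable names carry an output suffix like `:0` that checkpoint keys lack. A minimal sketch of that filter, with the TF1-specific wiring (`tf.train.NewCheckpointReader`, `get_variable_to_shape_map`) shown in comments:

```python
def restorable_names(graph_var_names, checkpoint_keys):
    """Keep only the graph variable names that appear in the checkpoint.

    Graph variable names end with an output suffix like ':0';
    checkpoint keys do not, so strip the suffix before comparing."""
    ckpt = set(checkpoint_keys)
    return [n for n in graph_var_names if n.split(':')[0] in ckpt]

# In TF1 this plugs in roughly as follows (sketch, not verified
# against your exact graph):
#   reader = tf.train.NewCheckpointReader(save_path)
#   keys = reader.get_variable_to_shape_map()   # {name: shape}
#   names = set(restorable_names(
#       [v.name for v in tf.global_variables()], keys))
#   saver = tf.train.Saver(
#       [v for v in tf.global_variables() if v.name in names])

print(restorable_names(['persistent/y:0', 'persistent/z:0'],
                       ['persistent/y']))
# → ['persistent/y:0']
```

With the old checkpoint from the question, this would restore persistent/y but skip the new persistent/z, which keeps its initializer value until the next save writes it out.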
https://stackoverflow.com/questions/47997203