In deep learning we train many models, and some of them take a long time to train. Can a model that has already been trained be saved and reused for later image recognition? Naturally, yes. This section covers saving and loading models.
1. Saving a model
Saving a model takes two steps:
a. Create a Saver object:
saver = tf.train.Saver()
b. After training, save the model:
saver.save(sess, 'net/my_net.ckpt')
The complete code is as follows. Note one fix relative to the usual version of this example: `tf.nn.softmax_cross_entropy_with_logits` expects raw logits, so the pre-softmax values are passed to the loss rather than the softmax output.

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the MNIST dataset (TensorFlow 1.x tutorial helper).
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

batch_size = 100
n_batch = mnist.train.num_examples // batch_size

# Placeholders for images (28*28 = 784 pixels) and one-hot labels.
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# A single softmax layer, with weights and biases initialized to zero.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
prediction = tf.nn.softmax(logits)

# The loss takes the raw logits, not the softmax output.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

init = tf.global_variables_initializer()

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Step a: create the saver object.
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(11):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter " + str(epoch) + ", Testing Accuracy " + str(acc))
    # Step b: save the trained model after the last epoch.
    saver.save(sess, 'net/my_net.ckpt')
```
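After `saver.save` runs, the `net/` directory contains the checkpoint files rather than a single file named `my_net.ckpt`. The core idea, though, is simple: serialize the current values of the variables to disk so a later process can reload them. That idea can be sketched with nothing but the Python standard library; the file name and the tiny parameter dict below are hypothetical stand-ins, not the real TensorFlow checkpoint format:

```python
import os
import pickle
import tempfile

# Toy "model parameters": in the real script these are the
# tf.Variable tensors W (784x10) and b (10,).
params = {"W": [[0.1, -0.2], [0.3, 0.4]], "b": [0.05, -0.05]}

# "saver.save": serialize every parameter value to one file.
ckpt_path = os.path.join(tempfile.mkdtemp(), "my_net.pkl")
with open(ckpt_path, "wb") as f:
    pickle.dump(params, f)

# "saver.restore": a later process reads the file back and
# recovers exactly the values that were saved.
with open(ckpt_path, "rb") as f:
    restored = pickle.load(f)

print(restored == params)  # True: the round trip is lossless
```

The real `Saver` does the same job, except it writes in TensorFlow's checkpoint format and maps saved values back onto the variables of a matching graph.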
2. Loading a model
Loading a model likewise takes two steps:
a. Create a Saver object:
saver = tf.train.Saver()
b. Load the model:
saver.restore(sess, 'net/my_net.ckpt')
The complete code rebuilds the same graph, then restores the saved variables into it:

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

batch_size = 100
n_batch = mnist.train.num_examples // batch_size

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# The graph must match the one that was saved, so the saved
# variable values can be mapped back onto W and b.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
prediction = tf.nn.softmax(logits)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

init = tf.global_variables_initializer()

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Step a: create the saver object.
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    # Accuracy with freshly initialized (all-zero) weights: chance level.
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
    # Step b: restore the trained variables from the checkpoint.
    saver.restore(sess, 'net/my_net.ckpt')
    # Accuracy with the restored weights: the trained model's accuracy.
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
```

The two `print` calls make the effect of `restore` visible: right after `sess.run(init)` the weights are all zeros, so test accuracy sits at roughly chance level (about 0.098, since every image is predicted as the same class); after `saver.restore` it jumps to whatever the trained model achieved.
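The mechanism behind that jump in accuracy, namely that initialization wipes the parameter values and restore overwrites them with the saved ones, can be sketched without TensorFlow. Everything below (the one-feature `predict` helper, the pickled file) is a hypothetical stand-in for the real graph and checkpoint:

```python
import os
import pickle
import tempfile

def predict(x, w, b):
    # A one-feature linear "model": its behaviour depends entirely on
    # the values of w and b, just as the classifier depends on W and b.
    return w * x + b

# Pretend training produced these values and saved them to disk.
trained = {"w": 2.0, "b": 1.0}
path = os.path.join(tempfile.mkdtemp(), "my_net.pkl")
with open(path, "wb") as f:
    pickle.dump(trained, f)

# A new session: "init" resets the parameters to zero,
# so the model predicts nothing useful.
w, b = 0.0, 0.0
print(predict(3.0, w, b))  # 0.0 -- like the chance-level accuracy before restore

# "restore": load the saved values back into the variables.
with open(path, "rb") as f:
    saved = pickle.load(f)
w, b = saved["w"], saved["b"]
print(predict(3.0, w, b))  # 7.0 -- the trained behaviour is back
```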