  • Using the TensorFlow SavedModel Format to Save and Do Predictions

    We are now trying to deploy our deep learning model to Google Cloud, where Google Cloud Functions are required to trigger the predictions. However, when the pre-trained model is stored in the cloud, we cannot simply point to an exact local directory path and restore the TensorFlow session the way we did on a local machine.
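
    For example, inside the Cloud Function one workable approach is to copy the exported SavedModel from a Cloud Storage bucket into the function's writable /tmp directory before loading it. The sketch below assumes the google-cloud-storage client library; the bucket name and object prefix are placeholders:

    import os
    from google.cloud import storage

    def download_saved_model(bucket_name, prefix, local_dir='/tmp/model'):
        """Copy every object under `prefix` from the bucket into local_dir."""
        client = storage.Client()
        bucket = client.bucket(bucket_name)
        for blob in bucket.list_blobs(prefix=prefix):
            if blob.name.endswith('/'):       # skip folder placeholder objects
                continue
            local_path = os.path.join(local_dir, os.path.relpath(blob.name, prefix))
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            blob.download_to_filename(local_path)
        return local_dir

    # hypothetical bucket/prefix; the returned directory can then be loaded the same way as shown below
    model_dir = download_saved_model('my-models-bucket', 'simple_save/model')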

    So we turn to SavedModel, which works rather like a 'prediction mode' of TensorFlow. According to the official guide, a SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model-building code to run, which makes it useful for sharing and deployment.

    The definition of our graph, shown here mainly to make the input and output tensors clear:

    '''RNN Model Definition'''
    import tensorflow as tf

    tf.reset_default_graph()

    # define inputs; window_size, num_units (list of LSTM sizes) and LR are set elsewhere
    tf_x = tf.placeholder(tf.float32, [None, window_size, 1], name='x')
    tf_y = tf.placeholder(tf.int32, [None, 2], name='y')

    # stacked LSTM over the input sequence
    cells = [tf.keras.layers.LSTMCell(units=n) for n in num_units]
    stacked_rnn_cell = tf.keras.layers.StackedRNNCells(cells)
    outputs, (h_c, h_n) = tf.nn.dynamic_rnn(
            stacked_rnn_cell,           # the stacked cell defined above
            tf_x,                       # input
            initial_state=None,         # the initial hidden state
            dtype=tf.float32,           # must be given when initial_state is None
            time_major=False,           # False: (batch, time step, input); True: (time step, batch, input)
    )
    # dense head on the last time step; 'pred' is the output tensor we will export
    l1 = tf.layers.dense(outputs[:, -1, :], 32, activation=tf.nn.relu, name='l1')
    l2 = tf.layers.dense(l1, 8, activation=tf.nn.relu, name='l6')
    pred = tf.layers.dense(l2, 2, activation=tf.nn.relu, name='pred')

    with tf.name_scope('loss'):
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_y, logits=pred)
        loss = tf.reduce_mean(cross_entropy)
        tf.summary.scalar("loss", tensor=loss)
    train_op = tf.train.AdamOptimizer(LR).minimize(loss)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(tf_y, axis=1), tf.argmax(pred, axis=1)), tf.float32))

    init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
    saver = tf.train.Saver()
    

    Now we train and save the model, using simple_save:

    sess = tf.Session()
    sess.run(init_op)

    # batch_X and batch_y come from the training data pipeline (omitted here)
    for i in range(0, n):
        sess.run(train_op, {tf_x: batch_X, tf_y: batch_y})
        ...
    tf.saved_model.simple_save(sess, 'simple_save/model',
                               inputs={"x": tf_x}, outputs={"pred": pred})
    sess.close()
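
    simple_save writes a self-contained export directory. A quick way to see what actually ended up on disk (simple_save normally produces a saved_model.pb plus a variables/ folder):

    import os

    # expected layout (may vary slightly by TensorFlow version):
    #   simple_save/model/saved_model.pb              graph + serving signature
    #   simple_save/model/variables/variables.index
    #   simple_save/model/variables/variables.data-00000-of-00001
    for root, dirs, files in os.walk('simple_save/model'):
        for f in files:
            print(os.path.join(root, f))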
    

    Restore and Predict:

    with tf.Session(graph=tf.Graph()) as sess:
        # load the exported SavedModel into a fresh graph; "serve" is the tag simple_save uses
        tf.saved_model.loader.load(sess, ["serve"], 'simple_save/model')
        # feed and fetch by tensor name: 'x:0' is the input placeholder, 'pred/Relu:0' the output layer
        batch = sess.run('pred/Relu:0', feed_dict={'x:0': dataX.reshape([-1, 24, 1])})
        print(batch)
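
    Hard-coding tensor names such as 'pred/Relu:0' is brittle. Since loader.load returns the MetaGraphDef, the names can instead be looked up from the 'serving_default' signature that simple_save creates; a minimal sketch:

    with tf.Session(graph=tf.Graph()) as sess:
        meta_graph_def = tf.saved_model.loader.load(sess, ["serve"], 'simple_save/model')
        sig = meta_graph_def.signature_def['serving_default']
        x_name = sig.inputs['x'].name         # should resolve to 'x:0'
        pred_name = sig.outputs['pred'].name  # should resolve to 'pred/Relu:0'
        batch = sess.run(pred_name, feed_dict={x_name: dataX.reshape([-1, 24, 1])})
        print(batch)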
    

    References:

    Medium post: https://medium.com/@jsflo.dev/saving-and-loading-a-tensorflow-model-using-the-savedmodel-api-17645576527

    The official TensorFlow guide: https://www.tensorflow.org/guide/saved_model
